Thursday Mar 30, 2023

Digital Leader Show: Regulating Innovation & AI

This week we're discussing increased pressure from regulators on the tech industry, including AI, TikTok and other apps, PLUS headlines for Digital Leaders from the world of enterprise technology.

SHOW NOTES BELOW

About the Show:

Each week, we explore some of the key issues defining our ongoing transformation into digital enterprises and societies. Join us for a unique, unfiltered discussion of business and social leadership at the intersection of technology and the humanities, helping you to become a Digital Leader.

About the Speakers:

Daniel Goodstein runs multiple technology associations with over 130,000 member executives worldwide and is a go-to resource for enterprises across the globe looking for assistance with digital transformation, AI, automation and outsourcing. He is also a seasoned go-to-market expert helping drive sales, marketing and media strategies for several Fortune 500 tech firms and startups alike.

Carlos Alvarenga is a researcher, author & coach, and is an educator with Georgetown University and Senior Research Fellow & Adjunct Professor at the Robert H. Smith School of Business at the University of Maryland. He was previously the Executive Director of World 50 Labs, the member-innovation team at World 50, Inc., a Principal in Ernst & Young’s Advisory Practice and a Managing Director at Accenture.

SHOW NOTES:

0:00 Regulating AI innovation is a complex issue with no established precedents: Governments tend to regulate inputs, processes, and outputs, but none of these are yet established for AI innovation

Various experts have conflicting opinions on how to regulate AI innovation

AI bias and a lack of traceability could lead to lawsuits: Regulators may require AI to have transparent decision-making processes

Canceling AI altogether could hinder technological advancements

11:46 AI has the potential to create more problems than it solves due to the lack of structure and regulation: Innovation often leads to unintended consequences, as shown by past technologies

The potential risks posed by AI are significant and unknown, requiring careful consideration and potential regulation.

18:16 OpenAI lacks proper public discourse and ethical regulation: Major AI systems should go through ethical regulation and review before implementation

OpenAI's lack of public consultation is concerning for AI safety and liability issues.

24:24 US regulators focus on banning TikTok instead of larger tech issues: The US government is buying spyware applications to spy on government employees

Commercial spyware is sold mostly to governments and is being used to steal data on journalists, lawyers, and human rights defenders

There are larger tech issues being ignored while regulators focus on TikTok

30:25 Chinese-manufactured surveillance cameras in New York City are insecure: The manufacturer, Hikvision, is Chinese-owned and is the least secure among camera manufacturers

Regulation is needed to ensure the secure and safe release of AI technology into consumer and critical spaces

36:52 The lack of AI regulation poses a risk of unintended consequences: There needs to be a governing body composed of technology practitioners and industry-based regulators to ensure ethical standards

The risk of unintended consequences comes not from purposely designing harmful systems, but as a byproduct of what those systems do

49:51 Refocusing on bigger issues than AI startups: AI startups need to be managed and steered towards social good

The incentives for AI companies need to be right in order to solve bigger problems

55:57 The AI Industrial Complex is an opportunity for companies to solve societal issues and thrive: Companies building AI systems should focus on serious public problems

Governments need to hire these AI companies to address public issues rather than competing with them
