In episode 39 of the Growth Swarm podcast, John Siefert, Bob Evans, Tony Uphoff, and Scott Vaughan discuss the recent open letter that Elon Musk, Steve Wozniak, Andrew Yang, and others released calling for a six-month pause, or slow-down, on generative AI and ChatGPT to evaluate whether it's right for society and culture.
This episode is sponsored by Acceleration Economy’s Generative AI Digital Summit. View the event, which features practitioner and platform insights on how solutions such as ChatGPT will impact the future of work, customer experience, data strategy, cybersecurity, and more, by registering for your free on-demand pass.
00:42 — John introduces today’s topic, and wonders if it’s too late to slow down. What would be accomplished with six months of pause? Is the genie out of the bottle?
02:32 — Yes, the genie is out of the bottle, says Tony, who typically approaches broad statements about technology with a skeptical eye toward the motivations of the people making them. In this case, many of the people behind the open letter “are interested in staying in the news cycle,” even though there is a legitimate point to be made about generative AI and where we are going with it. He cites similar attempts to rein in Microsoft’s dominance with Windows in the computer market in the ‘80s and ‘90s and how, in the end, the company stepped back voluntarily without any government decree.
04:42 — John cites the topic of misinformation in the context of generative AI, which was mentioned in the letter, and asks Bob what he thinks.
06:42 — Bob says that whatever anybody is worried about concerning machines is already happening, and that this is the price of being human in a highly advanced society. He thinks the idea of pausing that was proposed in the letter is preposterous. “It’s just removed from reality,” he says. “What are you going to do in six months?” He admits that this might be one of the most challenging, powerful, and profound advances we have experienced in human history.
08:31 — John brings up the topic of fake news being propagated by AI during the recent election cycle as a way to demonstrate that the scourge of misinformation is already here, in a big way. The discussion then turns to the question of whether all jobs, including fulfilling ones, should be automated away and whether non-human minds should replace humans.
09:59 — Scott agrees that the genie is out of the bottle regarding AI and automation, and suggests that rather than calling for a pause, there should be a call for academic and educational institutions to come together to set standards and guidelines for AI development. He argues that it’s important to appeal to the “human-ness” of the situation and bring together a group of thinkers to navigate the complexities.
12:10 — Bob admits that he is getting into “opinion” here. “I see more and more of this stuff at Davos, and it makes my skin crawl,” he says, “because you get these people up there lecturing the rest of us about global warming, and they fly in on their private jets…. Who put them in this position?” He says that many people have already been working on these sorts of ethical issues around AI specifically for years, and that these are the people we should be looking to instead, not “some of these eggheads swirling around outside and telling the rest of us what to do.”
15:14 — John reminds everyone that the open letter did include a call for experts to work together to develop and implement safety protocols for AI. However, the letter also suggested that such protocols should be audited and overseen by independent outside experts, which could be problematic. John argues that the experts involved in such audits should truly understand the technology behind AI, rather than just being, say, policymakers who make policy for the sake of making policy.
16:18 — John raises two other points for discussion. The first is that the development of AI raises questions about the “survival of the fittest” and what it means to be human. He urges people to read Darwin’s “On the Origin of Species” to better understand the implications of AI. The second point is that the development of AI is a moment for humanity to reconsider how we think critically and to strive for even greater intelligence.
Which companies are the most important vendors in AI and hyperautomation? Check out the Acceleration Economy AI/Hyperautomation Top 10 Shortlist.
17:57 — Scott agrees, noting that critical thinking is key to advancing our understanding of AI and its potential. He has confidence in humanity’s ability to navigate the challenges posed by AI and encourages everyone to bet on us. Bob agrees.
19:08 — Tony also agrees. The horse and buggy had been the dominant form of transportation for thousands of years, and then suddenly a car drove down the street and changed everything: infrastructure, cities, and the kinds of jobs people did. Today, it’s a little frightening to think about how much the latest advances in generative AI will change everything, but past behavior is an accurate predictor of future behavior. The horse-and-buggy story shows that human beings understand how to use technology to augment society, work, and human life.
20:38 — John agrees, too. “The extreme opportunity that sits in front of us as a species is amazing,” he says.
Want more tech insights for top execs? Subscribe to the Leadership channel.