Shock and Awe: How AI is Sidestepping Regulation


The recent actions of tech moguls and CEOs of AI (Artificial Intelligence) companies have left many feeling whiplashed. Several months ago, OpenAI launched its generative language model ChatGPT. This was soon followed by a flurry of AI activity as start-ups and established tech giants such as Google and Microsoft flooded the market with AI tools. Next, generative AI such as ChatGPT and DALL-E began to dominate news cycles, with journalists, news pundits and politicians all commenting on the societal impact of the ‘Age of AI’. Many news reports focused on the sophistication of AI tools such as ChatGPT that were “smart” enough to pass medical exams, author papers and essays for Harvard Business School, pass the bar exam in different states and draft legal documents.

The ‘mystification’ of AI in the media led to a surprising outcome: tech moguls and high-profile CEOs such as Elon Musk called for a moratorium on AI development, stating that AI could have dramatic, negative impacts on societies. This prompted immediate action from legislators, policymakers and world leaders, who all agreed that AI must be regulated. The EU sought to advance its framework for the ‘ethical’ development of AI; the US Congress held sessions on how to regulate AI, while UNESCO published its own roadmap for AI development.

Yet as soon as the moguls and CEOs called for a halt on AI development, they also announced plans to launch new AI companies. In some cases, less than a week separated a mogul’s call for an AI moratorium and their announcement of a new AI start-up. Notably, the call for a moratorium on AI development fueled negative media discourses, as news sources warned that without regulation AI could threaten societies everywhere, whether through the collapse of reality, the reshaping of labor markets, the end of authorship and intellectual property, or the advent of AI systems “smart” enough to destroy mankind.

Rather than negate media hyperbole and call for calm, the CEOs of AI companies joined the choir, with the head of OpenAI labeling artificial intelligence an “existential” threat to mankind. Other tech leaders claimed that the AI threat is far greater than that of pandemics, climate change and the atom bomb. In fact, one group of tech CEOs published a short press release warning that humanity now faced the possibility of extinction, while a former Google executive suggested that AI could surpass human intelligence, leading to an unprecedented crisis.

And just as hysteria reached fever pitch, AI CEOs began travelling the world, meeting with world leaders and discussing what AI regulation might look like in terms of legislation and restrictions on AI development. For instance, OpenAI’s CEO has met with EU leaders in Brussels and with the heads of individual European governments.

So, in the space of a few months, tech moguls and AI CEOs have developed AI, warned against AI, called for a halt on AI, created new AIs and labelled AI an existential risk. The question that comes to the fore is whether these contradictory actions are the result of media coverage, political action and societal pressure, or part of a deliberate strategy.

I would argue that tech moguls and AI CEOs are following a specific playbook, a pre-existing roadmap used in the past by innovators and disruptors, be they tech magnates or oil barons. This playbook always has the same end goal: ensuring that an industry is allowed to regulate itself and that governments do not interfere in the industry’s development and its ability to generate capital. In this sense, there is little difference between Steve Jobs, Bill Gates, Elon Musk, John D. Rockefeller and Henry Ford. Disruptors and innovators of every age have sought to limit the reach of government and prevent any form of regulation.

The disruptor/innovator playbook follows four clear steps:

1. Mystify the innovation and ensure that it is seen as so sophisticated that only “experts” can grasp its workings and effects.

2. Stoke fear of the innovation and ensure negative media discourse. Next, spread doomsday scenarios warning of an end to a way of life, an end to tradition, or an end to life itself.

3. Go on tour and meet policy makers. Assure them that the innovation is so complex, and the risks so great, that only the innovator can regulate the innovation. In the case of AI, convince leaders that only tech can regulate tech.

4. Sit back and generate billions of dollars in revenue.

This playbook, which has proven effective in the past, follows the logic of shock and awe, of fear and confusion. Whether governments and states will regulate AI technology remains to be seen. What is clear is that AI CEOs are following a familiar script, and that the whiplash felt by many is deliberate, as tech seeks to regulate tech and quash all forms of regulation.
