MacDirectory Magazine

Dmitry Marin

MacDirectory magazine is the premier creative lifestyle magazine for Apple enthusiasts featuring interviews, in-depth tech reviews, Apple news, insights, latest Apple patents, apps, market analysis, entertainment and more.

Issue link: https://digital.macdirectory.com/i/1500862

Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?

By Stan Karanasios, Associate Professor, The University of Queensland; Olga Kokshagina, Associate Professor – Innovation & Entrepreneurship, EDHEC Business School; Pauline C. Reinecke, Assistant Researcher, University of Hamburg

Special thanks to The Conversation for republishing permission.

Recently, artificial intelligence pioneers and experts urged major AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months.

An open letter penned by the Future of Life Institute cautioned that AI systems with “human-competitive intelligence” could become a major threat to humanity. Among the risks is the possibility of AI outsmarting humans, rendering us obsolete, and taking control of civilisation.

The letter emphasises the need to develop a comprehensive set of protocols to govern the development and deployment of AI. It states:

These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

Typically, the battle for regulation has pitted governments and large technology companies against one another. But the recent open letter – so far signed by more than 5,000 signatories including Twitter and Tesla CEO Elon Musk, Apple co-founder Steve Wozniak and OpenAI scientist Yonas Kassa – seems to suggest more parties are finally converging on one side.

Could we really implement a streamlined, global framework for AI regulation? And if so, what would this look like?

What regulation already exists?

In Australia, the government has established the National AI Centre to help develop the nation’s AI and digital ecosystem. Under this umbrella is the Responsible AI Network, which aims to drive responsible practice and provide leadership on laws and standards.

However, there is currently no specific regulation on AI and algorithmic decision-making in place. The government has taken a light-touch approach that widely embraces the concept of responsible AI, but stops short of setting parameters to ensure it is achieved.
