MacDirectory Magazine

Dmitry Marin

MacDirectory magazine is the premier creative lifestyle magazine for Apple enthusiasts, featuring interviews, in-depth tech reviews, Apple news, insights, the latest Apple patents, apps, market analysis, entertainment and more.

Issue link: https://digital.macdirectory.com/i/1500862


Similarly, the US has adopted a hands-off strategy. Lawmakers have not shown any urgency in attempts to regulate AI, and have relied on existing laws to regulate its use. The US Chamber of Commerce recently called for AI regulation, to ensure it doesn’t hurt growth or become a national security risk, but no action has been taken yet.

Leading the way in AI regulation is the European Union, which is racing to create an Artificial Intelligence Act. This proposed law will assign three risk categories relating to AI:

• applications and systems that create “unacceptable risk” will be banned, such as government-run social scoring used in China
• applications considered “high-risk”, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements, and
• all other applications will be largely unregulated.

Although some groups argue the EU’s approach will stifle innovation, it’s one Australia should closely monitor, because it balances offering predictability with keeping pace with the development of AI.

China’s approach to AI has focused on targeting specific algorithm applications and writing regulations that address their deployment in certain contexts, such as algorithms that generate harmful information. While this approach offers specificity, it risks having rules that quickly fall behind rapidly evolving technology.

The pros and cons

There are several arguments both for and against allowing caution to drive the control of AI.

On one hand, AI is celebrated for being able to generate all forms of content, handle mundane tasks and detect cancers, among other things. On the other hand, it can deceive, perpetuate bias, plagiarise and – of course – has some experts worried about humanity’s collective future. Even OpenAI’s CTO, Mira Murati, has suggested there should be movement toward regulating AI.

Some scholars have argued excessive regulation may hinder AI’s full potential and interfere with “creative destruction” – a theory which suggests long-standing norms and practices must be pulled apart in order for innovation to thrive.

Likewise, over the years business groups have pushed for regulation that is flexible and limited to targeted applications, so that it doesn’t hamper competition. And industry associations have called for ethical “guidance” rather than regulation – arguing that AI development is too fast-moving and open-ended to adequately regulate.

But citizens seem to advocate for more oversight. According to reports by Bristows and KPMG, about two-thirds of Australian and British people believe the AI industry should be regulated and held accountable.

What’s next?

A six-month pause on the development of advanced AI systems could offer welcome respite from an AI arms race that just doesn’t seem to be letting up.

However, to date there has been no effective global effort to meaningfully regulate AI. Efforts the world over have been fractured, delayed and overall lax. A global moratorium would be difficult to enforce, but not impossible.

The open letter raises questions around the role of governments, which have largely been silent regarding the potential harms of extremely capable AI tools. If anything is to change, governments and national and supra-national regulatory bodies will need to take the lead in ensuring accountability and safety. As the letter argues, decisions concerning AI at a societal level should not be in the hands of “unelected tech leaders”.
Governments should therefore engage with industry to co-develop a global framework that lays out comprehensive rules governing AI development. This is the best way to protect against harmful impacts and avoid a race to the bottom. It also avoids the undesirable situation where governments and tech giants struggle for dominance over the future of AI.
