Welcome to another issue of Geek AI. Today we'll be discussing the European Union's newly released AI Act: what it got right, what it got wrong, and how this piece of legislation could be used to spy on you. Let's dive in.

This issue is sponsored by us - Geek AI

Overwhelmed by all this AI stuff? No idea where to start? Meet AI Beginner Bootcamp: a once-a-week live training program where we'll show you how to use today's most important artificial intelligence apps. From ChatGPT to Midjourney, our coaches will take you from beginner to advanced using simple, clear language. No tech jargon allowed. Interested? Click here to learn more!

Earlier this week, the European Union approved legislation designed to regulate the artificial intelligence industry in Europe. While this may sound like a boring topic, it carries huge implications for the future of AI in both the United States and the rest of the world, mainly because the US government has yet to release any AI regulations of its own. Nor has Australia, Canada, or the UK. That makes this new act from the EU the first piece of concrete AI legislation from a large, first-world government, and we'd bet money the countries above are paying attention.

How does it work?

Without boring you to death with minor details (you can read the whole thing here), the Act establishes a risk-based approach to regulating AI: it categorizes AI systems based on levels of perceived risk, ranging from Unacceptable to Minimal.

At the top of the pyramid, AI systems that present an "unacceptable" risk are outright prohibited. As for how 'unacceptable' is defined, Article 5 of the Act outlines eight use cases, most of which relate to manipulation with the intent to harm, the use of biometric data, and racial profiling.

At the next level, the much longer Title III outlines a variety of "high-risk" use cases. While we can't cover them all here, the most obvious ones relate to critical infrastructure, educational systems, law enforcement, and more.

Next, the legislation doesn't include as many examples or direct discussions of apps that would fall into the Limited Risk category. Instead, it emphasizes the need for transparency, mainly as it relates to labeling AI-generated content (including deepfakes), making users aware when they're interacting with an AI instead of a human, and other situations where someone might behave differently (or believe something different) based on the AI's output.

Last, the Act does not regulate 'Minimal Risk' AI apps whatsoever.
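If it helps to see the pyramid in code, here's a minimal sketch of the four tiers in Python. The tier names and their rough consequences come from the Act itself, but the example use cases and the classify() helper are purely our own illustration - the Act's real classification rules are far more detailed and context-dependent.

from enum import Enum

class RiskTier(Enum):
    """The four risk tiers established by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"            # Article 5
    HIGH = "allowed, with strict obligations"       # Title III
    LIMITED = "allowed, with transparency duties"   # e.g., labeling AI content
    MINIMAL = "not regulated by the Act"

# Hypothetical examples for illustration only; real classification
# under the Act is far more nuanced.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI-based resume screening": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case's tier, defaulting to Minimal (unregulated)."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

for case in EXAMPLE_USE_CASES:
    tier = classify(case)
    print(f"{case}: {tier.name} ({tier.value})")

Note the default: anything that doesn't land in a higher tier falls into Minimal Risk, which mirrors how the Act itself works - whatever it doesn't explicitly capture is left unregulated.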
Interestingly, the regulation claims most AI apps currently being used in Europe fall into the Minimal Risk category. While European governments are known for over-regulating to the point of killing innovation, it appears they're hip to the reality that most consumer-facing AI applications - everything from text generators to XXX deep fake apps - pose "minimal risk" to society as a whole. Instead, the legislation is heavily focused on preventing crimes, pre-empting disasters, and ensuring AI is not used to further inequality.

In our opinion, this is a great step in the right direction. But it's not perfect. While the Act makes it clear governments cannot use AI to spy on their citizens, it includes a variety of broadly worded loopholes, mainly relating to the use of facial recognition to identify criminals, suspected criminals, terrorists, or anyone authorities believe is about to commit a crime. Which, per Minority Report, is a terrifying and highly dystopian possibility.

As we saw with the Patriot Act in the US, governments can abuse these loopholes to spy on innocent citizens without a warrant or court mandate - and do so without any repercussions. Admittedly, the European Union isn't known for abusing power to the same degree the United States is. But make no mistake about it: if a more aggressive country like the US, Israel, or Russia adopts similar legislation, it's safe to assume it will use AI to spy on its citizens.

Because of that, if privacy is an issue you care about, we highly recommend you stay up to date on AI regulations in your home country. And not just stay up to date, but support organizations that protect citizens' rights and contact the relevant politicians to voice your concerns.

💡Wrap Up: While libertarians and techno-optimists believe AI should be open source and allowed to flourish on its own, there are a variety of use cases where - if the general public understood the risks - they would probably agree on the need for government regulation. At the same time, the EU AI Act contains loopholes European member countries can exploit to spy on innocent civilians. Because of that, the implementation of this legislation - which is far from perfect - should be monitored closely.

🤔Thought-Provoking Question: Given how many open-source LLMs are available on the web, do you think it's even possible for governments to regulate such an advanced and rapidly growing technology?

Interesting Tool ⚙️: Want to build a No Code AI app, but don't know how? While its service is more geared towards people with a bit of developer experience, MindStudio also offers No Code resources for building AI apps without writing a single line of code.