Reposted from Taylor English Insights

EU Passes World's First Comprehensive AI Law

In mid-March 2024, the European Parliament voted to approve a bloc-wide AI law whose provisions will take effect in stages over the next six months to three years. The new law is expected to influence AI regulation in other countries as they catch up to the EU, much as the EU's privacy law, the GDPR, set the template for consumer privacy laws in other jurisdictions. The AI law is built on a framework of ranked risk: the higher the risk an AI system poses to humans, the more heavily it will be regulated. Some uses of AI will be prohibited entirely as “unacceptable” risks, such as certain uses of AI for “predictive policing” and the use of emotion-recognition systems in the workplace.

Why It Matters

Most companies implementing AI will not be subject to the law's most onerous requirements, which include internal audit-like functions such as impact/risk assessments and risk mitigation. Many companies will, however, have to comply with the rules that apply to lower-risk AI systems, including disclosing that AI is in use and identifying the sources used to train it.

The EU's Act is largely consistent with the few instances of AI regulation we have seen in the US, and it is reasonable to expect that the EU's rules will bleed into other jurisdictions as AI becomes more heavily regulated. Fortunately, the specific tasks required for AI compliance will feel familiar to companies with up-to-date privacy programs: taking an internal inventory of what systems are in use, documenting how they work (what data they ingest and what output they produce), assessing the risk those systems can pose to humans, designing for minimal impact where possible, and being transparent with individuals about how and why the systems are used. Now is the time to understand the requirements and begin thinking about how to apply them.

The riskier an AI application, the more scrutiny it faces. The vast majority of AI systems, such as content recommendation systems or spam filters, are expected to be low risk; for those, companies can choose to follow voluntary requirements and codes of conduct. High-risk uses, such as AI in medical devices or in critical infrastructure like water or electrical networks, face tougher requirements, including using high-quality training data and providing clear information to users.
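For readers who maintain an internal inventory of AI systems, here is a minimal, hypothetical sketch in Python of how systems might be mapped to the Act's risk tiers and to simplified example obligations. The tier names follow the framework described above, but the `OBLIGATIONS` mapping and the `AISystem` record are illustrative assumptions for discussion purposes only, not a statement of the Act's actual requirements or legal advice.

```python
# Illustrative sketch only: a toy internal register mapping AI systems to
# EU AI Act-style risk tiers. Tier names follow the Act's framework as
# described above; the obligation lists are simplified assumptions.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"  # e.g., emotion recognition in the workplace
    HIGH = "high"                # e.g., medical devices, critical infrastructure
    LIMITED = "limited"          # transparency duties, e.g., disclosing AI use
    MINIMAL = "minimal"          # e.g., spam filters; voluntary codes of conduct


# Simplified example obligations per tier (assumed for illustration).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["impact/risk assessment", "high-quality training data",
                    "clear information to users", "risk mitigation"],
    RiskTier.LIMITED: ["disclose AI use", "identify training data sources"],
    RiskTier.MINIMAL: ["voluntary code of conduct"],
}


@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier


def compliance_tasks(system: AISystem) -> list[str]:
    """Return the example obligations attached to a system's risk tier."""
    return OBLIGATIONS[system.tier]


if __name__ == "__main__":
    inventory = [
        AISystem("spam-filter", "email filtering", RiskTier.MINIMAL),
        AISystem("triage-model", "medical device component", RiskTier.HIGH),
    ]
    for s in inventory:
        print(f"{s.name}: {', '.join(compliance_tasks(s))}")
```

Running the sketch prints each system alongside its tier's example obligations, which mirrors the kind of inventory-and-assess exercise described above.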
