On March 13, 2024, the European Union (EU) adopted a new law governing the development and marketing of AI applications throughout its member states. Likely to serve as a model for future legislation in other countries, a summary review of this law shows both its promise and the need for additional rules.
Belatedly, in the opinion of some, governments are starting to realize that legislation and regulation will likely be necessary to keep artificial intelligence (AI) applications from being used unscrupulously or carelessly. Although the legal evaluation of AI apps is still in its early stages, periodic review of the laws and rules governing AI use will likely be a permanent part of the landscape as this class of applications evolves in the coming years. Not the least of the considerations in this process is the prospect of both legislators and citizens looking beyond generative AI apps to view AI as a genre of software.
The European Union Artificial Intelligence Act (EUAIA), originally proposed by the European Commission in April 2021, passed its plenary vote on March 13, 2024, and became law. The legislation is the result of a process that began in 2018, when the European Commission formed the High-Level Expert Group on Artificial Intelligence (HLEG), a committee of 52 experts from a variety of backgrounds tasked with implementing the EU Communication on Artificial Intelligence, a list of recommendations on developing an AI implementation strategy. The recommendations were developed by the European AI Alliance, an online forum of more than 4,000 experts and stakeholders from "academia, business and industry, civil society, EU citizens and policymakers," according to the European Commission. In addition, there were two assemblies of forum members to discuss ethics guidelines and multiple other community documents. HLEG produced several documents itself, including the Assessment List for Trustworthy AI (ALTAI), a self-assessment checklist for AI app producers based on the ethics guidelines worked out by the AI Alliance. In April 2021, based on this aggregated work, the European Commission presented the "Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence," which in its finished form is the EUAIA itself.
Although the law is now in effect, many of its provisions carry delayed implementation dates to give companies producing AI apps time to adapt to the new legal requirements. And although the new law directly binds only EU nations, it indirectly affects any enterprise worldwide that produces AI apps intended for sale anywhere in the EU.
AI's Risk Categories
The staggered implementation dates are based on five levels of risk into which the law categorizes all AI apps. Although most of these categories were covered in an earlier article, a new category has been added, so a review of the updated category list is in order.
At the top are "unacceptable risk" apps, which are AI apps that pose the greatest potential threat to human beings if misused. These include biometric tools that identify and categorize human beings (e.g., facial-recognition systems), apps that could result in behavioral manipulation of vulnerable persons (e.g., voice-activated toys that suggest actions to children), and applications involving social scoring (i.e., classification of people based on race, religion, assorted other personal characteristics, behavior, or socioeconomic status), although the law makes exceptions for law enforcement purposes. Apps in this category are banned outright, but a six-month moratorium on enforcement means existing apps in this category won't become illegal until September 2024.
The second most severe category is "high risk," meaning apps that could create significant risks to any human being's health, safety, or fundamental rights. This group is further divided into two subcategories: 1) systems that are covered by existing EU safety legislation (e.g., medical devices and means of transportation) and 2) a catchall classification for apps charged with critical infrastructure operation and management, education and vocational training, the handling of worker hiring and self-employment opportunities, access to "essential" public and private services and benefits, law enforcement use, and national border control and related activities. Apps in this category must be registered in a special database and evaluated for factors such as transparency, human control, and security throughout their entire lifecycle. The enforcement moratorium on this category doesn't expire until March 2027.
Next most severe is a category called "General-Purpose AI (GPAI)." This new category was added during the discussions of the initial four categories proposed in an earlier EU policy statement on AI liability rules. Apps in this category include chat apps like ChatGPT and are defined as models trained with some degree of self-supervision that can carry out a wide range of functions and be integrated into downstream systems or applications. Here again, there are two subcategories, both of which must undergo a model evaluation process: 1) GPAI models whose cumulative training compute exceeds 10 to the 25th power floating-point operations (FLOPs) and 2) GPAI models designated by the EU AI Office as posing a "systemic risk." GPAI apps meeting neither of these criteria must instead meet transparency requirements regarding how they are to be used and how they operate. These requirements won't be enforced until April 2025.
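To get a sense of where that compute threshold sits, a rough estimate of a model's cumulative training compute can be made from its parameter count and training-data size. The sketch below uses a common rule of thumb from the deep-learning scaling literature, roughly 6 floating-point operations per parameter per training token; both the rule of thumb and the model figures are illustrative assumptions on my part, not anything the EUAIA itself prescribes.

# Back-of-envelope check of a model's estimated training compute against
# the EUAIA's 10^25 FLOPs threshold for presumed systemic risk.
# The 6 * N * D approximation is a scaling-literature rule of thumb, not
# a method the Act prescribes; all figures below are hypothetical.

EU_SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * training_tokens

# Hypothetical model: 175 billion parameters trained on 2 trillion tokens.
flops = estimate_training_flops(175e9, 2e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~2.1e24
print("Exceeds EUAIA threshold:", flops > EU_SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False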
"Limited risk" is the next category and includes apps that help users generate or alter videos, still images, and sound recordings. These apps must be transparent to users in the sense that they must state clearly that they are AI apps or, in the case of images, must clearly show they are AI-generated. Among other problems, this is one potential remedy for "deepfake" images that have already started appearing as a means of confusing people about what's real and what isn't. This labeling is considered part of a "code of conduct" to which AI vendors must begin adhering no later than the end of 2024.
The final category is "minimal risk." These are AI apps such as spam filters or video games. They are not covered by the EUAIA, except that individual member states are prohibited from regulating them outside the EUAIA, which supersedes any existing member-state laws that do regulate these kinds of AI apps.
Vendor and National Obligations Under the EUAIA
Once the law is fully in force, the EUAIA requires AI app vendors to publicly classify all "high-risk" apps as such and register them in the database the law mandates. Vendors must formulate and build into their AI products risk-management measures that allay identified risks, establish means of confirming that AI app training used high-quality training data, provide assurance that training datasets are unbiased and relevant to app functions, and extensively document development steps. This documentation includes proof of compliance with the requirements pertinent to the app's risk categorization, the app's general capabilities and characteristics, and the way in which the risk-management features were formulated and incorporated. The documentation must be periodically updated to reflect changes to the product or to risk-mitigation requirements as time passes and loopholes are inevitably discovered. The app must also include user tools to help prevent or minimize risks, with clear instructions on how to use them; documented cybersecurity measures to prevent tampering with the app itself or unauthorized access to the data it generates; and a similarly well-documented quality-management system for the app. The vendor must also make a "declaration of conformity" certifying it has met all the requirements (which must be updated for any new version of the app for the next 10 years), conform to a product-labeling requirement, and establish a means of reporting "serious incidents" (in which security or risk safeguards failed) to the market surveillance authorities of each affected member state within 15 days of the provider becoming aware of the incident. Finally, the provider must submit a statement specifying how the app might affect EU citizens under the Charter of Fundamental Rights of the European Union.
For GPAI apps, only some of these requirements apply. Providers must notify the European Commission if a version of their AI model exceeds the FLOPs threshold, must set up a process to "constantly assess and mitigate the risks they pose and ensure cybersecurity protection," must track and report incidents of violation of EU citizens' fundamental rights and implement corrective measures, and must maintain "codes of practice and presumptions of conformity" to demonstrate compliance with the obligations set under the legislation, as the European Parliamentary Research Service summarizes them. Beyond that notation, the EUAIA doesn't specify a particular code of practice, although the European Commission reserves the right to formulate and legally codify one in the future. The Commission also reserves the right to reclassify a GPAI app as "high risk" without further amending the EUAIA.
For "limited risk" apps, the requirements are even fewer. "Providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, interoperable, effective and robust techniques" to indicate that the "output has been generated or manipulated by an AI system and not a human." These would include image watermarks or other prominent declarations about an app's output. In addition, employers using AI systems in the workplace "must inform the workers and their representatives" about such systems being in use. Otherwise, apps here need only conform to the EU's GDPR data privacy law.
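As an illustration of what such a disclosure technique might look like in practice, here is a minimal sketch in Python using the Pillow imaging library: it stamps a visible "AI-generated" label onto an image and records a machine-readable flag in the PNG metadata. The EUAIA doesn't mandate any particular technique, and a production system would more likely adopt a provenance standard such as C2PA; this is only a toy example.

# Minimal sketch of one possible AI-output disclosure technique:
# a visible label plus a machine-readable PNG metadata flag.
# Illustrative only; not a technique mandated by the EUAIA.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_ai_image(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    # Visible declaration stamped in the lower-left corner.
    draw = ImageDraw.Draw(img)
    draw.text((10, img.height - 20), "AI-generated", fill=(255, 255, 255))
    # Machine-readable flag stored as a PNG text chunk.
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    img.save(out_path, "PNG", pnginfo=meta)

def is_labeled(path: str) -> bool:
    # PNG text chunks are exposed via the .text attribute of a PNG image.
    return getattr(Image.open(path), "text", {}).get("ai_generated") == "true"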
To support the required testing, member states are asked to create at least one national AI regulatory sandbox apiece to "facilitate the development and testing" of new AI systems. Each sandbox must enable, where appropriate, the training, testing, and validation of innovative AI systems in real-world conditions before their placement on the market or entry into service. High-risk systems can be tested in real-world conditions without participating in a national regulatory sandbox if they respect certain guarantees and conditions; those cited include obtaining specific consent and the approval of the respective national "market surveillance" authorities.
Criticisms of the EUAIA
As groundbreaking as its intent (taking a risk-based approach to categorizing AI systems and spelling out bundles of producer requirements) might appear, the EUAIA must be frankly assessed as a first cut at tackling the problems AI poses to commerce and society. Criticisms acknowledged by the EU itself include that the requirements may be too burdensome for AI software producers; that the act relies too heavily on vendors' self-assessment of which risk category their products belong in; and that, unlike under the EU's General Data Protection Regulation (GDPR), individual citizens who feel their rights have been violated by an AI system have no means of filing direct complaints, because the aggrieved party is considered to be the EU itself. There's also much disagreement over what should really be defined as biometric data and what constitutes "high-quality" training data. Whose rights have been violated, and how someone's thoughts and opinions might have been affected by, say, a "deepfake," is hard to quantify and is often colored by the perceptions of those judging such impacts. And leaving questions such as how much testing is enough, or how dangerous an incident must be to ban a product from the market, to rulings by various international standards organizations raises the issue of letting these essentially non-democratic institutions make binding decisions. These and other questions will no doubt be tackled in future revisions to the EUAIA.
Progress on similar legislation in the U.S. seems far off. President Biden issued an executive order on the use of artificial intelligence on October 30, 2023, but it mainly calls for policy documents from various executive departments on ensuring the safe and secure development and use of AI. Most of those documents won't appear until later in 2024, and none of them will be binding if there is a change of U.S. president next January.
Twenty companies, including Adobe, Google, Meta, Microsoft, OpenAI, and TikTok, issued The Accord to Combat Deceptive Use of AI in 2024 Elections, but it's primarily a list of goals. The only action commitment is to "publish the policies that explain how we will address such content, providing updates on provenance research, or informing the public about other actions taken in line with these commitments," an unenforceable promise that will get only as much attention as commercial news media happen to give it when it goes unkept in particular incidents.
A news story published last October by the R Street Institute mentions nearly 20 federal bills introduced to regulate AI in the U.S. by various means and notes that, as of that date, nearly 200 AI-regulation bills had been introduced in state legislatures in 2023 alone. However, each of the federal bills takes a different approach to regulation or else simply sets up an agency or committee to make recommendations. Barring some landmark incident that brings AI into the public consciousness as something more than a toy or a species-killer, and given the stubborn polarization of Congress on most issues, effective U.S. legislation on AI could well be years away. The lobbying resistance that has stalled a national data privacy law along the lines of the GDPR makes that timeframe even more likely.
Against that backdrop, it's all the more impressive that the EU has managed to pass legislation that takes a significant first step toward comprehensively addressing AI as a software genre. The EUAIA is at least a useful blueprint, and likely the ground floor, for a significant body of law over the coming decades, a topic we will all need to keep an eye on.