
European Law Establishes a Blueprint for AI Control Legislation


On March 13, 2024, the European Union (EU) adopted a new law governing the creation and marketing of AI applications throughout the nations within its jurisdiction. The law is likely to be a model for future legislation in other countries, and a summary review shows both its promise and the need for additional rules.

Belatedly in the opinion of some, governments are starting to realize that legislation and regulations will likely be necessary to prevent artificial intelligence (AI) applications from being used unscrupulously or carelessly. Although the legal evaluation of AI apps is still in its early stages, periodic review of laws and rules regarding AI use will likely be a permanent part of the landscape for this class of applications as they evolve in the coming years. Not the least of the considerations in this process is the prospect of legislators and citizens alike looking beyond generative AI specifically to view AI apps as a genre.

The European Union Artificial Intelligence Act (EUAIA), originally proposed by the European Commission in April 2021, passed its plenary vote on March 13, 2024, and became law. The legislation is the result of a process that began in 2018, when the European Commission formed the High-Level Expert Group on Artificial Intelligence (HLEG), a committee of 52 experts from a variety of backgrounds tasked with implementing the EU Communication on Artificial Intelligence, a list of recommendations on developing an AI implementation strategy. The recommendations were developed by the European AI Alliance, an online forum of more than 4,000 experts and stakeholders from "academia, business and industry, civil society, EU citizens and policymakers," according to the European Commission. In addition, two assemblies of forum members were held to discuss ethics guidelines and multiple other community documents. HLEG produced several documents itself, including the Assessment List for Trustworthy AI (ALTAI), a self-assessment checklist for AI app producers based on the ethics guidelines worked out by the AI Alliance. In April 2021, based on this aggregated work, the European Commission presented the "Proposal for a Regulation Laying Down Harmonized Rules on Artificial Intelligence," which in its finished form is the EUAIA itself.

Although the law is now in effect, many of its provisions take effect only after a delay intended to let companies producing AI apps adapt to the new legal requirements. And although the new law directly affects only EU nations, it indirectly affects any enterprise worldwide that produces AI apps intended for sale anywhere in the EU.

AI's Risk Categories

The staged rollout of the law's provisions is based on five levels of risk into which the law categorizes all AI apps. Most of these categories were covered in an earlier article, but a new category has since been added, so a review of the full category list is in order.

At the top are "unacceptable risk" apps, which are AI apps that pose the greatest potential threat to human beings if misused. These include biometric tools that identify and categorize human beings (e.g., facial-recognition systems), apps that could result in behavioral manipulation of vulnerable persons (e.g., voice-activated toys that suggest actions to children), and applications involving social scoring (i.e., classification of people based on race, religion, assorted other personal characteristics, behavior, or socioeconomic status), although the law makes exceptions for law enforcement purposes. Apps in this category are banned outright, but a six-month moratorium on enforcement means such apps won't become illegal until September 2024.

The second most severe category is "high risk," which means apps that could create significant risks to any human being's health, safety, or fundamental rights. This group is further divided into two subcategories: 1) systems that are covered by existing EU safety legislation (e.g., medical devices and means of transportation) and 2) a catchall classification for apps charged with critical infrastructure operation and management, education and vocational training, the handling of worker hiring and self-employment opportunities, access to "essential" public and private services and benefits, law enforcement use, and national border control and related activities. Apps in this category must be registered in a special database and evaluated for factors such as transparency, human control, and security throughout their entire lifecycle. The enforcement moratorium on this category doesn't expire until March 2027.

Next most severe is a category called "General-Purpose AI (GPAI)." This is a new category, added during discussions of the initial four categories proposed in an earlier EU policy statement on AI liability rules. Apps in this category include chat apps like ChatGPT and are defined as apps trained with some degree of self-supervision that can carry out a wide range of functions and can be integrated into downstream systems or applications. Here again, there are two subcategories: 1) GPAI models trained using a cumulative total of more than 10 to the 25th power floating-point operations (FLOPs) and 2) GPAI models designated by the EU AI Office as posing a "systemic risk." Apps in these subcategories must carry out a model-evaluation process. GPAI apps meeting neither of these criteria must instead meet transparency requirements regarding how they are to be used and how they operate. These requirements won't be enforced until April 2025.
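For a sense of scale, note that the threshold refers to total training compute, not hardware speed. The Python sketch below estimates a model's cumulative training FLOPs using the widely cited 6 × parameters × training-tokens rule of thumb; that heuristic, and the example model size, are assumptions for illustration, since the EUAIA specifies only the 10^25 figure, not an estimation method.

```python
# Rough estimate of cumulative training compute for a transformer model,
# using the common approximation: FLOPs ~= 6 * parameters * training tokens.
# The 6*N*D rule of thumb is an illustrative assumption; the EUAIA sets only
# the threshold (10^25 FLOPs), not an estimation method.

EUAIA_GPAI_THRESHOLD_FLOPS = 1e25  # the Act's systemic-risk compute threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training FLOPs with the 6*N*D heuristic."""
    return 6.0 * parameters * training_tokens

def exceeds_euaia_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the estimated training compute crosses the 10^25 FLOPs line."""
    return estimate_training_flops(parameters, training_tokens) > EUAIA_GPAI_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 2 trillion tokens.
    flops = estimate_training_flops(70e9, 2e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~8.4e23, below the line
    print("Systemic-risk threshold crossed:", exceeds_euaia_threshold(70e9, 2e12))
```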

"Limited risk" is the next category and includes apps that help users generate or alter videos, still images, and sound recordings. These apps must be transparent to users in the sense that they must state clearly that they are AI apps or, in the case of images, must clearly show they are AI-generated. Among other problems, this is one potential remedy for "deepfake" images that have already started appearing as a means of confusing people about what's real and what isn't. This labeling is considered part of a "code of conduct" to which AI vendors must begin adhering no later than the end of 2024.

The final category is "minimal risk." These are AI apps such as spam filters or video games, and they are not covered by the EUAIA, except that individual member states are prohibited from regulating them outside of the EUAIA, which supersedes any existing member-state laws that do regulate these kinds of AI apps.
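Pulling the preceding paragraphs together, the staged rollout can be summarized as a simple mapping from risk category to enforcement milestone (dates as described above):

```python
# Staged enforcement timeline of the EUAIA by risk category, summarizing
# the rollout dates described in the preceding paragraphs.
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable risk"  # banned outright
    HIGH = "high risk"                  # registration, lifecycle evaluation
    GPAI = "general-purpose AI"         # transparency / systemic-risk duties
    LIMITED = "limited risk"            # disclosure and labeling duties
    MINIMAL = "minimal risk"            # outside the EUAIA's scope

ENFORCEMENT_MILESTONES = {
    RiskCategory.UNACCEPTABLE: "September 2024 (ban takes effect)",
    RiskCategory.HIGH: "March 2027 (moratorium expires)",
    RiskCategory.GPAI: "April 2025 (transparency rules enforced)",
    RiskCategory.LIMITED: "end of 2024 (code-of-conduct labeling)",
    RiskCategory.MINIMAL: "not regulated by the EUAIA",
}

for category, milestone in ENFORCEMENT_MILESTONES.items():
    print(f"{category.value}: {milestone}")
```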

Vendor and National Obligations Under the EUAIA

The requirements the EUAIA places on AI app vendors once the law is fully in force include publicly classifying all "high-risk" apps as such and registering them in the database the law mandates; formulating and building into AI products risk-management measures that allay identified risks; establishing means of confirming that AI app training used high-quality training data; providing assurance that training datasets are unbiased and relevant to app functions; and extensively documenting development steps. This documentation includes proof of compliance with the requirements pertinent to the app's risk categorization, the app's general capabilities and characteristics, and the way in which the risk-management features were formulated and incorporated. The documentation must be periodically updated to reflect changes to the product or to risk-mitigation requirements as time passes and loopholes to exploit are inevitably discovered. The app must also include user tools to help prevent or minimize risks, with clear instructions on how to use them; documented cybersecurity measures to prevent tampering with the app itself or unauthorized access to the data it generates; and a similarly well-documented quality-management system. The vendor must also make a "declaration of conformity" certifying that it has met all the requirements (which must be updated for any new versions of the app for the next 10 years), conform to a product-labeling requirement, and establish a means of reporting "serious incidents" (in which security or risk safeguards failed) to the market surveillance authorities of each member state affected by the incident within 15 days of the provider becoming aware of it. Finally, the provider must submit a statement specifying how the app might affect EU citizens under the Charter of Fundamental Rights of the European Union.
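To make the incident-reporting obligation concrete, here is a hypothetical Python record for a "serious incident" report; the field names are assumptions for illustration, since the law defines the duty and the 15-day deadline rather than a data schema.

```python
# Hypothetical record for the EUAIA's "serious incident" reporting duty.
# Field names are illustrative assumptions; the Act sets the 15-day deadline
# and the duty to notify affected member states, not a data schema.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class SeriousIncidentReport:
    app_name: str
    provider: str
    date_provider_became_aware: date
    description: str                                   # what safeguard failed and how
    affected_member_states: list[str] = field(default_factory=list)

    def reporting_deadline(self) -> date:
        """The Act requires notification within 15 days of awareness."""
        return self.date_provider_became_aware + timedelta(days=15)
```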

For GPAI apps, only some of these requirements exist. Providers must notify the European Commission if a version of their AI model exceeds the FLOPs threshold, must set up a process to "constantly assess and mitigate the risks they pose and ensure cybersecurity protection," must track and report incidents of violation of EU citizens' fundamental rights and implement corrective measures, and must maintain "codes of practice and presumptions of conformity" to demonstrate compliance with the obligations set under the legislation, as a European Parliamentary Research Service summary puts it. A specific code of practice isn't spelled out in the EUAIA, although the European Commission reserves the right to formulate and legally codify one sometime in the future. The European Commission also reserves the right to reclassify a GPAI app as "high risk" without further amending the EUAIA.

For "limited risk" apps, the requirements are even fewer. "Providers of AI systems that generate large quantities of synthetic content must implement sufficiently reliable, effective, interoperable, effective and robust techniques" to indicate the "output has been generated or manipulated by an AI system and not a human." These would include image watermarks or other prominent declarations about an app's output. In addition, employers using AI systems in the workplace "must inform the workers and their representatives" about such systems being in use. Otherwise, apps here only need to conform to the EU's GDPR data privacy law.

To support the required testing, member states are asked to create at least one national AI regulatory sandbox apiece to "facilitate the development and testing" of new AI systems. Each sandbox must enable the training, testing, and validation of innovative AI systems before their placement on the market or entry into service, including, where appropriate, testing under real-world conditions. High-risk systems can be tested in real-world conditions without participating in a national regulatory sandbox if they respect certain guarantees and conditions, those cited including obtaining specific consent and the approval of the respective national "market surveillance" authorities.

Criticisms of the EUAIA

As groundbreaking as its intent (taking a risk-based approach to categorizing AI systems and spelling out bundles of producer requirements) might appear, the EUAIA must be frankly assessed as merely a first cut at tackling the problems AI poses to commerce and society. Criticisms acknowledged by the EU itself include that the requirements may be too burdensome for AI software producers; that the act relies too heavily on vendors' self-assessment of which risk category their products belong in; and that, unlike under the EU's General Data Protection Regulation (GDPR), individual citizens who feel their rights have been violated by an AI system have no means of filing direct complaints, because the aggrieved party is considered to be the EU itself. There is also considerable disagreement over what should really be defined as biometric data and what constitutes "high-quality" training data. Questions of whose rights have been violated, and of how, for example, someone's thoughts and opinions might have been affected by a "deepfake," are hard to quantify and are often colored by the perceptions of those assessing the impact. And leaving questions such as how much testing is enough, or how dangerous an incident must be before a product is banned from the market, to rulings by various international standards organizations raises the further question of whether these essentially non-democratic institutions should be making such binding decisions. These and other questions will no doubt be tackled in future revisions to the EUAIA.

Progress on similar legislation in the U.S. seems far off. President Biden issued an executive order on October 30, 2023, on the use of artificial intelligence, but it mainly calls for policy documents from various executive departments on ensuring safe and secure development and use of AI. Most of these documents won't appear until later in 2024, and none of them will be binding if a new U.S. President takes office next January.

Twenty companies, including Adobe, Google, Meta, Microsoft, OpenAI, and TikTok, issued The Accord to Combat Deceptive Use of AI in 2024 Elections, but it's primarily a list of goals. The only action commitment is to "publish the policies that explain how we will address such content, providing updates on provenance research, or informing the public about other actions taken in line with these commitments," an unenforceable promise that, if broken, will get only as much attention as commercial news media happen to give it.

A news story published last October by the R Street Institute mentions nearly 20 federal bills introduced to provide various means of regulating AI in the U.S. and notes that, as of that date, nearly 200 bills regulating AI had been introduced in state legislatures in 2023 alone. However, each of the federal bills takes a different approach to regulation or else simply sets up an agency or committee to make recommendations. Barring some landmark incident that brings AI into the public consciousness as something more than a toy or a species-killer, and given the stubborn polarization of Congress on most issues, effective U.S. legislation on AI could well be years away. Lobbying resistance to a national data privacy law along the lines of the GDPR makes that timeframe even more likely.

In that light, it's all the more impressive that the EU has managed to pass legislation that takes a significant first step toward comprehensively addressing AI as a software genre. The EUAIA is at least a useful blueprint, and likely the ground floor, for a significant body of law that will be built over the coming decades, a topic we will all need to keep an eye on.

John Ghrist

John Ghrist has been a journalist, programmer, and systems manager in the computer industry since 1982. He has covered the market for IBM i servers and their predecessor platforms for more than a quarter century and has attended more than 25 COMMON conferences. A former editor-in-chief with Defense Computing and a senior editor with SystemiNEWS, John has written and edited hundreds of articles and blogs for more than a dozen print and electronic publications.
