Putting the Brakes on the AI “Runaway Train” Myth

The sensational media stories following the release of OpenAI's GPT-4 continue, but the theme that it is about to plunge the world into the "Singularity" (or already has) is being quietly undercut by efforts in the EU and the US to regulate AI in general.

Many popular and business media outlets have recently gone overboard in predicting the end of civilization at the hands of this useful but humble chatbot. While it's fun to scare ourselves with spooky stories from time to time, this deluge is starting to give AI in general an undeserved bad rap. A useful antidote is to look at the slew of laws and regulations, some already on the books and some still pending, that can assure the sober-minded that responsible people are aware of the dangers and are acting on them.

The European Union’s Take

The EU can claim to be the world leader in anticipating the societal problems that may occur when technology outstrips legal systems. Perhaps because Europe experienced the dangers of totalitarianism leading to war within living memory of some of its citizens, its people are more sensitive to certain kinds of "what-ifs," as illustrated by the Charter of Fundamental Rights of the European Union. The 2016 General Data Protection Regulation (GDPR), for example, provides privacy protections against exploitation of personal data by Big Anything. Its equivalent will likely be a long time coming in the US, where the GDPR is sometimes viewed as a restraint on advertisers' First Amendment rights to make unlimited sales offers. Still, the direction the EU is taking is one that others will likely follow eventually.

The EU’s policy statement on liability rules for AI outlines an effort to legislate safeguards against unbridled AI apps. It starts out by postulating four categories of AI application risk.

Unacceptable risk is the highest category, covering apps the EU will prohibit completely. Examples of this risk type are children's toys with voice assistance (which might suggest dangerous behavior) and social scoring by governments (based on credit ratings, comments by acquaintances, or other factors). Although not explicitly mentioned yet, presumably the category would also cover other practices considered dangerous to society at large.

High risk is the next most severe classification. Such apps must be strictly controlled via a combination of human oversight, systems of risk assessment and mitigation, high standards for app training data sets, traceability of results via activity logging, detailed documentation for users and oversight authorities, and overall requirements for accuracy, security, and robustness. App types in this category would include infrastructures that could affect the life and health of users (e.g., self-driving cars or medical lifting devices), product-safety components (e.g., robotic surgical systems), apps affecting worker employment or management (e.g., resume-sorting or exam-scoring apps), significant private or public services (e.g., credit scoring), border-control management (e.g., evaluation of travel docs), and administration of justice or democratic processes (e.g., evaluating evidence, voting, applying the law to a set of facts, or using remote biometric identification methods for law enforcement). Biometrics would still be permitted (with court approval) in cases such as finding lost children, identifying a specific perpetrator as part of a trial, or responding to a terrorist threat.
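
To make the traceability requirement concrete, here's a minimal sketch in Python of the kind of activity logging the proposal contemplates. The model interface and the logged fields are this article's assumptions for illustration; the draft law specifies the obligation, not an implementation.

# Minimal activity-logging wrapper for a prediction service.
# Hypothetical sketch: "model" is any object with a predict() method;
# the logged fields are illustrative, not mandated by the EU proposal.
import json
import logging
import time
import uuid

logging.basicConfig(filename="ai_activity.log", level=logging.INFO)

def logged_predict(model, features: dict, operator_id: str):
    """Run a prediction and record enough context to reconstruct it later."""
    request_id = str(uuid.uuid4())
    started = time.time()
    result = model.predict(features)
    logging.info(json.dumps({
        "request_id": request_id,     # unique handle for later audits
        "timestamp": started,         # when the decision was made
        "operator_id": operator_id,   # the human overseeing the system
        "inputs": features,           # what the model saw
        "output": result,             # what the model decided
        "latency_s": round(time.time() - started, 4),
    }))
    return result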

Limited risk would include generative apps like GPT-4, which would face only transparency requirements: indicating to users that they are working with a machine, labeling resulting content as AI-generated, making public summaries of the app's training data available, and building in restrictions against producing illegal content, such as material copyrighted by others.
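
The transparency obligations are similarly implementation-agnostic. As a rough sketch (hypothetical names throughout), a chatbot front end might satisfy the disclosure and labeling ideas with nothing more exotic than this:

# Hypothetical transparency wrapper: discloses the machine up front and
# labels every response as AI-generated, per the "limited risk" idea.
AI_DISCLOSURE = "You are chatting with an automated AI system, not a person."

def respond(chatbot, user_message: str, first_turn: bool) -> str:
    reply = chatbot.generate(user_message)   # assumed generate() API
    label = "[AI-generated] "
    if first_turn:
        return f"{AI_DISCLOSURE}\n{label}{reply}"
    return label + reply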

Minimal risk apps would be unrestricted and would include such examples as spam filters and video games.

The EU moved on to an Artificial Intelligence Act proposal in 2021 that, to start, would impose "a regulatory framework for high-risk AI systems only" while prescribing a voluntary "code of conduct" for producers of AI apps in the other risk categories.

"The requirements will concern data, documentation and traceability, provision of information and transparency, human oversight, and robustness and accuracy" for high-risk apps, the proposal states. The proposal goes on to say this approach was considered the best balance between violations of fundamental rights, people’s safety, assuring that deployment and enforcement costs would remain reasonable, and giving member states "no reason to take unilateral actions that would fragment the single market" (in Europe for AI apps). The proposal also points out that "an emerging patchwork of potentially divergent national rules will hamper the seamless circulation of products and services related to AI systems" and that "national approaches in addressing the problems will only create additional legal uncertainty and barriers and will slow market uptake of AI." The same logic implies—probably correctly—that the most effective parallel effort in the US will have to be federal laws or regulations, rather than leaving regulation to individual states.

AI systems that are safety components of products "will be integrated into the existing sectoral safety legislation." High-risk AI systems integrated into other products (e.g., machinery, medical devices, toys) "will be checked as part of the existing conformity assessment procedures" under separate legislation relevant to the individual product types.

This general position was agreed to by member states in 2021, but the legislation was then shelved until June of this year, when new requirements were added: apps like ChatGPT would have to disclose when content is AI-generated, image-related apps would have to include a method of distinguishing "deep-fake" images from real ones, and all apps would have to provide safeguards against illegal materials (i.e., content copyrighted by others). Perhaps most significantly, social media platforms with more than 45 million users that use AI systems to influence election outcomes would be classified as "high-risk" systems, a definition that explicitly sweeps in Meta and Twitter/X.

Far from heeding the voices of those who think a moratorium on AI development should be observed until new standards are worked out, Thierry Breton, the EU's current Commissioner for the Internal Market, said in June that "AI raises a lot of questions—socially, ethically, economically. But now is not the time to hit any 'pause button.' On the contrary, it is about acting fast and taking responsibility."

The proposed law is currently being debated by politicians and industry alike, but the EU hopes to pass the legislation by year's end. Afterward, there will be a grace period of approximately two years before the rules take full effect.

Some Laws Already on the Books

Legally speaking, the US AI market isn't quite the Wild West some might have you believe. A 2020 blog post on the US Federal Trade Commission (FTC) website points to two laws in force since the 1970s that have a bearing on AI. The Fair Credit Reporting Act (1970) comes into play when algorithms are used to deny people credit, employment, insurance, housing, or similar benefits. The Equal Credit Opportunity Act (1974) "prohibits discrimination on the basis of race, color, religion, national origin, sex, marital status, age, receipt of public assistance, or good faith exercise of any rights," a challenge for an AI that might have been trained with data that isn't properly curated to filter out potential bias in any of these areas. The post goes on to point out that any deceptive use of "doppelgangers" (e.g., fake dating profiles, phony followers, deepfakes, AI chatbots), secretly collected or sensitive data for AI training, or failure to consider outcomes as well as inputs could result in a company facing an FTC enforcement action. In addition, Section 5 of the FTC Act (1914) itself bans unfair or deceptive practices such as, the post notes, "sale or use of racially based algorithms."
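
The curation challenge is easy to illustrate. One crude but common first screen before training a credit model is to compare outcome rates across a protected attribute in the historical data. The sketch below uses record fields invented for the example and is no substitute for a real fairness audit.

# Crude disparate-impact screen on training data (illustrative only).
# records: list of dicts with a protected attribute and a 0/1 outcome.
from collections import defaultdict

def approval_rates(records, attribute="sex", outcome="approved"):
    totals, approvals = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[attribute]
        totals[group] += 1
        approvals[group] += row[outcome]
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates([
    {"sex": "F", "approved": 1}, {"sex": "F", "approved": 0},
    {"sex": "M", "approved": 1}, {"sex": "M", "approved": 1},
])
print(rates)  # large gaps between groups flag data worth investigating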

The National Artificial Intelligence Initiative Act (NAIIA) of 2020, passed as part of the 2021 National Defense Authorization Act, directs the President, the interagency Select Committee on AI, and agency heads to support interdisciplinary AI research programs, AI R&D, and AI education and workforce-training programs; establishes seven governmental AI research institutes; and directs the Office of Management and Budget (OMB) to issue guidance on regulation of AI in the private sector.

Although the NAIIA doesn't define AI app risk tiers as the EU legislation does, the mandate for OMB guidance does include the admonishment that "for higher risk AI applications, agencies should consider, for example, the effect on individuals, the environments in which the applications will be deployed, the necessity or availability of redundant or back-up systems, the system architecture or capability control methods available when an AI application makes an error or fails, and how those errors and failures can be detected and remediated." Although vague, such governmental strictures mean any AI app producer in the US already needs to be aware of the potential impacts of its systems.
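
In practice, the OMB language about back-up systems and capability control translates into patterns like the following sketch: detect when the model fails or is unsure, and route those cases to a human or a simpler fallback. The model interface and the threshold are assumptions for illustration.

# Illustrative capability-control pattern: detect model errors or
# low-confidence outputs and fall back to human review.
CONFIDENCE_FLOOR = 0.85   # assumed threshold; tune per application

def guarded_decision(model, features, human_review_queue):
    try:
        label, confidence = model.predict_with_confidence(features)
    except Exception as err:             # the model itself failed
        human_review_queue.append((features, f"model error: {err}"))
        return None
    if confidence < CONFIDENCE_FLOOR:    # the model is unsure
        human_review_queue.append((features, f"low confidence {confidence:.2f}"))
        return None
    return label                         # confident automated decision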

As directed by the NAIIA, the National Institute of Standards and Technology (NIST) has issued the Artificial Intelligence Risk Management Framework, intended to be a "living document" that provides information on assessing and managing risk in AI apps. The publication's goals are "to equip organizations and individuals with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time." It further states that the framework is intended to be practical and will be updated "as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms."
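
The framework organizes its guidance around four functions: Govern, Map, Measure, and Manage. As a loose illustration of what operationalizing it as a "living document" might look like inside an organization, here is a sketch of a minimal risk-register entry; the fields are this article's invention, not NIST's.

# Hypothetical "living" risk-register entry, loosely organized around
# the AI RMF's four functions; field choices are illustrative, not NIST's.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    system: str      # which AI app this concerns
    govern: str      # policy and accountability owner
    map: str         # context: who is affected, and how
    measure: str     # metric used to track the risk
    manage: str      # mitigation or fallback in place
    history: list = field(default_factory=list)  # revisions over time

entry = RiskEntry(
    system="resume screener",
    govern="HR compliance team",
    map="job applicants; risk of biased ranking",
    measure="selection-rate gap across protected groups",
    manage="human review of all rejections",
)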

In May, the US and EU announced joint work on a voluntary code of conduct for AI applications, meant to serve until the EU law takes full effect. Then, on July 21, the Biden administration announced an agreement with seven major companies to ensure their AI products are safe before they're released, along with some unspecified third-party oversight. The agreement included a pledge by the companies to start including watermarks on all AI-generated images.
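
The pledge doesn't prescribe a watermarking technique. One lightweight possibility, sketched here purely for illustration, is to stamp provenance fields into the image file's metadata with Pillow:

# One simple provenance "watermark": embed text chunks in PNG metadata.
# Sketch only; the July 2023 pledge does not prescribe a technique.
from PIL import Image, PngImagePlugin

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str):
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text("ai-generated", "true")   # machine-readable flag
    meta.add_text("generator", generator)   # which system produced it
    img.save(dst_path, "PNG", pnginfo=meta)

# Reading it back: Image.open(dst_path).text yields the embedded fields.

Metadata of this kind is trivially stripped, of course, which is why more robust schemes embed the mark in the pixels themselves; what the signatories will actually ship remains to be seen.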

Progress on More Local Fronts

US states and municipalities have also already passed some laws pertaining to AI.

Alabama passed a law in April 2021 that sets up a council on AI to advise the governor and legislature. Another Alabama law, passed in 2022, prohibits the use of facial recognition technology as the sole evidence for an arrest.

Colorado passed a law in 2022 that sets up a task force to investigate facial recognition technologies, restricts their use by state and local bodies, and prohibits public schools from entering into contracts for facial recognition services. The city of Baltimore and the city of Bellingham, WA, have passed ordinances similarly restricting facial recognition use by their police departments.

Illinois has passed an act requiring employers that make hiring decisions solely based on AI evaluations of video interviews to report those instances to the state Department of Commerce. New York City has passed a similar local ordinance.

In May 2022, Vermont created a division of its Agency of Digital Services to investigate all uses of AI by the state government.

Keeping Eyeballs on AI

Is all this activity absolute insurance against an eventual takeover of humanity by AI? No, but it illustrates that many people are already watching and listening for trouble. It shows that there's no reason for outright alarm. While it's true that AI is a young enough technology that humans haven't even scratched the surface of finding ways to misuse it, the moratoriums and outright bans some are calling for are misguided. A sharpened pencil thrust into a vulnerable part of the human body can kill, but saving the world from that possibility with laws against all writing utensils would not be a reasonable response. Like most of life's dangers, AI apps can be a safe and useful tool set if rational people continue to give careful ongoing thought to their ramifications.
