Artificial Intelligence (AI) applications can raise ethical design concerns that don't usually arise with other kinds of applications. Part 1 provided an overview of some of those challenges. Part 2 shows how some governmental and industry thinkers and organizations are answering the call for guidelines.
Developing effective AI applications inevitably involves ethical concerns because the development process creates a system that's designed to act autonomously. When an autonomous system is forced to weigh conflicting goals, the AI may not be equipped to make such choices on its own because they may involve what are normally considered moral decisions. Choosing the best candidates for a bank loan, for example, may lead an AI app to make inappropriate choices because the datasets from which the AI learned to make decisions were unintentionally skewed in a particular direction. An AI making qualitative decisions may not weigh factors that a human would.
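To make the dataset-skew problem concrete, here is a minimal sketch (in Python, with made-up records and field names) of the kind of check a team might run before training a loan model: if the historical decisions the model will learn from already favor one group, the model is likely to reproduce that skew.

```python
# Minimal sketch (hypothetical data and field names): before training a
# loan-approval model, check whether the historical decisions the model
# will learn from are already skewed toward one group.
from collections import defaultdict

historical_loans = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
    {"group": "B", "approved": True},
]

totals = defaultdict(int)
approvals = defaultdict(int)
for record in historical_loans:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"Group {group}: historical approval rate {rate:.0%}")
# A large gap between groups suggests the training data itself is skewed,
# and a model trained on it will probably reproduce that skew.
```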
A growing awareness of the problem of ethical AI apps in the software industry generally has led to a number of groups and organizations trying to educate and guide software designers looking to build more effective—and ethical—AI systems.
Bringing Ethics to Machine Learning
The Institute for Ethical AI and Machine Learning is a research group located in the U.K. Made up of volunteer machine-learning (ML) practitioners, data scientists, STEM professors, and other industry experts, the Institute researches processes and frameworks for building ethical ML systems. It has developed eight "Responsible Machine Learning Principles" that seek to provide a framework for data scientists who design, develop, and maintain machine-learning systems.
Briefly, those principles are as follows. First, ML specialists are asked to "assess the impact of incorrect predictions" and, "when reasonable, design systems with human-in-the-loop review processes." Human review is probably the cornerstone of effective AI system oversight. While one person might have blind spots about certain ethical or cognitive processes, a group of reviewers is more likely to spot problems after a thorough review of a given AI system.
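As one illustration of the human-in-the-loop idea, the short Python sketch below (the model and confidence threshold are placeholders, not a prescribed design) automates only the decisions a model is confident about and queues the rest for a person to review.

```python
# A minimal human-in-the-loop sketch. The model and threshold are
# placeholders: any classifier that returns a confidence score would do.
REVIEW_THRESHOLD = 0.75  # assumed confidence cutoff; tune per application

class DummyLoanModel:
    """Stand-in for a real classifier; returns a fixed confidence score."""
    def score(self, applicant: dict) -> float:
        return 0.62  # pretend probability that the loan should be approved

def decide(applicant: dict, model) -> str:
    """Automate only confident decisions; send the rest to a person."""
    p = model.score(applicant)
    if p >= REVIEW_THRESHOLD:
        return "auto-approve"
    if p <= 1 - REVIEW_THRESHOLD:
        return "auto-decline"
    return "queue for human review"

print(decide({"income": 48000, "debt": 12000}, DummyLoanModel()))
# -> "queue for human review", because 0.62 falls in the uncertain band
```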
Second, ML specialists are urged to "continuously develop processes" that help humans "understand, document, and monitor bias in development and production." Hopefully, over time, such efforts will lead to identification of commonly occurring bias problems and a greater awareness of the need to take time to perceive them in AI systems.
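What such bias monitoring looks like in practice will vary, but a minimal version, sketched here with invented groups, field names, and tolerance, simply compares the model's positive-outcome rate across groups and raises an alert when the gap grows too wide, both in development and in production.

```python
# Sketch of an ongoing bias check (hypothetical metric and field names):
# compare the model's positive-outcome rate across groups and alert when
# the gap exceeds a tolerance. Run it in development and again in production.
def positive_rate_gap(predictions):
    """predictions: list of dicts with 'group' and boolean 'positive'."""
    rates = {}
    for group in {p["group"] for p in predictions}:
        subset = [p for p in predictions if p["group"] == group]
        rates[group] = sum(p["positive"] for p in subset) / len(subset)
    return max(rates.values()) - min(rates.values()), rates

gap, rates = positive_rate_gap([
    {"group": "A", "positive": True},
    {"group": "A", "positive": True},
    {"group": "B", "positive": False},
    {"group": "B", "positive": True},
])
TOLERANCE = 0.2  # assumed; set with domain and legal input
if gap > TOLERANCE:
    print(f"Bias alert: positive-rate gap {gap:.2f} across groups {rates}")
```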
The third principle is to "develop tools and processes to continuously improve transparency and explainability of ML systems where reasonable."
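Full explainability usually calls for dedicated tooling (SHAP and LIME are common choices), but even a rough leave-one-feature-out check, sketched below against a stand-in scoring function, shows the spirit of the principle: report which inputs actually drive a decision.

```python
# A rough explainability sketch: a leave-one-feature-out sensitivity check
# against a stand-in scoring function. The "model" here is a hand-written
# linear score, not a trained system; the idea is simply to report how much
# each input moves the score.
def score(features: dict) -> float:
    """Placeholder model, not a real trained classifier."""
    return 0.4 * features["income"] / 100000 + 0.6 * (1 - features["debt_ratio"])

applicant = {"income": 80000, "debt_ratio": 0.35}
baseline = score(applicant)
for name in applicant:
    neutral = dict(applicant, **{name: 0})  # zero out one feature at a time
    print(f"{name}: removing it changes the score by {baseline - score(neutral):+.3f}")
```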
The Institute's fourth guiding idea is to "develop the infrastructure required to enable for a reasonable level of reproducibility across the operations of ML systems."
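A modest illustration of that infrastructure, with illustrative file names and fields, is to pin random seeds and write the environment details and a fingerprint of the training data alongside every run so the result can be reproduced later.

```python
# Reproducibility sketch: pin random seeds and record the environment and
# a data fingerprint alongside every training run so results can be rerun.
# File names and fields here are illustrative, not a standard.
import hashlib, json, platform, random, sys

SEED = 42
random.seed(SEED)  # also seed numpy/framework RNGs if they are used

def fingerprint(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

run_record = {
    "seed": SEED,
    "python": sys.version,
    "platform": platform.platform(),
    # "training_data_sha256": fingerprint("training_data.csv"),  # if present
}
with open("run_record.json", "w") as f:
    json.dump(run_record, f, indent=2)
```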
Fifth is to "identify and document relevant information so that business change processes can be developed to mitigate the impact" of worker automation. In general, most people can see this effect coming down the pipeline, but devising remedies may take more-specific knowledge of impacts in particular job categories.
The sixth principle has to do with practical accuracy, committing to "building processes that ensure accuracy and cost-metric functions are aligned to domain-specific applications."
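The sketch below illustrates the idea with made-up numbers: rather than maximizing raw accuracy, the decision threshold is chosen to minimize the domain-specific cost of each kind of error.

```python
# Sketch of aligning the model's metric with domain costs (all numbers are
# made up): instead of maximizing raw accuracy, choose the decision
# threshold that minimizes the business cost of each kind of error.
COST_FALSE_POSITIVE = 1.0   # assumed cost of approving a bad loan (scaled)
COST_FALSE_NEGATIVE = 0.2   # assumed cost of declining a good loan

# (predicted probability of repayment, actual outcome) on validation data
validation = [(0.9, True), (0.8, True), (0.7, False), (0.55, True),
              (0.4, False), (0.3, False)]

def expected_cost(threshold: float) -> float:
    cost = 0.0
    for prob, good in validation:
        approve = prob >= threshold
        if approve and not good:
            cost += COST_FALSE_POSITIVE
        if not approve and good:
            cost += COST_FALSE_NEGATIVE
    return cost

best = min((expected_cost(t / 100), t / 100) for t in range(0, 101, 5))
print(f"Lowest expected cost {best[0]:.2f} at threshold {best[1]:.2f}")
```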
The seventh addresses privacy by committing practitioners to "protect and handle data with stakeholders that may interact with the system" either directly or indirectly.
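One simple way to honor that commitment, sketched here with illustrative field names, is to replace direct identifiers with salted hashes before records ever reach the ML pipeline.

```python
# Privacy sketch (illustrative field names): replace direct identifiers
# with salted hashes before records are shared with the ML pipeline, so
# stakeholder data isn't passed around in the clear.
import hashlib

SALT = "rotate-and-store-securely"  # placeholder; keep real salts secret

def pseudonymize(record: dict, identifier_fields=("name", "email")) -> dict:
    safe = dict(record)
    for field in identifier_fields:
        if field in safe:
            safe[field] = hashlib.sha256((SALT + str(safe[field])).encode()).hexdigest()[:16]
    return safe

print(pseudonymize({"name": "A. Customer", "email": "a@example.com", "income": 52000}))
```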
The final principle is to "develop and improve reasonable processes and infrastructure to ensure data and model security are being taken into consideration during the development" of ML systems.
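A small piece of that security picture, shown below with illustrative paths, is recording a checksum when a model artifact is published and verifying it before the model is loaded in production, so tampering or silent replacement is detected.

```python
# Security sketch: record a checksum when a model artifact is published and
# verify it before the model is loaded in production. Paths and names are
# illustrative, not a prescribed layout.
import hashlib

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_model(path: str, expected_sha256: str) -> None:
    if sha256_of(path) != expected_sha256:
        raise RuntimeError(f"Model file {path} failed integrity check")

# Usage (assuming the expected hash was stored at release time):
# verify_model("loan_model.bin", expected_sha256="<hash recorded at release>")
```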
These eight principles are a good template for addressing many of AI's ethical challenges: it's the ML part of the process that shapes an AI system's behavior, and paying close attention to these guidelines can at least mitigate many of the problems that arise there.
The Institute has used these eight principles as the basis for what it calls its AI-RFX Procurement Framework, "a set of templates to empower industry practitioners to raise the bar for safety, quality and performance." The Institute is looking for new members and interested parties can join here.
An affiliated site on GitHub (which confusingly refers to the organization as "The Institute for Ethical Machine Learning") maintains a reading list of links to articles, websites, and libraries that explore different issues of ethics in machine learning.
The Institute for Ethics in Artificial Intelligence (IEAI), sponsored by the Technical University of Munich, was founded in January 2019, largely via a $7 million grant from Facebook. It looks to promote interdisciplinary collaboration to address ethical challenges facing AI "at the interface of technology and human values" and to foster development of "operational ethical frameworks in the field of AI." IEAI is affiliated with a number of other European and African groups that are also concerned with AI ethics.
AI4People was launched by the European Parliament in January 2018 as a forum for developing AI application guidelines. The organization maintains standing committees focused on particular industries where AI is gaining influence: automotive, banking and finance, energy, healthcare, insurance, legal services, and media and technology. At a conference last March, AI4People announced that its 2020 efforts will center on developing seven key requirements for AI systems in these seven vertical industries.
The Global AI Ethics Consortium (GAIEC) exists to take "a systematic and scientific approach to studying practical ethical issues linked with AI and its diverse modern-day applications." The organization's website offers news items, an events calendar, publications and reports, and links to reports on group events. GAIEC is also sponsoring research projects into the specific impact of AI on the future of work, mobility and safety, choice and autonomy, medicine and healthcare, online behavior, and governance and regulation.
The ITU Focus Group on AI for Autonomous & Assisted Driving (FG-AI4AD) focuses on "behavioral evaluation of AI responsible for dynamic driving tasks" and is therefore concerned primarily with AI in driverless cars. The group is sponsored by the International Telecommunication Union (ITU), a United Nations agency specializing in information and communication technologies.
Responsible AI in Africa Network (RAIN) is affiliated with Kwame Nkrumah University of Science and Technology (KNUST) in Ghana and strives to bring together "a network of scholars working on the responsible development and use of AI in Africa." Via KNUST, the organization hosts a series of virtual and in-person workshops on ethics and AI app development.
Closer to home, the AI Now Institute, affiliated with New York University, is an AI ethics research group currently studying AI ethical challenges in the fields of rights and liberties, labor and automation, bias and inclusion, and safety and critical infrastructure. Founded in 2017, the group aims to produce "interdisciplinary research on the social implications of AI" and to act as "a hub for the emerging field focused on these issues."
The United Nations Educational, Scientific and Cultural Organization (UNESCO) is working to organize debates in several parts of the world with an eye to defining some international ethical principles to be used in AI development.
In addition, there are at least 18 other organizations, listed here and here, working toward similar goals of addressing the ethics challenges facing AI. As widespread as the difficulties are, a similarly wide spectrum of groups has sprung up to identify the problems, debate solutions, and urge IT professionals working with AI to stay aware of the challenges affecting their own specialized areas of interest.
Corporate Ethical Guidelines for AI Apps
For corporations considering building AI apps, at least two organizations are offering what seem like common-sense suggestions that point a path toward establishing guidelines for handling ethics.
Darrell West of the Brookings Institution, as part of that organization's "A Blueprint for the Future of AI," has suggested guidelines for corporations planning to produce AI applications. Briefly summarized, they include hiring ethicists to advise on building AI apps, creating a "code of ethics" that lays out ethical guidelines in advance, maintaining an AI corporate review board that regularly discusses ethical questions, creating a paper trail showing how coding decisions within AI apps were made, training staff to be cognizant of ethics concerns in their daily work, and creating a remediation plan for cases in which an AI app causes harm.
Deloitte, the accounting firm and professional services network, is advocating that companies considering AI apps involve their "chief information officer, chief risk officer, chief compliance officer and CFO" to implement safeguards.
In the AI app design stage, according to Deloitte, these would include "a framework that defines the ethical use of AI and data in the organization," "a cross-functional panel with representation from the business units as well as the technology, privacy, legal, ethics, and risk groups," and establishing "a data or AI ethics code by which professionals must abide."
In the building and training phase for AI apps, Deloitte recommends establishing "a process for determining where and how to obtain the data that trains the models," "guidelines on where and how user consent becomes a consideration in the training phase," "policies for where and how to build models and whether to use open-source technology," and "an assessment of ways that an AI solution can teach itself behaviors that are out of synch with the organization's missions or values."
During the operational and maintenance phases of an AI app, an enterprise should also set up "a process for the organization to engage in continuous monitoring," "an assessment of ways that an AI-enabled solution can gain access to new forms of data," and "a process for the business to update the board on AI-related risks or issues."
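What a lightweight version of that continuous monitoring might look like is sketched below (field names and tolerance are assumptions, not Deloitte's specification): compare a feature's recent production average with its training-time average and flag a large shift for review.

```python
# Continuous-monitoring sketch (thresholds and field names are assumptions):
# compare a feature's recent production average against its training-time
# average and flag a large shift for the review process.
def drift_alert(train_values, recent_values, feature, tolerance=0.25):
    train_mean = sum(train_values) / len(train_values)
    recent_mean = sum(recent_values) / len(recent_values)
    shift = abs(recent_mean - train_mean) / (abs(train_mean) or 1.0)
    if shift > tolerance:
        print(f"Drift alert on '{feature}': mean moved {shift:.0%} since training")

drift_alert([50, 52, 49, 51], [65, 70, 68, 72], feature="applicant_income")
```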
To help encourage movement toward these goals, Deloitte has set up, in cooperation with the University of Notre Dame's Mendoza College of Business, the Notre Dame Deloitte Center for Ethical Leadership. The organization deals in all kinds of ethical questions in corporate leadership, emphasizing that this effort is "not just about business or management or how to train employees, but also about the human mind, behavioral science, culture, interpersonal exchange, and personality theory."
IBM has also issued guideline suggestions in conjunction with its Watson platform in a pamphlet entitled "Everyday Ethics for Artificial Intelligence." These include similar suggestions, such as "make company policies clear and accessible to design and development teams," "understand where the responsibility of the company/software ends," and "keep detailed records of your design processes and decision making." The pamphlet also poses key questions ethicists should ask themselves, such as "how does accountability change according to the levels of user influence over an AI system," "is the AI to be embedded in a human decision-making process, is it making decisions on its own, or is it a hybrid," "how will our team keep records of our process," and "will others new to our effort be able to understand our records," along with a number of other pertinent considerations.
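IBM's pamphlet doesn't prescribe a format for those records, but even a simple append-only decision log, sketched below with an illustrative structure, goes a long way toward answering "how will our team keep records of our process."

```python
# Record-keeping sketch (structure is illustrative, not IBM's format): log
# each significant design decision with its rationale so detailed records
# of the design process exist when someone asks for them.
import json, datetime

def log_decision(logfile: str, decision: str, rationale: str, owner: str) -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "rationale": rationale,
        "owner": owner,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("design_decisions.jsonl",
             decision="Exclude ZIP code as a model feature",
             rationale="Acts as a proxy for protected attributes in our market",
             owner="model-review-board")
```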
International Agreement Is Possible
With such a large number of organizations and business thinkers working on AI ethics, and given the similarity in principle of, for example, the Brookings, Deloitte, and IBM guideline suggestions, it hopefully won't be long before a general consensus emerges on ways to implement ethics in AI.
Militarized drones, for instance, may be controllable if the nations holding them can agree to ban their use in warfare, as was done, for example, with poison gases after WWI. In 2016, the U.S. initiated a multilateral effort to examine the implications of drone proliferation and use. The “joint declaration for the export and subsequent use of armed or strike-enabled unmanned aerial vehicles (UAVs)” was subsequently agreed to by 53 U.N. Member States, who also agreed to develop global standards for use and export of armed drones. That work is still ongoing.
Other Ethics Issues
Implementing ethical guidelines will still require working out the specifics of such issues as how review boards will function, how to eliminate bias when choosing datasets with which to train AI apps, and how to avoid "blind spots" in the perceptions of anyone assigned to oversee AI ethics. What's most important at this stage, however, is simply raising awareness that AI ethics is an issue that needs attention and must be considered carefully as we gradually enter the Brave New World of autonomous computer apps.