A 6-month ethical diet for ChatGPT with ISO/IEC/IEEE 24748-7000
A 6-month ethical diet for ChatGPT and other AIs earnestly means running them through the globally recognized Model Process for Ethical System Design: ISO/IEC/IEEE 24748-7000
by Sarah Spiekermann
As of March 30th, 2023, 1,377 experts have signed an open letter from the Future of Life Institute calling to pause giant AI experiments for a 6-month ethical diet. The goal of the call is to think first about the human and social implications of the technology and to define strategies to mitigate its risks.
Solutionism or opportunity?
At first, this call feels like ‘solutionism’: fixing a potentially insurmountable challenge in a few months is like fixing climate change in 3 years. But it makes sense to think hard about humanity’s challenges in the face of these AI systems, to identify low-hanging ethical fixes, and to set up a longer-term release roadmap that ensures value-based, social, and sustainable versions over the coming years.
That said, OpenAI (Microsoft) and Google should not go about such a project in their usual way: hiring lobby-friendly experts who tell them what everyone already knows, namely that these systems need to embed privacy and transparency by design and need to be secure and reliable. This is far too easy! Instead, the big AI releasers should be willing to engage in the ‘dirty work’, really facing the degree of responsibility they bear for society. And this means scrutinizing and potentially even certifying their AI systems against the only globally recognized standard for ethical IT system design, released in 2021 and 2022 by the world’s leading technical standardization bodies IEEE and ISO: ISO/IEC/IEEE 24748-7000, “A Model Process for Addressing Ethical Concerns during System Design”. Hundreds of experts from around the world have been involved in this mega effort, long known as the “P7000 project”, and IEEE recently made the standard available for free in 2023.
What would Value-based Engineering with ISO/IEC/IEEE 24748-7000 recommend?
In a nutshell, the AIs would need to be scrutinized in three phases. First, the operational concept of these systems would need to be laid out, and ‘earnest’ stakeholder groups would be assigned for the various contexts in which these systems can cause harm; for instance in education, art and design, health, media communication, etc. Second, the project team and stakeholders would engage in a value elicitation phase that analyzes the AI’s operational concept from a moral perspective: What harms and benefits are likely to result from the AI in various contexts? What virtue or personal character effects could ensue from heavy usage? For example, would kids stop studying, given that the AI is there for them anyway? And what high universal value principles do the respective cultures want to protect a priori, such as human rights? Once these hard ethical questions are answered and future AI values identified and prioritized, so-called “Ethical Value Requirements” (EVRs) would be defined for all of them. These are the criteria the AI needs to live up to: technical and organizational criteria that would need to be met by the AI, addressed from one release to the next. Third, technically addressable Ethical Value Requirements would be translated into system requirements and handed to the engineers for implementation.
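To make the flow from elicited values to engineering work items concrete, the pipeline above can be sketched as a small data model. This is purely illustrative: the class names, fields, and example values below are our own invention, not terminology or tooling mandated by ISO/IEC/IEEE 24748-7000.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and structure are assumptions made for
# this example, not part of the ISO/IEC/IEEE 24748-7000 standard itself.

@dataclass
class Value:
    """A value elicited with stakeholders in phase two (e.g. transparency)."""
    name: str
    context: str   # e.g. "education", "health", "media communication"
    priority: int  # 1 = highest, set during value prioritization

@dataclass
class EthicalValueRequirement:
    """An EVR: a criterion the AI must live up to for a given value."""
    value: Value
    description: str
    technically_addressable: bool  # organizational EVRs stay with management

def to_system_requirements(evrs):
    """Phase three: translate technically addressable EVRs into
    system requirements handed to the engineers."""
    return [
        f"SYS-REQ: {e.description} (protects '{e.value.name}' in {e.value.context})"
        for e in evrs
        if e.technically_addressable
    ]

# Hypothetical walk-through for one context (education):
transparency = Value("transparency", context="education", priority=1)
evrs = [
    EthicalValueRequirement(
        transparency,
        "label AI-generated answers so students can tell them from sources",
        technically_addressable=True,
    ),
    EthicalValueRequirement(
        transparency,
        "train teachers on the system's known failure modes",
        technically_addressable=False,  # organizational, not technical
    ),
]
print(to_system_requirements(evrs))
```

The point of the sketch is the filter at the end: only technically addressable EVRs become system requirements for the engineers, while the rest remain organizational obligations addressed from one release to the next.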
The result for Microsoft, Google, or any other AI provider would be a ‘value-based’ system instead of a ‘disruptive’ system. The people in charge of the respective AI companies now face this choice. Their names will make history in one way or the other.
Sarah Spiekermann co-chaired the development of ISO/IEC/IEEE 24748-7000 from 2016 to 2021. She chairs the Institute for IS & Society at Vienna University of Economics and Business.