Part 1: Artificial intelligence (AI) – an ethical and responsible approach

Lanny Cohen of Capgemini calls upon us to embed ethics into all AI systems: “artificial intelligence (AI) needs to be applied with an ethical and responsible approach – one that is transparent to users and customers, embeds privacy, ensures fairness, and builds trust. AI implementations should be transparent and unbiased, and open to disclosure and explanation.”

But how do we do that? There are many discussions, articles, and blog posts on this topic online, but most of them are, by nature, very abstract. It is far from an easy subject: there are no simple rules or methods available for assessing the ethics of AI. This three-part blog post strives to provide guidance for making such assessments. By using existing ethical frameworks for product design and for conducting business, we can make our lives easier.

A fast trolley ride through ethics

Let’s begin the discussion on ethics and AI with two common philosophical perspectives:

  • Virtue ethics: Virtue ethics measures actions against some given set of virtues, with the goal being to be a virtuous person. In short: are the actions that are built into the AI motivated by virtue?
  • Consequentialism: The results matter, not the actions themselves. Whatever has the best outcome is the best action. In short: what will the outcome of the actions of the AI be?

First, a few words about virtue ethics. The main question is: “Does the AI enhance our moral and societal values,” such as honesty, equality, and care (for the environment, for example)? I don’t want to elaborate on the virtues of virtue ethics here, but this type of ethics is mainly chosen because consequentialism is less effective for innovative technologies such as AI.

But frankly, most ethical discussions around AI are consequentialist in nature. How do the consequences of the use of AI affect individuals, society, and the environment? Do the positive effects outweigh the negative? And how do I weigh the consequences of using AI? This is not an easy discussion. Most readers will be familiar with the trolley problem, which is often used as an analogy for self-driving cars and the decisions their AI-based steering could face.

Lesson by Eleanor Nelsen

↑  Imagine you’re watching a runaway trolley barreling down the tracks, straight towards five workers. You happen to be standing next to a switch that will divert the trolley onto a second track. Here’s the problem: that track has a worker on it, too – but just one. What do you do? Do you sacrifice one person to save five? (Source: TED-Ed)

Although I’m not entirely in favor of consequentialism as the main method of assessing the effects of the use of AI, it is certainly the mainstream way of thinking about AI in the Anglo-Saxon world.

The question is: how do we determine the consequences of using AI? We need to know what they are before we can weigh them. AI is mostly regarded as a black box. We can put things, such as pictures or sales figures, into the system and get some kind of output, for example descriptions of pictures or insights into which markets to target.

But in order to determine whether the input is processed according to our ethical values, we need to examine the results the AI gives us. In the end, it is only by studying the outcome in depth that we can ascertain whether the system is working properly.

For example: Amazon’s recruitment system was biased against women. Analysis of the recommendations made by the AI-based recruitment system showed that, but the system itself didn’t reveal its reasoning on its own.
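To make this concrete: even when a recruitment system is a black box, we can audit its output. The sketch below is illustrative Python with made-up data; the column names and the 80% threshold (the well-known “four-fifths rule” heuristic) are my assumptions, not a reconstruction of how Amazon’s system was analyzed:

```python
import pandas as pd

# Hypothetical recruitment data: one row per candidate,
# "recommended" is the black-box system's output (1 = shortlisted).
df = pd.DataFrame({
    "gender":      ["F", "M", "F", "M", "M", "F", "M", "F", "M", "M"],
    "recommended": [0, 1, 0, 1, 1, 1, 0, 0, 1, 1],
})

# Selection rate per group: how often each group gets shortlisted.
rates = df.groupby("gender")["recommended"].mean()

# Four-fifths rule heuristic: flag the system when the least-favored
# group's selection rate falls below 80% of the most-favored group's.
disparate_impact = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Possible bias against group:", rates.idxmin())
```

On this toy data, the ratio is about 0.30, far below 0.8, so the audit would flag the system even though we never looked inside it.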

“It is important to recognise that, alongside the huge benefits that artificial intelligence offers, there are potential ethical issues associated with some uses.” (Sir Mark Walport, UK Government Chief Scientific Adviser)

In order to avoid haphazard detection of defects, such as bias, we need to add functions to AI systems that allow us to gain knowledge of how the AI thinks and argues. These functions have to be built into the AI system purposefully. I call these functions the attributes of an AI system: the preconditions for creating an ethical AI system.

Attributes we need for ethical AI

Many publications on ethics and AI focus on the attributes AI should have to be ethical. These attributes are, in fact, the features of any AI-based product or service. They allow us to check if the AI is behaving correctly and ethically. There are countless checklists out there, so allow me to present my (incomplete) version, which is based on one of Tin Geber’s lists:

  • We need understandable AI.
  • We need explainable AI.
  • We need meaningful oversight.
  • We need accountability for AI.
  • We need defined ownership of AI.

(For a more complete list of attributes, please read the blog post by Alan Winfield.)

These attributes should be present in any AI implementation, but this is complicated, since some AI techniques don’t allow for gaining that insight. For instance, deep learning algorithms are, by design, not explainable on a deep level. We can explain where and why we use deep learning in a certain application, but not how the model reaches a specific decision.
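We can, however, approximate an explanation from the outside. One common post-hoc technique is permutation importance: shuffle one input feature and measure how much the model’s accuracy drops. The sketch below is a generic illustration, not any specific product’s API; `predict` stands for whatever black-box scoring function you were handed:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Estimate how much each input feature drives a black-box model
    by shuffling that feature and measuring the drop in accuracy."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)        # accuracy on intact data
    importances = []
    for col in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, col])        # destroy this feature's signal
            drops.append(baseline - np.mean(predict(X_perm) == y))
        importances.append(float(np.mean(drops)))
    return importances                         # bigger drop = more influential
```

This doesn’t open the black box, but it does tell you which inputs the decisions hinge on, which is often enough to spot, say, a gender column doing work it shouldn’t.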

When the attributes above are present, we can start assessing whether the AI is behaving ethically, unethically, or somewhere in between. We can start answering questions such as: Is my AI inclusive and non-discriminatory? Does my AI reach fair decisions? Can I explain to my customers how decisions are reached and what data is used to reach them? I’m aware that these are not easy questions to answer, but in order to build or apply AI routines that respect human rights and ethical values, they must be tackled.

Most companies’ AI algorithms and implementations will not be developed in-house. The AI routines and services are bought as a ready-to-run application or service, a black box for the buyer of the AI: you put data in and you (hope to) get meaningful insights back.

It’s like driving a car. Most of us don’t know how our car works in detail. That’s not a problem when we’re driving under normal circumstances. But when the car starts to falter, it turns out that we’re clueless about the cause and even more clueless about how to fix it. In the meantime, accidents can happen, and the chance of them happening only increases when we aren’t aware of the defect in the first place.

I expect the same will happen with many AI implementations in real life. AI is bought or downloaded from a software supplier, and an organization just uses it or integrates it within an existing software system. We expect the AI to behave correctly under normal circumstances, and we expect the AI to tell us when it’s broken, without our having to know the internal workings of the system in detail. Besides, software suppliers aren’t keen on revealing the intellectual property they’ve invested in their AI solutions.
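Since the vendor’s black box probably won’t tell us when it’s broken, the buyer can at least wrap it. Here is a minimal watchdog sketch; it assumes the vendor’s scoring function returns a label with a confidence score, which is my assumption about the contract, not a real API:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_watchdog")

def guarded_predict(vendor_predict, features, confidence_floor=0.6):
    """Call a vendor's black-box model, log every decision, and refuse
    to act on answers the business shouldn't blindly trust."""
    label, confidence = vendor_predict(features)   # assumed vendor contract
    log.info("features=%s -> label=%s (confidence=%.2f)",
             features, label, confidence)
    if confidence < confidence_floor:
        # Route doubtful cases to a human instead of acting automatically.
        log.warning("Low confidence (%.2f): deferring to human review",
                    confidence)
        return None
    return label
```

The logged decisions also give you the raw material for the kind of outcome analysis described above, even when the supplier reveals nothing about the internals.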

So, how can we determine the ethicality of an AI solution without knowing all the details of the AI we’re using? As I mentioned earlier, sometimes the nature of AI algorithms doesn’t allow for these kinds of analyses, so how can we use AI in an ethical manner, even if we bought the AI functionality off the shelf?

AI isn’t on its own

What I find odd about the current discussions on ethics and AI is that AI is treated as a standalone phenomenon – as though AI can perform tasks without an environment. But we all know that AI can only function with input from the outside world. In most cases this is data – lots of data.

Presently, AI can only thrive in a data-rich environment. AI is, as such, not equipped to interact with its environment directly. As Kathryn Hume, VP at integrate.ai, puts it: “So today, these algorithms – I like to consider them like idiot savants. So they can be super intelligent on one very, very narrow task.” We can only use AI effectively when we apply it within a broader system. Or, to put it differently, AI is embedded within a broader software application. That can be a standalone one, like an app you download onto your mobile, or it can be integrated into a step or task inside a business process, like reporting within an ERP package.

“Artificial intelligence isn’t Frankenstein’s Monster knocking at your front door. AI will enter your house in a Trojan Horse through the back door.” (The author)

That is one of the reasons we, as ordinary consumers, aren’t really aware of the presence of AI in the products and services we use. AI is embedded in products such as Spotify and Facebook or, for business users, in applications such as Salesforce and SAP. Within those larger products, AI performs some specific functions alongside a lot of other, non-intelligent functions of the application.

When we want to analyze the behavior of AI, we tend to consider the AI apart from its context of use within those applications. Instead, we should review the working of the entire application, including the AI-based functions. We should take into account that AI enables specific functionality in those applications. And we should consider that AI can only add value within the context of that application.

This all sounds very theoretical. And, I must admit that most discussions about AI and ethics are very theoretical. But to bring things down to earth, being aware that AI only functions within a product or service makes it possible to be more practical about ethics.

In the next part, I’ll describe how we can determine the ethicality of AI when it is used within products and services. Because AI cannot be used on its own, it is incorporated into a product, for example an app. By assessing the ethics of the product using design frameworks, we implicitly also assess the AI used in that product.

This section of the article has been previously published on the Capgemini Insights & Data Blog.

Part 2: A designer’s view on AI ethics – AI used in products (and services)

In the previous part, I described how we currently see the ethics of artificial intelligence. We saw that it can be quite difficult to assess the ethicality of AI. But by reviewing AI within the context of use, for example a product, we can make our lives easier.

Because AI is used within products and services, we can use the existing frameworks for design and business ethics. These frameworks have been around for a long time and offer the possibility to assess the ethicality of products and services.

AI hidden in a product

As an industrial designer, I’ve always been interested in how to make products that are not only useful and valuable on their own, but also have value for people, society, and the environment. It doesn’t matter whether such a product is physical (a pen or mobile device) or virtual (a software app). As long as these products are sold and distributed to the general audience – to us as consumers – the ethics of product design apply. The essence of the design of these kinds of products is that the manufacturer doesn’t know their consumers personally. It’s the task of the designer to fill that gap and design products that are useful for the targeted audience.

“At first the designers have to understand their responsibility towards the environment and society. What is ethical?” (Prof. Michael Hardt, University of Lapland, Finland)

Ethical frameworks for product design put the responsibility for designing good products, products that are ethical, at the designer’s desk. As Dennis Hambeukers, Strategic Design Consultant @zuiderlicht, states: “Ethics is now part of the job for a designer.”

When we assess the ethics of a product, we don’t focus on the components alone. We focus on the product as a whole. The behavior of a product isn’t just the sum of the components, it is the sum of the interactions of the components with each other and the outside world, in most cases the human operating the product.

The “Ethical Hierarchy of Needs” (licensed under CC BY 4.0) (Source: https://ind.ie)

↑  As with any pyramid-shaped structure, the layers in the Ethical Hierarchy of Needs rest on the layer below it. If any layer is broken, the layers resting on top of it will collapse. If a design does not support human rights, it is unethical. If it supports human rights but does not respect human effort by being functional, convenient and reliable (and usable!), then it is unethical. If it respects human effort but does not respect human experience by making a better life for the people using it, then it is still unethical. (Source: Smashing Magazine)

To establish the ethicality of a product containing AI, we should not focus on the AI alone, but on the product as a whole, as it presents itself to the user. “Doesn’t this make the evaluation more complex?” you might ask. Not really, because it places the AI in the context of the product. Of course, we should establish how and when the AI affects the behavior of the product, but that puts the AI in perspective. If the product contains the effects of the AI in an ethical manner, that’s all right.

If an AI gives you a biased product recommendation, you can easily dismiss it when there are alternative and easy-to-use ways to select the right product for you. But if a biased product recommendation system only lets you select from a list the AI has chosen for you, without showing alternatives – suggesting the list is exhaustive – that’s a problem.

Once we have determined the scenarios where AI becomes unethical, we can mitigate that behavior. Firstly, by improving the AI, but risks will remain. Secondly, by reducing the consequences of the unethical behavior: we can filter out unwanted outcomes of the AI, though that can be quite cumbersome. We can also downplay the consequences by allowing the AI to be overridden, reducing the impact by using it only to augment the user. The user can then override the decision made by the AI, somewhat like ignoring the directions of your satnav. And in the end, we’ll have to assess the product as a whole, based on the ethical framework we’ve chosen to use.
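That override option can be engineered deliberately. A minimal sketch of the augmentation pattern, in which the AI proposes and the human decides; `ai_suggest` and `ask_user` are hypothetical stand-ins, not real library calls:

```python
def recommend_with_override(ai_suggest, candidates, ask_user):
    """Augmentation pattern: the AI proposes, the human disposes."""
    suggestion = ai_suggest(candidates)
    # Show the AI's pick alongside ALL alternatives, never a closed list,
    # so a biased recommendation can simply be ignored, like a satnav.
    return ask_user(suggested=suggestion, alternatives=candidates)
```

The design choice that matters here is that the full candidate list is always visible; the AI ranks, it never gatekeeps.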

Design for Values

There are several frameworks for establishing the ethicality of a product, for example the “Design for Values” program at Delft University of Technology. The authors of this quite elaborate framework state: “[…] technological developments in the 21st century, whether necessary to meet our challenges or made possible through new breakthroughs, only become acceptable when they are designed for the moral and social values people hold.”

Almost all design ethics frameworks focus on values and virtues. The whole of the product and its effects on the product’s environment – humans, society, and nature – should be taken into consideration. And it really doesn’t matter whether this product contains AI or not. That being said, we should be aware that the behavior of products changes over time: physical products through wear and tear, and AI through the continuous learning of the algorithms.

The “Design for Values” program distinguishes 11 values that are important in design. All of these values also apply to artificial intelligence:

  • Accountability and transparency
  • Democracy and justice
  • Human wellbeing
  • Inclusiveness
  • Presence
  • Privacy
  • Regulation
  • Responsibility
  • Safety
  • Sustainability
  • Trust

Capgemini puts great emphasis on trust. Accountability, transparency, inclusiveness, etc. are seen as ways of gaining trust in AI. When businesses don’t trust (the outcomes of) AI, they won’t buy it.

“Trust is the foundation of every transaction in life.” (Tamara McCleary, CEO at Thulium.co)

For our analysis, we shouldn’t think in a hierarchy of values; you cannot say beforehand that one value prevails over another. The method asks you to appraise your design against all values, weigh them, and deal with any conflicts. Based on this analysis, you can derive norms, and from them, design requirements for your AI design.
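Design for Values is a deliberation method, not an algorithm, but its bookkeeping can be illustrated. In the sketch below, the weights and scores are judgment calls the design team would make; all numbers are entirely hypothetical:

```python
# Entirely hypothetical appraisal of one AI-enabled product feature.
values = {
    # value: (weight, score from -2 (violates) to +2 (enhances))
    "privacy":        (0.9, -1),   # e.g., the feature needs personal data
    "inclusiveness":  (1.0,  2),
    "safety":         (1.0,  1),
    "sustainability": (0.5,  0),
}

total = sum(weight * score for weight, score in values.values())
conflicts = [name for name, (_, score) in values.items() if score < 0]

print(f"Weighted appraisal: {total:+.1f}")
print("Values in conflict, to resolve via norms and requirements:", conflicts)
```

The point is not the arithmetic but the discipline: every value gets appraised, and every negative score has to be answered with a norm or a design requirement.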

It’s beyond the scope of this blog to describe the method in detail. But I want to emphasize that enhancing or violating a value can have different effects. For example, suppose your recruitment system uses AI to select candidates and this system is biased. The bias affects your organization foremost, because it violates your value of inclusiveness, and it will lead to bad publicity. But when the system enhances the value of inclusiveness, it can draw better candidates to your organization, because these candidates want to work for your inclusive company.

Human-centered design

Design frameworks put humans at the center. Products should help humans. This is called human-centered design. Trine Falbe says: “Human-centered design is a framework as well as a mindset. At its core, working ‘human-centered’ means involving the people you serve early and continuously in the process, i.e. using research to establish the needs of these people, understanding what problems they have, and how your product can help solve these problems.” By putting humans at the center, we can assess the ethics of a product better. It’s not only about not harming the user, it’s more about delivering what the users expect from the product.

When you realize that AI is only a means to achieve certain product features, you should assess the ethicality of those features first, and then determine whether using AI – with all its hiccups – will fulfill the feature in a way that puts the interests of humans first.

How to do that? To quote Capgemini’s design agency, Fahrenheit212: “Appoint more ‘corporate philosophers’ and help train employees and students alike on design ethics.” You should cultivate a culture where ethics is present. As Ethical Systems states: “The ethical culture in an organization can be thought of as a slice of the overall organizational culture.” In the next episode, we’ll also see how this culture helps in dealing with ethical issues surrounding AI.

In the next episode of this blog, I will discuss how we can assess the ethics of AI used within business processes. When we enhance our processes with AI, how can we establish if the result is ethical?

This section of the article has been previously published on the Capgemini Insights & Data Blog.

Part 3: A designer’s view on AI ethics – AI used in business processes

In the previous part, I described how we can use value-based design as a method of determining the ethics of artificial intelligence. This method is suitable when we want to sell a product (or service) with AI in it to the market. But when we want to use AI to enhance our business processes, in order to make them more efficient, another method can be used to determine the ethics of the AI.

Let me start by reviewing value-based ethics for product design. When AI is embedded into a product or service – and almost all practical AI is – we should evaluate the ethics of the product itself and the effects of the AI on the product as a whole. It is the product the user interacts with, not just the AI-based components of the product.

When we use AI within a business context, particularly in a business process, we also shouldn’t look at the AI as some kind of standalone object. The AI is applied within the business process, enabling the goals we’ve set for that process. Again, AI isn’t used on its own, it’s used within a context – and that context, in this case a process, should be ethical, not just the AI used in that process.

Take this imaginary example: when AI is used to make superior decisions about which bank accounts to plunder, the AI as such might do a swell job, but the process as a whole is rather unethical.

We take the same value-based approach as we did with design ethics for products. This means that the values we’ve distinguished for the good design of products and services also apply to business processes. However, the perspective differs: products and services should be human-centered, while business processes are business-centered. You can be in the business of designing, making, and selling ethical products, but that doesn’t make your business process, as such, ethical. So, what should we do to create ethical business processes using AI?

Luckily, this is not uncharted territory. Business ethics have been around for some time, and we can reuse the methods and insights gleaned from this discipline to establish the ethicality of the AI applied within a business. And yes, creating ethical products is part of an ethical business process.

But there’s more. Let’s first take a quick look at business ethics. According to Wikipedia: “Business ethics refers to contemporary organizational standards, principles, sets of values, and norms that govern the actions and behavior of an individual in the business organization.” Replace “individual” with “AI” and you get a grasp on ethics for AI.

“There is no such thing as business ethics. There is only one kind – you have to adhere to the highest standards.” (John C. Maxwell, author, speaker, and pastor)

This blog post is not the place to start a mini lecture or course on business ethics. I will venture into only one aspect of business ethics, to give you an insight into when the ethics of AI within businesses should be evaluated.

Responsible AI

The ethics of AI is also called responsible AI. Responsible AI is linked to responsible businesses. John Elkington introduced The Triple Bottom Line, a concept that encourages the assessment of overall business performance based on three important areas: profit, people, and planet. Whereas in design ethics the money aspect isn’t very explicit, on a business level making money is the essence of doing business. In determining the ethics, we must look at the economics around the business processes and AI.

CED’s Concentric Circle Model, Carroll’s CSR Pyramid Model, and Elkington’s Triple Bottom Line Model (Source: https://ir.unimas.my/id/eprint/18409/)

↑ Organizations often have a set of values or principles which reflect the way they do business or to which they aspire to observe in carrying out their business. As well as business values such as innovation, customer service and reliability, they will usually include ethical values which guide the way business is done – what is acceptable, desirable and responsible behavior, above and beyond compliance with laws and regulations. (Source: Institute of Business Ethics)

The CED’s Concentric Circle Model for business ethics places these economic responsibilities at the center. This inner circle comprises factors such as the efficient allocation of resources, the provision of jobs, and economic growth. In an AI context, that can be translated into:

  • Profitability of the AI business case: what is the ROI on AI in terms of efficiency gains, investment, training, keeping the system up to date?
  • Impact on the organization: changes in processes, job losses, retraining of personnel.
  • Opening the future: enabling innovation, serving new markets, creating new products or services, becoming an “AI-first” company.

In my experience, the first bullet is also the first hurdle when it comes to starting an effective AI strategy. Most AI business cases don’t make it, because the ROI cannot be established: we cannot assess the effects of AI within business processes with enough precision. And that’s even before the first ethics discussion takes place. However, the economics of an AI solution also depend on the content and foreseen effects of that solution – and the consequences of any solution have to be ethical.
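For what it’s worth, the arithmetic of such a business case is trivial; establishing the inputs is the hard part. A toy sketch with entirely hypothetical figures:

```python
def ai_business_case(investment, yearly_gain, yearly_upkeep, years=3):
    """Rough ROI of an AI implementation: efficiency gains minus the cost
    of buying the system, training it, and keeping it up to date."""
    net_benefit = (yearly_gain - yearly_upkeep) * years - investment
    return net_benefit / investment      # ROI as a fraction

# Illustrative numbers only; in practice, estimating these is the hurdle.
roi = ai_business_case(investment=500_000, yearly_gain=300_000,
                       yearly_upkeep=100_000, years=3)
print(f"3-year ROI: {roi:.0%}")          # -> 20%
```

Note that `yearly_gain` is exactly the number most organizations cannot pin down, which is why so many AI business cases stall before the ethics discussion even starts.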

Businesses are expected to operate within the law, thus legal responsibility is depicted as the next circle of the model. Some people think that obeying the law is sufficient to be ethical. But that’s a misconception. Obeying the law is a prerequisite to being ethical – assuming that the law itself is ethical. History has proven that the latter hasn’t always been the case.

Ethical responsibility can be defined in terms of “those activities or practices that are expected or prohibited by society members even though they are not codified into law.” At this level, we can assess the ethics of the business the company is doing. Be aware that ethical aspects are also considered in the inner circles.

Business values

And as with design ethics – described in the previous part of this article – these ethics are value-based. The effects, side effects, and other consequences of doing business with people on this planet should be assessed, weighed, and evaluated.

“The most common ethical values found in corporate literature include integrity, fairness, honesty, trustworthiness, respect, openness.” (Institute of Business Ethics)

Business ethics take a broad view on the whole business of an organization. But in our AI assessment, we should take a closer look at how the use of AI will influence the business processes, and consequently, how these altered business processes will change the way an organization does business.

As an example, I’ve taken a value system by Mark S. Putnam (CEO at Global Ethics, Inc.) and looked at how such a system can be used for AI-enabled business processes. (There are many other good systems.)

  • Honesty: Also called transparency and explainability. AI should be open in how it makes decisions.
  • Integrity: Integrity connotes strength and stability. The AI should stay true to its goals over its lifetime. With learning machines, this implies that we should keep monitoring the system and keep feeding it new data, so it stays up to date and relevant (see the drift-monitoring sketch after this list).
  • Responsibility: It might be difficult to assess the responsibility of the AI system as such, but the people within the organization should use AI in a responsible manner.
  • Quality: Though most AI systems are used to make processes more efficient, we should take care that the decisions the AI system takes keep up with current standards – and why not use AI to improve the quality of the decisions?
  • Trust: Everyone who comes in contact with you or your company must have trust and confidence in how you do business. If you’re using AI, your AI should be trustworthy too. A lot of companies don’t make their AI public because of these trust issues, but that’s not very ethical.
  • Respect: We respect the laws, the people we work with, the company and its assets, and ourselves. The AI should also respect the experience and knowledge of its human co-workers. For instance, we allow employees to override decisions made by the AI when they deem those decisions to be wrong. The other way around, the AI shouldn’t be sabotaged by its human co-workers either.
  • Teamwork: AI should collaborate with humans and enhance our human capabilities. Replacing humans completely by AI is at this moment not really feasible from an ethical standpoint.
  • Leadership: Managers and executives should uphold the ethical standards for the entire organization. A leader should make clear where and when AI is used and be open on the consequences of the AI for the organization and its people.
  • Corporate citizenship: Every company should strive to provide a safe workplace, to protect the environment, and to be a good citizen in the community. The AI shouldn’t make that worse; it should improve the citizenship of the organization.
  • Shareholder value: Without profitability, there is no company. AI should contribute to the profitability of the organization.
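
For the integrity value above, monitoring can be made concrete. A common drift metric is the population stability index (PSI), which compares the data the AI sees today against the data it was validated on. A minimal sketch; the 0.2 alert threshold is a widely used rule of thumb, not a standard:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI: compares today's model inputs or outputs ("actual") with the
    distribution the system was validated on ("expected")."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    e_hist, _ = np.histogram(expected, bins=cuts)
    a_hist, _ = np.histogram(actual, bins=cuts)
    e_pct = np.clip(e_hist / len(expected), 1e-6, None)   # avoid log(0)
    a_pct = np.clip(a_hist / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Rule of thumb (an assumption, not a standard): PSI above 0.2 means the
# data has shifted enough that the AI's decisions need re-validation.
```

Running this periodically against the logged inputs and outputs of the system is one practical way to notice that a learning machine is drifting away from the goals it was assessed on.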

Look at AI not as a machine, but as a virtual employee. You should set the same standards and requirements for the behavior of this virtual employee. The AI should behave in the same manner as an employee – or better, for that matter. If you want your employees to behave responsibly, so should your AI. Don’t take for granted that the AI will behave without being instructed to do so. And don’t assume that the supplier of your AI software has made that assessment for you.

I don’t want to discuss the formal (or legal) responsibilities of AI here. The question “Who is responsible when my AI makes a wrong decision?” is still unanswered; at the very least, you should be aware of the ongoing discussion. And because you’re going to use an AI system within your specific business context and processes – independent of the way you acquired and trained the AI – you should also take responsibility for the ethical consequences of your AI implementation. The business ethics of your organization will help you assess that specific AI implementation – and keep checking whether the AI starts misbehaving over time.

Wrapping things up

In these three blog posts, I’ve tried to explain that assessing the ethics of AI is not something you can do without the context of its use. AI will always be deployed as part of a product or service, or as a device enhancing business processes. Broadening the scope may seem to make the assessment more complex, but I’m convinced that the larger scope will enable us to gather more precise and context-sensitive values, norms, and requirements.

Using existing frameworks for design ethics and business ethics will help you answer questions around AI and ethics. These established methods are ready to be used. So don’t let unanswered ethical questions stop you from exploring the possibilities of AI. Because they can be answered – and you can answer them too.

Capgemini believes that using AI will contribute to the profitability of your company, the people using AI directly and indirectly, and the world we live in. So let’s start using AI, in an ethical manner.

This section of the article has been previously published on the Capgemini Insights & Data Blog.

Photo: Ethics wordcloud – CC0 Public Domain by SVG SILH