Ethical Decision-Making
Emerging technologies raise questions not only about what we are able to do, but also about what we should do. As we have seen, the track record of companies and other organisations in identifying and addressing ethical issues before they harm society is not impressive.
Technologists play a critical role in addressing ethical issues. They are often the first who are in a position to identify ethical issues during product development, and they can play a key part in reasoning through and resolving these issues.
To apply ethical reasoning to emerging technologies effectively, a systematic approach is recommended. This chapter provides a broad overview of such an approach, detailing the three primary phases of ethical decision-making. Using the case of the AI-powered Portal platform developed by Facebook, it then demonstrates these three phases in a real-life product development scenario.
Ethical decision-making can be broken down into three phases:

1. Product teams identify ethical risks. To do this in a structured way, teams can employ ethical foresight methods and engage with stakeholders.

2. Product teams understand ethical requirements. This requires identifying which options are permissible in the light of regulation, standards, and human rights.

3. Product teams select options for action. This involves generating options for addressing the ethical risk. Among permissible options, organisations need to weigh competing considerations to arrive at a choice that is supported by good reasons.
Over the past few years, important global actors, including governments, large tech firms, and professional organisations, have introduced frameworks for ethical decision-making, such as the Montreal Declaration, the EU Ethics Guidelines for Trustworthy AI, and the OECD Principles on AI. These frameworks incorporate different aspects of the ethical decision-making process.
Portal is a Facebook video-calling product with a smart camera that uses computer vision to dynamically frame shots during calls. Instead of requiring you to move your device around yourself, the camera automatically follows you while you speak, using facial recognition.
During a pre-launch test at Facebook, Lade Obamehinti—who was then in charge of technical strategy—noticed something strange. While she was talking, the camera focused on her colleague instead of her. Lade is Black. Her colleague is white.
When Lade investigated the issue, she discovered that the software had been trained on a non-representative dataset. The huge library of faces used to train Portal's AI contained mostly faces of white people. This made the AI very good at spotting and following white faces, but much worse at doing the same for people of color. As a result, Portal had a racial bias built into the product, causing it to effectively prioritize white faces over the faces of people of color. The bias was not intentional; the problem was that the data most convenient for the engineers to use was non-representative.
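The kind of gap Lade observed can be surfaced before launch by evaluating a model's performance separately for each demographic group, rather than reporting a single aggregate accuracy. The sketch below is illustrative only: the `samples`, groups, and toy `detector` are hypothetical stand-ins, not Facebook's actual data or code, but the disaggregated evaluation pattern is the general technique.

```python
# Sketch: disaggregated evaluation of a face detector.
# All names and data here are hypothetical; the point is that computing
# a detection rate per group exposes bias that an overall average hides.

from collections import defaultdict

def detection_rate_by_group(samples, detector):
    """samples: list of (image, group) pairs; detector(image) -> bool."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for image, group in samples:
        totals[group] += 1
        if detector(image):
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy stand-in for a model trained mostly on one group: it only
# "detects" the images it resembles from training.
samples = [("face_a", "group_a")] * 100 + [("face_b", "group_b")] * 100
detector = lambda image: image == "face_a"

rates = detection_rate_by_group(samples, detector)
# rates == {"group_a": 1.0, "group_b": 0.0} — an aggregate accuracy of 50%
# would obscure this; the per-group breakdown makes the disparity obvious.
```

In practice a team would run this kind of breakdown on a labelled, demographically annotated test set and treat a large gap between groups as a release blocker, exactly the sort of check that a pre-launch review can catch.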
Lade did not see this as a simple technical bug, but as a moral challenge. Facebook risked valuing convenience in the development process over basic fairness and inclusion, and exacerbating racial inequality.
Lade not only sought to resolve the problem in this particular context, but also to apply the lessons of this experience to the design of future products. She did so by developing a framework for inclusive AI, which teams at Facebook now use to reduce bias in their products.
The three phases of ethical decision-making can be clearly seen in the case of Facebook Portal. First, Lade identified an ethical risk: the software had a racial bias built into it, causing it to prioritize white faces over the faces of people of color. She then recognised the ethical requirement: it would be morally wrong for Facebook to release a piece of software that could exacerbate racial inequality. Finally, Lade selected an appropriate course of action: committing to develop a framework for inclusive AI to reduce bias.