Identifying Ethical Risks
Ethical reasoning begins with anticipating and identifying ethical risks. This is more challenging than it might seem. Ethical risks arise in many different domains, from privacy to sustainability, and they often become painfully apparent only once it is too late: after some harm has been caused or data has been compromised.
Think of the Cambridge Analytica scandal, which exposed how user data gathered from Facebook was illegitimately used to manipulate voters through political campaigns. This became a prominent topic of discussion only after a significant amount of data had already been compromised. Anticipating ethical risks before your work generates negative impacts is far harder.
This chapter explores some of the reasons why potential ethical risks might be missed and outlines some practical tools and practices that can help in anticipating and identifying ethical risks.
Why Are Ethical Risks Often Overlooked?
There are many reasons why ethical risks raised by emerging technologies are missed. Thinking through the ethical implications of your work isn’t easy, and often involves a great deal of uncertainty about potential outcomes. That said, there are a few common reasons worth bearing in mind:
Failure to consider key stakeholders’ rights, interests, and values. Recall the case of Facebook Portal. One issue was that the smart camera Facebook had developed tracked people of color less reliably than white people, because the training dataset consisted mostly of white people and failed to represent people of color.
The assumption that ethical risks are someone else’s problem. Very few (if any) people working in organizations have ethics as their primary responsibility. Rather, people have responsibility for a certain aspect of a product or a certain technical area of expertise. Ethical risks often arise between and across areas of responsibility. Since ethics is nobody’s explicit responsibility, you can easily fall into the trap of assuming that someone else is going to take care of it.
The lack of clear and consistent practices of anticipating and identifying ethical risks. Teams working on emerging technologies often work in a high-pressure environment, having to meet demanding production targets. Considering the ethical implications of this work can seem like a time-consuming distraction. As a result, many organizations lack regular practices for systematically anticipating and identifying ethical risks and monitoring the impact their products have on stakeholders.
The underappreciation of ethical expertise. Ethicists are highly trained experts in identifying, preventing, and resolving ethical risks, yet most organizations either do not employ ethicists at all or hire unqualified people into ethics roles. Ethics responsibilities are often handed to professionals who care deeply about the world but lack appropriate training in navigating ethical grey areas. Being strongly motivated to do the right thing is not sufficient to make one an expert in ethics. While the ethical integrity of an organization is everyone’s responsibility, ethical expertise—the ability to identify and achieve a stable balance between competing values and translate principles into effective policies—generally requires an advanced degree and several years of research and practical experience.
Anticipating Ethical Risks
The four reasons above for why ethical risks easily get missed suggest four corresponding strategies, each with guiding questions, for becoming more sensitive to ethical risks:
1. Consider affected stakeholders' rights, interests, and values:
Who are all the stakeholders who will be affected by our product?
Are their rights, interests, and values adequately protected?
How do we know what their interests and values are—have we asked?
2. Give everyone responsibility for identifying ethical risks:
How do we treat team members who raise potential ethical risks? Is this something our culture encourages?
Do we have ways for rewarding and celebrating team members that bring ethical risks to our attention?
Does everyone on the team feel in charge of looking for cross-cutting ethical risks?
3. Build and conduct regular exercises for anticipating and identifying ethical risks:
Do we have regularly scheduled ethical risk-sweeping exercises?
Do we collect and analyze data to identify potentially harmful impacts on stakeholders?
Are there channels and forums for people on the team to raise ethical risks, and do people feel empowered to use them?
4. Engage with experts in risk assessment:
Have we engaged with ethicists who have expertise in identifying, preventing, and resolving ethical risks?
Have our ethical responsibilities been handed to those with the relevant expertise and training, rather than merely to those with genuine care?
Tools for Identifying Ethical Risks
There are also three tools that can help teams identify ethical risks. These are:
Consequence scanning to prompt ethical themes.
Scenario analysis to improve foresight.
Stakeholder engagement to incorporate outside perspectives.
Consequence Scanning
Ethical themes are concepts that tend to capture the most important areas of ethical concern in emerging technology. Product teams can use ethical themes to help identify common ethical risks by means of consequence scanning. This method makes use of ethical themes to trigger people’s intuitions about a wide range of possible impacts of a product.
The great benefit of ethical themes is that they are general and can be used throughout the product development lifecycle. Their downside is that they are less sensitive to particular contexts: they do not help much with thinking about novel problems or different stakeholder perspectives.
Scenario Analysis
A good way to structure a foresight exercise is to brainstorm and discuss possible future scenarios: little storylines that explore the “what if” of a product’s use and misuse.
Scenario analysis can be structured in accordance with well-known ethical risks posed by emerging technologies. For instance, product teams can ask themselves:
What if our product contributed to the emergence of a surveillance state?
How could our product contribute to this?
What other ways might our product be used to violate privacy?
Stakeholder Engagement
Bias and the limitations of our own perspectives can easily make us overlook important ethical risks. Therefore, the ability to engage stakeholders and experts is a crucial part of ethical decision-making. Product teams can consider three strategies for stakeholder engagement:
Perspective taking: Product teams look for people or groups affected by their product and engage in imaginative perspective-taking.
Expert consultation: Product teams consult representatives of a given stakeholder group or experts on a stakeholder group or type of ethical risk.
Focus group: Product teams conduct focus groups with members of an affected group to hear about their perspective first-hand.
Using Regulations, Standards and Human Rights to Identify Ethical Risks and Understand Ethical Requirements
Since many ethical risks created by emerging technologies are new or ill-understood, it is often difficult to capture an ethical risk in a concrete way. It is one thing to understand that your product poses a privacy risk, and another to describe this risk in a way that captures its salient features. Applicable regulations, standards, and human rights documents can all help in concretizing ethical risks.
Compliance with regulations and standards is different from ethics in that they capture only what you are legally required or expected to do, while ethics is about what you ought to do. Nonetheless, regulations, standards, and human rights are all critical tools for identifying ethical risks.
Regulations
A regulation is a set of rules made by a sovereign legislative body, often in consultation with subject matter experts. In contrast to ethical frameworks, regulations have legal standing and are enforceable. Regulation has several purposes and often results from compromises struck by policymakers with various different motives. However, one common function of regulation is to concretize ethical minimum standards. Regulation is often a useful way of screening out options for action that fail to meet ethical requirements.
Because not all regulatory authorities are equally legitimate, however, it is important to view regulations critically. Regulations promulgated by a regime with a poor human-rights record may be less legitimate than regulations enacted by a government with a strong commitment to human rights.
Standards
Standards are rules or guidelines generally created by industry and civil society organizations, such as the International Organization for Standardization (ISO), the Institute of Electrical and Electronics Engineers (IEEE), and the Forest Stewardship Council (FSC). They serve to establish common norms for interoperability, product quality, and professional conduct, within the boundaries established by regulation. Standards are also an important mechanism for establishing and interpreting ethical requirements for emerging technology.
Standards often focus on implementing legal regulations and ethical best practices. They are well-suited to provide concrete technical guidance and policy details that can help to concretize ethical risks. Standards aim to be in sync with the views of experts on how best to operationalize an ethical requirement.
Standards set by these organizations are not legally binding and are therefore not legally enforceable. However, standard-setting bodies can penalize compliance failures by refusing to certify products, revoking membership in professional associations, or resorting to public shaming.
Typically, standards are not developed democratically but represent the opinions of experts. Like legal regulations, therefore, their legitimacy is not guaranteed. Standards may not always take due account of different interests and viewpoints of the people affected by them. But they can be a useful starting point for determining ethical requirements.
Human Rights
Human rights are rights we have as human beings. They create a protective zone around persons and allow them the opportunity to further their valued personal projects without interference from others. Examples of human rights include:
Security of the person
Due process and a fair trial
A right to own property
Freedom of movement
Freedom of speech and political participation
Freedom from discrimination
Freedom to marry
The right to work
Religious freedom
Human rights complement standards and regulation when filtering permissible options. Regulation and standards are detailed and often apply to specific emerging technologies. Since the development of regulation and standards takes time, they tend to lag behind recent advances in emerging technology. By contrast, human rights lack the specificity of law and regulation. Yet human rights apply generally to all organizational activity and are widely accepted as ethical minimum standards. Organizations can use human rights to explore whether a given product may cause unacceptable negative impacts, even in areas where regulation and standards do not provide sufficient guidance.
An authoritative list of the core internationally recognized human rights is contained in the International Bill of Human Rights, consisting of the Universal Declaration of Human Rights and the main instruments through which it has been codified: the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights. Coupled with the principles concerning fundamental rights in the eight ILO core conventions as set out in the Declaration on Fundamental Principles and Rights at Work, these documents provide a benchmark against which social actors assess the human rights impacts of business enterprises.
Human rights can be useful in concretizing ethical risks, particularly in areas where regulation and standards have not yet been developed or are lagging behind recent advances in emerging technology. The process for taking human rights into consideration includes:
1. Identify human rights relevant to your product.
2. Ask: Are there risks that our product will fail to respect human rights?