Ethical Reasons
The most important part of identifying ethical risks, the first phase of ethical decision-making, is ensuring that sound ethical reasoning is employed as products and processes are examined. This chapter examines some of the kinds of reasons that play a role in identifying and resolving ethical risks.
What is Ethical Reasoning?
Ethical reasoning is the process of identifying and evaluating reasons for action. Ethical reasons, in turn, are considerations that count in favour of, or against, a particular course of action.
Ethical reasoning helps to:
Make decisions consciously and reflectively.
Justify actions in terms that others can understand and accept.
Avoid bad decisions due to bias or fallacies.
Types of Reasons
Three main categories of reasons commonly play a role in identifying and resolving ethical risks: rights and their corresponding duties, certain kinds of interests, and values.
Rights are entitlements to act or be treated in certain ways. They generate weighty reasons, often amounting to obligations, in ethical decision-making.
Interests, particularly economic interests, typically generate reasons that reflect concern for tangible benefits, such as jobs, money, or economic opportunity.
Values generally present reasons for respecting what people find desirable, such as autonomy, spirituality, or fairness.
It is important to bear in mind that the lines between these categories are blurry. For example, certain interests, like an interest in not being assaulted, are also rights; and certain values, like fairness, can generate important obligations much like rights do. The categorization presented here merely serves to illustrate that different kinds of reasons often play different roles in moral reasoning: reasons based on rights and duties, for example, are generally weightier than reasons based on general values or economic interests.
Rights and Duties
Reasoning about what to do often includes thinking about rights and duties. One view of rights is that they describe what we can reasonably expect other people, and society, to do to protect our interests. Many rights protect basic needs and interests, like our rights to life, bodily integrity, free speech, and privacy. The UN Universal Declaration of Human Rights contains a list of rights that are commonly regarded as fundamental. Rights often correspond to duties. For example, your right to bodily integrity corresponds with your doctor's duty not to operate on you without your consent.
Rights and duties function as guardrails in situations that require trading off competing reasons. Rights are sometimes grounded in the notion of human dignity, which explains why there are certain things we must never do to others, or make others do, even if doing so could benefit a larger number of people. For example, your right to life and bodily integrity protects you from having your organs harvested by your doctor, even if your organs could save several other people's lives. Correspondingly, your doctor has a duty to respect your right to life and bodily integrity.
Rights and their corresponding duties provide strong reasons against actions that would transgress those rights. If a particular product would violate people's rights, this provides a strong reason, perhaps even amounting to a duty, to rethink how the product is developed. The right to privacy, for instance, imposes a duty on healthcare providers to protect our health data, even when making this data available to researchers could lead to faster advances in drug development.
Sometimes, different people’s rights will clash, and moral reasons will pull in different directions. In these cases, the aim of moral reasoning is to work out which rights, and which duties, are strongest.
Interests
Interests are possessed by people, groups, and other sentient beings such as animals. To have an economic interest in something is to have a stake in a tangible benefit, such as jobs, money, or economic opportunity. Often several parties have an interest in the same economic benefit, and their interests need to be traded off against each other. For instance, platforms like Google Play or Apple's App Store distribute apps to users in exchange for a share of the revenue. App developers have an interest in decreasing the share the platforms take, while the platforms have an interest in keeping their share high.
Interests also come into play in discussions of broad values like privacy, dignity, and liberty. For instance, the interests that social media users have in the liberty to express themselves can sometimes clash with the interests of other users in maintaining their dignity, which can be harmed by hateful speech.
Values
Values reflect conditions that are desirable or believed to be desirable. There is broad consensus about many values at a general level, such as fairness, knowledge, or autonomy. But agreement at such a general level masks disagreement about the specific meaning of each value, and the weight of each value when values conflict. In addition, there are some values that are not universally shared. Spirituality and curiosity are examples of values that some people find extremely desirable, while others do not.
Once rights and basic interests have been accounted for, it is useful to engage with groups potentially affected by a product to understand which values matter to them in a given context. For example, two app users may have the same interests in affordability, reliability, and functionality, while holding different views about which privacy protections are important. This kind of engagement is always a work in progress, because our values evolve, not least in reaction to new technologies.
Stumbling Blocks for Ethical Reasoning
It is worth being aware of some assumptions that can hamper efforts to identify and address ethical issues. These include:
Technical solutionism
Moral relativism
Moral righteousness
Technical Solutionism
Technical solutionism assumes that technology is the best way to solve any ethical issue. Emerging technologies not only create ethical challenges; they can also be an important part of the solution. For instance, Facebook has made significant progress in building algorithms that detect hate speech. Facebook started reporting the accuracy of these algorithms in Q4 of 2017. Back then, the algorithms were able to detect 25% of the hate speech that was eventually removed from the platform. By the summer of 2020, improved algorithms were able to detect 95% of the hate speech that was eventually removed.
Such advances in putting AI to use to solve an ethical issue are impressive. But it would be a mistake to assume that technology can solve any ethical issue. In fact, the Facebook example shows why technical solutionism is mistaken: in order to train its algorithms, Facebook first has to define standards for hate speech, and it must continuously revise and refine them. Effectively operationalizing the notion of hate speech, in a way that captures its nuance, history, and changing cultural norms, requires a good deal of ethical reasoning. Defining these standards, as Facebook well knows, requires input from many global experts and stakeholders.
Technical solutionism is right that technology can play a significant role in implementing solutions to ethical challenges. But, for the time being, humans cannot hand off to machines the task of reasoning through ethical challenges.
Moral Relativism
Moral relativism assumes that what it means to do the right thing varies from person to person, or from community to community. Ethics, on this view, is hopelessly subjective: there is no objective value framework for determining whether decisions are right or wrong.
Moral relativism seems to make it easy to handle disagreements about ethical choices. Many ethical risks are controversial. For example, should we prioritize privacy through end-to-end encryption, or provide safety by screening chats for attempts to spread misinformation, which requires unencrypted communication? Such tradeoffs are difficult to make, and people passionately disagree about the right course of action.
But this does not mean that there can never be right answers, or that ethical reasoning about emerging technologies is futile. Some arguments about what we ought to do are supported by stronger reasons than others. While it may not always be obvious what "the right thing to do" is, or whether there is only one right answer in a given case, it is often possible to distinguish better and worse courses of action, and to identify the course of action for which there are the strongest reasons. Contrary to what some moral relativists might suggest, moral disagreements should make us take ethical reasoning more, not less, seriously.
Relativism is right that we should be tolerant of difference. After all, often there are multiple valuable perspectives on any difficult ethical issue. But, taken seriously, relativism commits us to the view that there is no right or wrong, because different people hold different views, all of which are equally valid. It is therefore important to draw a line between relativism—the view that anything goes—and pluralism—the view that there may be multiple valid perspectives. To tolerate different views, we need not take the relativist position that there simply is no one right thing to do.
Moral Righteousness
The flipside of the relativist position is moral absolutism, which can take the form of moral righteousness: the view that anyone who does not agree with the ethical position you hold to be true is wrong, and perhaps therefore also a bad person. The danger of moral righteousness is that it can make you unwilling to listen to people who take a different view. The morally righteous tend to stick to like-minded people and to despise those who disagree.
Because ethical reasoning depends on considering different reasons and arguments, it requires a certain degree of open-mindedness and intellectual humility; that is, the willingness to recognize that we might have been wrong about our moral judgements when others present us with better reasons for different judgements.
Reasoning Fallacies
Fallacies are a common source of failures in moral reasoning: they are general patterns into which bad arguments fall. Here are a few of the most common types of fallacies:
Overgeneralization and stereotypes: overgeneralizations are assumptions about a group of individuals, or a broad range of cases, based on an inadequate sample; stereotypes about people are a common example. These include statements like "computer scientists are shy and nerdy" and "men are bad at learning languages."
Slippery slope: a slippery slope argument claims that an otherwise permissible action would lead to a chain reaction with ultimately catastrophic consequences (e.g., "if we allow preimplantation genetic diagnostics, we'll end up designing babies according to parents' wishes"). Slippery slope arguments deserve careful scrutiny: while some are good arguments, often there is little evidence that the alleged slippery slope will indeed occur.
Appeal to authority: sometimes we try to support our reasons by appealing to respected authorities. But even authorities can be wrong, and people who are authorities on one subject are not necessarily authorities on another. Being an authority does not automatically make one right.