What’s at Stake
Attention to the ethics of emerging technologies is increasing. Technology offers incredible opportunities to improve lives and make the world better. It also comes with many risks. In recent years, numerous organizations have handled the power of emerging technology badly, with significant detrimental impact on their businesses and on the public at large.
This chapter presents real-world examples of situations in which emerging technologies confronted people with difficult ethical decisions. These cases highlight the importance of ethical reasoning: had the technology professionals involved drawn on it, they could have prevented adverse outcomes, including harm, conflict, financial loss, and embarrassment.
These cases also address common misconceptions about the ethics of emerging technologies, including:
That technological progress cannot be controlled.
That ethical problems in technology have straightforward solutions.
That technology is ethically neutral.
That ethics is somebody else's job.
Technologies often generate moral challenges and controversies by altering people’s incentives, creating new opportunities, or redistributing resources. Consider this recent case:
In 2018, researcher He Jiankui announced that his team had used CRISPR technology to edit DNA in human embryos to make them less susceptible to contracting HIV. CRISPR is a powerful tool for editing genomes. It has many possible applications, including correcting genetic defects, preventing the spread of diseases, and improving crops.
However, its promise also raises ethical concerns. Immediate questions concern the safety and long-term consequences of editing the human genome. Beyond these lie deeper questions about whether, or when, we should treat human beings as "customizable". Should we edit genomes to enhance human well-being beyond normal functioning, or only to cure diseases? Standards of health and disease vary widely across time and place. Many people and groups have regarded homosexuality as a disease, while in some societies schizophrenia can be considered a gift for communicating with the divine. How, then, should we define normal functioning, well-being, and disease?
Technologies create new possibilities for action. As this case shows, they can generate new moral challenges to which we do not yet have answers. People working with emerging technologies (perhaps you are one of them) are the first to encounter these questions.
Technological determinism is the view that technological progress cannot be controlled, and that ethical judgement and regulation are therefore futile. This view is controversial for good reason: technological developments are the result of human choices, not forces of nature. When individuals face difficult choices, societies have many options for incentivizing and disincentivizing different courses of action. Societies also have many options for deciding how to regulate and live with new technologies once they exist.
Technology can both promote and undermine values. It may even change which values guide us in our everyday lives.
Consider the case of Predpol, a software program designed to predict future crime in neighborhoods across the United States. By predicting where crime might strike on the basis of historical arrest data, Predpol aimed to help police departments allocate resources efficiently. In practice, however, it led to the disproportionate prosecution of petty crimes in neighborhoods that are predominantly poor and Black.
This case shows that ethical challenges in emerging technology need not stem from bad intentions. Predpol was nonetheless both inefficient and unfair, because it assumed that arrest data accurately reflect crime. Since arrests in the database had been made disproportionately in poor and Black neighborhoods for petty crimes, the algorithm recommended focusing future efforts on these same areas, creating the risk of exacerbating racial and social inequalities.
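This feedback loop can be made concrete with a small simulation. The sketch below is purely illustrative and is not Predpol's actual model: the neighborhoods, rates, and patrol-allocation rule are all assumptions. It shows how sending patrols wherever past arrests accumulated can entrench an initial disparity even when the true crime rates are identical.

```python
import random

# A minimal, hypothetical model of a predictive-policing feedback loop.
# Two neighborhoods with IDENTICAL true crime rates; "A" merely starts
# with more recorded arrests due to historically heavier enforcement.
TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}
arrests = {"A": 60, "B": 40}

HOTSPOT_PATROLS, OTHER_PATROLS = 8, 2  # patrols follow past arrests
ARREST_CHANCE_PER_PATROL = 0.5         # chance a patrol records a crime

random.seed(0)
for day in range(2000):
    # The "prediction": the neighborhood with more past arrests is the hotspot.
    hotspot = "A" if arrests["A"] >= arrests["B"] else "B"
    for hood in ("A", "B"):
        n_patrols = HOTSPOT_PATROLS if hood == hotspot else OTHER_PATROLS
        for _ in range(n_patrols):
            # Crucially, arrests are recorded only where police are deployed.
            if random.random() < TRUE_CRIME_RATE[hood] * ARREST_CHANCE_PER_PATROL:
                arrests[hood] += 1

share_a = arrests["A"] / sum(arrests.values())
print(f"Share of recorded arrests in A after 2000 days: {share_a:.0%}")
# Roughly 80%, even though the true crime rates never differed.
```

Because arrests can only be recorded where patrols are actually sent, the system's outputs become its future inputs. No malicious intent is required for the data to increasingly "confirm" the initial bias.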
Sometimes the values that technology embodies depend on the context of its use. This means that we cannot fully understand a technology’s potential impact without understanding the different ways in which that technology might be used. This is an issue that Facebook has confronted.
Facebook's mission is to give people the power to build community and bring the world closer together. But in 2017, members of the Myanmar military misused that power horrifically. Exploiting Facebook's wide reach in Myanmar, they carried out a systematic campaign targeting the country's mostly Muslim Rohingya minority group. This is one of the first widely reported instances of an authoritarian government using the social network against its own people. While Facebook took down the official accounts of senior Myanmar military leaders in August 2018, human rights groups and the UN criticized Facebook's response as too slow and ineffective. The violent campaign went undetected long enough for the anti-Rohingya propaganda to incite mass murder, rape, and the largest forced human migration in recent history.
Context matters. Creating technology to connect people can be a wonderful thing. But the same technology that brings people together can be used to incite genocide. One of the things this shows is that we cannot discuss technology in a vacuum, but must understand the different contexts in which it might be used, including unintended ones.
While context matters in evaluating technology, this does not mean that technology itself is ethically neutral. Researchers have argued, for instance, that the use of social media alters social norms. In one study, social media use was correlated with greater support for freedom of expression and reduced support for privacy.
The impact of technology is often ambivalent, with positive and negative impacts going hand in hand. Yet once a technology is widely used, addressing its adverse impacts can be difficult.
Consider the case of credit scores based on data analytics, such as FICO in the United States. From the 1960s onwards, data-driven credit ratings helped to democratize access to credit and provided a check against overt discrimination by loan officers. Today, consumer credit scores are widely used for making loan decisions, setting insurance prices, and even for hiring decisions. Recently, AI and big data have opened up new data sources for assessing creditworthiness. Research showed that an AI algorithm based on five simple digital footprint variables, such as the borrower's device type (e.g. PC or Mac) or email domain (e.g. Gmail or Hotmail), outperformed the traditional credit score model in predicting who is more likely to pay back a loan.
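To see how such a comparison works mechanically, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical: the features, the coefficients, and the fact that the footprint model comes out ahead are assumptions wired into the simulated data, not the study's data or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 20_000

# Hypothetical digital-footprint features (stand-ins, not the study's data).
device_mac = rng.integers(0, 2, n)    # 1 = Mac,   0 = PC
email_gmail = rng.integers(0, 2, n)   # 1 = Gmail, 0 = Hotmail
trad_score = rng.normal(0.0, 1.0, n)  # a noisy traditional credit score

# Repayment probability: the footprint signal is deliberately baked in,
# so the comparison below illustrates the method, not the empirical result.
logit = 0.9 * device_mac + 0.6 * email_gmail + 0.3 * trad_score - 0.2
repaid = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_footprint = np.column_stack([device_mac, email_gmail])
X_traditional = trad_score.reshape(-1, 1)

train, test = train_test_split(np.arange(n), random_state=0)
for name, X in [("traditional score", X_traditional),
                ("digital footprint", X_footprint)]:
    model = LogisticRegression().fit(X[train], repaid[train])
    auc = roc_auc_score(repaid[test], model.predict_proba(X[test])[:, 1])
    print(f"{name:>18}: test AUC = {auc:.3f}")
```

The point of the sketch is the pipeline, not the numbers: once any behavioral trace can be fed into a classifier, it can end up driving lending decisions, whether or not it has an inherent connection to creditworthiness.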
These findings raise several ethical questions: Should Mac users be able to get better interest rates if they are, in general, less likely to default than PC users? Does your answer change if you know that Mac users are disproportionately white? More generally: should companies be allowed to make important decisions, such as access to credit or jobs, contingent on factors that have no inherent connection to what is at stake in the decision?
The impact of credit scoring is ambivalent. On the one hand, it has opened up access for large parts of the population that were previously excluded. On the other hand, within this now much enlarged pool of potentially eligible candidates, credit scoring has the potential to unfairly discriminate. The mechanisms by which credit scoring may discriminate have, however, become more complex and opaque. That generates new challenges for organizations and regulators in avoiding the adverse effects of credit scoring.