Causes of Ethical Failures
As shown, ethical dilemmas can affect companies of every kind, in ways large and small. It is safe to say that every company has faced or will face ethical challenges while developing or using emerging technologies. Recognizing when ethical breakdowns can happen, and why, is essential for anticipating potential pitfalls and limiting their effects on the business. This chapter outlines four common sources of ethical failures:
Ethical failures due to bad intent.
Ethical failures due to a lack of awareness.
Ethical failures due to a lack of analysis.
Ethical failures due to a lack of governance.
Ethical Failures Due to Bad Intent

News coverage tends to attribute ethical failings to bad intent or gross disregard for societal interests. We can understand bad intent in light of the model of ethical decision-making. For instance, the British consulting firm Cambridge Analytica collected personal Facebook data without users' consent. The data was gathered through a quiz app running on Facebook, whose questions were used to build psychological profiles; the app also collected the personal data of users' Facebook friends. The UK Information Commissioner's Office found that Cambridge Analytica's abuse of Facebook user data was intentional. In fact, the abuse of American voters' data was at the heart of the analytics service Cambridge Analytica provided to political campaigns, including the 2016 presidential campaigns of several US candidates.
Psychologists have suggested that we tend to attribute others' negative behavior to bad intentions while excusing our own bad behavior as a product of circumstances (the fundamental attribution error). As a result, we may overestimate the role of intentions in ethical behavior. This matters because it means that having well-intentioned teams in our organizations is not sufficient to prevent ethical failure. While bad actors are responsible for some ethical failures, other causes are more common.
Ethical Failures Due to a Lack of Awareness

Some ethical issues are like birds: they can be difficult to spot, even when they are in plain sight. This is particularly true of the ethical impacts of emerging technology, which are not yet well understood. Consider the case of Facebook Portal. Thanks to the intervention of a single employee, the company noticed that its smart camera was less able to recognize people of color than white people. While Facebook narrowly avoided the consequences of this ethical failure, the fact that the problem went undetected until just before shipment is troubling. The case illustrates that sometimes the biggest challenge in preventing ethical issues is spotting them in the first place.
Ethical Failures Due to a Lack of Analysis

Some ethical issues are complex: addressing one dilemma often creates another, and a solution to one ethical issue may conflict with other ethical principles. Consider the case of Apple vs. the FBI. In the wake of the December 2015 terrorist attack in San Bernardino, a federal judge ordered Apple to provide technical assistance to the FBI in accessing the information on a suspect's iPhone, in the hope of uncovering additional threats to national security. To decide whether to comply with the court order, Apple had to weigh considerations of safety and security against its users' privacy. Apple sent engineers to advise the FBI but refused to comply with the order to bypass the phone's security measures. A number of major tech firms filed amicus briefs in support of Apple, while the White House and Bill Gates stood behind the FBI. The tradeoff Apple faced was difficult, and the case remains as controversial today as it was at the time.
Ethical Failures Due to a Lack of Governance

Governance is the system of rules, practices, and processes by which a firm is directed and controlled. It answers the question: who decides what, and how?
Many technology firms rely on metrics to govern the development and use of emerging technology. While metrics can be a powerful lever of innovation, overreliance on them can lead to problematic outcomes. YouTube's recommendation algorithm aims to direct users to content they will like, based on their previous viewing habits and searches. This seemingly innocuous feature has given rise to numerous scandals: YouTube has been found to promote terrorist content, extreme hatred, and conspiracy theories. Research by DeepMind has shown that feedback loops in recommender systems can give rise to "echo chambers," which can shift a user's worldview. Guillaume Chaslot, who worked on YouTube's artificial intelligence recommendation engine, suggests that the root cause of these failures was bad incentives: software engineers were given a single metric to optimize, the time people spend on YouTube. Misinformation, violent content, and divisive content all drive engagement. Because the product teams working on the recommendation engine were incentivized to single-mindedly maximize engagement, other values were predictably sidelined.
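To see how optimizing a single metric can produce this kind of feedback loop, consider the following minimal sketch in Python. It is an illustration under stated assumptions, not a description of YouTube's actual system: the two content categories, the watch-time numbers, and the epsilon-greedy policy are all hypothetical.

```python
import random

random.seed(0)

# Toy assumption: "divisive" items hold attention slightly longer on average,
# mirroring the observation that divisive content drives engagement.
# These numbers are illustrative, not real measurements.
MEAN_WATCH_TIME = {"neutral": 4.0, "divisive": 7.0}  # minutes per view


def watch_time(category):
    """Simulate how long a user watches one recommended item."""
    return max(0.0, random.gauss(MEAN_WATCH_TIME[category], 2.0))


def run(rounds=10_000, epsilon=0.05):
    """A greedy recommender that optimizes one metric: total watch time.

    With probability epsilon it explores a random category; otherwise it
    exploits whichever category has the highest observed mean watch time.
    """
    totals = {"neutral": 0.0, "divisive": 0.0}
    counts = {"neutral": 1, "divisive": 1}  # start at 1 to avoid div-by-zero
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.choice(["neutral", "divisive"])
        else:
            choice = max(counts, key=lambda c: totals[c] / counts[c])
        totals[choice] += watch_time(choice)
        counts[choice] += 1
    return counts


if __name__ == "__main__":
    counts = run()
    share = counts["divisive"] / sum(counts.values())
    print(f"Share of recommendations that were divisive: {share:.0%}")
```

Because divisive items hold attention slightly longer, the greedy policy quickly funnels nearly all recommendations toward them. Nothing in the objective represents the values that get sidelined, which is the governance failure the YouTube case illustrates.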