Accountability Risks
What Is Accountability?
Accountability is the practice of holding agents responsible for outcomes, processes, actions, or intentions. It operates in the context of a social relationship: someone is expected to provide an account to someone else, for some state of affairs, according to some set of standards.
Accountability comes with the possibility of sanction. Accounts that are inadequate or reveal misconduct invite blame and punishment, while accounts that are satisfactory or reveal excellent conduct invite praise and reward. Accountability operates retrospectively, aiming to assess why and how events occurred. But relationships and mechanisms of accountability are also supposed to incentivize responsible conduct prospectively. Knowing that we may be called to account in the future can and should lead us to act well and prepare to explain our choices.
Accountability can be understood as a virtue and as a mechanism. As a virtue, accountability means reliably answering for our actions. An accountable individual, system, or organization is one that regularly accounts for its behavior to the relevant observers and according to the appropriate standards. Accountability mechanisms are processes and frameworks that facilitate these forms of account-giving and sanctioning.
Accountability is closely related to responsibility, but the two are distinct concepts. Responsibility primarily refers to blameworthiness. To be blameworthy for some bad event, an individual must have been able to foresee the possibility of this event and control the actions that led to it. In many cases, you can be both responsible and accountable for something. If you borrow a relative’s car and damage it through reckless driving, you may be both responsible and accountable. However, you can also be accountable for an event without being responsible for it. If your dog destroys your neighbor’s garden, you might not be responsible for this. But you could certainly be held accountable. Similarly, if your subordinate makes a costly mistake in engineering a product, you may be accountable without being responsible.
Accountability in Emerging Technologies
Accountability takes on particular importance in emerging technologies for several reasons.
Emerging technologies contain tremendous potential for harm and benefit. Those who are harmed are entitled to an answer for what went wrong, to identify and confront responsible parties, to be compensated, and to be assured that the problems have been resolved.
The complexity of how technologies work creates additional grounds for concern. Those who design, use, or are affected by emerging technologies may not fully understand how they work. Certain technologies, such as those that rely on machine learning, are inherently opaque, making it difficult for individuals to comprehend the reasons behind decisions. When opaque technologies operate in combination with each other or with other complex technologies, tracing mistakes becomes increasingly hard.
Emerging technologies often involve contributions from many different individuals, organizations, and technical systems. The problem of “many hands” refers to the idea that accountability suffers the more contributors there are involved in a product or event. We may not know why a product malfunctioned, or who is responsible for the malfunction, because the malfunction was the result of the convergence of innumerable actions.
Decisions in emerging technologies are increasingly made with less and less human control. When decisions are made by AI, it is often not clear what or whom to hold responsible for things that go wrong. This is sometimes called an accountability gap.
Finally, legal regulation and standards traditionally play a large role in establishing accountability for products and services. Regulation and standards often lag behind the development of emerging technologies, meaning that product developers are left without authoritative guidelines on whom they owe answers, for what, or why. Despite the lack of guidance, technology producers are still expected to take prudent steps toward accountability.
Identifying Accountability Risks
Common sources of accountability risks include technical, organizational, and regulatory issues. Each of these areas can limit the ability of an organization to account for adverse decisions or consequences. The next few sections describe these sources in more detail.
Technical Accountability Risks
Accountability risks for emerging technologies at the technical level include:
Black box processes: using techniques like machine learning that are inherently opaque creates more opportunities for accountability failures.
Use of third-party components: the origins and qualities of third-party data sets, products, and tools may be unknown, insecure, erroneous, or biased, making it difficult to trace processes and decisions.
Lack of documentation of decisions and processes: when people are working quickly, it is easy to forget to document decisions made and the rationales behind them; but this creates more problems down the line.
Task delegation to autonomous systems: it is often unclear where responsibility lies when autonomous systems result in controversial or harmful actions.
Processes in decentralized systems: decentralized processes can involve automated execution of code, for which no human user is accountable, and might obfuscate activities of malicious actors.
Organizational and Regulatory Accountability Risks
Accountability risks at the organizational and regulatory levels include:
Lack of legal regulation and common standards: regulations that are vague, contradictory, or missing make it difficult for technology producers to know how to account for their actions, and to whom.
Lack of internal guiding principles: teams and organizations that do not establish shared guiding principles lack clear expectations of goals, limits, and virtues.
Lack of clear lines of responsibility and liability for decisions: confusion about roles, or who is liable for what kinds of decisions, makes it difficult to attribute responsibility and sanctions.
Lack of external oversight: organizations and teams that do not invite scrutiny from external auditors are less likely to establish strong accountability protocols.
Lack of an accountable culture: an organization’s culture can undermine accountability by tolerating or rewarding behaviors that oppose it, such as secrecy or blame-shifting.
Crisis conditions: accountability risks often emerge in times of emergency, when normal processes and oversight are bypassed.
Accountability Tradeoffs
Promoting accountability may come at the cost of other values. In particular, accountability may raise concerns in organizations where efficiency, power, growth, and profit tend to be prioritized.
Efficiency Tradeoffs
Data-centered technologies, including big data and AI, enable more efficient decisions. For example, algorithmic decision-making may be faster, more consistent, and more objective than human decisions in many circumstances. However, the opaque nature of these processes makes it more difficult, and in some cases impossible, to understand how decisions are made.
There are many ways and contexts in which increased efficiency complicates accountability. For example, many people worry that we are moving towards a future in which AI systems increasingly decide and act without direct human control, including when it comes to life and death decisions. In extreme cases, one concern is that increased efficiency may also increase the risk of collateral damage. Who is to blame for a malfunctioning autonomous weapons system causing a civilian massacre? And who is to blame for a self-driving car running over a pedestrian? The company that made the car, those who designed the software, the authorities that permitted using the car, or the people who used it?
There are immense incentives to create powerful, hyper-efficient decision-making algorithms—even if this comes at the cost of transparency and, ultimately, accountability. Remember the flash crash of 2010, in which the Dow Jones plunged by almost nine percent in just 15 minutes, baffling Wall Street traders and the rest of the world? This was partly a result of the combination of extremely powerful, but also unpredictable and inexplicable, algorithms controlling financial markets.
There is also a more basic way in which efficiency and accountability may come into tension. Taking steps to check and validate work, document processes, review decisions, and prepare materials for oversight takes time, skill, and energy. It is often faster and easier to break things and ask forgiveness later. This attitude is understandable in the face of accountability measures that are poorly designed. But neglecting accountability measures can be extremely dangerous, especially when people's rights are at stake.
Power, Growth, and Profit Tradeoffs
Social media companies have enormous power over what people see on the Internet. But major tech companies were not designed to serve the public good. Private companies’ primary purpose is to generate growth and profits. The surveillance-advertising business model, for example, uses content-selection algorithms to maximize user engagement and thereby harvest more data. These algorithms promote provocative, outrageous content, "fake news," biases, and so on. This has generated calls for better regulation and greater accountability; for example, when it comes to monitoring content on social media platforms.
Greater accountability sometimes means more restrictions. For example, campaigners have pointed out that teenage suicide cases have risen partly as a result of teenagers’ ability to view distressing material on social media, as well as instructions on how to take their lives. People have therefore called for social media companies to take greater responsibility, partly by removing more potentially harmful content. Greater accountability in this sense, and more effective content moderation, means more restrictions on what companies allow people to put and see on their platforms.
In many contexts, taking accountability seriously also means that product teams and individuals must disclose details about their products, processes, intentions, and ideas. This may compromise a company's ability to maintain a competitive advantage; it may also invade the privacy of individual people as well as close-knit teams. Such situations require tradeoffs between the needs of technology subjects and third parties for accountability and the needs of technology producers, such as their interest in being able to operate with minimal restrictions.
Mitigating Accountability Risks
A variety of strategies can help to mitigate accountability risks. These can be split into those at the organizational level and those at the product design level.
At the organizational level:
Document company policies clearly, and provide them to all design and development teams.
Establish review boards from within or outside your organization to provide oversight of sensitive decisions.
Provide opportunities for stakeholders to report concerns, contest outcomes, and interrogate decisions and processes.
Require credentialing of technical staff and operators of products whenever appropriate.
Cooperate with standard-setting bodies, watchdog organizations, and regulators to improve standards and regulations for your industry.
Consider adopting a fair competition policy that commits you to behaving fairly when competing for customers' business and when placing business with suppliers or offset partners. Such policies include not making false claims or remarks that unfairly disparage competitors, and not improperly interfering with a competitor's business relationships.
Consider adopting an open data policy, which may include a commitment to make data freely available to everyone to use, without restrictions from copyright, patents or other mechanisms of control.
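Several of the points above come down to keeping auditable records of decisions as they are made. As a minimal sketch, assuming an append-only JSON-lines file and illustrative field names (not a prescribed standard), a lightweight decision log might look like this:

```python
# Minimal sketch of an append-only decision log, using a JSON-lines
# file. Field names and example values are illustrative assumptions.
import datetime
import json

def record_decision(path, decision, rationale, owner):
    """Append a timestamped decision record for later audit."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "rationale": rationale,
        "owner": owner,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: recording a sourcing decision with its rationale and owner.
record_decision("decisions.jsonl",
                decision="Use third-party dataset X",
                rationale="No in-house data of sufficient size",
                owner="Data team lead")
```

Because the log is append-only and every entry names an owner, it supports later audits without requiring anyone to reconstruct decisions from memory.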
At the product design level:
Follow the business conduct guidelines/governance provided by your organization.
Establish lines of responsibility and liability for outcomes of each process.
Establish standard operating procedures for workflow and interactions with customers.
Maintain communication with third-party partners, and clarify the division of responsibilities in written contracts.
Where appropriate, consider using visual contracts as a way of creating a binding legal contract without complex legal jargon. A visual contract contains pictures, words, and flow charts, which may be easier for non-lawyers to understand.
Where appropriate, consider using smart contracts: self-executing contracts whose terms of agreement between buyer and seller are written directly in code.
Document and record design processes and decisions with the expectation that you may be audited or investigated.
Pilot-test all products and document results.
Use RACI (Responsible, Accountable, Consulted, Informed) matrices to establish project roles.
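As an illustration of the last point, a RACI matrix is just a mapping from tasks to role assignments, which makes it easy to check programmatically that every task has exactly one accountable owner. The tasks and roles below are hypothetical examples, not a prescribed template:

```python
# Minimal sketch of a RACI matrix as a dictionary.
# Tasks and role assignments are hypothetical examples.
raci = {
    "Model training":   {"Responsible": "ML engineer", "Accountable": "Tech lead",
                         "Consulted": "Data steward",  "Informed": "Product owner"},
    "Release approval": {"Responsible": "Tech lead",   "Accountable": "Product owner",
                         "Consulted": "Legal",         "Informed": "ML engineer"},
}

def who_is(role, task):
    """Return the person assigned a given RACI role for a task."""
    return raci[task][role]

# Sanity check: every task should have exactly one Accountable party.
for task, roles in raci.items():
    assert "Accountable" in roles, f"{task} lacks an accountable owner"
```

Keeping the matrix in a machine-readable form lets a team validate it automatically, for instance as part of project setup.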
Mitigation Tools
Here are just a few of the mitigation tools available to address accountability risks:
The Algorithmic Impact Assessment is a tool that has gained interest in Canada, the EU, and the United States. Modeled on the idea of an environmental impact assessment, it requires entities to forecast and disclose potential impacts of algorithmic systems, provide opportunities for public comment, and respond to those comments. Although initially devised for use by government agencies, the tool can also be modified for use in other settings.
The Responsible AI Design Assistant from AI Global is an online tool that helps users identify various ethical risks in AI models. It pays particular attention to accountability risks.
Data visualization and dashboard reporting can also be used to identify potential errors and malfunctions, as well as to demonstrate performance of products to stakeholders without revealing confidential information.
Researchers at Google have proposed an auditing framework called SMACTR (Scoping, Mapping, Artifact Collection, Testing, and Reflection) for use in various organizations. The framework comes with templates for adaptation.
Human in the loop refers to ways of ensuring that human beings are involved in sensitive decision-making by autonomous systems. One way to improve the accountability of autonomous systems is to ensure that they are always under some form of human control.
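The human-in-the-loop pattern can be sketched as a simple approval gate: the system proposes an action, and any decision above a risk threshold is escalated to a human reviewer. The function names, risk scores, and threshold below are illustrative assumptions, not part of any standard:

```python
# Minimal sketch of a human-in-the-loop gate. The autonomous system
# proposes an action; a human must approve high-risk decisions.
# Function names, scores, and the threshold are illustrative.

def propose_action(input_data):
    """Stand-in for an autonomous system's decision (with a risk score)."""
    return {"action": "approve_loan", "risk_score": 0.8}

def human_review(proposal):
    """Stand-in for routing the proposal to a human reviewer."""
    print(f"Review requested for: {proposal['action']}")
    return False  # the reviewer rejects in this example

def decide(input_data, risk_threshold=0.5):
    proposal = propose_action(input_data)
    if proposal["risk_score"] > risk_threshold:
        # High-risk decisions are escalated to a human, preserving
        # a clear line of accountability for the outcome.
        return human_review(proposal)
    return True  # low-risk decisions may execute automatically
```

The design choice here is that the human gate sits on the execution path, not beside it: no high-risk action can proceed without a recorded human decision, which keeps a person answerable for the outcome.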