Security Risks
One of the most basic requirements of emerging technologies is that they are secure. Core to the idea of security is protecting users and other stakeholders from harm. Keeping data safe and protecting users and infrastructure from attack are central elements of cybersecurity.
What is Security?
Emerging technologies bring new ways of making our lives better and more comfortable. At the same time, they create new risks for harm. For instance, the more our lives depend on the sharing of personal data, the more a data breach exposing personal data can harm us. It might give criminals access to our personal banking environment or personal communications.
Harms can be intentional or non-intentional. Building a secure bridge, for instance, means, among other things, that it doesn’t collapse when someone crosses it. When such security fails, we consider the resulting harm unintended: the collapsing bridge did not mean to harm the people crossing it, and neither did its designer.
Security also appeals to the idea of protection against intended harm. Malicious agents attempt to harm others through blackmailing, IP theft, identity theft, and so forth. Security against intended harm is often more difficult to achieve because it needs to take into account motives and strategies of potential malicious actors.
Security is not always a good thing. A malicious government might use very secure information systems to oppress its population. Criminals might also make use of secure communication channels to facilitate their crimes. Hence, the security of data and infrastructure is valuable only to the extent that it does not facilitate harm.
The security of data-driven technologies rests on three aspects:
Confidentiality: denying unauthorized actors access to systems and data.
Integrity: preventing attackers from modifying data.
Availability: making sure that resources are available to authorized actors when they need to access the resources.
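Of these three aspects, integrity is perhaps the easiest to illustrate in code. The following sketch (with made-up data) uses a cryptographic hash as a tamper-evident fingerprint: if stored data is modified, its fingerprint no longer matches the recorded one.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that acts as a tamper-evident fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Record the fingerprint when the data is known to be good.
original = b"balance=1000;owner=alice"
stored_digest = fingerprint(original)

# Later, verify integrity before trusting the data.
tampered = b"balance=999999;owner=alice"
print(fingerprint(original) == stored_digest)   # True: unchanged data verifies
print(fingerprint(tampered) == stored_digest)   # False: modification is detected
```

Real systems combine such checks with keys or signatures so that an attacker cannot simply recompute the fingerprint after tampering.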
Bad Actors
Because of the open nature of many data-driven technologies, there is a growing variety of actors who aim to gain advantage by attacking these technologies. Knowing who these bad actors are and what their motives might be can help in addressing security risks.
The following are typical motives of bad actors:
Financial gain: bad actors might compromise systems or blackmail users to gain money
Political gain: bad actors might attack systems to cripple a political group or even a nation, for instance in the case of cyber warfare attacks
Espionage: bad actors might compromise systems to gain access to confidential information such as technological specs or government documents
Recognition or revenge: bad actors might attack systems to gain reputation among their peers or take revenge on the basis of perceived personal harm
Entertainment: some bad actors attack systems as a pastime, simply to see whether they can breach security measures
The following are the most common types of bad actors:
Script kiddies: low-skilled or inexperienced hackers who use existing scripts created by other hackers to attack systems
Professional hackers and cyber criminals: higher-skilled hackers who attack systems for a living, often specialized in particular domains such as finance
Cyber terrorists: attackers who compromise systems to cause fear
State-sponsored hackers: attackers who act on behalf of governmental cyber strategy, sometimes involved in cyberwarfare
Hacktivists: hackers driven by a purpose other than personal gain, such as social change. Some people who aim to compromise systems see themselves as ethical hackers.
Ethical hackers: hackers who attack systems with the purpose of disclosing vulnerabilities to the system stakeholders
Identifying Security Risks
Sources of Security Risks
Not every security risk is an attack. Security risks arise from four main types of sources:
Human mistakes: designers and engineers are fallible like all human beings and sometimes make mistakes. For instance, a mistake can be a software bug that causes a system to crash. Especially for vital infrastructure, like banking software, human mistakes can bring serious security risks. Human mistakes are the most common source of security risks.
Technical failures: sometimes a technology might fail without an attributable human mistake. A power outage, for instance, can trigger technical system failure that cripples its functionality.
Natural disasters: natural events like floods, storms, or earthquakes can cripple technical infrastructures and cause security risks. Consider, for instance, the Fukushima nuclear disaster, in which a tsunami caused security risks in a nuclear reactor.
Malicious attacks: criminals, malicious governments, and bad actors might have motives to attack a data-driven technology. They will use certain strategies or techniques to cripple a system’s functionality and to harm its users.
Cyber Attacks
For data-driven technologies, malicious cyber attacks are a particularly important source of security risks. The following types of attack are most common:
Malware: any type of executable computer code used for malicious purposes. Some of the most common kinds include viruses, which replicate as they spread between files or computers; worms, which, unlike viruses, do not attach themselves to files but self-replicate across systems; trojan horses, which carry malicious payloads disguised as legitimate software; spyware, which monitors users’ behavior without their knowledge or consent; and ransomware, which restricts access to data or systems and demands payment for access to be restored.
Denial of Service (DoS attack): an attack that prevents people from using a service. This can come in many different forms. For example, a website or server might be taken down, so people cannot use it as intended. Or some flaw in application code might be exploited to block important services and make them inaccessible.
Zero-day exploits: attacks that exploit unknown system vulnerabilities. Zero-day exploit attacks pose some of the greatest security risks, because they are difficult to protect against, and often go unnoticed.
War driving: searching for Wi-Fi signals and hijacking them. Wi-Fi signals often extend beyond a particular building, so anyone within a certain perimeter of the premises may be able to pick up the signal.
Passive wiretapping: an attacker intercepts information on your organization’s network. This can give attackers access to all (unencrypted) data transmitted via your network.
SQL injection: SQL is a widely used database query language. SQL injection is an attack in which the requests or queries sent to one of your organization’s SQL databases are manipulated, allowing attackers to view data from the database that they are not normally able to see.
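As an illustration, the following sketch (using Python’s built-in sqlite3 module and a made-up table) shows how concatenating user input into a query string enables injection, and how a parameterized query prevents it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "nobody' OR '1'='1"  # a classic injection payload

# Vulnerable: user input is pasted directly into the query string,
# so the injected OR clause makes the WHERE condition match every row.
vulnerable = conn.execute(
    "SELECT secret FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the input as a literal value.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('s3cret',)] -- the stored secret leaks
print(safe)        # [] -- no user is literally named "nobody' OR '1'='1"
```

The general lesson carries over to any database driver: never build queries by string concatenation of untrusted input.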
AI-Specific Cyber Attacks
Some cyber attacks specifically target the functioning of models used in AI systems. The following two are most common:
Evasion attacks: in evasion attacks, an input to a machine learning algorithm in deployment is modified to avoid correct classification. For instance, spammers may craft messages that go undetected by spam filters. Evasion attacks exploit blind spots in an algorithm that is already deployed.
Poisoning attacks: poisoning attacks target algorithms during training. The attacker attempts to contaminate the training data in order to influence the performance of a model; manipulating training data so that a self-driving car misreads road signs is one example. In short, poisoned training data teaches the AI system the wrong things and compromises its decision-making process.
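To see how poisoning works in miniature, the following sketch trains a toy one-dimensional nearest-centroid classifier on hypothetical data. Injecting a few mislabeled points into one class drags its centroid away and flips the classification of a nearby input.

```python
# A toy 1-D nearest-centroid classifier: an input is assigned to
# whichever class centroid (mean of training points) it is closer to.
def centroid(points):
    return sum(points) / len(points)

def classify(x, class_a, class_b):
    return "A" if abs(x - centroid(class_a)) < abs(x - centroid(class_b)) else "B"

clean_a = [1.0, 2.0, 3.0]    # class A clusters near 2
clean_b = [8.0, 9.0, 10.0]   # class B clusters near 9

print(classify(4.0, clean_a, clean_b))  # "A": 4 is nearer to centroid 2 than 9

# Poisoning: the attacker injects mislabeled points into class A's
# training data, dragging its centroid toward class B's region.
poisoned_a = clean_a + [30.0, 30.0, 30.0]

print(classify(4.0, poisoned_a, clean_b))  # "B": centroid of A moved to 16
```

Real poisoning attacks on deep models are subtler, but the mechanism is the same: corrupt the training data and the learned decision boundary moves.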
It is often surprisingly easy for attackers to introduce malicious input into AI training data. This is because many modern AI systems are trained on input taken from public sources on the Internet. By connecting to servers on the Internet and manipulating network traffic data, attackers can influence the training data and ultimately cause AI systems to malfunction.
This kind of introduction of malicious data into training can be difficult to detect and difficult to mitigate. Since this can create serious risks to safety and security, it’s all the more important to be aware of how adversarial attacks might compromise AI systems. It is also useful to preserve the provenance of data, i.e., documenting the data origin, any manipulations applied to the data, and where the data moves over time.
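One lightweight way to preserve provenance is to keep a hash-chained log of each processing step applied to a dataset. The sketch below (with hypothetical data and step names) shows the idea: each entry records a fingerprint of the data and of the previous entry, so later tampering breaks the chain.

```python
import hashlib
import json

def provenance_entry(data: bytes, step: str, prev_hash: str) -> dict:
    """Record one step in a dataset's history, chained to the previous step."""
    entry = {
        "step": step,
        "data_hash": hashlib.sha256(data).hexdigest(),
        "prev_hash": prev_hash,
    }
    # Hash the entry itself so the log entries form a chain.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

raw = b"label,pixel\ncat,0.1\ndog,0.9"
log = [provenance_entry(raw, "collected from source X", prev_hash="")]

cleaned = b"label,pixel\ncat,0.1"
log.append(provenance_entry(cleaned, "removed malformed rows",
                            prev_hash=log[-1]["entry_hash"]))

# Verifying the current data against the latest recorded fingerprint:
print(hashlib.sha256(cleaned).hexdigest() == log[-1]["data_hash"])  # True
```

Production data-lineage tools add signatures, timestamps, and storage guarantees, but the core record (origin, transformation, fingerprint) is the same.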
Security Risk Identification
The most common way to identify security risks is to conduct a risk assessment. At their core, quantitative risk models can be broken down into two components:
An estimate of the probability that the risk will materialize.
An estimate of the impact of a risk in case it materializes, usually expressed as a monetary value.
Multiplying the estimated impact by the probability yields the expected impact of the risk. For example: the probability of user data leaking has been estimated at 5%, and the impact has been quantified at $500,000. As a result, the expected impact of the risk is 0.05 × $500,000 = $25,000.
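The calculation itself is straightforward. A minimal sketch reproducing the example above:

```python
def expected_impact(probability: float, impact_dollars: float) -> float:
    """Expected impact = probability the risk materializes x impact if it does."""
    return probability * impact_dollars

# The example from the text: a 5% chance of a data leak costing $500,000.
print(expected_impact(0.05, 500_000))  # 25000.0
```

In practice a risk register applies this formula to many risks at once, so that mitigation budgets can be compared on a common scale.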
Quantitative risk analysis is useful for making risks comparable with each other and for making decisions about appropriate steps to take in mitigating the risk.
The main ethical challenge with quantitative risk analysis is not to lose sight of the nuances of ethical decision-making. Some risks threaten to violate basic human rights. But the special gravity of rights violations cannot readily be translated into a dollar figure. Therefore, distinctions between risks violating rights and other negative impacts are not readily apparent in quantitative risk analysis. Furthermore, the significance of risks may manifest differently for different stakeholders, yet the distribution of risks across different stakeholders is not always taken into account in standard quantitative risk analysis.
A connected challenge consists in translating impacts into dollar values. For instance, the costs of a data leak might be different for your organization than for users themselves. Some impacts might be challenging to translate into monetary value, such as risks to reputation and risks of physical harm.
Security Tradeoffs
Security should be a basic feature of emerging technologies. However, greater levels of security may come into conflict with other ethical considerations, such as fairness, privacy, and freedom. In particular, there are four tradeoffs (with privacy, accountability, fairness, and the environment) that your organization might face when considering security measures. These challenges do not have simple answers, and they require teams to both think carefully about the tradeoffs involved in matters of security and be prepared to defend their choices.
Privacy Tradeoffs
Security often strengthens the privacy of data-driven technologies. For instance, data encryption leads to greater security of services and provides privacy to users. However, security and privacy can conflict; for example, when national-security aims of protecting the population against criminals conflict with privacy-preserving technologies that enable criminals to communicate.
Consider the Apple vs. FBI case. After a terrorist attack in San Bernardino, California, left 14 people dead and 22 wounded in December 2015, the FBI asked Apple for assistance in unlocking the suspect’s iPhone. To prevent unauthorized access, Apple had programmed this iPhone version to automatically delete all user data once 10 unsuccessful attempts had been made to guess its PIN code. The FBI demanded that Apple create and provide software to bypass these security protocols. Apple objected to this request, claiming that creating a device backdoor would threaten the security of all users and give governments unchecked power to invade user privacy. Although a federal judge sided with the FBI and ordered Apple to comply, the FBI ultimately dropped the case when a third party provided a way to unlock the iPhone.
The case raises the important issue of ethical tradeoffs when it comes to security. There is, in principle, no limit to the degree of security one can apply to a technological artifact. Any kind or level of security we set can always be increased in some way. For instance, a smart phone can be programmed to automatically delete data after 10 unlock attempts; it can also be set to begin this process at 5 attempts, or 3 attempts—or to self-destruct entirely. It can require two-factor authentication, three-factor authentication, or authentication with any number of factors. These possibilities force us to ask, how secure is secure enough?
In the case of Apple versus the FBI, the security of user devices came into conflict with national security and public safety. Increasing the security of the iPhone afforded great protections for user privacy, but those protections also made it easier for malicious actors to avoid detection and accountability.
Accountability Tradeoffs
The democratization of AI (for example, through open-source AI technologies) may put powerful AI capabilities in the hands of unscrupulous actors. Security measures are often targeted at bad actors and should therefore remain unknown to them. This means that security often goes hand in hand with secrecy, the need to keep certain information confidential. Yet secrecy can conflict with accountability, because it makes the basis for security-driven decisions non-transparent.
Consider the case of the Panama Papers. In 2016, a collective of investigative journalists published information on offshore financial assets created and managed by the Panamanian law firm and corporate service provider Mossack Fonseca. The publication was based on a giant leak of financial information from an anonymous whistleblower or hacktivist.
The publication exposed a large-scale global system of money laundering, political corruption, and tax avoidance. The leak of the Panama Papers was made possible by vulnerabilities in the security measures of the Mossack Fonseca firm. Yet it also helped hold powerful individuals and organizations accountable for dubious or illegal practices.
Fairness Tradeoffs
Increasing security of data-driven technologies often means putting additional burdens on people. Yet, these burdens can exclude people from opportunities they would have enjoyed with a lower security level in place.
Consider new regulations after the financial crisis of 2008-2009 to make mortgages more secure. To make banks more resilient, jurisdictions around the globe introduced new borrowing requirements. As a result, borrowers need to contribute more capital, show a higher and more consistent income, and have better credit scores to obtain credit. This excludes less affluent people from obtaining the credit needed for a home mortgage.
Environmental Tradeoffs
Security measures can lead to higher use of energy and therefore lead to harm to the environment. This is because security protocols can be computationally intensive, and so can require disproportionate amounts of energy.
Consider the case of Bitcoin. Security is one of the key features of Bitcoin’s technology. The underlying blockchain uses a cryptographically secured system of incentives that allows for the secure transfer of funds without risks such as double spending, in which an actor spends the same units of cryptocurrency twice. This secure incentive system is known as proof of work. It relies on network nodes (miners) computing solutions to a computationally hard mathematical puzzle.
Hence, the proof of work system greatly increases the security of the system against possible attacks. However, it also uses enormous computational power. In 2019, it was claimed that Bitcoin used as much energy as the entire country of Switzerland.
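The core of proof of work can be sketched in a few lines: finding a valid nonce requires brute-force search, while verifying a proposed nonce takes a single hash. The block data and difficulty below are illustrative; real Bitcoin mining uses a much higher difficulty and a double SHA-256 over a structured block header.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 hash with the block data starts with
    `difficulty` hex zeros. The only known way is brute-force trial,
    which is what makes the work costly."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("send 1 BTC from A to B", difficulty=4)

# Verification is cheap: anyone can check the claimed nonce with one hash.
digest = hashlib.sha256(f"send 1 BTC from A to B{nonce}".encode()).hexdigest()
print(digest.startswith("0000"))  # True
```

The asymmetry between costly search and cheap verification is exactly what secures the network, and also what drives its energy consumption.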
Mitigating Security Risks
After you have identified the security risks for an emerging technology and considered the tradeoffs required to balance security with other values, you can identify methods to mitigate the risks.
There are three main ways that security risks can be mitigated:
By making your own organization more secure. This means establishing a baseline of system behavior and creating possibilities for rapid response.
By securing data in storage and in transit. This means applying techniques such as encryption and secure network protocols.
By analyzing and moderating security risks. This means continuously running threat models and testing and analyzing potential attack strategies.
Establishing Baselines for System Behavior
Identifying abnormal system behavior and defending against attacks requires knowing how a system is normally supposed to operate. For instance, if you observe that a system is suddenly drawing twice as much electricity as it normally needs, that may be a sign something is amiss. What counts as normal operation will often be different for each system. For instance, some systems may have specific inputs and outputs, and an unexpected change in input or output will be cause for concern. Other systems rely on variable inputs or outputs, and these kinds of variations may not necessarily signal abnormal behavior. Thus, it is important for a team to reflect carefully on what counts as normal behavior, what symptoms are reliable indications of abnormal behavior, and how these symptoms can be effectively monitored.
Common baseline metrics include:
Bandwidth consumption
Software versions
Upload and download times
Task completion times
User access and behavior
Key performance indicators
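A simple way to operationalize such baselines is to flag readings that deviate strongly from historical values. A minimal sketch, with hypothetical daily bandwidth figures and a three-standard-deviation threshold:

```python
import statistics

def is_anomalous(observed: float, history: list, k: float = 3.0) -> bool:
    """Flag a metric reading that deviates more than k standard
    deviations from the historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(observed - mean) > k * stdev

# Hypothetical baseline: daily bandwidth consumption in GB.
baseline = [101, 98, 103, 99, 100, 102, 97, 101]

print(is_anomalous(100, baseline))  # False: a normal reading
print(is_anomalous(250, baseline))  # True: likely a sign something is amiss
```

Real monitoring systems use richer models (seasonality, per-user baselines), but the principle is the same: define normal first, then alert on departures from it.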
Designation of Rapid Response Teams
Preparing for safety and security incidents requires designating personnel to perform particular tasks in the event of a breach or malfunction.
For instance, a Cyber Security Incident Response Team (CSIRT) includes:
Investigators to identify causes of abnormal behavior.
Security specialists trained to fix systems or install new protections.
Help desk staff to assist users who may be affected by an incident.
Crisis communications experts to provide information to stakeholders.
Managers to coordinate the entire response.
A response team can be a full-time business unit, a unit assembled from existing staff with different primary roles, or outsourced to specialists.
Protection of Stored Data
As mentioned earlier, data protection generally has three main objectives: confidentiality, integrity, and availability (CIA).
Ways of achieving CIA objectives for stored data include:
Encryption: Data is stored in a format that cannot be understood without a decryption key.
Access control: Only certain individuals are permitted to access or modify the data.
Physical barriers: Data is stored in a secure physical location, where unauthorized users or intruders will face difficulties in attempting entry.
Destruction: Data that is no longer needed is automatically deleted or overwritten, especially temporary databases containing aggregated data.
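Access control, for example, can be as simple as mapping roles to permitted actions. A minimal role-based sketch (the roles and permissions are hypothetical, not a production authorization model):

```python
# Each role is granted a set of permitted actions on the stored data.
PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "modify"},
    "admin":    {"read", "modify", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles get an empty permission set."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("analyst", "read"))    # True
print(is_allowed("analyst", "delete"))  # False
print(is_allowed("guest", "read"))      # False: unknown role, denied
```

The deny-by-default pattern shown here is the important design choice: anything not explicitly granted is refused.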
Protection of Data in Transit
Data is mobile by nature, and securing data as it is transmitted from place to place and person to person is even more challenging than securing data in storage. It is important for a team to consider all the ways in which its data might be moved, how each move might represent a security risk, and how to mitigate these risks. Fortunately, the CIA framework of confidentiality, integrity, and availability can also help with securing data in transit.
Network protocols that protect data in transit include Secure Sockets Layer/Transport Layer Security (SSL/TLS), which is often used for web services, and Secure Shell (SSH), which is often used for remote access. SSL/TLS is a form of link encryption, meaning that data is decrypted at each intermediary point. SSH is a form of end-to-end encryption, meaning that data is decrypted only at its endpoints. End-to-end encryption is generally more secure than link encryption.
Both forms of encryption rely on digital signatures, which use cryptography to identify the sender and receiver of the data as well as its contents. Digital signatures help to ensure both data confidentiality and data integrity, as they can verify whether data has been altered in transit.
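The tamper-detection idea behind digital signatures can be illustrated with an HMAC. Note the simplification: an HMAC uses a shared secret key, whereas real digital signatures use asymmetric key pairs, but the verification logic is analogous.

```python
import hashlib
import hmac

# Sender and receiver share a secret key (illustrative value).
key = b"shared-secret-key"

def sign(message: bytes) -> str:
    """Compute an authentication tag over the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(message), tag)

message = b"transfer $100 to account 42"
tag = sign(message)

print(verify(message, tag))                          # True: intact
print(verify(b"transfer $9999 to account 13", tag))  # False: altered in transit
```

An attacker without the key cannot produce a valid tag for a modified message, which is how integrity is verified at the receiving end.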
Ensuring the availability of the transmitted data mainly requires maintaining the medium of transmission. This can be challenging when the number of users or bandwidth is variable, or when attackers are attempting to interfere with transmission.
Techniques for ensuring availability include:
Load balancing across multiple servers.
Creating redundancies and failovers.
Purchasing DDoS mitigation protections from specialist firms.
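The first of these techniques, round-robin load balancing, can be sketched in a few lines (the server names are hypothetical):

```python
import itertools

# Round-robin load balancing: requests are spread evenly across a pool
# of servers so that no single machine becomes a bottleneck or a single
# point of failure.
servers = ["server-a", "server-b", "server-c"]
rotation = itertools.cycle(servers)

# Assign six incoming requests to servers in turn.
assignments = [next(rotation) for _ in range(6)]
print(assignments)
# ['server-a', 'server-b', 'server-c', 'server-a', 'server-b', 'server-c']
```

Production load balancers add health checks and weighting, but round-robin rotation is the baseline strategy most of them support.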
Threat Modeling and Analysis
The techniques used in threat modeling and analysis can be compared with defensive techniques in a game of chess, in which each player tries to model the opponent’s potential next moves and counter them.
Some frequently used techniques are:
Vulnerability scoring, which provides a metric to assess the severity of a potential threat or vulnerability.
Cognitive security, which is the use of AI to detect security threats. It implements big data analytics to find connections and vulnerabilities in systems that are very difficult for humans to detect.
Attack trees, which map out the different routes to compromising a system or asset.
Visual, Agile, and Simple Threat (VAST) modeling, which models operational and application threats.
Challenger models, which involve developing alternative models (such as with a different method or data) and comparing performance with the original model to identify security differences.
Security Information and Event Management (SIEM) systems, which provide an overview of all security issues and events in an environment.
Comparing system attributes with provisions in Application Programming Interface (API) agreements and negligence law to assess potential liabilities.
Reviewing the potential limitations of training data or models in AI systems.
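Attack trees, for instance, lend themselves to simple automated analysis. In the sketch below (the tree and cost estimates are hypothetical), OR nodes take the cheapest child because the attacker needs only one route, while AND nodes sum their children because every step is required.

```python
def cheapest_attack(node):
    """Estimate the cheapest cost for an attacker to achieve this node's goal."""
    if "cost" in node:                 # a leaf: a concrete attack step
        return node["cost"]
    child_costs = [cheapest_attack(c) for c in node["children"]]
    # OR: attacker picks one route; AND: attacker must complete all steps.
    return min(child_costs) if node["type"] == "OR" else sum(child_costs)

steal_data = {
    "type": "OR",
    "children": [
        {"cost": 50},                    # route 1: phish an employee
        {"type": "AND", "children": [    # route 2: break in physically, which
            {"cost": 30},                #   requires bypassing the door lock
            {"cost": 40},                #   and cracking the server password
        ]},
    ],
}

print(cheapest_attack(steal_data))  # 50: phishing is the cheapest route
```

The output tells defenders where to spend first: raising the cost of the cheapest route (here, phishing) raises the cost of the whole attack.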
Breach and Attack Simulations
To identify and mitigate unknown safety and security risks, organizations can perform breach and attack simulations on a regular basis. These are automated versions of manual penetration-testing exercises, in which one internal team tries to defend the security of a system while another tries to attack it.
Simulations are based on known malware attacks and cybersecurity breaching strategies. A program automatically uses these to attempt to breach the security of a system. Once breaches are found, the simulation automatically proposes mitigating actions and recommendations for follow-up. These methods are sometimes called black-box multi-vector testing, as they seek to penetrate the system from different angles without reading the underlying code.
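In miniature, a breach simulation amounts to replaying known attack payloads against a system’s defenses and recording which ones get through. A highly simplified sketch (the payload list and validator are illustrative, not a real testing suite):

```python
# A small library of well-known malicious payloads.
KNOWN_PAYLOADS = [
    "' OR '1'='1",                 # SQL injection
    "../../etc/passwd",            # path traversal
    "<script>alert(1)</script>",   # cross-site scripting
]

def naive_validator(value: str) -> bool:
    """A deliberately weak input filter: rejects input only if it
    contains a quote character."""
    return "'" not in value

# Replay each payload and collect the ones the validator fails to block.
findings = [p for p in KNOWN_PAYLOADS if naive_validator(p)]
print(findings)  # the path-traversal and XSS payloads slip through
```

A real breach and attack simulation platform runs thousands of such scenarios across network, endpoint, and application layers, and then proposes mitigations for each finding.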
Organizations should also elicit user feedback on security issues and revisit their security practices on a regular basis.