Security, Risk, and Bayes…oh my!
January 6, 2017 · Posted by Chris Mark in Uncategorized.
Tags: adaptive, Bayes, conditional, DHS, hacking, Manunta, probability, risk, security, statistics, threat
According to Dr. Giovanni Manunta, the term security does not yet have a commonly accepted definition and evokes numerous connotations among practitioners. Although often not well defined, the relationship between security and risk is well accepted among business, government, and security professionals (Department of Homeland Security, 2008). While providing fodder for debate to those tasked with the security of information assets, the ambiguous definition of security and the differences in risk analysis techniques create significant challenges to effectively protecting assets.
The practical relationship between security, risk, and decision making is well articulated by the US Department of Homeland Security, which describes risk management as an approach for making security decisions (DHS, 2008). This is further established in the NIST 800-37 Risk Management Framework:
“…For operational plans development, the combination of threats, vulnerabilities, and impacts must be evaluated in order to identify important trends and decide where effort should be applied to eliminate or reduce threat capabilities; eliminate or reduce vulnerabilities; and assess, coordinate, and deconflict all cyberspace operations…” (NIST, 2010, p. 3). (emphasis added)
While appearing straightforward, efficiently and effectively applying a risk-based security framework is challenging. As discussed, the term security does not have an accepted denotation among practitioners (Manunta, 1999). Nearly 2,400 years ago, Socrates identified the need for definition when, in response to a debate about virtues, he posed the question: “What is F-ness?” Without a definition, Socrates argued, one cannot describe the characteristics that constitute that which is not defined. As importantly, it is not feasible to measure the attribute or virtue described (Benson, 2012). Quite simply, without a commonly accepted definition of security, it is not possible to accurately identify whether an organization is in a “secure” state.
The identification and analysis of risks creates additional challenges for organizations. Common risk management frameworks provide mechanisms for identifying and analyzing risk, and reference the need to employ risk management to address security needs (DHS, 2010). While these risk management frameworks have been implemented by numerous organizations, they are often based upon conventional, frequentist probability models. Frequentist probability models do not account for the changing environment and the variable, unpredictable threats facing organizations today (Fenton & Neil, 2013).
Due to the adaptive nature of cyber threats and the relative statistical scarcity of cyberattacks, organizations lack sufficiently accurate historical data to quantify the probability of a future event. According to Fenton and Neil (2013), the accuracy of risk analysis is predicated upon the fidelity of the underlying data, and existing probabilistic models cannot effectively account for adaptive threats such as cyber threats initiated by adaptive human antagonists. As such, threats should be defined in terms of subjective probabilities, rather than frequentist objective probabilities, in order to quantify an estimated loss (Kirpichevsky, Matheu, & Seda-Sanabria, 2012).
Manunta (1999) described the concept of security as contextual and invariably subjective, meaning “Different things to different people” (p. 57). His attempt at defining security offers insight into both its subjective and variable nature, while concurrently demonstrating security’s inextricable relationship with risk. Manunta (1999) put forth an equation to calculate security when he offered the following formula: S = f(A, P, T) Si.
In his formula the variables are described as follows: S (Security) = f (function of) (Asset, Protector, Threat) in a given Si (Situation). By incorporating the Si variable representing situation, Manunta appears to have taken a deliberate step toward integrating the concepts of risk analysis within the definition of security. Manunta’s formula, by incorporating situation in the definition, implicitly acknowledges both the variable and adaptive nature of threats.
By definition, adaptive threats modify their behavior in reaction to a given security control or situation (Kirpichevsky, Matheu, & Seda-Sanabria, 2012). With an understanding that situation (Si) is critical to measuring security and that risk management is fundamentally focused upon allowing decision makers to make effective and efficient decisions (DHS, 2008), it is possible to identify more appropriate models of risk management and decision making. The answer, it is proposed, can be found in the application of Bayesian probability as a more applicable risk analysis and decision model.
In the frequentist view of risk, only two variables, probability and impact, form the basis of risk analysis. A common expression used to quantify risk is the function of probability and impact, or R = P × I (Fenton & Neil, 2013). This model is further supported by NIST in their Risk Management Framework, as well as others (NIST, 2010). Likelihood is typically expressed as a statistical probability of some event occurring, and impact is often represented in terms of monetary loss if the event is realized.
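The R = P × I model reduces to a simple expected-loss calculation. A minimal sketch, using invented event names, probabilities, and loss figures purely for illustration:

```python
# Hypothetical illustration of the frequentist risk formula R = P x I:
# risk is the annual probability of an event times its monetary impact.
# All names and numbers below are invented for illustration.
events = [
    ("phishing_breach", 0.30, 50_000),   # (name, annual probability, loss in USD)
    ("ransomware",      0.05, 400_000),
    ("insider_theft",   0.02, 250_000),
]

for name, p, impact in events:
    risk = p * impact  # expected annual loss for this event
    print(f"{name}: R = {p} x {impact:,} = {risk:,.0f} USD/year")
```

Note that the model's output is only as good as its inputs: the probabilities above are fixed point estimates, which is precisely the limitation the frequentist view imposes.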
The fundamental failing of existing models of risk within the context of security is that they apply the classic, or frequentist, view of probability (Fenton & Neil, 2013). The frequentist view assigns an objective probability based upon a series of experiments under ideal conditions and cannot be used as a predictive mechanism under uncertainty (Fenton & Neil, 2013). Frequentist models do not account for changes in the environment and cannot be applied to security issues such as adversarial human actors, which are considered adaptive threats (DHS, 2008).
Frequentist probability is defined by Fenton and Neil (2010) as the “…frequency with which that event would be observed over an infinite number of repeated experiments” (p. 61). Frequentist probabilities exist only in theory and can be considered only within the construct of a theoretical environment, as external conditions do not exist in such an environment (Fenton & Neil, 2010). In a theoretical sense, the probability of flipping heads or tails on a coin is accurate, but in the real world this is not the case. Coins are not balanced perfectly, and environmental as well as other factors will influence the true odds of flipping heads or tails.
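The long-run frequency interpretation can be illustrated with a quick simulation. A sketch of an idealized fair coin, where the empirical frequency only approaches the theoretical 0.5 over many repetitions (a real coin, as noted above, would not obey this idealization exactly):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# Simulate many flips of an idealized fair coin. The frequentist
# probability (0.5) is the limit of this empirical frequency as the
# number of repeated experiments grows without bound.
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
freq = heads / n
print(f"empirical frequency of heads after {n:,} flips: {freq:.4f}")
```

The point is that 0.5 is a property of the infinite, idealized experiment; any finite, real-world run only approximates it.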
Conditional probability, by contrast, is a concept that borrows from the aforementioned aspect of frequentist probabilities. Even frequentists, when assigning probabilities, are in effect assigning conditions to those probabilities. As Fenton and Neil (2010) explain, all probabilities assigned to an uncertain event are conditional on a given context. Any attempt to measure uncertainty inherently requires a subjective judgement about the conditions that affect the event (Fenton & Neil, 2010). Understanding this point, it is appropriate to use Bayes’ theorem as a representation of conditional probability, written P(A|K), where K is the background knowledge or context that affects the probability assigned to event A.
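Bayes’ theorem itself is a one-line formula: P(A|B) = P(B|A) × P(A) / P(B). A minimal sketch in a security setting, with entirely hypothetical numbers (the event names and rates below are invented for illustration):

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical scenario: A = "system is compromised", B = "IDS raises an alert".
p_a = 0.01              # prior belief that the system is compromised
p_b_given_a = 0.90      # alert rate when actually compromised (true positive)
p_b_given_not_a = 0.05  # alert rate when not compromised (false positive)

# Total probability of an alert, P(B), via the law of total probability
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

posterior = bayes(p_b_given_a, p_a, p_b)
print(f"P(compromised | alert) = {posterior:.3f}")  # ~0.154
```

Even with a 90% true-positive rate, the posterior stays modest because the prior is low; the conditioning context (here, the base rate) drives the answer.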
Adaptive threats modify their behavior according to controls implemented to address the identified vulnerabilities (Kirpichevsky, Matheu, & Seda-Sanabria, 2012). Additionally, frequentist models, which rely upon objective statistical methods, do not offer causal explanations for actions (Fenton & Neil, 2013). Fenton and Neil (2013) also suggest that causal probability models are necessary to obtain rational measures of risk, since risks are conditional and not statistically independent.
In contrast with objective, frequentist probability models of risk, the incorporation of knowledge and conditional probabilities are fundamental aspects of subjective Bayesian probability theory (Fenton & Neil, 2013). This is important to understand, as frequentist views of probability measure the objective proportion of outcomes of experiments, whereas subjective probability models express a measure of belief in an outcome (Fenton & Neil, 2013). Within the context of security, Bayes’ theorem aligns well with Manunta’s security model (S = f(A, P, T) Si).
Comparing the two formulae, one can see that Bayes’ theorem is a measure of belief expressed as P(A|B, K): the conditional probability that event A is true, given event B and considering the knowledge K. Manunta’s (1999) variable for situation (Si) may be considered analogous to Bayes’ variable for knowledge (K), which provides the foundation of subjective probability and enables the modification of the subjectively perceived risk as the situation evolves or more information becomes available.
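This revision of belief as the situation evolves can be sketched as a chain of Bayesian updates, where each observation’s posterior becomes the next prior. The scenario, observation labels, and likelihoods below are invented for illustration:

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One Bayesian update: fold a new piece of evidence E into P(H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical scenario: H = "an intrusion is under way". Each observation
# (the evolving situation -- Manunta's Si, Bayes' K) revises the belief.
belief = 0.02  # initial subjective prior
observations = [
    # (label, P(evidence | intrusion), P(evidence | benign))
    ("failed logins spike",        0.70, 0.10),
    ("outbound traffic anomaly",   0.60, 0.15),
    ("new admin account created",  0.50, 0.02),
]

for label, p_if_intrusion, p_if_benign in observations:
    belief = update(belief, p_if_intrusion, p_if_benign)
    print(f"after '{label}': P(intrusion) = {belief:.3f}")
```

Each piece of context changes the perceived risk, which is exactly the behavior a static frequentist estimate cannot provide.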
The United States Central Intelligence Agency (CIA) has studied the effects of using Bayesian mathematical models in support of intelligence analysis since at least 1967 (Zlotnick, 2012). Studies have demonstrated that applying a Bayesian mathematical model improves the accuracy of intelligence analysis in areas of uncertainty (Heuer, 1999). When analysis is more a process of inductive inference, proceeding not from a few general propositions but from many particulars, the impact on accuracy is even greater (Zlotnick, 2012).
Zlotnick (2012) states that applying Bayesian statistics provides a mathematical test for internally consistent analysis. By providing a mathematical test, heuristic biases inherent in analysis can be mitigated (Heuer, 1999). The same mathematical models can be employed to more accurately identify security events within an organization.
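The intelligence-analysis work described by Zlotnick used the odds form of Bayes’ theorem, where each piece of evidence contributes a likelihood ratio and posterior odds are simply prior odds multiplied by those ratios. A minimal sketch of that style of update, with invented numbers:

```python
# Odds-form Bayesian updating: posterior odds = prior odds x likelihood ratio.
# This multiplicative form makes the analyst's reasoning auditable: each
# piece of evidence contributes exactly one factor. Numbers are invented.
prior_odds = 0.02 / 0.98          # prior odds that the hypothesis is true
likelihood_ratios = [7.0, 4.0, 25.0]  # one ratio per piece of evidence

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr          # fold in each item of evidence

# Convert odds back to a probability for reporting
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability: {posterior_prob:.3f}")
```

Because the arithmetic is explicit, an analyst whose final judgment disagrees with the computed posterior can see exactly which likelihood assessment is driving the inconsistency, which is how the mathematical test mitigates heuristic bias.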
Benson, H. H. (2012). Socratic epistemology: The priority of definition. In B. A. Smith (Ed.), Continuum Companion to Socrates. Bloomsbury Academic. Retrieved from http://www.ou.edu/ouphil/faculty/benson/priority_of_definition.pdf
Center on Budget and Policy Priorities (CBPP). (2015, March 9). Retrieved March 20, 2015, from http://www.cbpp.org/cms/index.cfm?fa=view&id=3252
Chronology of Data Breaches Security Breaches 2005 – Present. (2005, April 20). Retrieved March 20, 2015, from https://www.privacyrights.org/data-breach/new
Department of Homeland Security; Risk Steering Committee. (2008) DHS Lexicon. Retrieved from https://www.dhs.gov/xlibrary/assets/dhs_risk_lexicon.pdf
Fenton, N., & Neil, M. (2012). Risk assessment and decision analysis with Bayesian networks. Boca Raton: Taylor & Francis.
Kirpichevsky, Y., Matheu, E., & Seda-Sanabria, Y. (2012). Modeling adaptive threats: Incorporating a terrorist decision model into security risk assessments. The United States Society on Dams, 219-231. Retrieved from http://ussdams.com/proceedings/2012Proc/219.pdf
Manunta, G. (1999). What is security? Security Journal, 12, 57-66. doi:10.1057/palgrave.sj.8340030
National Institute of Standards and Technology (NIST) (2010) NIST Special Publication 800-37; Guide for Applying the Risk Management Framework to Federal Information Systems. Retrieved from https://www.dhs.gov/xlibrary/assets/rma-risk-management-fundamentals.pdf
National Research Council, Committee on Deterring Cyberattacks: Informing Strategies and Developing Options. (2010). Proceedings of a workshop on deterring cyberattacks: Informing strategies and developing options for U.S. policy. Retrieved from http://www.nap.edu/catalog/12997/proceedings-of-a-workshop-on-deterring-cyberattacks-informing-strategies-and
National Research Council, Committee to Review the Department of Homeland Security’s Approach to Risk analysis. (2010). Review of the Department of Homeland Security’s Approach to Risk analysis. Retrieved from http://www.nap.edu/catalog/12972/review-of-the-department-of-homeland-securitys-approach-to-risk-analysis
Sarin, R., & Wakker, P. (1998). Revealed likelihood and knightian uncertainty. Journal of Risk and Uncertainty, 16, 223-250. Retrieved from http://link.springer.com/article/10.1023/A:1007703002999#
Zlotnick, J. (2012). A theorem for prediction. Washington, DC: Central Intelligence Agency.