
New Article: Exploits, Vulnerabilities & Threat Adaptation March 17, 2020

Posted by Chris Mark in cybersecurity, InfoSec & Privacy.

AT&T CyberSecurity published my new blog post.  You can read it here!

“Security, whether focused on physical, cyber, operational, or other domains, is an interesting topic that lends itself to considerable debate among practitioners.  There are, however, basic concepts and underpinnings that pervade general security theory. One of the most important, yet often misunderstood, concepts is the inextricably entwined pair of vulnerabilities and exploits.  These basic underpinnings are critical in all security domains.

What are exploits and vulnerabilities and why are they important to the study of security?

First, security cannot be considered a binary concept such as “secure” or “not secure”.  The appropriateness of any security strategy is relative to the controls implemented to address the identified risks.  One cannot say: “my house is secure”.  The measure of security is predicated upon the identified risks and the associated controls implemented to address those risks.  One can say: “My house has been secured in a manner that is commensurate with the identified risks”.  Second, security should be viewed as a function of time and resources.  Finally, security, in any domain, can never be ‘assured’, nor can there be a ‘guarantee’ of security.  The reason is simple: technologies change and human threats are adaptive.  According to the Department of Homeland Security’s Security Lexicon, Adaptive Threats are defined as:

“…threats intentionally caused by humans.” It further states that Adaptive Threats are: “…caused by people that can change their behavior or characteristics in reaction to prevention, protection, response, and recovery measures taken.” The concept of threat adaptation is directly linked to the defense cycle.  In short, as defenses improve, threat actors change their tactics and techniques to adapt to the changing controls.  As the threat actor improves their capabilities the defensive actors necessarily have to change their own protections.  This cycle continues ad infinitum until there is a disruption.”  Read the whole article!

What Coronavirus can Teach us about CyberSecurity February 28, 2020

Posted by Chris Mark in cybersecurity, Data Breach, Industry News, InfoSec & Privacy.

The 2020 RSA CyberSecurity Conference was held recently in San Francisco, California. Some notable companies elected not to attend the event over safety concerns related to Coronavirus.  On February 25th the mayor of San Francisco declared a state of emergency for the city over Coronavirus fears.

This state of emergency was declared in spite of the fact that there were no confirmed cases of Coronavirus in the city. Mayor Breed, in discussing her prudent steps, stated: “We see the virus spreading in new parts of the world every day, and we are taking the necessary steps to protect San Franciscans from harm…”

First identified in Wuhan, China in late 2019, Coronavirus (COVID-19) has reportedly infected over 80,000 people worldwide and has resulted in over 2,700 deaths on several continents. Recently, the World Health Organization identified the newly discovered Coronavirus as a potential “Disease X”.  “Disease X” was added to the World Health Organization’s “Prioritizing diseases for research and development in emergency contexts” list of illnesses. This list includes such diseases as Crimean-Congo hemorrhagic fever (CCHF), Ebola and Marburg virus disease, Lassa fever, MERS, SARS, Nipah and henipaviral diseases, Rift Valley fever, and Zika.  Importantly, “Disease X”:

“…represents the knowledge that a serious international epidemic could be caused by a pathogen currently unknown to cause human disease, and so the R&D Blueprint explicitly seeks to enable cross-cutting R&D preparedness that is also relevant for an unknown ‘Disease X’ as far as possible” (emphasis added).

What can the current Coronavirus situation teach us about cybersecurity?

Reflecting upon the situation in San Francisco and the WHO’s statements, it is possible to use the Johari Window to analyze the situation. The Johari Window,[1] developed by psychologists Joseph Luft and Harrington Ingham in 1955, was reintroduced to the American public in 2002 when then-Secretary of Defense Donald Rumsfeld, referencing Iraqi Weapons of Mass Destruction, stated:

“…there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don’t know we don’t know…it is the latter category that tend to be the difficult ones.” (paraphrased)

The Johari Window identifies four panes of knowledge.  There is the “known/known”, where both the person and others know of a given situation. There is the “known/unknown”, where the person knows of a situation and others do not; consider a personal secret that has not been shared with others. There is the “unknown/known”, where the situation is not known to the person yet is known to others; in simple terms, think of a surprise birthday party where everyone but the birthday boy or girl is aware.  Finally, there is the “unknown/unknown”, where neither party knows.  This is the truest example of an ‘unknown’ and represents the most difficult situation to analyze because it is a position of ignorance for both parties.
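The four panes described above can be captured in a minimal sketch, keyed by who knows of the situation. The labels and examples are illustrative, not part of any formal Johari notation:

```python
# The Johari Window's four panes as a lookup table, keyed by
# (known_to_self, known_to_others). Labels/examples are illustrative.
JOHARI = {
    (True, True): "known/known (open: both the person and others know)",
    (True, False): "known/unknown (hidden, e.g. a personal secret)",
    (False, True): "unknown/known (blind, e.g. a surprise birthday party)",
    (False, False): "unknown/unknown (the hardest pane to analyze)",
}

def quadrant(known_to_self: bool, known_to_others: bool) -> str:
    """Return the Johari pane for a given state of knowledge."""
    return JOHARI[(known_to_self, known_to_others)]

print(quadrant(False, False))  # the 'Disease X' pane
```

The `(False, False)` entry is the pane the rest of this post is concerned with: the unknown/unknown.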

In 2016 the World Health Organization recognized a conceptual, as-yet-undefined threat: one unknown both to the WHO and to everyone else, but one that, it understood, theoretically existed and would present a major risk if and when it was eventually realized.  This threat it proactively labeled ‘Disease X’. In the Johari Window, it remained the ‘unknown/unknown’ until the time it was identified as Coronavirus.

It is now a ‘known/known’ threat although countries are still struggling to identify how to deal with the risk it presents. Until it was actually realized, however, there was little any country could do except wait until it was realized. Once it was identified, then actual defensive and protective measures could be put into place to address the threat.

In much the same way, organizations dealing with cybersecurity today are presented with the ‘unknown/unknown’ of a conceptual “Disease X” threat in cybersecurity: an as-yet-unidentified but predicted threat that may impact their organization in the not-too-distant future.  Companies are faced with attempting to develop security and continuity plans for a threat they do not yet know exists and whose specifics they cannot yet describe.  On a nearly daily basis, however, a ‘Disease X’ arises in cybersecurity, and companies are forced to react quickly and decisively to address such threats.  Adding to the danger is the fact that these threats are not naturally occurring; they are created by humans intent on causing harm.

Compounding the problem of the ‘unknown/unknown’ is the idea of threat adaptation in known threats.  While not driven by natural selection, security strategies, like those of disease control, must also contend with threat adaptation. Using the Coronavirus as an example, according to a South China Morning Post article posted on February 4th, 2020, Chinese scientists had already:

“…detected “striking” mutations in a new coronavirus that may have occurred during transmission between family members.” It further states that: “While the effects of the mutations on the virus are not known, they do have the potential to alter the way the virus behaves.”

It has been well established that influenza viruses ‘shift’ and ‘drift’ antigenically.  Without delving into the specifics of how these occur, the Centers for Disease Control and Prevention states that:

“When antigenic drift occurs, the body’s immune system may not recognize and prevent sickness caused by the newer influenza viruses. As a result, a person becomes susceptible to flu infection again, as antigenic drift has changed the virus enough that a person’s existing antibodies won’t recognize and neutralize the newer influenza viruses.”

While not a direct corollary to a natural viral drift or shift, human actors respond in a similar way when attempting to commit criminal acts. They ‘adapt’ to the changing security environment and are defined as ‘adaptive threats’.  According to the Department of Homeland Security’s Security Lexicon, Adaptive Threats are defined as:

“…threats intentionally caused by humans.”  It further states that Adaptive Threats are: “…caused by people that can change their behavior or characteristics in reaction to prevention, protection, response, and recovery measures taken.”

In short, as defenses improve, threat actors change their tactics and techniques to adapt to the changing controls and to prevent the established controls from identifying and protecting against the newly adapted threat.  As the threat actor improves their capabilities, the defensive actors necessarily have to change their own protections.  This cycle continues ad infinitum until there is a disruption. This recurring cycle is known as the Defense Cycle.

Consider medieval castles.  Originally, they were built of wood.  Those assaulting castles would simply use fire to burn them to the ground.  Castle builders then built castles of stone.  Attackers then created siege engines to knock down the walls or began digging under the walls to ‘undermine’ them.  Castle walls were made larger and stronger and were nearly impenetrable until cannons were introduced.  Even in situations where the attackers could not ‘storm the castle’, they would simply lay siege and starve the inhabitants until they capitulated.  This is a classic example of threat adaptation and the defense cycle.

In a more relevant and timely example consider a standard network with security controls applied commensurate with the identified risks. An attacker may try an attack against the network layer.  If this is ineffective and the incentive is great enough the attacker will likely modify their behavior and attack methodology to attempt to circumvent some other control.  This process continues until a resource has been compromised.
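The adaptation loop described above can be sketched as a toy simulation. This is a minimal illustration of the Defense Cycle, not a real attack model: the vector names and the one-new-control-per-round rule are assumptions made for clarity.

```python
# Toy model of the Defense Cycle: an attacker tries attack vectors in
# turn, and the defender deploys a control for each vector it detects,
# forcing the attacker to adapt. Vector names are illustrative.

def defense_cycle(vectors, max_rounds=10):
    controls = set()   # controls the defender has deployed so far
    history = []
    for round_no in range(max_rounds):
        # attacker adapts: pick the first vector not yet covered by a control
        open_vectors = [v for v in vectors if v not in controls]
        if not open_vectors:
            # every known vector is controlled; the attacker must innovate
            history.append((round_no, None, "attacker must innovate"))
            break
        vector = open_vectors[0]
        history.append((round_no, vector, "detected; control deployed"))
        controls.add(vector)  # defender adapts in response
    return history

log = defense_cycle(["network layer", "application layer", "phishing"])
for entry in log:
    print(entry)
```

The loop's terminal state, where the attacker "must innovate", is exactly the point at which a new unknown/unknown (the next "Disease X") is born.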

Applying the concepts addressed in this article, a newly identified or developed exploit is the proverbial “Disease X”.  As it has not yet been identified, the organization has no definitive defense against it. Once it is identified and known, then the company can begin identifying new controls to address the newly identified risk. The attacker will then, once again, modify their behavior.  As stated, this cycle can continue ad infinitum.

In 2020, organizations are dealing with myriad threats.  First, there are the ‘unknown/unknowns’ that represent the “Disease X” of the cyber attack world.  These may include new attack vectors or zero-day exploits.  Second, organizations are faced with defending against motivated, determined adversaries who are not only focused on attacking networks and resources but are continually adapting their strategies as defenses improve.  While not a direct correlation, by looking at nature and how diseases impact our society, organizations can better understand their own security strategy and risk management practices.

 

I am back ;) “The Markerian Heptad and Understanding Attacker Motivations” February 24, 2020

Posted by Chris Mark in cybersecurity.

It has been a bit of time since I have posted.  I am back with a blog post I wrote for the AT&T CyberSecurity blog titled “Understanding CyberAttacker Motivations”.  It discusses what I call the “Markerian Heptad” (yes, I named it after myself 🙂) and describes the seven basic motivations that underpin why an attacker would target a particular person, company, organization, etc.

“Implementing a risk based security program and appropriate controls against adaptive cyber threat actors can be a complex task for many organizations. With an understanding of the basic motivations that drive cyber-attacks organizations can better identify where their own assets may be at risk and thereby more efficiently and effectively address identified risks.  This article will discuss the Rational Actor Model (RAM) as well as the seven primary intrinsic and extrinsic motivations for cyber attackers.

Deterrence and security theory fundamentally rely upon the premise that people are rational actors. The RAM is based on rational choice theory, which posits that humans are rational and will take actions that are in their own best interests.  Each decision a person makes is based upon an internal value calculus that weighs the costs versus the benefits of an action.  By altering the cost-to-benefit ratio of a decision, the decision, and therefore the behavior, can be changed accordingly.

It should be noted at this point that ‘rationality’ relies upon a personal calculus of costs and benefits.  When speaking about the rational actor model or deterrence, it is critical to understand that ‘rational’ behavior is that which advances the individual’s interests and, as such, behavior may vary among people, groups, and situations.” READ MORE HERE!
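The value calculus described in the excerpt can be written down as a simple expected-payoff sketch. All the weights and probabilities below are illustrative assumptions, not values from the article: the point is only that changing any one term can flip a rational actor's decision.

```python
# A minimal sketch of the rational-actor calculus: an action is taken
# when its expected payoff is positive. All parameters are illustrative.

def expected_value(benefit, p_success, cost, p_caught, penalty):
    """Expected payoff of an action under a simple rational-actor model."""
    return p_success * benefit - cost - p_caught * penalty

def deterred(benefit, p_success, cost, p_caught, penalty):
    """Deterrence succeeds when the expected payoff is not positive."""
    return expected_value(benefit, p_success, cost, p_caught, penalty) <= 0

# Raising the perceived probability of being caught flips the decision:
print(deterred(100, 0.5, 10, 0.1, 200))  # False: EV = 50 - 10 - 20 = 20
print(deterred(100, 0.5, 10, 0.3, 200))  # True:  EV = 50 - 10 - 60 = -20
```

This is the mechanism behind "altering the cost-to-benefit ratio": the defender does not need to make an attack impossible, only unprofitable in the attacker's own calculus.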

超限战 – “Warfare without Bounds”; China’s Hacking of the US February 24, 2020

Posted by Chris Mark in cyberespionage, cybersecurity, Politics, weapons and tactics.


“Pleased to meet you…hope you guessed my name…But what’s puzzling you is the nature of my game.”
– The Rolling Stones; Sympathy for the Devil

UPDATE:  On Feb 10, 2020 the US Government charged 4 Chinese military officers with hacking in the 2017 Equifax breach.  On January 28th, the FBI arrested a Harvard professor for lying about ties to a Chinese recruitment effort and about payments he had received.  The attacks, subterfuge, and efforts against the US continue.  Why?  Read the original post from 2016 and learn about Unlimited Warfare.

Original post from 2016: With the US Government’s acknowledgement of China’s hacking of numerous government websites and networks, many are likely wondering why China would have an interest in stealing employee data.  To answer this question, we need to look back at the 1991 Gulf War. You can read my 2013 article (WorldCyberwar) in The Counter Terrorist magazine on this subject.

In 1991, a coalition led by the United States invaded Iraq in defense of Kuwait.  At the time, Iraq had the 5th largest standing army in the world.  The US-led coalition defeated the Iraqi army in resounding fashion in only 96 hours.  For those in the United States the victory was impressive, but the average American civilian did not have an appreciation for how it was accomplished.

The Gulf War was the first real use of what is known as C4I: Command, Control, Communications, Computers, and Intelligence. It was also the first use of a then-new technology known as the Global Positioning System (GPS).  The Battle of Medina Ridge, a decisive tank battle fought in Iraq on February 26, 1991, was the first to use GPS.  In this 40-minute battle, the US 1st Armored Division fought the 2nd Brigade of the Iraqi Republican Guard and won decisively. While the US lost 4 tanks and had 2 people killed, the Iraqis suffered a loss of 186 tanks and 127 infantry fighting vehicles, with 839 soldiers captured.  The Chinese watched the Gulf War closely and came away with an understanding that a conventional ‘linear’ war against the United States was unwinnable.

After the Gulf War, the Chinese People’s Liberation Army tasked two PLA colonels (Qiao Liang and Wang Xiangsui) with redefining the concept of warfare.  From this effort came a new model of warfare, published in the book “Unrestricted Warfare” (also translated as “Warfare without Bounds”).  Unrestricted Warfare is just what it sounds like: the idea that ‘pseudo-wars’ can be fought against an enemy.  Information warfare, PR efforts, and other tactics are used to undermine an enemy without engaging in kinetic, linear battle.  Below is a quote from the book:

“If we acknowledge that the new principles of war are no longer “using armed force to compel the enemy to submit to one’s will,” but rather are “using all means including armed force and non-armed force, military and non-military, lethal and non-lethal means to compel the enemy to accept one’s interests.”

“As we see it, a single man-made stock-market crash, a single computer virus invasion, or a single rumor or scandal that results in a fluctuation in the enemy country’s exchange rates or exposes the leaders of an enemy country on the Internet, all can be included in the ranks of new-concept weapons.”


On April 15, 2011, the US Congressional Subcommittee on Oversight and Investigations conducted a hearing on Chinese cyber-espionage. The hearing revealed the US government’s awareness of Chinese cyberattacks. In describing the situation in his opening remarks, subcommittee chairman Dana Rohrabacher* astutely stated:

“[The] United States is under attack.”

“The Communist Chinese Government has defined us as the enemy. It is buying, building and stealing whatever it takes to contain and destroy us. Again, the Chinese Government has defined us as the enemy.”

Given the Chinese perspective on Unlimited Warfare, it becomes much clearer that the compromises we are seeing are examples of ‘pseudo-wars’ being fought by the Chinese.  It will be interesting to see how, or if, the US responds.

*Thank you to the reader who corrected my referring to Mr. Rohrabacher as a female.  My apologies to Chairman Rohrabacher!

HR 4036, the “Hack Back Bill”; Understanding Active & Passive Deterrence and the Escalation of Force Continuum. October 22, 2017

Posted by Chris Mark in cybersecurity, Uncategorized.

I wrote this original post several years ago, but it seems to be more relevant now.   As CNN reports, HR 4036 “…formerly called the Active Cyber Defense Certainty (ACDC) Act and informally called the hack-back bill – was introduced as an amendment to the Computer Fraud and Abuse Act (CFAA) last week. Its backers are US Representatives Tom Graves, a Georgia Republican, and Kyrsten Sinema, an Arizona Democrat.”

This is a bill that is sound in theory and terrible in practice.  According to the bill (named ACDC), it would enable a company to take “…active defensive measures…” to access an attacker’s computer.  This is only applicable in the US. Think about this for a minute.  What is the evidence that I was the attacker of company A?  Maybe (quite possibly, almost certainly) a hacker is using my system as a proxy.  So some company can now attack my personal computer?  What happened to “due process”?  If company X simply believes I am a hacker, they can access my personal data without a court order or any due process.  More profoundly, the issues the bill raises pose very real and very direct risks to employees of the company that ‘hacks back’.  This, I think, is unacceptable.

Having performed physical security in very real and very dangerous environments, I can personally attest to the fact that physical threats are real and difficult to prevent.  By allowing a ‘hack back’ the company faces a very real risk of escalating the situation from the cyber domain into the physical domain.  There is NO corporate data that is worth risking a human life.

Too often cybersecurity professionals forget that they are SECURITY professionals first, and the same rules of deterrence, escalation of force, and other aspects apply.  Given this new bill, I felt this was a good time to again discuss deterrence (active and passive) and to once again talk about the Escalation of Force Cycle.  So, what is deterrence?  (Warning: long post. The picture is of the author off the coast of Somalia conducting anti-piracy operations.)

The History of Deterrence Theory:

The concept of deterrence is relatively easy to understand and likely extends to the earliest human activities, in which one early human dissuaded another from stealing food by employing the threat of violence against the interloper.  Written examples of deterrence can be traced as far back as the Peloponnesian War, when Thucydides wrote that there were many conflicts in which one army maneuvered in a manner that convinced the opponent that beginning or escalating a war would not be worth the risk.[1]  In the 4th Century BC, Sun Tzu wrote: “When opponents are unwilling to fight with you, it is because they think it is contrary to their interests, or because you have misled them into thinking so.”[2]  While most people seem to instinctively understand the concept at the individual level, contemporary deterrence theory was brought to the forefront of political and military affairs during the Second World War with the deployment of nuclear weapons against Hiroshima and Nagasaki.[3]

The application of deterrence during WWII was the beginning of understanding that an internal value calculus drives human behavior and that behavior could be formally modeled and predicted with some degree of accuracy.  (more…)
