On Disclosure of Intrusion Events in a Cyberwar:
------------------------------------------------
The Nation State's guide to STFU
================================

In a cyberwar (such as the ongoing events on the Internet), all actors are
motivated to remain silent about incidents that they detect. However, on some
occasions, strategic and political considerations will be more powerful
motivators. These rare disclosure events don't negate the primary motivations
for remaining silent; they simply demonstrate that sometimes there are better
reasons for speaking out.

TL;DR: actors in a cyberwar are motivated not to disclose incidents, but
sometimes strategic and/or political realities take precedence.

I discussed this briefly with Adam Shostack on Twitter, but the constraints
of the medium limited the depth of the discussion. Recently Adam posted a
[blog post](http://newschoolsecurity.com/2013/01/the-high-price-of-the-silence-of-cyberwar/)
that explored his position in more depth. He believes that actors in a cyberwar
are not (always) motivated to remain silent. He also proposes a methodology for
selecting incidents to disclose, and then lays out several benefits that he
believes such disclosure provides. I still think he is wrong. Rather, he got the
right answer for the wrong reason.

Rather than addressing his arguments in detail (I don't find fault with his
logic; it is his premise that is incorrect), I will lay out the
reasoning behind my position. This will provide a more comprehensive 
understanding of an important aspect of cyberwar, one frequently ignored in the 
discussions -- Counter Intelligence (COINTEL). I'll briefly outline some core 
COINTEL concepts, then apply them to the current cyberwar, and then finally 
agree with Adam's conclusion anyway.

Firstly, yes, my understanding of the motivations of the actors in the
cyberwar is partially informed by discussions I've had with active
participants. However, more importantly, I've spent the last year studying
counter intelligence and looking at how to apply it to cyberwar. Part of that 
research was presented in my [OPSEC for Hackers](http://www.slideshare.net/grugq/opsec-for-hackers) talk.
The following arguments are therefore from the point of view of someone who
views the ongoing cyberwar as primarily a series of espionage operations and
activities.

NOTE: I must emphasize that what I outline here is pure speculation. I have no
security clearance with any country, so I have no secret knowledge. My opinions
are informed only by open source materials (read: I read books and stuff).


COINTEL for dummies
===================

In the broadest sense, COINTEL is the practice of defending against and
attacking the intelligence capabilities of the adversary (I will use adversary
and opposition interchangeably). Although there have been several attempts to
categorize COINTEL strategies, I'm partial to the following: *basic denial*,
*adaptive denial*, and *manipulation*. [1]

* Basic Denial: This includes techniques and methodologies that restrict the
                amount of intel the adversary is able to collect. Many OPSEC
                techniques are basic denial techniques.

* Adaptive Denial: These techniques and methodologies are developed to address
                specific vulnerabilities that exist in your organisation.
                For example, if you learn (via your own intelligence
                capabilities) the adversary is monitoring your phone calls,
                switching to couriers for comms would be adaptive denial.
                Because it requires some capacity for determining the
                adversary's capabilities and then enacting a response, this is
                a more advanced COINTEL practice.

* Manipulation: This is when you specifically target the intelligence agencies
                of the adversary and attempt to control their understanding of
                your capabilities, methodologies, membership, techniques,
                tools, and so on. This is an expensive, risky and tricky
                practice to pull off effectively. It requires significant
                resources to plan and orchestrate successfully. 

In the realm of cyberwar, basic denial techniques that prevent the opposition
from learning your capabilities are crucial. For example, if the opposition
learns about a bug that you have, they may patch it and neutralize your
capability. The same applies to the tools and techniques that make up your
toolchain.

[1] Terrorism and Counterintelligence, Blake W. Mobley, 2012. (http://www.amazon.com/Terrorism-Counterintelligence-Detection-Irregular-ebook/dp/B0092X9OBC/ref=sr_1_1)


Motivators to STFU
==================

There are several reasons that I believe an actor in the global game of
cyberwar is motivated to practice basic denial about intrusion incidents and
STFU. The strongest reasons, I believe, are:
 * creating uncertainty in the adversary regarding his success rate,
 * preventing the adversary from engaging in *adaptive denial*,
 * creating scenarios where *manipulation* is possible, and
 * enabling back hacks against the adversary.


Fear, Uncertainty and Doubt
===========================

By not disclosing known intrusions, you deny the adversary knowledge of his
success rate (as measured by covert persistence). Without feedback on which
boxes and networks he controls vs. those he only believes he controls, his
confidence is diminished. He is also significantly more likely to utilize a
compromised resource that is under active surveillance or has been otherwise
neutralized. The adversary's military leaders will likewise be less confident
that they can utilize a specific capability, perhaps even completely dissuaded.

Additionally, if the opponent learns that their operation was a failure (e.g.
their intrusion was discovered and cleaned up), they are likely to attempt it
again. Subsequent operations by the adversary might not be successfully
detected and thwarted. 


Stop Adaptive Denial
====================

The adversary is an intelligent dynamic opponent who will alter his tools,
techniques and methodologies to remain effective. By denying the adversary
information about which of his operations have been discovered, and how, you
are reducing his ability to detect and address vulnerabilities within his
tradecraft. Keeping the knowledge of this vulnerability to yourself (and
possibly your allies) provides you with an advantage against the adversary.
Maintaining this advantage is, obviously, in your best interest. Therefore,
practicing basic denial and not disclosing which of the adversary's successful
intrusions you have detected, and particularly how they were detected, is an
important COINTEL practice.

The motivation here can be summed up as: "keep the adversary's knowledge about
our knowledge of his activities, capabilities and techniques in the 'known
unknowns' quadrant".


Enable Manipulation Opportunities
=================================

Once the adversary has successfully conducted a computer network attack
(CNA), they (a) want to avoid having to do it again, and (b) seek to
profit from it. Typically this is accomplished by installing malicious
software that will provide surreptitious access to the adversary. The adversary
can then search the computer for operationally relevant data. [2]

This situation presents a few interesting opportunities for a COINTEL
manipulation operation. The obvious one is to provide fake data that appears
legitimate but is useless, dangerous or even a lure. A publicly known 
example of this is in The Cuckoo's Egg, where fake documents were used to
provide attribution (the KGB did it!) as well as prove malicious intent (the 
hackers weren't just playing around on the system).

Typically, when an intelligence agency uncovers the agent of an opponent, they
do not shut them down (e.g. arrest them). There is far more benefit to be gained by
allowing the agent to continue to operate... under very heavy surveillance. If
the opposition's agent is a penetration (a "mole"), they will be "packed in
cotton wool" and left in place. Monitoring who a known agent interacts with,
how they operate their tradecraft and what sort of information they are
looking for provides tremendous opportunities for insight into the
opposition's intelligence agency and operational objectives. Finally, this
known agent can be used to feed false information to the adversary. 

In traditional intelligence lingo this would be a "double agent", an agent of
the adversary who has been converted to work for your own side. This opens a
channel into the opposition's intelligence agencies. A deliberate operation to
create such a channel would use a "dangle", essentially a lure to attract the
adversary's attention.

The similarities with honeypots should be obvious. 
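To make the analogy concrete, here is a minimal sketch of the honeypot idea in Python: a listener that quietly records who connects and what they send first, rather than announcing the detection. The function name (`record_probe`) is invented for this illustration and is not from any real honeypot framework.

```python
# Minimal honeypot sketch: silently capture the peer and its first bytes.
# record_probe is a hypothetical name invented for this illustration.
import socket

def record_probe(server_sock, max_bytes=1024):
    """Accept one connection, capture the peer's IP and its first bytes,
    then close. A real deployment would keep the service believable and
    feed this intake into COINTEL analysis rather than disclosing it."""
    conn, (peer_ip, _peer_port) = server_sock.accept()
    try:
        conn.settimeout(2.0)
        data = conn.recv(max_bytes)
    except socket.timeout:
        data = b""
    finally:
        conn.close()
    return peer_ip, data
```

In practice you would bind this to a believable service port, mimic a real daemon's banner, and loop over `record_probe`, logging every contact for later analysis instead of shutting the intrusion down.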

Publicly announcing and disclosing an intrusion that could still yield valuable 
intelligence is an extremely poor use of a scarce resource. Granted, the sheer 
scale of the cyberwar and massive number of incidents reduce the value of any 
one single event. Additionally the limited COINTEL resources of the actors 
would seem to limit the utility of manipulation, however it remains an 
intriguing possibility.


[2] NOTE: we are ignoring directly destructive or disruptive attacks against the
computer, such as Stuxnet, to focus specifically on the espionage angle.

Back Hack
=========

First, a brief history lesson. In the 1990s hackers used to put systems
online with the latest rumored vulnerabilities. They would monitor to see
when they were hacked, and from where. Then the hacker would hack each bounce
box back up the chain (hence "back hack") until he was in a position to
collect the adversary's toolchain. This was one way that 0days and private
tools were stolen. This technique predates honeypots.

As has been noted in numerous research reports, the quality of the adversary's
toolchain varies considerably, and generally tends towards shoddy. Exploitable
bugs in the C&C software used in intrusions are common, and indeed are typically
easy to find and exploit. Laurent Oudot published a large number of such bugs at
SyScan Singapore 2010 (unfortunately the archive isn't online, so here's the
[Full Disclosure mail](http://seclists.org/fulldisclosure/2010/Jun/432)).

One possible COINTEL operation would be to replace the opponent's software
with a malicious version that attacks the C&C infrastructure. This would
enable any number of follow-up operations to exploit the intelligence
opportunities. A recent public example of this was the "Georgia Hacker",
well summarized in this
[article](http://arstechnica.com/tech-policy/2012/11/how-georgia-doxed-a-russian-hacker-and-why-it-matters/).


COINTEL SHMOINTEL, it's an election year!
========================================

This outlines my position on why actors in the global game of cyberwar are 
motivated to remain silent about incidents. These motivators are all COINTEL 
based. 

COINTEL is a powerful guiding force in information warfare. But it is 
not, of course, the only consideration. This is where I have to agree with 
Adam's conclusion. The value of a COINTEL operation, whether basic denial or 
manipulation, has to be judged against the value that can be gained from 
disclosing the incident. This judgment is for the politicians and
other policy makers. It is a strategic decision that must be made to reflect 
policy and advance your own position (at least, that's the theory).

There are instances where disclosing an intrusion and the details of that
intrusion make more sense than maintaining silence. There are also, I believe,
instances where this is emphatically not the case. Unfortunately these
decisions will be made by people who know little to nothing about computers or
hacking.