Thursday, April 12, 2018

Stealing the Socket for Policy and Profit

One exploit that has fascinated me for more than a couple of years is this one by Yuange of NSFOCUS. When I mentioned this on Twitter, Yuange himself pointed me at this paper, where he describes a bit of his technique and his philosophy. A few things stick out from this exploit:

The first is that he was ahead of his time in adopting the PE-parsing technique for writing portable Windows shellcode. Second, he had a distinctively Chinese style of writing the entire exploit in C and having the shellcode simply "compiled" rather than hand-written. Third, he used an entirely new and innovative method of stealing the socket on IIS using the built-in ISAPI handler calls. Fourth, he built a micro-backdoor into his exploit shellcode.

I want to highlight the third thing - the socket stealing. But first, I want to look at the work of another well-known hacker group: LSD-PL. I can't remember now if their Windows Asmcode paper was the first public example of the PE-parsing technique for Windows shellcode. I remember Oded Horowitz worked in that area before it was public (and also wrote a special-purpose linker for Windows which allowed you to write your shellcode in C using Visual Studio).

LSD used a specific technique for their FindSck Asmcode which looks almost exactly like their Unix version. I'll paste it below since a significant portion of the policy community is learning hacker assembly now.

Page 22 of this presentation has the decompilation of this.

In this case they iterate through every FD from 0 to 0xffff and call getpeername() on it, then check whether the peer's source port matches the port patched into the shellcode at runtime.
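For policy people learning to read hacker assembly, here's what that loop boils down to, modeled in Python. This is a sketch of the technique, not LSD's actual code: the real Asmcode is a handful of instructions calling into ws2_32, and the function name and the fd-duplication scaffolding here are mine.

```python
import os
import socket

def find_sck(target_port, max_fd=0xffff):
    """Scan every file descriptor, FindSck-style: call getpeername()
    on each one and return the first fd whose peer port matches the
    port the exploit hardcoded into the shellcode."""
    for fd in range(max_fd):
        try:
            dup = os.dup(fd)               # duplicate so we never close the real fd
        except OSError:
            continue                       # fd isn't open
        try:
            s = socket.socket(fileno=dup)  # raises if the fd isn't a socket
        except OSError:
            try:
                os.close(dup)
            except OSError:
                pass
            continue
        try:
            peer = s.getpeername()         # raises on unconnected sockets
            if isinstance(peer, tuple) and peer[1] == target_port:
                return fd
        except OSError:
            pass
        finally:
            s.close()                      # closes only the duplicate
    return None
```

The exploit patches the attacker's source port into the code before sending it, which is exactly why this style breaks behind NAT: the source port the shellcode sees is not the port the attacker thinks they connected from.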

However, compare that technique to the first GOCode in Apache Nosejob from hacker comedy group Gobbles. Apache Nosejob was the second version of Apache-scalp, which exploited an "impossible" bug released by ISS X-Force researcher Mark Dowd.

As you can see it's called "GOCode" because on the remote side, the shellcode goes through its FDs sending a "G" to each one, and the exploit responds to that G with an O as a simple handshake. This technique is obviously noisier (every socket gets a G, like in some weird Oprah show!) but more resilient against certain kinds of networking environments (NAT, for example).
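Modeled in Python, the whole handshake is a few lines. Again, this is a toy sketch over POSIX sockets rather than Gobbles' actual assembly, and the function name and scaffolding are mine:

```python
import os
import socket

def gocode_probe(candidate_fds, timeout=1.0):
    """Toy model of the GOCode handshake: write 'G' down each candidate
    socket and keep the one that answers 'O' - only the exploit side of
    the connection knows to respond."""
    for fd in candidate_fds:
        try:
            dup = os.dup(fd)               # work on a duplicate fd
            s = socket.socket(fileno=dup)  # raises if fd isn't a socket
        except OSError:
            continue
        try:
            s.settimeout(timeout)
            s.sendall(b"G")                # every socket gets a G...
            if s.recv(1) == b"O":          # ...but only the exploit says O
                return fd
        except OSError:
            pass
        finally:
            s.close()
    return None
```

Because the match is an active handshake rather than an address check, it still works after NAT has rewritten the source port - at the cost of spraying a "G" at every connected client.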

But why are all these somewhat contemporary techniques so different? And why even invest this kind of time and energy in stealing sockets?

Here's what Yuange has to say:

And here is what LSD has to say about that same thing:

One key point from the LSD-PL Windows slides is that they implemented a mini-backdoor in assembly partly to solve the problem every Unix hacker had moving to Windows before PowerShell was included by default: the OS feels lobotomized.

Shellcode is called "Shellcode" because a Unix shell is a full-featured programming environment. There are thousands of ways to transfer files from point A to point B given shell access to a 1990's Unix system. This is not nearly as easy on Windows 2000. But LSD and Yuange both realized that the path of least resistance on Windows was to build file transfer into your stage-1 assembly code rather than trying to script up a wrapper.

Yuange's IIS exploit doesn't "pop cmd.exe" - it has this mini-shell for the operator to use.
So now let's go back to the Yuange exploit and talk about the ISAPI-stealing code as if you are 22yo me, puzzling over it. The first thing he does is set an exception handler for all access violations, and then he walks up the stack, testing for a possible EXTENSION_CONTROL_BLOCK.

The ECB has a fixed size (0x90), which it stores as its first DWORD, and the connID field at ecb+8 will always point...right back at the ECB! Once he has found the ECB he has a connID, plus the addresses (stored in the ECB) of the function pointers for the ReadClient() and WriteClient() calls that IIS provides to every ISAPI.
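Here's a toy Python simulation of that signature check. The 0x90 size, the connID-at-offset-8 pointing back at itself, and the stack-walking idea are from the exploit; the flat byte buffer standing in for the stack, and all the names, are mine (the real shellcode needs the exception handler precisely because some of the addresses it probes aren't mapped):

```python
import struct

ECB_SIZE = 0x90  # cbSize, the first DWORD of an EXTENSION_CONTROL_BLOCK

def find_ecb(memory, base_addr):
    """Scan a byte buffer for something that looks like an ECB:
    first DWORD == 0x90, and the connID field at offset 8 pointing
    right back at the candidate's own address."""
    for off in range(0, len(memory) - 12, 4):  # walk DWORD-aligned, like a stack walk
        cb_size, _version, conn_id = struct.unpack_from("<III", memory, off)
        if cb_size == ECB_SIZE and conn_id == base_addr + off:
            return base_addr + off
    return None
```

Two independent conditions that both point at the same address make false positives vanishingly unlikely, which is what makes the heuristic safe to run over arbitrary stack memory.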

This means his exploit is going to steal the socket reliably, no matter what ISAPI he targets, and whether or not it is In_Proc or Out_Proc, using SSL or not, even if he is behind several layers of middleware and firewalls and proxies of various sorts. In that sense it is BETTER and more generic than the LSD-PL and GOCode styles for this particular problem set (IIS Exploits).

Generic shellcode platforms are often derided by penetration testers as not worth the effort, but I hope that by reading this article you have gained the foresight to see that for real work, by skilled but small teams who cannot afford a room of Raytheon engineers to architect bespoke solutions for every exploit and operation's microclimate, this became a necessary investment. Kostya summed up a lot of Immunity's experience with this in a BlackHat talk.

Generally further in time from left to right.

If you're completely non-technical, the goal of this kind of analysis is difficult to understand, but we wanted to point out that real teams consider their exploit done only when it "works in the wild", and socket-stealing and post-exploit data transfer are a big part of that. Likewise, there are many ways to solve these problems, and different teams chose different ones, which speaks to interesting patterns. Historically, the people who developed these techniques have moved on into interesting places (Yuange is at Tencent, I hear), and if you were not impressed with them in 2001, you may not truly understand the modern landscape.

There was a purpose to hacking in the 2000's beyond getting on stage somewhere. The early hacker groups were run by strong philosophies. Mendez is not the only hacker with a political bent driven by a strong world-view. What and/or who was the AntiSec movement, for example? You can't spend all of your spare time obsessively reading secrets without being changed, and those twists are evident in modern geopolitics as clearly as glacial troughs, if you have the right eyes for it.

Monday, March 19, 2018

Some CrazyPants ideas for handling Kaspersky

These pants make more sense than some of the ideas posted for handling Kaspersky

So the benefit of being a nation-state, and the hegemon of course, is that you can pretty much do whatever you want. I refer, of course, to last week's LawFare post on policy options for Kaspersky Labs. The point of the piece, written by a respected and experienced policy person, Andrew Grotto, is that the US has many policy options when dealing with the risk Kaspersky and similar companies pose to US National Security. Complications include private ownership of critical infrastructure, the nature of cyberspace, and of course ongoing confusion as to whether we have punitive or palliative aims in the first place. Another complication is how crazypants all the suggestions are.

He lists six options, the first two dealing with "Critical Infrastructure" where the Government has direct regulatory levers and Kaspersky has a zero percent market share already and always will. The third one is so insane, so utterly bonkers, that I laughed out-loud when reading it. It is this:

Ok, so keep in mind that "deemed export" is an area of considerable debate in the US Export Control community, and not something any other country does. While yes, applying the BIS Export Control rule in this case would immediately cause every company that does business in the United States to rush to uninstall KAV, this is not where the story would end.

Instead, we would have a deep philosophical discussion (i.e. Commerce Dept people being hauled in front of Congress), because for sure not everyone who works at Azure, at every backup provider in the world, or at literally any software company is a US Citizen. Because while Kaspersky has deep and broad covert access to the machines it is installed on, they are hardly the only ones.

We currently interpret these rules extremely laxly, for good reason.

The next suggestion in the piece is adding Kaspersky to the Entities list - essentially blacklisting them without giving a reason. Even ZTE did not get this treatment, and while they paid a fine and are working their way back to good graces if possible, that action was highly defensible. I mean, in these cases, what about the thousands of US businesses that already have Kaspersky installed? The follow-on effects are massive, and the piece ends up recommending against it, since the case against Kaspersky, while logical, is possibly not universally persuasive as a death sentence without further evidence?

Tool number 5 is the FTC bringing legal claims against Kaspersky for "unfair or deceptive acts or practices", in particular for pulling innocuous files back to the cloud. Kaspersky's easy defense is going to be "We don't know they are innocuous until we pull them back and analyze them, we make it clear this is what we do, and we are hardly the only company to do so - for example, see this article." I.e. the idea of FTC legal claims is not a good one, and they know it.

The last "Policy Tool" is Treasury Sanctions. Of course we can do this but I assume we would have to blow some pretty specific intel sources and methods to do so.

Ok, so none of the ideas for policy toolkit options are workable, obviously. And as Andrew is hardly new at this, I personally would suggest that this piece came out as a message of some kind. I'm not sure WHAT the message is, or who it is for, but I end with this image to suggest that just because you CAN do something doesn't mean it is a good idea.

What happens if the Russians get false flag right?

There's a lot of interesting and unsolved policy work to be done on the Russian hack of the 2018 Olympics. Some things that stuck out at me were the use of router techniques, their choice of targeting, and of course, the attempt to false-flag the operation to the North Koreans. I mean, it's always possible the North Koreans, not shabby at this themselves, rode in behind the Russians or sat next to Russian implants and did their own operation.

There's a lot of ways for this sort of thing to go wrong. Imagine if there had been a simple bug in the router implants, which had caused them to become bricked? Or imagine if the Russians had gotten their technical false flag efforts perfect, and we did a positive attribution to North Korea, or could not properly attribute it at all, but still assumed it was North Korea?

Or what if instead of choosing North Korea, they had chosen Japan, China, or the US or her allies?

What if a more subtle false flag attempt smeared not just a country, but a particular individual, who was then charged criminally, which is the precedent we appear to want to set?

I don't think anyone in the policy community is confident that we have a way to handle any of these kinds of issues. We would rely, I assume, on our standard diplomatic process, which would be slow, unused to the particulars of the cyber domain, and fraught with risks.

It's not that this issue has not been examined - as Allen points out, Herb Lin has talked about it. But we don't have even the glimmers of a policy solution. We have so much policy focus on vulnerability disclosure (driven by what Silicon Valley thinks), but I have seen nothing yet on "At what point will we admit to an operation publicly, and contribute to cleanup?" or "How do we prove to the public that an operation is not us or one of our allies?" In particular I think it is important that these issues are not necessarily Government-to-Government issues.


  • Herb Lin: LINK
  • Technical Watermarking of Implants Proposal: LINK

Tuesday, March 13, 2018

The UK Response to the Nerve Agent Attack

Not only do I think the UK should respond with a cyber attack, I think they will do so in short order.

It's easy to underestimate the Brits because they're constantly drinking tea and complaining about the lorries, but the same team that will change an Al Qaeda magazine into cupcake recipes will turn your power off to make a point.
The Russians have changed their tune entirely today, now asking for a "joint investigation" and not crowing about how the target was an MI6 spy and traitor to the motherland killed as a warning to other traitors (except on Russian TV). I don't think the Brits will buy it. As Matt Tait says in his Lawfare piece, this is the Brits talking at maximum volume, using terminology that gives them ample legal cover for an in-kind military response. Ashley Deeks further points out the subtleties of the international law terminology May chose to use and how it affects potential responses.

For something like this, sanctions go without saying, but I don't think that ends the toolbox. The US often also does indictments, but that's more message sending than impactful sometimes. The UK could pressure Russia on the ground in many places (by supporting Ukraine, perhaps?) but that takes a long time and is somewhat risky. Cyber is a much more attractive option for many reasons, which I will put below in an annoying bullet list.

  • Cyber is direct
  • Cyber can be made overt with a tweet or a sharply worded message
  • GCHQ (and her allies) are no doubt extremely well positioned within Russian infrastructure (as was pointed out in this documentary), so operational lag could be minimized or negligible
  • Cyber can be made to be discriminatory and proportional
  • Cyber can be reversible or not as desired
  • Sending this message through cyber provides a future deterrent and capabilities announcement
That answers why the Brits SHOULD use cyber for this. But we think they will, because they've sent that as a signal via the BBC and the Russians heard it loud and clear.

Tuesday, March 6, 2018

Why Hospitals are Valid Targets for Cyber

Tallinn 2.0 screenshot that demonstrates which subject lines are valid in spam and which are not. This page has my vote for "Most hilarious page in Tallinn 2.0". CYBER BOOBY TRAPS! It's this kind of thing that makes "Law by analogy" useless, in my opinion.

So often, because CNE and CNA are really only a few keystrokes apart ("rm -rf /", for example), people want to say "hospitals" are not valid targets for CNE, or "power plants" are not valid targets for CNE, or any number of other things they've labeled as critical for various purposes.

But the reason you hack a hospital is not to booby trap an MRI machine, but because massive databases of ground truth are extremely valuable. If I have the list of everyone born in Tehran's hospitals for the last fifty years, and they try to run an intelligence officer with a fake name and legend through Immigration, it's going to stand out like a sore thumb.

The same thing is true with hacking United. Not only are the records in and out of Dulles airport extremely valuable for finding people who have worked with the local federal contractors, but doing large-scale analysis of traffic volumes lets you guesstimate budget levels and even figure out covert program subjects. People look at OPM and see only a first-order approximation of the value of that kind of large database. Who cares about the clearance info if you can derive greater things from it?

The Bumble and Tinder databases would be just as useful. If you are chatting with a girl overseas, and she says she doesn't have a Bumble/Tinder account, and you're in the national security field, you're straight up talking to an intelligence officer. And it's hard to fake a profile with a normal size list of matches and conversations... 

And, of course, hacking critical infrastructure and associated Things of the Internet allows for MASINT, even on completely civilian infrastructure. People always underestimate MASINT for some reason. It's not sexy to count things over long periods of time, I guess.

Also, it's a sort of hacker truism that eventually all networks are connected so sometimes you hack things that seem nonsensical to probe for ways into networks that are otherwise heavily monitored.

I highly recommend this book. Sociology is turning into a real science right before our eyes.
SIGINT was the original big data. But deep down all intelligence is about making accurate predictions. Getting these large databases allows for predictions at a level that surprises even seasoned intelligence people. Hopefully this blog explains why so many cyber "norms" on targeting run into the sand when they meet reality.

Wednesday, February 28, 2018

A non-debate on the EU VEP process

VEPfest EU! Watch the whole show here

I know not many people watched the VEPFest EU show yesterday, but I wanted to summarize it. First, I want to comment on the oddity that Mozilla is for some reason leading the charge on this issue for Microsoft and Google and the other big tech companies. Of course, this was not a "debate" or even a real discussion. It was a love-in for the idea of a platonic ideal of the Vulnerability Equities Process, viewed without the actual subtleties or complexities other than in passing mention.

To that end, it did not have opposing views of any kind. This is a pretty common kind of panel setup for these sorts of organizations on these issues and it's not surprising. Obviously Mozilla would prefer a VEP enshrined in EU law, since they have had no success making this happen in the US. Likewise, he really hates the part of the VEP that says "Yes we obey contract law when buying capabilities from outside vendors".

It's impossible to predict the direction of Europe, since this issue is a pet project of one of their politicians, but an EU-wide VEP runs into serious conflict with reality (i.e. not all EU nations have integrated their defense/intelligence capabilities), and a per-country VEP would err on the side of "WE NEED TO BUILD OUR OFFENSIVE PROGRAMS STAT!" Unless the 5eyes are going to donate tons of access and capability to our EU partners, they're going to be focusing hard on the "equities" issue of catching up in this space for the foreseeable future.

I was of course annoyed, as you should be, by Ari Schwartz deciding to make up random research about things he knows nothing about. At 1:45:00 into the program he claims that bug classes have been experiencing more parallel discovery than before.

To be completely clear, there has been no published research on "bug class collision", which would be extremely rare, like studying supernova collisions. Typically "bug class spectrum analysis" is useful for doing attribution from a meta-technical standpoint - the subject of a completely different blog post on how toolchain timelines are fingerprints - specifically because new bug classes are among the most protected and treasured research results.

There has been some work on bug collision, but at very preliminary stages due to the lack of data (and money for policy researchers). Specifically:

  • Katie's RSA paper (Modeled) - PDF  
  • Lily's RAND paper (small data set) - PDF
  • Trey/Bruce's paper (discredited/faked data set) - PDF
There's also quite a lot of internal anecdotal evidence and opinions at any of the larger research/pen-test/offensive shops. But nothing about BUG CLASSES, as Ari claims, and definitely nothing about a delta over time or any root causes for anything like that. Bug classes don't even have a standard definition anyone could agree on.

Anecdotally though, bug collisions are rare, full stop. Every expert knows you cannot secure the internet by giving your 0day to Mozilla, even if you are the USG and have a wide net. Google Project Zero literally had Natalie do a FULL AND COMPREHENSIVE review of Flash vulnerabilities, and almost no difference in adversary collections resulted, despite huge efforts, mitigation work, automated fuzzer work, etc.

But let's revisit: Ari Schwartz literally sat on stage and MADE UP research results which don't exist to fit his own political view. Who does that remind you of?

Monday, February 26, 2018

What is a blockchain for and how does it fit into cyber strategy?

The best answer to what a blockchain is is here: A Letter to Jamie Dimon.

But the best answer to what it is for, is of course a chapter of Cryptonomicon, which can be read online right here.

I will paste a sliver of it below:
A: That money is not worth having if you can't spend it. That certain people have a lot of money that they badly want to spend. And that if we can give them a way to spend it, through the Crypt, that these people will be very happy, and conversely that if we screw up they will be very sad, and that whether they are happy or sad they will be eager to share these emotions with us, the shareholders and management team of Epiphyte Corp.
I think one thing you see a lot (i.e. in personal conversations with Thomas Rid, or when reading Rob Knake's latest CFR piece) is a reflexive confusion as to why every technologist they seem to talk to holds what they consider extremely libertarian views.

My opinion on this is that it's a generational difference, and that the technologists at the forefront of internet technology simply reached that future earlier than their peers who went into policy. In other words, it's a facilely obvious thing to say that the Internet was built (and continues to be built) almost entirely on porn.

But beyond that, the areas where Westphalian Governments are not in line with people's desires create massive vacuums of opportunity. To wit: The global trade in illegal drugs is typically assumed to be 1% of GLOBAL GDP. But that money is not worth having if you can't spend it.

And credit card processors, built on the idea of having a secret number that you hand out to everyone you want to do business with, are a primary way governments lock down "illicit" trade. On a trip last week to Argentina, where Uber is outlawed but ubiquitous, I found you cannot use American Express or local credit cards, but a US Mastercard will work. And if you're a local, they suggest you can use bitcoin to buy a particular kind of cash-card which will work.

The same thing is true in the States when it comes to things that are completely legal, but unfavored, for example the popular FetLife website, which recently self-censored to avoid being blackballed by Visa.

In other words, you cannot look at the valuations of Bit-Currencies and not see them as a bet against the monopoly of Westphalian states on currency and transactions that has existed since Newtonian times. What else does this let you predict?

Friday, February 23, 2018

Blockchain Export Control

Wanting to withdraw from the Wassenaar Arrangement is a totally sane policy position, and hopefully this blogpost will help explain why.

Mara would be better off rewriting Wassenaar's regulatory language as a Solidity smart contract on top of Ethereum. They share (aside from the obtuseness of the language) several key features. In particular, both can be described as one-way transaction streams.

I know that supporters of the WA, which requires all 41 member nations to agree before any change happens, think that the current path of export control is hunky dory and well adjusted to technical realities. But even in areas that ARE NOT CYBER, you only have to sit through a couple of public ISTAC meetings to see that while it is easy to CREATE regulations, it is nearly impossible to revise or erase them. This is why we have regulations on the books that appear to apply to technology from the 50s - which is how people will one day look at all Ethereum programs.

For technologies that change slowly, this is less of an issue. But you cannot predict the rate of technological change before you decide to regulate something with export controls. Nor is any kind of return-on-investment function specified for your regulation, so unused and ill-planned regulatory captures just hang around on the Wassenaar blockchain forever.

As a concrete example, let's take a look at Joseph Cox's spreadsheets, wherein he FOIA'd various UK Govt license filing information.

The 5A1J ("internet surveillance system") spreadsheet, here, specifies two real exports, one of what appears to be ETIGroup's EVIDENT system to the UAE and the other of what appears to be BAE Detica to Singapore, both of which were approved.

Now I personally have spent maybe fifty hours this year trying to untangle the stunningly bad 5A1J language, which uses technically incorrect terminology, arrived vastly out of date (i.e. it applies to any next-gen firewall/breach-detection system), and has no clear performance characteristics. All of this for something that in the UK resulted in TWO SALES, which, if they had been blocked, would just have resulted in the host governments putting something together from off-the-shelf components??!?!

Taking a look at his 4D4 "intrusion software" spreadsheet, here, you get similar results:

  • A sale to the United States
  • A sale of a blanket license for "Basically anything penetration testing related" to Jordan, Philippines, Indonesia, Kuwait, Egypt, Qatar, Oman, Saudi Arabia, Singapore and Dubai.
  • A sale to Bahrain
  • A sale to Dubai (but just for equipment "related"?)

Even if those are the most important four export control licenses ever issued I think the time anyone has spent on implementing or talking about these regulations is EXACTLY LIKE the entire rainforest fed into the blazing fire every day that is Ethereum's attempt to emulate the world's slowest Raspberry Pi running Java.

There's a weird conception among "civil society" experts that export control is useful whenever any technology can have negative uses. That's a misunderstanding of how Dual-Use works that is not shared even among the most optimistic of the specialists I've talked to in this area.

In addition, NOT issuing those licenses results in four possibilities, none of which is "Country does not get said capabilities":

  1. The country develops it internally by gluing off the shelf components together (because there is basically no barrier to entry in these markets - keep in mind HackingTeam was not...a big team)
  2. The country buys it from China 
  3. The country buys it from a Wassenaar country with a different and looser implementation of the regulation. (Unlike Ethereum, every WA implementation is different, which is super fun. For example, the US has this neat concept called "Deemed export" which means you need a license if you give the H1B employee next to you something that is controlled.)
  4. The country buys it from a reseller in a country with less baggage using a cover company and then emails it to themselves using the very complicated export control avoidance tool "Outlook Express".

But for FOUR LICENSES seriously who cares? This whole thing is like having a BBQ on the side of the space shuttle. With enough expended energy you can sure toast a few marshmallows, but it's not going to be the valuable memory building Boy Scout experience for your kids that maybe you were hoping for.

And I'll tell you why I personally care and it's because all the people who should be working on policies that "make sure we don't lose an AI war to China" are instead sitting in Commerce Dept rooms defending their companies from the deadly serious rear naked choke that is Wassenaar! And it's not just cyber, it's everything.

If you want to make a number for your controlled Frommy Widget in the WA go from 4MHz to 6MHz, then it's a simple three-year process of arguing about it with various agencies, and by the time the language has gone through the system and changed, it's already out of date, much like every valuation of your BitCoin you've ever gotten. So now you're spending your precious cycles arguing for a change from 6MHz to 8MHz, in the very definition of a Sisyphean process.

The end result is that instead of exporting hardware around the world, we export jobs as companies set up overseas in the VERY INDUSTRIES WE CONSIDER MOST SENSITIVE AND IMPORTANT. This is a hugely real issue that should be part of the ROI discussion around any of these regulations but never is for some reason.

This could be maybe fixable by implementing a mandatory nonrenewable 5 year sunset to all Wassenaar regulations. But to do this, the US (and the international community) basically needs to hard-fork the whole idea of technological export control, which is something we should do for many reasons. A more realistic option may be to pull completely out of WA and re-implement the parts that make sense with bilateral agreements.

Another issue is that the actual technical-understanding cycles spent on implementing new regulations are fewer than they should be, for a process that is a one-way diode. I.e. you need people full-time on every one of the new and old issues, but by definition the technical experts on these issues work on them part-time. Basically you want people doing a TDY looking at all the regulations from a technical perspective, and we don't have that as a community. We could solve that by giving grants to various companies to fund it, or by hiring for it within the Commerce Department (and the various international equivalents). Think the DARPA PM program, but for export control experts.

But that's hugely expensive, and as pointed out, it's questionable if any of this makes any more sense to invest in than a virtual blockchain cat!

Thursday, February 22, 2018


Today I'm listening to Brandon Valeriano, Donald Bren Chair of Armed Politics, Marine Corps University. You can do that yourself here:

He makes some good points and asks some good questions, in particular that our US-focused understanding may be making it hard to see the real shape of the effects of cyber power projection, and likewise that as a community we focus too much on Megafauna operations such as Stuxnet.

In particular though it's funny to hear him talk about how limited the effects of cyber operations are, while the entire first page of the NYT today, and every day, is about a successful Russian cyber operation.

This, in a nutshell, is where I thought Brandon's previous book ran into trouble, and it's evident in the current talk. Policy and law communities like to split the spookware set of disciplines into very clear buckets: this is espionage, this is sabotage, etc. But that's like trying to sort out what's Karate and what's kickboxing and what's Kung-Fu while you're in the UFC cage and someone is currently punching you in the face!

When we forward deploy NSA people into war zones and provide total coverage across an entire populace's telecommunications for our Marine units, is that cyber power projection? In a way, the final part where you kick down the door and shoot someone is the boring part, right?

Again, he says that China's policy of stealing technology (and M&A deals) through cyber "does not work" and that they've given up. Which, frankly, is exactly what they wanted us to think.

Maybe a more accurate description was that it DID work and they are now pivoting to protecting their lead? They have more AI research happening than we do now. Basic science research now happens in Shanghai and Beijing as the US draws back on funding it. Their Quantum detectors are amazing and revolutionary, if hard to understand. Why wouldn't they want a new norm against economic cyber espionage after fifteen years of running the table?

Also, let me point out that Brandon's usual comments about "cyber weapons being one-use tools" are just weird. Exploits can be reused and are rarely caught, though you do run that risk, and implants get caught eventually but are often re-tooled and re-deployed. And methodologies, listening points, and all the other things that go into cyber power projection are not "one shot". I'm honestly not sure where he's coming from here. But he does keep saying it! Maybe after he reads this post he will write up why he thinks that. I know it's part of his logic regarding the desire of nation-states to hold back on escalatory cyber attacks, but it's not strictly true in any important way. I feel like someone from TAO told him this at a dinner party over drinks and he really hung onto it.

Ok, so as you finish the talk I know he's not going to be able to support his larger thesis in 20 minutes, but it's so hard to hear someone say that cyber power projection is NOT a revolution in nation state conflict and that it cannot cause disruptive effects on a mass scale. Also, it's clear that everyone is now focused on influence operations enabled by cyber, and is going to be completely surprised at cyber's next metamorphosis. :)

Tuesday, February 20, 2018

Meta changes in endpoint defense: Airframes vs Drones

As the video I stole this image from says, "The more autonomy and intelligence you put on these platforms the more useful they become!" You know what's a lot more autonomous than an F-35? A drone! :)

One clear shift in defense occurred when CrowdStrike and Mandiant and Endgame (and now Microsoft, etc.) built platforms for companies to do detailed introspection of their computing fabric. For the first time ever, serious attackers were getting caught in the act.

This technology, despite the buzzword hype, is quite simple: A kernel inspector, streaming metadata to an aggregation system, optionally a network sniffer doing same, and algorithms that run on the data to generate actionable results. The expensive part here is the kernel inspector, which is stupidly hard to make reliable, portable, and secure! 
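
To make that architecture concrete, here is a minimal sketch of the aggregation-and-detection side of such a pipeline. Every name here is hypothetical (this is not any vendor's actual schema), and real products stream far richer data than a single process event:

```c
#include <string.h>

/* Hypothetical process-creation event, as a kernel inspector
 * might stream it up to the aggregation system. */
struct proc_event {
    int  pid;
    int  ppid;
    char image[64];   /* new process,  e.g. "powershell.exe" */
    char parent[64];  /* its parent,   e.g. "winword.exe"    */
};

/* One "algorithm that runs on the data": flag an Office app
 * spawning a shell, a classic post-phish pattern. */
int suspicious(const struct proc_event *e)
{
    return strstr(e->parent, "winword") != NULL &&
           (strstr(e->image, "powershell") != NULL ||
            strstr(e->image, "cmd.exe") != NULL);
}
```

The expensive part, as noted above, is not this rule logic: it's the kernel-side collection feeding it, which has to be reliable, portable, and secure.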

This recent MITRE/CrowdStrike piece demonstrates clearly the effectiveness of this approach against a modeled nation-state adversary who has not themselves tested their implant against CrowdStrike Falcon. 

These mega-implants/"endpoint protection agents" are essentially as expensive to build as airframes. In addition, every vendor produces multiple airframes which escalate in complexity when they detect anything wrong on an endpoint. But what you don't see right now is a lot of ingestion of open-source-style telemetry for your pre-escalation defenses. 

For example, this blogpost details using ELK+OSQUERY+KOLIDE to build an off-the-shelf, scalable, and completely free suite that rivals the instrumentation abilities of some of the more complex market products for "threat hunting". This is essentially the drone analogy to the endpoint protection market. In many cases, these sorts of toolchains completely avoid the need for a kernel-level inspector, which avoids every bluescreen being "your fault". In some cases, operating system vendors have upgraded the built-in capabilities of their platforms so that a kernel inspector isn't necessary; in others, you just go without the deeper levels of data.

Just as drones changed air war forever, I expect these sorts of widely deployed defensive toolkits to change cyberwar, if for no other reason than we can assume they will penetrate the mid and low-end markets, as opposed to just the high end that the major endpoint protection players cover. Also like drones, these sorts of things didn't even exist a couple years ago, and now they are fairly fully featured. 

Of course, DARPA has a role to play here, as it did with the stealth technology behind the F-35. Much as the best part of Cyber Grand Challenge is less the attack tools and more the corpus of targets, we really really really need a massive "corpus" of behavioral/network/etc data from a real company, sanitized such that different detection algorithms can be trained and tested. 

Thursday, February 15, 2018

Indicators of Nation-State Compromises

Which team composition counters which is an extremely complex question, and it maps directly onto the level of complexity we see around cyber war decision making.

So while I enjoy talking about Overwatch, I'm not doing so on this blog for the fun of it. There is a fundamental difference worth pointing out between our "game theory of deterrence" and our evolving understanding of cyber war which is best illustrated by the complexity of modern gaming. I'm not going to point fingers at any particular paper, but most papers on the game theory of cyber war use ENTIRELY TOO SIMPLE game scenarios. Maybe political science departments need to play more Overwatch?

Here are three problems I have run into in the policy space:

  1. I found an implant on my nuclear energy plant and I'm not sure if it's just in the wrong place, or deliberately targeting this plant for espionage, or targeting this plant as a precursor for turning off the power to Miami-Dade.
  2. I found an implant on the Iranian president's network, which I also have an implant on, and I want to know if I should "remove it" or if I should back off because I'm already getting all the take from this network via partner programs of some sort.
  3. I found an implant on an ISIS machine, and I need to know whether it is about to be used to do something destructive, so that I don't install "next" to it for fear of getting detected when it goes off.

Instead of doing a program that is all about diplomats and lawyers meeting constantly to try to work out large global norms around these issues, which invariably will result in long (and completely useless) lists of "Places that should not be hacked" and "Effects your trojans may not cause!", I want to do something that works!

Let's go into this with eyes wide open in that we have to assume the following:

  • We hack our allies and vice versa
  • Our allies hack systems we also want to hack
  • Someone could in theory reuse our own technology against an ally
  • Allies are not going to want to let us know exactly which machine they caught us on

Obviously the first take on solving these sorts of problems is going to be a hotline. You would have someone from another country's State Dept equivalent call up the US State Dept and say "Hey, we found this thing. Is it something you think will do serious damage if we uninstall it?"

This has problems in that the State Dept is probably not aware of our programs, and may not know who to call to find out. Likewise, any solutions in this space need to work at wire speed, and be maintainable "in code space" as opposed to "in law space".

So here is my suggestion. I want a server that responds to a specialized request that contains a sort-of-Yara rule, with some additional information, that lets you know if an implant or exploit is "known" to you as being in that particular network or network type. The server, obviously, is going to federate any questions it gets. So while the request may have come into the US State Dept, it may be getting answered by a NATO partner. You would want to rate-limit requests to avoid the obvious abuses of a system like this by defenders.
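
A sketch of what the core of such a deconfliction endpoint might look like, with every name and field hypothetical. In the real thing, `handle_query` would federate out to partners rather than check a local table, but the shape of the problem - a rule identifier, a network descriptor, a rate limit - would be similar:

```c
#include <string.h>

/* Hypothetical deconfliction query: "is this implant known to you
 * as being on this kind of network?" */
struct decon_query {
    char rule_hash[65];   /* digest of the sort-of-Yara rule     */
    char network_id[32];  /* e.g. "energy/us-southeast" (made up) */
};

enum decon_answer { DECON_UNKNOWN, DECON_KNOWN, DECON_RATE_LIMITED };

/* Stand-in for federation: here, just a local table of known pairs. */
static const struct decon_query known[] = {
    { "abc123", "energy/us-southeast" },
};

enum decon_answer handle_query(const struct decon_query *q, int *budget)
{
    /* Rate-limit first, to blunt the obvious defender abuses. */
    if (--(*budget) < 0)
        return DECON_RATE_LIMITED;

    for (size_t i = 0; i < sizeof known / sizeof known[0]; i++)
        if (strcmp(q->rule_hash, known[i].rule_hash) == 0 &&
            strcmp(q->network_id, known[i].network_id) == 0)
            return DECON_KNOWN;
    return DECON_UNKNOWN;
}
```

The design point is that the answer is a single bit plus a rate limit, which is about as little attribution as the offensive teams could be asked to leak.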

The offensive teams hate any idea of hints of attribution, but life is about compromises, ya know, pun intended. :)

Saturday, February 10, 2018


Overwatch games have six players on a team. It's a common thing to ask for "2-2-2" at the beginning of a game, meaning you want your team to organize into two healers, two tanks, and two DPS. In hacking terms, what this means is that you need to invest in exploits, in implants, and in a sustain/exfiltration crew.

"Ready for...?"
That sounds obvious, I can hear you say in your head. Who would invest only in exploits? Who would have only implants? How far can you get with only a sustain crew? Lots of idiots, lemme tell you. Everyone thinks DPS is the fun part so why would anyone play the other team roles? It is the same in hacking.

The truth is that any team comp can be a very viable strategy, but unbalanced comps tend to be the result of immature CNE efforts. Balance and coordination are the sign of mature - and successful - programs. You may find advanced teams using primitive toolchains and simple strategies to great success because they've built a program with the proper team composition.

People (including me on this blog!) like to measure adversary programs by the sophistication of their tools. But what true teams have is rapid turnaround on exploits, completely unique implants, and massively creative sustain while inside. They take every small advantage - every tiny mistake the defenders make - and turn it into domain admin. 

Friday, February 9, 2018


So if you watch Overwatch League you know that there are three major classes of characters who show up at the pro-level:

  • Healers (Providing SUSTAIN)
  • Damage Dealers (Penetrating into space)
  • Tanks (Holding space)

Heroes never die.

In our game-theory model we use tanks as synonyms for Implants. Damage dealers are clearly your initial operator team or automated toolset which penetrate into adversary networks. Healers are your sustain. But what is sustain, when it comes to CNE?

I have a very particular definition of sustain which is best illustrated by a story I heard recently from Law Enforcement about a hacker who got caught after ten years of having his implants on a regional bank. Every day, for ten years, he had logged in and maintained his presence on that network. Think of the dedication that requires.

But he's not alone. Right now, all over the world, hackers are waking up and visiting thousands of networks, making sure logs are being deleted, gathering new passwords that have changed, moving from host to host to avoid detection, looking to make sure no one is investigating their boxes. There's a giant list of things you have to do - reading the admin's mail to see when upgrade cycles are scheduled and then planning how to stay installed through that kind of activity is not easy!

But just as in Overwatch, this game is won or lost not by how great your DPS is, and sometimes not by the sophistication of your implants, but purely on sustain.

Wednesday, February 7, 2018

Changing the Meta: The Evolution of Anti-Virus

Extremely accurate graphical timeline of AV changes...there has been a LOT of innovation here yet everyone's mental picture is still signature based systems!

So when we talk about the changing Meta of cyber war, I believe that many people have somehow ignored the massive disruptions happening in the defensive "Anti-Virus" market.

Looking at AV from the offensive side, there are many things you now have to take into account, including VirusTotal, Cloud Reputation Systems burning your executables, Cloud Reputation Systems burning your C2/dropper web sites, malware heuristics catching you, VM-detonation systems catching you, anti-rootkit systems messing with you, other implants running their own private analysis against you, etc.

In other words, it's a rough world out there for implants ever since about 2010, and only getting rougher.

But the biggest change, the one that altered the Meta forever, in my opinion, was the switch to reputation-based systems from signatures and heuristics. Being able to see and predict this and engineer around it drove attacker innovation for some time. This affected policy as well, because targets that normally would be of no value became of huge value because of their reputational quality. What are the policy implications of stealing certificates from random Hong Kong-based software providers to hack random other people?

In fact, there were many attacker responses, all of which were predictable, to this meta-shift:

  • Attacking of cloud AV providers (for example, the Israeli team on Kaspersky's network)
  • Coopting of cloud-AV providers (which is what DHS claims it is worried about re: Kaspersky)
  • Full-scripting language implants (aka, powershell implants, Chinese webshells)
  • Implants which run only as DLLs inside other programs (and hence don't need reputation against earlier systems, which did not check DLLs)
  • Watering hole attacks (for both exploitation and C2)
  • Large scale automated web attacks (for gathering C2 Listening Posts)
  • Probably more that I'll think of as soon as I post this. :)

The next meta-change is going to be about automated response (aka, Apoptosis - see MS Video here), as the Super-Next-Gen systems are about to demonstrate. So my question is: Have we predicted the obvious attacker responses?

Monday, February 5, 2018

Policy is just cyber war by other means

S4 published a video of my talk. Rewatching it, it feels disjointed to me. So to summarize the points I was trying to make:

  1. Current policy team in cyber is largely spinning its wheels for various and predictable reasons
  2. Applying more complex game theory is a fruitful thing to do when trying to build a predictive framework around cyber war
  3. Non-state actors are the driving actors, and cannot be ignored in our risk equations

Monday, January 29, 2018

Non-State Actors Practice Deterrence!

I know it's going to annoy the International Relations/Law people when I say this, but non-state actors have a more developed deterrence methodology in the cyber domain than state actors at the moment.

There's a whole slide about this in the Immunity T2/S4 keynotes:

Governments, including the USG, need to be aware of the levers of power projection various private entities have. "Access/Analysis/Remove/Offer" come from the Immunity cyber weapons categorization methodology as explained elsewhere.

To be fair, I think Microsoft and Google can do many things that will, completely legally, hamstring the USG in many ways.

For whatever reason, the thing that has awoken many in Government to this threat is the much more innocuous Strava Heat Map. I know that a month ago if you asked "How would I unmask every US drone base in Africa" the answer would not be an SQLi bug in a jogging data app.

But of course the fact that the international consortium of industry players working on the Meltdown bug was able and willing to keep it a secret from the USG is another interesting data point when it comes to the way private industry can hold its own interests above governments.

One thing I look at with a lot of this technology analysis is whether or not we have crossed the cell membrane that separates a world where the USG is a market driver from one where it is a niche market and the rivers all run in the opposite direction. For information security, it was true ten years ago that the USG was driving the latest technological trends. It was a huge market and had specialized needs that it was very clear about.

I don't think anyone believes that's the case anymore, and it has massive implications for important things like supply chain security, export control, and strategic issues around technological diffusion and power projection.

Friday, January 26, 2018

What is the merit of a merit-based immigration system?

Last week's Grey's Anatomy had a transsexual hack-back plot-line. It was realistic: the FBI looked after their own interests instead of the victim's. And there are a ton of transsexuals in the hacking community. As you might imagine, any discipline of iconoclasts tends to be somewhere they fit in well.

This week's Grey's Anatomy had a plot point of a black 14 year old getting shot by cops as he broke into his own house. They don't show the aftermath, but you, if you're doing strategic analysis of the cyber domain, have to think: This is what you would target if you were our adversary. The natural fault line. The military "center of gravity" of the States is a fragile unity when you have Mattis telling his soldiers to "hold the line" and yet we can't stop racist memes from being on the signs in the Overwatch League video stream.

It's a normal thing to explain to some of our kids how to behave around cops so they don't get murdered by them. THIS IS EXACTLY THE SORT OF THING CYBERWAR WEAPONIZES INTO INSURGENCIES.

I have three kids, and one of them is brown enough I don't let him carry toy guns outside the yard.

The most surprising thing to a lot of us is that anyone is surprised at how many neo-Nazis there are in America. Like every time Susan Hennessy is like "Where did all this come from?!?" you have to laugh. A lot of Immunity employees in Miami sometimes fantasize about moving the HQ to a different city. But to me a lot of cities were always out of the running. Miami's justice system can be corrupt, but it's not compromised by a Confederacy.

I've felt it both ways: on one hand I'm chameleon enough, because of my vocal intonations, that sometimes I can pass - I had one person in a bar in Del Ray ask me if I could understand what it was even like growing up a "person of color" and I almost spit out my beer. On the other hand, in the Florida Keys, which are an hour south of Miami and fifty years behind, I'm my white friend's Hispanic helper to the locals. It's a thing. When girls in Miami flirt with me they often start with "Where are you from?", by which they mean "Why are you brown, exactly?"

I see immigration both ways too. I had a cousin who was a dreamer who had to go back to Peru without knowing more than third grade Spanish. She liked World of Warcraft and computer stuff and that's the shibboleth of being an American as far as I'm concerned. But do companies want a massive increase in H1Bs because it lowers salaries overall? Probably. And I don't think the Democratic proposals are coherent because that's their general policy in life.

A lot of countries use a "merit based" immigration system. They assign points to people based on how likely they are to be of benefit, like going for a job interview at a big company. I remember my job interviews at the NSA, which was for a sort of affirmative-action, ROTC-like program where they paid for college.

My grades in high school were terrible, and the only reason the NSA was talking to me was I was brown, and my SAT scores were decent, and I wanted to join, because although the NSA was more secret back then, it was still the geekiest thing I'd ever heard of.

Affirmative action is by definition odiously unfair. But on the other hand, I think the NSA did OK with that program. I think it needed a few people who would park their shitty Camry with the FREE KEVIN sticker in the director's spot without even thinking about it, and frankly who cares how they got them? That was a pivotal time and the NSA had a few people who were outside its box right when it needed them.

For a lot of people, the merit they are looking for in their immigration system is one that lets them bring their family to live with them in a place they've come to love. I don't think the NSA knew it was getting a needed skill-set when it hired me so many years ago. They didn't have a points system. I think they took a chance on an unknown who had enough drive to want to be a part of them. And you can't tell me some bureaucrat can think of a better merit than that.

In any case, Immunity is hiring again soon for information security consulting jobs, and you don't have to be brown, or even American.

Changing the Meta: Format String Bugs

New bugclasses often change the meta-game of cyber war, and a smart player will prepare for that eventuality. And the one I think best represents this dates to 2000, when Scut of TESO did a talk at Chaos Computer Congress 17 and then released a paper on it. Who is this Scut guy and whatever happened to him, you might ask? I'm sure it's not important.

The specifics of what a format string bug is are a bit beyond a policy blog, but here are some things you learn from his paper:

  1. Format string vulnerabilities were everywhere
  2. Exploiting them taught the hacking community a lot about exploit primitives, for example how to convert relative write-one-word primitives to absolute write-many primitives or into information leaks. In that sense it was a watershed.
  3. Having source code made it super easy to scan for format string vulnerabilities, including with automated analysis techniques. That's why today, like Dodos, they are rather rare.

To return to those glory days of free remotes in every public daemon, you have to go into IoT auditing. But there were winners and losers when it came to the format string feeding frenzy of 2001. Having source code mattered for the offensive teams because it was a race, and because exploitation at this level involves a deeper understanding of an entire program than vulnerability finding does.
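
For readers who want the one-line version of the bugclass: the entire vulnerability is whether attacker-controlled data gets used AS the format string, rather than as an argument to it. A minimal sketch (function names are mine, not from Scut's paper):

```c
#include <stdio.h>

/* Vulnerable: attacker data used AS the format string. If user
 * contains "%x" this leaks memory off the stack; "%n" writes to
 * memory, which is the primitive exploits are built from. */
void log_bad(char *out, size_t n, const char *user)
{
    snprintf(out, n, user);
}

/* Fixed: the data travels as an argument, never as the format. */
void log_good(char *out, size_t n, const char *user)
{
    snprintf(out, n, "%s", user);
}
```

The one-character diff between those two functions is also why grep and automated source analysis killed the bugclass so quickly: a non-literal format string is trivially easy to flag.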

But that said, when it's not a race, binaries are just as good as source, and often better.

To take it back to a higher level: the meta changed, and if you were prepared for it and could adapt quickly enough, you were able to establish a beachhead of shells on boxes all around the world, which could support a permanent power projection capability.

Adaptability is a hard thing to measure in your offensive team. Can your static analysis tools be quickly retooled to find a new bugclass? Can your implants be quickly ported to a new platform? Does your operator team have the ability to quickly absorb a new toolkit?

And yes: Having a lot of source matters to prepare for meta changes because grep is the cheapest and best security analysis tool ever invented. There's a reason every Government finds a way to get source code to everything. If it's not some sort of issue with your imports being certified, then it's because you want to export your code and it happens to link to a cryptographic library. In that sense, source code access is about new bugclasses, not new bugs.

Wednesday, January 17, 2018

The role of the shotcaller

In Overwatch and many online games, one player is often decreed the "shotcaller" on your team. This person has a wide view of the battlefield (i.e. is a backline player), and while they are not responsible for the overall strategy (i.e. team composition, initial setup positioning), they do make "calls":

  • Use Ultimates/Don't use (we've already won/lost)
  • Fight (We have a chance to win!) or Run/Die on Purpose (We have lost, time to regroup)
  • Status of enemy cooldowns, location of important enemies (such as snipers)
  • Target focus (Roadhog is alone!)/Healing focus (Our Reinhardt needs heals!) 

This has direct analogies to cyber operations. I know right now military people are nodding about the OODA loop, but people always focus on the "action" portion of it, whereas in cyber you gain your advantage from speeding up the analysis portion.

To give you an example, let's say you ssh into a box with a stolen key, and then you notice the admin is on the box poking around. You have a set of choices. Do you immediately log out, and hope the admin doesn't notice the logs you have left by logging in? Do you root the box with an 0day, then clean up the logs, then leave immediately? Do you just continue on your mission as if they were not there, since you are probably in and out before they can figure out what's going on?

Ana (who is usually the shotcaller)'s seated pose is from Carlos Norman Hathcock's pic...

A lot of people will say "This is what the operator does" but the decisions you make here affect your global scope. If you try your 0day on boxes where you are likely to get caught, that 0day can easily be burned. But if you log off immediately, your stolen key will likely be burned. If you root the box to clean up, but don't finish your mission, then they may patch or secure the box before you can get back in. A good shotcaller is NOT TOO PARANOID because the question of "Have we been found?" is a very hard one to get right and extremely high consequence.

In other words, the decisions of a shotcaller in a cyber operation (or a penetration test) are the same as in Overwatch. When to go in, when to get out, when to use which tools, where to be persistent and where to leave alone. This is different from your operational planner, which is going to be more tightly connected to your development arm and decide which tools to build and how to tie them together to get an operational capability.

Since this blog is for policy people I want to also point out the policy implications of the Persistence part of APT. Persistence induces many additional risks, especially when done in the face of an active attempt to remove you from a network. There are opsec risks, of course, but what I want to focus on are the risks to the target network.

In order to remove a persistent threat, the target is going to have to rip up large portions of their network, and the attacker is going to have to use techniques that have a chance of causing permanent damage to hardware or causing downtime. If, say, the Chinese QWERTY PANDA group's policy is to stay resident on the DNC's network even after being found, that introduces an escalatory problem first for the DNC, and then for the US.

Most governments have a default policy of "If you get caught, get out" for opsec reasons only. I would argue that it makes sense as a norm for other reasons as well.

Thursday, January 11, 2018

Rethinking Rethinking Security

It's worth reading Jim Lewis's paper from this week on the CSIS website. That said, I can also summarize it polemically by paraphrasing it as "Westphalian states remain the only players that really matter, and cyberwar won't change how they interact that much."

Needless to say, I think he's very very wrong in ways that are important enough to write a blog post about.

We haven't seen a cyber 9/11 only if you refuse to recognize one when it has been the headline of every Politico article for the past two years!

He thinks that if we define "attack" as "coercion against a state to achieve political effect", then it hasn't happened, yet all any of us can do is look around and see it happening in real time! Likewise, his claim that states are robust organizations that shake off cyber operations is totally true, except that really Westphalian states are giant balloons made of reputation and shared mythos, and cyber seems like a bullet created to pop exactly that sort of thing!

My S4 talk, which is what I'm supposed to be working on right now, is the exact opposite of this position. But it's that way not because I feel like aggrandizing cyber operations, but because I have seen a different history and I honestly believe it is impossible to analyze the strategic impact of Mendez's little creation without having that whole picture. Jim says in his paper that the Internet is a creation of Millennial ideals, but the 90's hackers have had a massively larger impact on it. What does he think w00w00 is doing right now?

Where is Dug Song when you need him?

To me, not understanding click-scripts and why they are used and still doing strategic analysis is the same as not understanding the longbow but still trying to understand the battle of Agincourt. This, of course, is the kind of opinion that gets you not invited to write Lawfare pieces. :)

I'm not saying states are powerless, but if he had been hanging around inside the NSA while cyber started, and then watched it grow, he'd probably believe the river of talent and technology was mostly running the opposite way, that non-nation-states may have capabilities that rival or eclipse EVEN THE MOST ADVANCED NATION STATES, and that to think otherwise is to continue developing the same cyber policy that has left us wandering the cyber desert for forty years - and I for one think it's time to hire a cartographer or two!

I mean if he thinks nation states are so resilient as an institution, then why exactly? Has he noticed that his barber and taxi driver are both pretty invested in bitcoin right now? Does he know a state with an unvarnished reputation for truthfulness that could withstand all forms of cyber coercion right now? Did he just watch the US govt come out with an attribution of Wannacry that was several months after Google's and backed up with basically the same stuff?

As far as I can tell the argument is this:

  • Cyber operations have had limited impact on states
  • What impact they HAVE had is beyond reach of non-state players
  • Conclusion: Don't Panic

I just think those things are so obviously false that to me the whole concept of the conclusion falls into wishful thinking. It's not just him, of course, I think there's a massive element of cognitive dissonance in a lot of people who do cyber policy. Partially because, unlike other areas of policy, a lot of people (NOT EVERYONE) just don't want to read the source material, which in this case, is often source code.

Coming back to S4, which is a conference mostly about ICS: you get the feeling from reading Jim's paper that he thinks non-nation-state hackers cannot really do the complicated modeling and physical-cyber coordination required to cause physical effects. Look, the real reason is that they don't feel like it.


Tuesday, January 2, 2018

What hasn't happened

When turning around a ship of this size, there's going to be a long moment where you make neither forward nor backward progress...

I wanted to provide a counter-tale to the Paul Rosenzweig piece in Lawfare last week. We can sum it up with this quote:
Trump’s efforts in cybersecurity have not been terribly impressive. He has made some modest policy improvements and begun putting together a good team—but not much more.
But in fact I think it is a mistake to say that doing nothing is not progress, and the areas where I have been directly involved have seen massive improvements on that front. In particular:

The VEP process was a bad idea that was about to be codified into law. Instead, it has been shaped by a team that understands the real equities and supply chain issues involved, to try to make it work strategically as opposed to being driven by an unrealistic ideology. The message previously was "We don't understand why we even need this line of the modern SIGINT business." That leads to massive brain drain and strategic failure. Now the message is exactly the opposite, even though the policy has not changed a lot, as Paul mentions in his article.

A similar thing is true for the export control area. The idea that you have to cut two regulations to add any one regulation is a silly one. But it works. Previously there literally was no concept of reducing the regulatory burden from things like export control, one of the most spaghetti-like codes on our lawbooks, and one that applies equally to all American businesses, big and small. If we had a Democratic administration I have no doubt that we would have implemented the Wassenaar Arrangement's broken cyber-tools controls without even bothering to change them - or more importantly, without examining WHY they were broken in the first place.

Needless to say, the fact that the EU and the US are going in very different directions on cyber regulations is not something we can just paper over, but without some of the sillier rules in place, and a savvy and business friendly appointment at Commerce, we wouldn't have situational awareness of our policy gaps going into the near future (AI, Quantum, etc.).

To sum up: America's cyber policy overall has been moving towards something more data-based, and realistic as opposed to something purely aspirational. While yes, as Paul and many people have noted, we don't have a Universal Theory or a detailed national strategy for dealing with many of our currently known systemic threats, we are at least demonstrating that we can change our policy based on evidence, which is a good first step.

P.S. I also think the Kaspersky thing is a sign of progress, but hard to detangle that argument here. :)