xorl %eax, %eax

Everything you wanted to know about OPSEC, and some more…


So… I came across another of those “OPSEC recommendations” posts from a well-known cyber security company, and that motivated me to clear some things up. Having been formally trained in OPSEC, like many of my readers, I get annoyed when people abuse very tactical and specific terminology; and one of the most abused terms is OPSEC. Let me clarify what OPSEC is, and what OPSEC isn’t, for you. Hopefully I’ll be able to use this blog post as a reference instead of having to explain the same thing every time.

It is easiest to start with the three most common mistakes in OPSEC references on the internet today. Those three are:

1. OPSEC is NOT Operational Security
OPSEC was first officially written down in 1966 by the US during Operation Purple Dragon, an investigation into what went wrong during some combat operations in Vietnam. Among other things, its output included a process (remember this) for performing such investigations prior to operations in order to avoid fatal compromises. That process was called OPERATIONS SECURITY. OPSEC is OPERATIONS SECURITY, not Operational Security. Hopefully that clears up the first misconception about OPSEC.

2. OPSEC is not necessarily COMSEC (or even INFOSEC)
Some of the most common “OPSEC tips” you will see people sharing without a second thought are things like “use PGP for email”, “don’t send this over unencrypted networks”, etc. Well… those are not OPSEC (Operations Security). Those are COMSEC (Communications Security), and indeed, under certain conditions COMSEC might be necessary for OPSEC (Operations Security). However, this is not a rule. And just for reference, COMSEC is the discipline of preventing unauthorized interception of communications.

3. OPSEC is a process
The last and most important of the three misconceptions is that OPSEC is not a series of predefined tips and tricks. It is a well-defined process consisting of five distinct steps. And it doesn’t matter whether you are talking about kinetic military operations, cyber, space, or anything in between; OPSEC is a process that applies to all of them. Any operation (because OPSEC is Operations Security) can be protected by employing the OPSEC process. Remember this: OPSEC is a process. Right? OPSEC is a process.

Alright, now that the most common misconceptions are clear, let’s dive into the OPSEC process and how you can apply it to protect your operations. It doesn’t matter whether it is a playbook of your incident response team, a threat intelligence collection operation, a red team engagement, a counter-fraud investigation, or anything else; the same process applies to all of them. That’s the beauty of OPSEC.

Note: Some organizations define the five steps as 1. Analysis, 2. Understanding, 3. Monitoring, 4. Evaluation, and 5. Countermeasures, but in practice the tasks are almost identical to the original process.

Here is a quick breakdown of those steps to make the process more understandable. It all starts by initiating an OPSEC review for an operation you are running and whose chance of compromise you want to minimize.

  1. Identification of Critical Information: In other words, define what you have to protect in order to complete this operation. Is it your source IP address? The tools you use? Your C&C infrastructure? Where you are physically located? Whatever it is, define it clearly here. If you want to do it the traditional way, develop a list of the critical information in the four categories referred to as CALI (Capabilities, Activities, Limitations, and Intentions) and then create a CIL (Critical Information List), which is literally a list of the information that is critical for the success of the operation.
  2. Threat Analysis: In cyber this usually falls under threat intelligence, and it is literally identifying the potential threats to the defined CIL. After completing this you will have a better idea of your adversaries. For example, say you are an incident response team working on an OPSEC review of your playbook for collecting malware samples. I am randomly assuming (and I might be wrong) that one of your threats would be exposure of your source IP/network/fingerprint: you might be collecting malware samples from targeted attacks, and doing so from an identifiable source would tip off the adversary about your investigation.
  3. Vulnerability Analysis: Now that you know what your threats are, you have to look for the vulnerabilities the adversaries are most likely to exploit. Using the incident response malware sample scenario: could it be that you have some automated system that fetches those samples? That some personnel aren’t trained and might detonate a sample from an internet-facing sandbox? This is the stage where you write those findings/vulnerabilities down.
  4. Risk Assessment: Now that you have an idea of your threats and vulnerabilities, create your typical matrix of likelihood versus impact and explain the impact of each of those vulnerabilities being abused by the adversary.
  5. Appropriate OPSEC Measures: Based on the risk assessment, you prioritize and work out what measures you need to take. Also notice the word “appropriate” here; don’t go crazy. Do what makes sense for the security of the operation. (Yes, all the tips you see people sharing randomly are OPSEC measures, which means they might be completely irrelevant to your operations.)
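To make steps 4 and 5 concrete, the risk assessment and prioritization can be sketched in a few lines of Python. This is a minimal illustration, not part of the formal OPSEC doctrine; the vulnerabilities and their likelihood/impact scores below are entirely hypothetical.

```python
# Hypothetical OPSEC risk assessment: score each vulnerability by
# likelihood x impact (both on a 1-5 scale) and prioritize the highest.
vulnerabilities = [
    {"name": "automated sample fetcher uses static egress IP",
     "likelihood": 4, "impact": 5},
    {"name": "untrained analyst detonates sample in internet-facing sandbox",
     "likelihood": 2, "impact": 4},
    {"name": "tool fingerprint leaks in HTTP User-Agent",
     "likelihood": 3, "impact": 3},
]

def prioritize(vulns):
    """Return the vulnerabilities ordered by descending risk score."""
    for v in vulns:
        v["risk"] = v["likelihood"] * v["impact"]
    return sorted(vulns, key=lambda v: v["risk"], reverse=True)

for v in prioritize(vulnerabilities):
    print(f'{v["risk"]:>2}  {v["name"]}')
```

The output of such a ranking is what drives the “appropriate” in step 5: you spend effort on the top of the list, not on every tip someone shared on the internet.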

How can you realistically make this work? It’s pretty easy if you already have some sort of documented processes for your operations, and most offensive and defensive security teams do. Some call them playbooks, others runbooks, plans, etc. If you have any of those, pick one, execute this OPSEC process (which shouldn’t take more than a few hours in most cases), and then write down (and ideally automate) in that playbook the OPSEC measures that apply to it. Then, whenever you do your existing periodic review, or whenever you change something significant, initiate a new OPSEC review. You can even start delivering OPSEC briefings to new team members once you have a clear picture of what your OPSEC measures are.

By the way, did I mention OPSEC (Operations Security) is a process? Yes, it’s a process. So remember this and stop perpetuating misconceptions and misinformation about what OPSEC is and how it can be applied. The above process (OPSEC is a process) is designed to work with ANY operation whose critical information you want to protect. OPSEC is Operations Security, and it is a well-defined five-step process to protect critical information.

Thank you.

Useful reading material to better understand OPSEC and use it properly, as it was designed, and without abusing the term because it sounds cool.

Written by xorl

March 29, 2020 at 21:39

Posted in opsec

The 2018 NSA Cyber Exercise (NCX) Module 2 tabletop board game


Yesterday YouTube suggested to me this video from a 2018 NSA event in Maryland, USA. It was called the NSA Cyber Exercise (NCX) and it had three different modules using a gamification approach: the first was about writing a cyber security policy and was titled Legal & Policy tabletop exercise, the second was a tabletop blue/red team exercise called Red versus Blue, and the third was a typical CTF game named Live Fire Operation Gongoro Dawn. Due to the pandemic I have some extra spare time, so I decided to analyse this video and “reverse engineer” the board game used for the tabletop exercise, since it seemed quite nice.

The board game has the red team on the left side and the blue team on the right side. Apart from the two teams, each table also had a third person, probably the one who keeps track of the score and acts as the guide/narrator and/or observer/judge for the board game. From the video I managed to make a digital copy of the board game, which looks like this.

Each square also has an icon representing the equivalent item, but I didn’t want to spend time adding those. Then, there are several decks of cards, split into the following types.

  • Mapped (black color)
  • Enumerated (blue color)
  • Compromised (red color)
  • Bonus (green color)
  • Blue team cards (white back color with NCX logo)
  • Red team cards (white back color with NCX logo)

As you can guess, the black (mapped) cards are placed on top of an item on the board if that item is considered mapped. The same happens with the blue (enumerated) and red (compromised) cards, which are also self-explanatory. The blue and red team cards are the different capabilities each team can take advantage of to execute its strategy. Those cards look like the following, where the top part describes the condition, the middle part the capability, and the lower part the impact.

The team cards are pretty simple in terms of their capabilities, and it appears that depending on the scenario, the judge/observer can provide or take away specific capability cards from the teams. The following capture shows nicely how the teams, the judge/observer, and the board are placed in the game: on the left is the blue team, in the middle the judge/observer, and on the right the red team.

Although those are kind of self-explanatory too, here are some of the blue team capability cards that were visible in the video. Please note that most of the blue team cards had no condition and their impact was Persistent. Also note that this is not the complete deck; it’s mostly to give you an idea of what kind of capabilities we are talking about.

  • Security training refresher
  • Internet whitelisting
  • OPSEC review program
  • Rebuild host
  • Password reset policy
  • System log audit
  • Firewall access control updates
  • Conceal server banners
  • Incident response team
  • Patch management program
  • Intrusion detection system
  • Strong passwords
  • Anti-malware controls
  • IP Security (IPSec)
  • Input validation controls
  • Strong encryption
  • Anomaly detection software
  • Web proxy
  • Deploy log server
  • Configuration management review

The red team had many more conditions and more variety in the impact. Some of the conditions were things like: Play if the workstations are compromised, Play on mapped only hosts, Play on any compromised host, Play on the internal zone if it is accessible, etc. The same applies to the impact, which was mostly Persistent but in some cases Instant. Here are some examples of the red team capability cards.

  • Ping scan
  • Vulnerability scan
  • Sniffing
  • Reduce operational tempo
  • Port scan
  • Software vulnerability exploit
  • Data injection exploit
  • Pass the hash exploit
  • Cover tracks
  • Cache poisoning exploit
  • Phishing exploit
  • Stolen passwords
  • Cross-Site Scripting (XSS) exploit
  • Broken authentication exploit
  • Server banner grab
  • Build botnet
  • Virus/Worm exploit
  • Open source research
  • Install backdoor and rootkit
  • Zero-Day vulnerability exploit

And of course, there is also a pair of dice, which I assume was used to determine the result of the proposed action and potentially for score counting in each round.

Overall it looks like a very nice way to gamify tabletop exercises for blue/red team engagements, and it could potentially even be improved by, for example, using ATT&CK framework TTPs as red team capabilities and the NIST Cybersecurity Framework as blue team capabilities. Nevertheless, that is just a suggestion for a potential implementation approach based on what the NSA did in the 2018 NSA Cyber Exercise (NCX).

Written by xorl

March 28, 2020 at 14:48

Posted in security

Linux kernel 0days without code auditing


The recent grsecurity post “The Life of a Bad Security Fix” was the inspiration for this blog post, which covers something that is common sense within the community but not explicitly mentioned much outside of it.

I remember the first time I discovered a 0day privilege escalation in the Linux kernel, many years ago. My process back then was fairly trivial and partially driven by my fear of that monster codebase called the upstream kernel. Here is what I did back then.

  1. Find a relatively simple kernel module that comes with the upstream kernel
  2. Dissect and learn every bit and piece of it
  3. Go through each function that involved user data and look for mistakes
  4. Test my hypothesis
  5. If valid, keep it aside to start working on an exploit for it

Later on, I found that you can focus only on the interesting code paths: the ones involving specific functions, logic, or interaction with user-provided or user-derived data. But all of that involves some sort of code auditing, right? Can we do it without it?

I have many such examples on my blog, going all the way back to 2006. Just like in grsecurity’s blog post, you can simply read the ChangeLog or the kernel commits. Magic 0days are hidden in there. ;)

ChangeLogs and commits are great and only require you to validate whether a fix addresses an exploitable vulnerability or just a plain bug. The hard part of the code auditing and debugging has already been done, and in some cases reporters even include PoC code to reproduce the issue, which gives you a starting point for your exploit development.
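As a trivial illustration of that triage, the first pass over commit messages can be automated. This is only a sketch with made-up commit subjects; in practice you would feed it the output of `git log` against an actual kernel tree, and the keyword list would be much longer.

```python
# Hypothetical triage: flag commit subjects containing keywords that
# frequently indicate a silently fixed memory-safety issue worth a look.
KEYWORDS = ("use-after-free", "out-of-bounds", "overflow",
            "double free", "uninitialized", "race condition")

def flag_commits(subjects):
    """Return the commit subjects mentioning a memory-safety keyword."""
    return [s for s in subjects
            if any(k in s.lower() for k in KEYWORDS)]

# Made-up commit subjects standing in for `git log --oneline` output.
commits = [
    "netfilter: fix use-after-free in abort path",
    "docs: update maintainer entry",
    "usb: gadget: fix possible out-of-bounds read in config parsing",
]
print(flag_commits(commits))
```

Everything the filter flags still needs the manual validation step: is the flagged fix actually reachable and exploitable, or just a robustness improvement?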

Another approach is going through the discoveries of the syzkaller kernel fuzzer. Again, you usually have the trigger code and a pretty complete debugging output. This means you know there is something interesting going on; you just have to trace it down and find a way to exploit it.

Lastly, you can keep an eye on malware sandboxes for Linux samples with potential 0days used in the wild. It is extremely rare that those samples will include Linux kernel 0days, but it is also a very low-effort task on our side to have some automated monitoring for them, like a couple of YARA rules on the VirusTotal API.
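The monitoring side of that idea can be sketched as a simple filter over hunting notifications. To be clear, the dictionary layout, rule names, and feed below are all hypothetical stand-ins, not the actual VirusTotal response schema; the point is only that once your YARA rules fire, triaging the matches is a few lines of code.

```python
# Hypothetical: reduce a hunting-notification feed to Linux (ELF)
# samples matched by our kernel-exploit YARA rules. The dict layout
# is illustrative only, not a real VirusTotal API schema.
WATCHED_RULES = {"linux_kernel_exploit", "elf_local_privesc"}

def interesting(notifications):
    """Keep notifications from watched rules that hit ELF samples."""
    return [n for n in notifications
            if n["rule"] in WATCHED_RULES and n["file_type"] == "ELF"]

# Made-up notification feed.
sample_feed = [
    {"rule": "linux_kernel_exploit", "file_type": "ELF", "sha256": "aa" * 32},
    {"rule": "win_ransomware", "file_type": "PE", "sha256": "bb" * 32},
]
print(interesting(sample_feed))
```

Anything that survives the filter goes to an analyst; as noted above, true kernel 0days in the wild are rare, so the queue stays short.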

I hate to say it, but it’s true: nowadays you can build a decent stash of 0days for the Linux kernel with far less, in some cases zero, manual code auditing. I still enjoy going through the Linux kernel, but if this is your job, then you might as well take advantage of the work that has already been done by others.

Written by xorl

January 28, 2020 at 11:39

Posted in linux

Per country internet traffic as an early warning indicator


Over the last few years there has been a visible trend of governments shutting down the internet as a means of reducing the impact of, or controlling, outbound communications during major incidents such as riots, conflicts, civil unrest, and other rapidly evolving situations that could pose a threat to national security. This makes monitoring internet traffic a key input for early warning of high-impact incidents.

In 2019 there were several examples of this behaviour; to reference a few: India, Iran, Indonesia, Iraq, Myanmar, Chad, Sudan, Kazakhstan, Ethiopia, etc. Typically, most of those shutdowns occurred anywhere from hours to days before the actual incident unfolded. This means that in all of those cases, tracking internet traffic per country could have helped you proactively detect an upcoming situation and, together with other enriching sources, have sufficient time to make an informed decision.

Similarly, per-country internet traffic is also visibly impacted by other widespread crisis situations such as earthquakes, volcanic eruptions, and other natural disasters. Below you can see an example from the recent (7 January 2020) earthquake in Puerto Rico. The most common pattern in cases of natural disasters is either a significant traffic drop due to infrastructure issues, as in the case of Puerto Rico, or an increase in traffic due to heightened outbound communications by the majority of the people in the affected geographical region. So, although this by itself might not result in immediate action, it can be automatically enriched by other means, such as social media and local sensors & reporting, to provide timely and actionable intelligence.
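A minimal sketch of such an early-warning check, assuming you already collect per-country traffic volumes (the numbers below are made up): compare the current measurement against the median of recent samples and alert on a sharp drop.

```python
# Hypothetical early-warning check: flag a country whose current
# traffic falls below 50% of the median of its recent samples.
from statistics import median

def traffic_alert(history, current, threshold=0.5):
    """Return True if `current` is below threshold * median(history)."""
    return current < threshold * median(history)

# Made-up hourly traffic volumes (e.g., Gbps) for one country.
baseline = [120, 118, 125, 121, 119, 123]
assert traffic_alert(baseline, 40) is True    # sharp drop: alert
assert traffic_alert(baseline, 115) is False  # normal fluctuation
```

A real deployment would obviously need per-country baselines, seasonality handling, and the enrichment sources mentioned above before any alert is actionable; this only shows the core signal.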

So, per-country internet traffic monitoring can assist your intelligence team as an additional data point for generating actionable and timely intelligence products that will help you protect your assets proactively, usually before an incident reaches the public media.

Written by xorl

January 14, 2020 at 09:51

Left of boom: Do we actually do this?


I decided to end this year’s blog posts with something slightly different. So, what does “left of boom” really mean? The phrase became increasingly popular in the Intelligence Community after the 9/11 terrorist attacks to describe the mission of the counter-terrorism units within the IC: everything we do has to happen before a bomb explodes, that is, on the left side of the timeline of the events about to unfold. So, why is this so important for all intelligence teams, and do we actually do it?

The first and foremost goal of any (cyber or other) intelligence team is to proactively provide an unbiased and as accurate as possible assessment of an upcoming event, which will be used as key input in the decision-making process. That word, proactive, encompasses the “left of boom” mentality. However, it happens more rarely than most businesses would like to admit.

For example, is taking down phishing domains quickly after they become live proactive? Not really. Proactive would have been knowing that those domains were going to be used for phishing and taking action before they were even up and running.

Is finding leaked credentials or user accounts on some forum proactive? Not really. Proactive would have been knowing that they were leaked before someone shared them on a forum.

Or, on the non-cyber side, is reporting that a tornado just hit the location of one of your offices proactive? No. Proactive would have been briefing the relevant staff in advance that it was going to happen.

Some might argue that all of the above are proactive and actionable intelligence products, and I could go on with countless more examples trying to counter that argument, but this is not what this post is about. It’s about answering the question: are we “left of boom” or not?

In my opinion, we always have to keep moving to the left of the boom as much as legally and humanly possible. Obviously, as a private business intelligence team you cannot run CNE operations against a threat actor that operates phishing domains targeting your company. However, you can monitor for new registrations from that threat actor and for new TLS certificates, understand their TTPs, and track them closely. For example, do they use specific hosting provider(s)? Is there a pattern in the targets? An operating timezone? Habits? Etc.
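One low-effort way to move that monitoring to the left is to score newly observed domains against your brand as they appear, for example with edit distance. The brand name and the domain feed below are hypothetical; in practice the feed would come from sources like certificate transparency logs or new-registration data.

```python
# Hypothetical: flag newly observed domains that look like typosquats
# of our brand, using Levenshtein edit distance on the base label.
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def suspicious(domains, brand="examplecorp", max_dist=2):
    """Keep domains whose base label is within max_dist edits of brand."""
    return [d for d in domains
            if 0 < edit_distance(d.split(".")[0], brand) <= max_dist]

# Made-up feed of newly observed domains.
feed = ["examplec0rp.com", "exampiecorp.net", "unrelated.org"]
print(suspicious(feed))
```

Catching the lookalike at registration time, rather than at first phishing email, is exactly the kind of small step to the left the post is arguing for.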

For all of the above, there are many proprietary and open source solutions that can assist you with the data collection and processing, or in some cases even the information production. But turning that data and information into timely and actionable intelligence is something only a team of skilled individuals can do.

By now “left of boom” and its importance are probably very clear to the reader. But what about the title’s question: do we actually do it? The answer is no. You can never be far enough to the left of the boom. As long as you are striving every single day to get a little bit further to the left, you are on the right path. If you can already identify a new phishing domain the moment it is registered, can you identify it even before that? You will realize that after a while, this operating model will lead you to the actual intelligence that can assist in disrupting those threats once and for all. You will start looking for answers to questions/intelligence requirements such as: Who (as in physical person(s)) is behind this? What is the end-to-end operation they are running? What is required to get this threat actor criminally prosecuted? Etc.

And with this in mind, I wish everyone a happy New Year’s Eve, and let’s all work harder to make sure we get further to the “left of boom” in 2020. :)

Written by xorl

December 31, 2019 at 09:50