xorl %eax, %eax

Dumpster diving is still alive


I would like to use a recent example to demonstrate that this threat is still valid and that companies should consider it in their security policies. Especially in the lockdown/remote-working situation many companies find themselves in at the moment, it is an even bigger threat.

Dumpster diving is nothing more than going through someone’s (a company’s or an individual’s) trash/dumpster to discover proprietary information. Some of the most high-profile cyber-espionage cases that I am aware of used this technique very effectively. But there is also a cyber-criminal aspect to it. For example, someone in Nizhny Novgorod recently found 10 folders with confidential information from Vozrozhdenie Bank.



This happened literally days ago at a large organization (around 45k employees), and it shows that this threat is still relevant even for mature companies. Even more so now that many people work from home, which means they might not have access to the facilities they had in their normal office environments. Here are some recommendations if you don’t already do this:

  • Have a clear policy for document handling/lifecycle
  • Have data classification that aligns with the policy and treats different classifications with proportional measures
  • Provide document destruction/disposal procedures
  • Provide the required equipment/facilities
  • Train, train, and train your employees some more on this threat
  • Use watermarks to identify the source of the leaks
  • Continuously monitor for leaks not only for digital goods, but also for physical ones (like documents, corporate devices, etc.)
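The watermarking recommendation above can be as simple as issuing a unique token per recipient of a sensitive document and keeping the mapping so a leaked copy can be traced back to its source. Here is a minimal sketch; the function names, registry structure, and document/recipient values are my own illustrative assumptions, not any particular product’s API:

```python
import hashlib
import secrets

def issue_watermark(document_id: str, recipient: str, registry: dict) -> str:
    """Generate a short, unique token to embed in this recipient's copy."""
    token = hashlib.sha256(
        f"{document_id}:{recipient}:{secrets.token_hex(8)}".encode()
    ).hexdigest()[:12]
    # Record which copy this token belongs to, so a leak can be traced.
    registry[token] = (document_id, recipient)
    return token

def trace_leak(token: str, registry: dict):
    """Look up which document copy (and therefore recipient) leaked."""
    return registry.get(token)

registry = {}
t = issue_watermark("Q2-strategy.pdf", "alice@example.com", registry)
assert trace_leak(t, registry) == ("Q2-strategy.pdf", "alice@example.com")
```

In practice the token would be embedded in the document itself (visible watermark, metadata, subtle formatting differences), but the tracing logic stays the same.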

Written by xorl

June 15, 2020 at 12:17

Posted in security

Lessons from the Twitter Saudi espionage case


I was recently going through the Saudi Arabia espionage case on Twitter that went public in November 2019. I think there are lots of interesting lessons in this case for any threat intelligence (and security in general) team, as it demonstrates a combination of cyber and traditional HUMINT techniques.



There is a lot of information out there, but in my opinion the best source is the 27-page U.S. Department of Justice criminal complaint, which goes into great detail both on the counter-intelligence operation that the FBI ran in collaboration with Twitter and on the activities of the threat actors.

In summary, using a front charitable organization, the Saudi intelligence officers organized a tour of Twitter’s office where they made first contact with the insiders (also Saudi nationals working at Twitter), whom they later recruited and used to access the data of over 6,000 Twitter accounts for intelligence collection purposes. After that they had several meetings in various locations (including during Twitter corporate events), and the intelligence officers paid the insiders through a variety of means (wire transfers, deposits to relatives abroad, companies, etc.) for their services. The intelligence they were after was anything from dissidents, to background checks, to other collection targets (people they were tracking).

I was trying to summarize the lessons that a threat intelligence team can take from this corporate espionage case, and here is what I came up with.

  • The insiders were SREs, yet they managed to obtain access to customer data via internal tools. Monitoring for such activity should be relatively easy with good role definitions and UBA rules, and can quickly identify insider threats.
  • In a similar manner, the insider SREs were able to bypass the normal Twitter account takedown/complaint process and handle takedowns themselves for accounts requested by their handlers. Like the above, any access to systems outside the services a team owns should be monitored.
  • The criminal complaint references how one of the insider SREs had dozens of calls with his handler during work hours to provide intelligence on specific Twitter accounts. Similarly, there are reports of one of the insiders being very stressed, taking unusual days off, etc. Team leads (TLs) should be trained to pick up those signs and handle them accordingly. They might indicate personal issues or mental health struggles, but they can also be signs of espionage.
  • Similarly, the insiders were making last-minute trips with same-day returns, getting paid tens of thousands of dollars by their handlers (which likely means they were also spending more), receiving expensive gifts that they were seen wearing and selling, setting up companies, etc. All of those are indicators that a TL should have picked up and reported to the threat intelligence (or security) team to investigate as signs of insider threat activity.
  • The DoJ document doesn’t provide a lot of details on this, but it seems the initial meeting was set up trivially, without even a basic background check on the visitors. At a minimum, visitors shouldn’t be allowed in all areas, they must always be escorted, and employees should be trained on what can be shared and on signs of potential espionage activity by third parties.
  • The insiders were using unconventional means to communicate with their handlers, including Apple Notes, non-corporate Gmail accounts, etc. Those are things to consider when building your DLP and decryption strategy. First analyse what users typically use for communication, follow whatever approval processes your company/government requires, and monitor those channels for threat indicators.
  • Another key factor was the number of people involved. Just like in most HUMINT collection operations, it was a network of collaborating employees. Keep this in mind when conducting such investigations; it’s rarely a single person doing everything.
  • Another great lesson from this case was that one of the insiders left to start his own company to receive the payments from the Saudi handlers, but maintained access to Twitter’s internal systems via his ex-colleagues. Accounts of departing or recently departed employees should be closely monitored, because if they perform any malicious activity it is very likely to happen either right before leaving or just after. So, if that communication had been monitored, it might have been possible to figure out what happened earlier.
  • When you have clues that you are dealing with a nation-state threat actor, involve the experts (AKA the counter-intelligence agencies of your country). They probably have more intelligence than your team on the threat actors, and definitely more experience in handling such cases. For the same reason, it’s important to have already established a good relationship with those agencies.
  • Lastly, when a private company is up against a nation-state, the likelihood of any legal consequences is minimal. So what you can do instead is public shaming (like in this case) to raise awareness and show the rest of the world what’s happening. Enough of this “public shaming” can actually lead your government to take a stronger stance (imagine if all private companies went public with the espionage cases they faced and which country was behind them). So, although it might look like there is nothing you can do, even going public is a great offensive action.
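The out-of-role access monitoring from the first two lessons can be sketched as a simple UBA-style rule: flag any employee who touches a tool their role does not normally grant. The role map, tool names, and log format below are my own illustrative assumptions, not Twitter’s actual systems:

```python
# Map each role to the internal tools it is expected to use.
# (Hypothetical roles and tool names for illustration only.)
ROLE_ALLOWED_TOOLS = {
    "sre": {"deploy-console", "metrics-dashboard"},
    "support": {"account-admin", "takedown-queue"},
}

def flag_out_of_role_access(access_log):
    """Return (user, tool) pairs where the tool is outside the user's role."""
    alerts = []
    for user, role, tool in access_log:
        if tool not in ROLE_ALLOWED_TOOLS.get(role, set()):
            alerts.append((user, tool))
    return alerts

log = [
    ("bob", "sre", "deploy-console"),  # normal SRE activity
    ("bob", "sre", "account-admin"),   # SRE touching a customer-data tool
]
assert flag_out_of_role_access(log) == [("bob", "account-admin")]
```

Real UBA products add baselining and statistics on top, but even a static role-to-tool allowlist like this would have surfaced SREs repeatedly querying customer data.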

Just to be clear, I’m not bashing Twitter’s security. They did an excellent job, including the entire counter-intelligence operation in collaboration with the FBI, the interviews of the insider threat actors, and some of the things I mentioned above. Also, what I’m writing is based on the limited information that is publicly available, so it’s very likely I am missing key details. I’m just summarizing some lessons, based on my limited knowledge and experience, that any threat intelligence team can potentially take from this recent espionage case. If you think I missed any important lessons, please let me know. :)

Written by xorl

May 31, 2020 at 23:26

FIRST Cyber Threat Intelligence Webinar Series: Building an intelligence-driven organization


Just like for most people who speak at conferences, this year has been quite unusual for me too. I recently gave my talk, Building an intelligence-driven organization, and it was a new experience for me: presenting at an industry conference remotely. So, here is how it went.



In 2019 I submitted a talk to the CFP of the FIRST Cyber Threat Intelligence Symposium, which was scheduled to take place in Zurich in March 2020. I received some feedback and, after some back-and-forth, in February 2020 I got an email that a version of my talk with some minor adjustments had been accepted. Being accepted to speak at this event was one of the biggest highlights of my professional life in 2020, but as we all know… COVID-19 happened.

After more back-and-forth, the awesome FIRST CTI organizing team decided to run the event online in the first weeks of May 2020 and renamed it the FIRST Cyber Threat Intelligence Webinar Series. That worked out nicely, and the entire event was great. Based on the small amount of experience I gained from this, here are some recommendations for any “remote” conference speakers:

  • Find a quiet place
  • Make sure you have good internet connectivity
  • Use good audio/video hardware
  • Test your setup and content in a test conference call before the event
  • Test your setup and content a few minutes before the presentation once again
  • Keep everything you might need close by (water, notes, etc.)
  • Turn off mobile phones, pagers, chat applications, or anything else that can cause interruptions or unwanted noise (jewellery, cables/clothes touching the mic, etc.)
  • It’s easier to go off track when presenting in this format, so stay focused and plan your talk carefully
  • Depending on the talk, you might not have video, which removes non-verbal communication from the equation, so you have to rely more on how you present your content
  • If you do have video, make sure your appearance, the lighting, and the background are professional and don’t distract your audience from the actual content
  • It’s much harder to assess the audience’s engagement throughout the talk, so make sure you ask for a lot of feedback afterwards

Just to be clear, I am not saying that I succeeded at all of the above, just that I realized their importance throughout this process. Hopefully this will be useful to future “remote” presenters. :)

Written by xorl

May 15, 2020 at 09:44

Everything you wanted to know about OPSEC, and some more…


So… I came across another of those “OPSEC recommendations” posts from a well-known cyber security company, and it motivated me to clear a few things up. Having been formally trained in OPSEC, like many of my readers, I get annoyed when people abuse very tactical and specific terminology, and one of the most abused terms is OPSEC. Let me clear up what OPSEC is and what OPSEC isn’t. Hopefully I’ll be able to use this blog post as a reference instead of having to explain the same thing every time.



It is easiest to start with the three most common mistakes when it comes to OPSEC references on the internet today. Those three are:

1. OPSEC is NOT Operational Security
OPSEC was first officially written down in 1966 by the US during Operation Purple Dragon. This was an investigation into what went wrong during some combat operations in Vietnam. Among other things, it produced a process (remember this) for performing such investigations prior to operations to avoid such fatal compromises. That process was called OPERATIONS SECURITY. OPSEC is Operations Security, not Operational Security. Hopefully that clears up the first misconception about OPSEC.

2. OPSEC is not necessarily COMSEC (or even INFOSEC)
Some of the most common “OPSEC tips” you will see people share without a second thought are things like “use PGP for email”, “don’t send this over unencrypted networks”, etc. Well… those are not OPSEC (Operations Security). Those are COMSEC (Communications Security), and indeed, under certain conditions COMSEC might be necessary for OPSEC. However, this is not a rule. For reference, COMSEC is the discipline of preventing unauthorized interception of communications.

3. OPSEC is a process
The last and most important of the three misconceptions is that OPSEC is not a series of predefined tips and tricks. It is a well-defined process consisting of five distinct steps. And it doesn’t matter whether you are talking about kinetic military operations, cyber, space, or anything in between: OPSEC applies to all of them. Any operation (because OPSEC is Operations Security) can be protected by employing the OPSEC process. Remember this: OPSEC is a process. Right? OPSEC is a process.



Alright, now that the most common misconceptions are cleared up, let’s dive into the OPSEC process and how you can apply it to protect your operations. It doesn’t matter whether you’re talking about a playbook of your incident response team, a threat intelligence collection operation, a red team engagement, a counter-fraud investigation, or anything else; the same process applies to all of them. That’s the beauty of OPSEC.


Note: Some organizations define the five steps as 1. Analysis, 2. Understanding, 3. Monitoring, 4. Evaluation, and 5. Countermeasures, but in practice the tasks are almost identical to the original process.

Here is a quick breakdown of those steps to make them more understandable. It all starts by initiating an OPSEC review for an operation whose chance of compromise you want to minimize.

  1. Identification of Critical Information: In other words, define what you have to protect to complete this operation. Is it your source IP address? The tools you use? Your C&C infrastructure? Where you are physically located? Whatever it is, define it clearly here. If you want to do it the traditional way, develop a list of critical information across the four categories referred to as CALI (Capabilities, Activities, Limitations, and Intentions) and then create a CIL (Critical Information List), which is literally a list of the information that is critical to the success of the operation.
  2. Threat Analysis: In cyber this usually falls under threat intelligence, and it is literally identifying the potential threats to the defined CIL. After completing this you will have a better idea of your adversaries. For example, say you are an incident response team working on an OPSEC review of your playbook for collecting malware samples. I am assuming (and I might be wrong) that one of your concerns would be hiding your source IP/network/fingerprint, because you might be collecting malware samples from targeted attacks, and doing so from an identifiable source would tip off the adversary to your investigation.
  3. Vulnerability Analysis: Now that you know your threats, you have to look for the vulnerabilities the adversaries are most likely to exploit. Using the malware sample scenario: could it be that you have some automated system that fetches those samples? That some personnel aren’t trained and might detonate a sample in an internet-facing sandbox? This is the stage where you write those findings/vulnerabilities down.
  4. Risk Assessment: Now that you have an idea of your threats and vulnerabilities, create your typical matrix of likelihood versus impact and explain the impact of each of those vulnerabilities being abused by the adversary.
  5. Appropriate OPSEC Measures: Based on the risk assessment, you prioritize and work out what measures you need to take. Notice the word “appropriate” here: don’t go crazy. Do what makes sense for the operation’s security. (Yes, all the tips you see people sharing randomly are OPSEC measures, which means they might be completely irrelevant to your operations.)
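Step 4 is easy to make concrete. Here is a minimal sketch of a likelihood-versus-impact matrix that scores each vulnerability and sorts them so the "appropriate measures" of step 5 can be prioritized; the scales and the example entries (taken from the malware-collection scenario above) are my own illustrative assumptions:

```python
# Simple ordinal scales for the risk matrix; real programs may use
# finer-grained or quantitative scales.
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

def prioritize(vulnerabilities):
    """Score each (name, likelihood, impact) entry and sort by risk, descending."""
    scored = [
        (LIKELIHOOD[lik] * IMPACT[imp], name)
        for name, lik, imp in vulnerabilities
    ]
    return sorted(scored, reverse=True)

# Hypothetical findings from the vulnerability analysis step.
vulns = [
    ("automated sample fetcher exposes source IP", "high", "high"),
    ("untrained analyst detonates sample publicly", "medium", "high"),
    ("sandbox banner reveals vendor", "low", "medium"),
]

for score, name in prioritize(vulns):
    print(score, name)
```

The output ordering is what feeds step 5: the highest-scoring items get countermeasures first, and the lowest may reasonably be accepted as residual risk.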



How can you realistically make this work? Pretty easily, if you already have some sort of documented process for your operations, and most offensive and defensive security teams do. Some call them playbooks, others runbooks, plans, etc. If you have any of those, pick one, run this OPSEC process (which shouldn’t take more than a few hours in most cases), and then write down (and ideally automate) in that playbook the OPSEC measures that apply to it. Then, during your existing periodic review, or whenever you change something significant, initiate a new OPSEC review. Once you have a clear picture of your OPSEC measures, you can even start delivering OPSEC briefings to new team members.

By the way, did I mention that OPSEC (Operations Security) is a process? Yes, it’s a process. So remember this and stop perpetuating misconceptions and misinformation about what OPSEC is and how it can be applied. The above process (OPSEC is a process) is designed to work with ANY operation whose critical information you want to protect. OPSEC is Operations Security, and it is a well-defined five-step process for protecting critical information.

Thank you.

There is plenty of useful reading material out there to better understand OPSEC and use it properly, as it was designed, without abusing the term because it sounds cool.

Written by xorl

March 29, 2020 at 21:39

Posted in opsec

The 2018 NSA Cyber Exercise (NCX) Module 2 tabletop board game


Yesterday YouTube suggested to me a video from a 2018 event held by the NSA in Maryland, USA. It was called the NSA Cyber Exercise (NCX), and it had three different modules using a gamification approach. The first was about writing a cyber security policy and was titled Legal & Policy tabletop exercise, the second was a tabletop blue/red team exercise called Red versus Blue, and the third was a typical CTF game named Live Fire Operation Gongoro Dawn. Due to the pandemic I have some extra spare time, so I decided to analyse the video and “reverse engineer” the board game used for the tabletop exercise, since it seemed quite nice.



The board game has the red team on the left side and the blue team on the right. Apart from the two teams, each table also had a third person, probably the one who keeps track of the score and acts as the guide/narrator and/or observer/judge for the board game. From the video I managed to make a digital copy of the board, which looks like this.



Each square also has an icon representing the equivalent item, but I didn’t want to spend time adding those. Then you have several decks, split into the following types of cards.

  • Mapped (black)
  • Enumerated (blue)
  • Compromised (red)
  • Bonus (green)
  • Blue team cards (white back with NCX logo)
  • Red team cards (white back with NCX logo)

As you can guess, the black (mapped) cards are placed on top of an item on the board if that item is considered mapped. The same happens with the blue (enumerated) and red (compromised) cards, which are also self-explanatory. The blue and red team cards are different capabilities that each team can take advantage of to execute its strategy. Those cards look like the following, where the top part describes the condition, the middle part the capability, and the lower part the impact.
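The condition/capability/impact structure described above maps naturally to a small data model, which could be a starting point if you wanted to recreate the game digitally. This is my own hypothetical sketch; the field names and the playability rule are assumptions, not anything shown in the NSA video:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CapabilityCard:
    capability: str                  # middle part of the card
    impact: str                      # lower part: "Persistent" or "Instant"
    condition: Optional[str] = None  # top part; None if always playable

def playable(card: CapabilityCard, board_state: set) -> bool:
    """A card is playable if it has no condition or its condition holds."""
    return card.condition is None or card.condition in board_state

# Example: a red team card that requires compromised workstations.
phishing = CapabilityCard("Phishing exploit", "Persistent",
                          condition="workstations compromised")
assert not playable(phishing, set())
assert playable(phishing, {"workstations compromised"})
```

The board itself could then be a mapping from squares to their current marker (mapped, enumerated, compromised), updated as cards are played.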



The team cards are pretty simple in terms of their capabilities, and it appears that depending on the scenario, the judge/observer can provide or take away specific capability cards from the teams. The following capture shows nicely how the teams, the judge/observer, and the board are placed in the game: on the left side is the blue team, in the middle the judge/observer, and on the right the red team.



Although these are somewhat self-explanatory too, here are some of the blue team capability cards that were visible in the video. Please note that most of the blue team cards had no condition and their impact was Persistent. Also note that this is not the complete deck; it’s mostly to give you an idea of what kind of capabilities we are talking about.

  • Security training refresher
  • Internet whitelisting
  • OPSEC review program
  • Rebuild host
  • Password reset policy
  • System log audit
  • Firewall access control updates
  • Conceal server banners
  • Incident response team
  • Patch management program
  • Intrusion detection system
  • Strong passwords
  • Anti-malware controls
  • IP Security (IPSec)
  • Input validation controls
  • Strong encryption
  • Anomaly detection software
  • Web proxy
  • Deploy log server
  • Configuration management review



The red team had many more conditions and more variety in impact. Some of the conditions were things like: “Play if the workstations are compromised”, “Play on mapped hosts only”, “Play on any compromised host”, “Play on the internal zone if it is accessible”, etc. The same applies to impact, where it is mostly Persistent but some cards were Instant. Here are some examples of the red team capability cards.

  • Ping scan
  • Vulnerability scan
  • Sniffing
  • Reduce operational tempo
  • Port scan
  • Software vulnerability exploit
  • Data injection exploit
  • Pass the hash exploit
  • Cover tracks
  • Cache poisoning exploit
  • Phishing exploit
  • Stolen passwords
  • Cross-Site Scripting (XSS) exploit
  • Broken authentication exploit
  • Server banner grab
  • Build botnet
  • Virus/Worm exploit
  • Open source research
  • Install backdoor and rootkit
  • Zero-Day vulnerability exploit



And of course, there is also a pair of dice, which I assume was used to determine the result of a proposed action and potentially for score counting in each round.



Overall it looks like a very nice way to gamify tabletop exercises for blue/red team engagements, and it could potentially be improved by, for example, using ATT&CK framework TTPs as red team capabilities and the NIST Cybersecurity Framework functions as blue team capabilities. But that is just a suggestion for a potential implementation approach based on what the NSA did in the 2018 NSA Cyber Exercise (NCX).

Written by xorl

March 28, 2020 at 14:48

Posted in security