xorl %eax, %eax

Archive for the ‘threat intelligence’ Category

The role of linguists in threat intelligence teams

DISCLAIMER: Just to be clear, the following does not represent my employer's views, and the examples are not from my employer. Threat intelligence is something I really enjoy doing, and for this reason I get the opportunity to help many organizations.

No matter how good your malware and intelligence analysts are, on most occasions during an intelligence operation you will end up having to deeply understand a foreign language: the language of your target. Whether it is something as simple as finding the right communication media (forums, messengers, social media, etc.) for collection, or interacting with a threat actor to elicit information, the fact is that a linguist can play a key role in the success of a threat intelligence team.



Like most people in this field, I am constantly trying to learn about foreign cultures relevant to the intelligence requirements. However, that cannot replace the role of a skilled intelligence linguist, who is not only an expert in the specific language(s) but also fully understands things that are much harder to grasp, such as local culture, customs, habits and traditions, slang, different accents, history, etc. So, although knowledge of foreign languages is a plus for simple tasks, it cannot replace the role of an experienced linguist.

I know that many people enjoy some good ol’ war stories… So, I’ll share with you three quick ones from my personal experience which show the value of having that skillset in your threat intelligence team.

Slang terms
Years ago, when I had almost no knowledge of the Russian language, I was collecting intelligence on a Russian-speaking threat actor. The first challenge was identifying and getting access to all the forums the threat actor had access to. It was the first time I came across one of the most common slang terms among Russian-speaking threat actors, “Логи” (literal translation: logs), and I still recall that it took me a few hours to figure out this simple term. For those curious, “Логи” is used to describe compromised data such as accounts with credentials, cookies, etc., typically collected by a piece of malware. So, if someone wants to buy compromised Example.com accounts harvested by malware, they might make a post titled “куплю логи Example.com” (literal translation: buying Example.com logs). For an intelligence linguist that lookup would have taken less than a second, because it is such a commonly used slang term.

Local communities
Some years ago I was working on the threat landscape of a foreign company operating in a specific domain in a region of China. For this reason, I spent quite some time trying to become familiar with local threats, as well as with local threat intelligence experts, to get their perspectives. Even though I was physically located in this region of China, it was very challenging since people didn't trust me, a foreigner, with potentially confidential information, and it took a lot of effort on my side. Eventually, one day I managed to build rapport with a local person, and within hours that person gave me more information than I could have collected in months. If a linguist had been available at that time, they would have already known at least 60% of what this person told me, and would have been able to find the other 40% much faster and cheaper than me thanks to their skillset.

Historical/cultural information
I recall an investigation where I was helping with the attribution of a cyber-criminal. We had been able to collect a decent amount of information on the threat actor and were in the analysis phase of the intelligence cycle, with the requirement being de-anonymization/attribution. One of the collected items was a screen recording of the threat actor performing their illegal activity. At some point in the recording there were literally two frames in which some Arabic text appeared, blurry and unclear. A linguist, however, informed us that this was a specific expression used by a very specific group of people. That tiny detail helped us uncover the identity of the target, and it would have been impossible to recognize without a very deep understanding of the local culture and history.

I understand that many of my readers might not have the capacity to keep dedicated linguists on their intelligence teams for all the languages their threats originate from. So, what can you do in this case? Here are some suggestions:

  • First identify the most prominent languages relevant to your intelligence requirements
  • At a minimum, encourage and support your threat intelligence personnel to learn those languages by providing training, budget, and learning tools
  • If deemed safe, allow your personnel to travel and stay in those foreign countries for sufficient time to understand, at least partially, the culture, habits, customs, etc.
  • There are companies and government organizations offering cultural awareness trainings for different countries. Use those as a means to get your personnel familiar with their targets’ culture and mindset
  • As the team grows, hire dedicated analysts native in the targets’ language(s) and potentially even split the teams to relevant areas of responsibilities (LATAM, APAC, MENA, etc.)

In conclusion, before adding more reverse engineers or DFIR analysts to your intelligence team, I would highly encourage you to consider having some dedicated intelligence linguist(s). That skillset can be a force multiplier for an intelligence team. And if you cannot hire such experts, at a minimum, support your people to grow their intelligence linguist skills as described above.

Written by xorl

August 20, 2020 at 11:03

Lessons from the Twitter Saudi espionage case

I was recently going through the Saudi Arabia espionage case at Twitter that went public in November 2019. I think this case holds lots of interesting lessons for any threat intelligence (and security in general) team, as it demonstrates a combination of cyber and traditional HUMINT techniques.



There is a lot of information out there, but in my opinion the best source is the 27-page U.S. Department of Justice criminal complaint, which goes through lots of details both on the counter-intelligence operation that the FBI conducted in collaboration with Twitter, and on all of the activities of the threat actors.

In summary, using a front charitable organization, the Saudi intelligence officers organized a tour of Twitter's office where they made first contact with the insiders (also Saudi nationals working at Twitter), whom they later recruited and used to access over 6,000 Twitter accounts' data for intelligence collection purposes. After that they had several meetings in various locations (including during Twitter corporate events), and the intelligence officers paid the insiders through a variety of means (wire transfers, deposits to relatives abroad, companies, etc.) for their services. The intelligence they were after was anything from dissidents, to background checks, to other intelligence collection targets (people they were tracking).

I was trying to summarize the lessons that a threat intelligence team can take from this corporate espionage case, and here is what I came up with.

  • The insiders were SREs, yet they managed to obtain access to customer data via internal tools. Monitoring for such activity should be relatively easy with good role definitions and UBA (User Behaviour Analytics) rules, and can quickly identify insider threats.
  • In a similar manner, the insider SREs were able to bypass the normal Twitter account takedown/complaint process and handle takedowns themselves for accounts requested by their handlers. Like the above, any access to systems outside the team's own services should be something to monitor.
  • The criminal complaint has some references where one of the insider SREs had dozens of calls with his handler during work hours to provide intelligence on specific Twitter accounts. Similarly, there are reports of one of the insiders being very stressed, taking unusual days off, etc. The team leads (TLs) should be trained to pick up those signs and handle them accordingly. They might indicate personal issues or mental health problems, but also espionage activity.
  • Similarly, the insiders were making last-minute trips with same-day returns, getting paid tens of thousands of dollars by their handlers (which likely means they were also spending more), receiving expensive gifts that they were witnessed wearing and selling, setting up companies, etc. All of those are indicators that a TL should have picked up and reported to the threat intelligence (or security) team to look for signs of insider threat activity.
  • The DoJ document doesn’t provide a lot of details on this, but it seems that the initial meeting was set up trivially without any, even basic, background check on the visitors. At a minimum, visitors shouldn’t be allowed in all areas, they must always be escorted, and employees should be trained on what can be shared and on signs of potential espionage activity by third parties.
  • The insiders were using unconventional means for communication with their handlers, including Apple Notes, non-corporate GMail accounts, etc. Those are things to consider when building your DLP and decryption strategy. First analyse what users typically use for communication, follow whatever approval processes your company/government requires, and monitor those channels for threat indicators.
  • Another key factor was the number of people involved. Just like in most HUMINT collection operations, it was a network of collaborating employees. Keep this in mind when conducting such investigations; it’s rarely a single person doing everything.
  • Another great lesson from this case was that one of the insiders left to start his own company to receive the payments from the Saudi handlers, but maintained access to Twitter’s internal systems via his ex-colleagues. Accounts of departing or recently departed employees should be closely monitored, because if they were to perform any malicious activity, they would very likely do it either right before leaving or just after they left. So, if that communication had been monitored, Twitter might have been able to figure out what happened earlier.
  • When you have clues that you are dealing with a nation-state threat actor, involve the experts (AKA counter-intelligence agencies of your country). They probably have more intelligence than your team on the threat actors, and definitely more experience on how to handle such cases. For the same reason it’s important to have already established a good relationship with those agencies.
  • Lastly, when a private company is up against a nation-state, the likelihood of any sort of legal consequences for the perpetrators is minimal. So, what you can do instead is public shaming (like in this case) to raise awareness and show the rest of the world what’s happening. Enough of this “public shaming” can actually lead your government to take a stronger stance (imagine if all private companies went public with the espionage cases they had and which country was behind them). So, although it might look like there is nothing you can do, even going public is a great offensive action.
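The role-based monitoring idea from the first two bullets can be sketched as a simple UBA-style rule: flag any access to a system that falls outside an employee's role profile. Everything below (the role names, systems, and event fields) is a hypothetical illustration, not Twitter's actual tooling or role model.

```python
# Minimal sketch of a role-based access rule for insider-threat detection.
# Role profiles and event fields are hypothetical examples.
ROLE_ALLOWED_SYSTEMS = {
    "sre": {"deploy-tool", "metrics-dashboard"},
    "support": {"account-admin", "takedown-queue"},
}

def flag_out_of_role_access(events):
    """Return events where a user touched a system outside their role."""
    alerts = []
    for event in events:
        allowed = ROLE_ALLOWED_SYSTEMS.get(event["role"], set())
        if event["system"] not in allowed:
            alerts.append(event)
    return alerts

events = [
    {"user": "alice", "role": "sre", "system": "deploy-tool"},
    {"user": "bob", "role": "sre", "system": "account-admin"},  # out of role
]
print(flag_out_of_role_access(events))
```

In practice such a rule would sit in a SIEM or UBA product rather than a script, but the principle is the same: a well-defined role-to-system mapping makes out-of-role access trivially detectable.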

Just to be clear, I’m not bashing Twitter security. They did an excellent job, including the entire counter-intelligence operation in collaboration with the FBI, the interviews of the insider threat actors, and also some of the things I mentioned above. Also, what I’m writing is based on the limited information that is publicly available, so it’s very likely I am missing key details. I’m just summarizing some lessons, based on my limited knowledge and experience, that any threat intelligence team can potentially take from this recent espionage case. If you think I missed any important lessons, please let me know. :)

Written by xorl

May 31, 2020 at 23:26

Per country internet traffic as an early warning indicator

Over the last few years there has been a visible trend of governments shutting down the internet as a means of controlling outbound communications and reducing impact during major incidents such as riots, conflicts, civil unrest, and other rapidly evolving situations that could pose a threat to national security. This makes monitoring internet traffic a key input for early warning of high-impact incidents.



In 2019 there were several examples of this behaviour; to reference a few: India, Iran, Indonesia, Iraq, Myanmar, Chad, Sudan, Kazakhstan, Ethiopia, etc. Typically, most of those shutdowns occurred anything from hours to days before the actual incident unfolded. This means that in all of those cases, tracking internet traffic per country could have helped you proactively detect an upcoming situation and, together with other enriching sources, have sufficient time to make an informed decision.



Similarly, per-country internet traffic is also visibly impacted by other widespread crisis situations such as earthquakes, volcanic eruptions, and other natural disasters; one example is the recent (7 January 2020) earthquake in Puerto Rico. The most common patterns in cases of natural disasters are either a significant traffic drop due to infrastructure issues, as in the case of Puerto Rico, or an increase in traffic due to heightened outbound communications by the majority of the people in the affected geographical region. So, although this by itself could not result in immediate action, it can be automatically enriched by other means such as social media, local sensors & reporting, etc. to provide timely and actionable intelligence.
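Both patterns described above (sharp drops and crisis-driven spikes) can be operationalized as a simple baseline comparison: compare the latest per-country traffic sample against a rolling baseline and alert on large deviations. This is a minimal sketch; the thresholds and sample values are illustrative assumptions, not from any specific traffic data provider.

```python
from statistics import mean

def traffic_alert(history, current, drop_ratio=0.5, spike_ratio=2.0):
    """Flag a country whose current traffic deviates strongly from baseline.

    history: recent traffic samples (e.g. hourly byte counts)
    current: latest sample
    Returns 'drop', 'spike', or None.
    """
    baseline = mean(history)
    if current < baseline * drop_ratio:
        return "drop"    # possible shutdown or infrastructure damage
    if current > baseline * spike_ratio:
        return "spike"   # possible crisis-driven communication surge
    return None

# Illustrative: a country whose traffic falls to well below half its baseline
print(traffic_alert([100, 98, 102, 101], 40))  # -> drop
```

A real implementation would account for diurnal and weekly seasonality before comparing against the baseline, but even this naive version illustrates why per-country traffic works as an early warning signal.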



So, per-country internet traffic monitoring can assist your intelligence team as an additional data point in generating actionable and timely intelligence products that will help you protect your assets proactively, and usually before an incident reaches the public media.

Written by xorl

January 14, 2020 at 09:51

Left of boom: Do we actually do this?

I decided to end this year’s blog posts with something slightly different. So, what does “left of boom” really mean? This phrase became increasingly popular in the Intelligence Community after the 9/11 terrorist attacks to describe the mission of the counter-terrorism units within the IC. Meaning, everything we do has to be before a bomb explodes. That is, the left side of the timeline of events that are about to unfold. So, why is this so important for all intelligence teams and do we actually do it?



The first and foremost goal of any (cyber or other) intelligence team is to provide an unbiased and as accurate as possible proactive assessment of an upcoming event, which will be used as key input in the decision-making process. That word, proactive, encompasses the “left of boom” mentality. However, it happens more rarely than most businesses would like to admit.

For example, is taking down phishing domains quickly after they go live proactive? Not really. Proactive would have been knowing that those domains were going to be used for phishing and taking action before they were even up and running.

Is finding leaked credentials or user accounts on some forum proactive? Not really. Proactive would have been knowing that they were leaked before someone shared them on a forum.

Or on the non-cyber side, is reporting that a tornado just hit the location of one of your offices proactive? No. Proactive would have been to have briefed the relevant staff in advance that this was going to happen.

Some might argue that all of the above are proactive and actionable intelligence products, and I could go on with countless more examples trying to counter that argument, but this is not what this post is about. It’s about answering the question, are we “left of boom” or not?



In my opinion, we always have to be moving to the left side of the boom as much as legally and humanly possible. Obviously, as a private business intelligence team you cannot run CNE (Computer Network Exploitation) operations against a threat actor that operates phishing domains targeting your company. However, you can monitor for new registrations from that threat actor and new TLS certificates, understand their TTPs, and monitor/track them closely. For example, do they use specific hosting provider(s)? Is there a pattern in the targets? An operating timezone? Habits? etc.
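Tracking a phishing actor's registration habits can be approximated by scoring newly observed domains (from certificate transparency logs, passive DNS, or zone feeds) against their known patterns. The keywords, nameserver, and scoring below are hypothetical examples of actor TTPs, not a real detection ruleset.

```python
import re

# Hypothetical TTPs observed for a phishing actor targeting "examplecorp"
ACTOR_TTPS = {
    "keywords": [r"examplecorp", r"exampl[e3]corp"],  # brand and lookalikes
    "nameserver": "ns1.cheap-host.example",           # favored hosting provider
}

def score_domain(domain, nameserver):
    """Score a newly observed domain against known actor patterns."""
    score = 0
    for pattern in ACTOR_TTPS["keywords"]:
        if re.search(pattern, domain, re.IGNORECASE):
            score += 2
    if nameserver == ACTOR_TTPS["nameserver"]:
        score += 1
    return score

# A brand lookalike on the actor's favored infrastructure scores highest
print(score_domain("login-examplecorp.com", "ns1.cheap-host.example"))
```

The point is not the scoring itself but the shift it represents: instead of reacting to a live phishing page, you are watching the actor's infrastructure as it is being set up.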

For all of the above, there are many proprietary and open source solutions that can assist you with the data collection, processing or even the information production in some cases. But turning that data and information into timely and actionable intelligence is something that only a team of skilled individuals can do.

By now the “left of boom” and its importance are probably very clear to the reader. But what about the title’s question, do we actually do it? The answer is no. You can never be far enough to the left of the boom. As long as you are striving every single day to get a little bit further to the left, you are on the right path. If you can already identify a new phishing domain the moment it is registered, can you identify it even before that? You will realize that, after a while under this operating model, it will lead you to the actual intelligence that can assist in disrupting those threats once and for all. You will start looking for answers to questions/intelligence requirements such as: Who (as in the physical person(s)) is behind this? What is the end-to-end operation they are running? What is required to get this threat actor criminally prosecuted? etc.

And with this in mind, I am wishing everyone a happy New Year’s Eve, and let’s all work harder to make sure we are getting more to the “left of boom” in 2020. :)

Written by xorl

December 31, 2019 at 09:50

Growing your intelligence team beyond cyber

During Recorded Future’s RFUN: Predict 2019 conference in Washington, D.C., Stuart Shevlin, a colleague of mine, and I recorded a podcast with the CyberWire on this topic. Here I would like to expand a little bit more on this area. Note, all of the below are my personal views and do not represent my employer.



Several years ago, most businesses with sufficient security resources started their internal CTI (Cyber Threat Intelligence) programs. Slowly but steadily this space grew and became more formalized. A good example is the SANS CTI course, which until a couple of years ago was still in an experimental phase. When I completed it in 2018, it was the first year you could actually take the GIAC exam to get certified.

On the bright side, intelligence is nothing new. It has been around for thousands of years, and because of that it was easy to adapt much of the preexisting knowledge, and many of the frameworks and methodologies, to the cyber space. On top of that, many people from the IC moved to the private sector, which acted as a force multiplier for cyber threat intelligence teams.

In the beginning, CTI teams were exclusively focused on the cyber aspect. In most cases they were even embedded in SOCs, CSIRTs, and CERTs. What changed over the last few years is that more and more of the mature teams started providing their services outside the cyber area.

It’s a natural progression. Those of us that started behind the curtain don’t question that. We know that an attacker will not think twice about switching from a spear-phishing campaign to physically installing a rogue AP, from stealing credit cards to stealing and selling PII to a foreign nation-state, or from doing an aggressive DDoS attack to sending a fake bomb threat letter to an office.

As threat intelligence teams mature within businesses, they become intelligence teams rather than just CTI teams. Nowadays, many such teams are responsible for a wide variety of intelligence products, ranging from strategic intelligence to operational and of course tactical. To give you an idea, here are a few examples of areas where I am seeing more and more intelligence teams contributing lately.

  • M&A background checks for potential threats (reputational, security, fraud, etc.)
  • Physical security team(s) support with timely and actionable intelligence on upcoming riots, natural disasters, geopolitical crises, location-specific threats, etc.
  • Threat landscape reports for business initiatives
  • Liaison with law enforcement and potentially other government agencies for intelligence sharing when appropriate, approved and legal (terrorism, organized crime, child abuse, etc.)
  • Uncovering links between threat actors that operate in multiple domains (not only cyber)
  • Strategic geopolitical risk analysis that could have significant business impact
  • Supporting various security teams by providing reports on threat actor TTPs as per context (for example, threats in a specific country to support executive travel protection)

Although those were some very generic and overly simplified examples, they paint a clearer picture of the direction in which CTI teams are moving. I expect that in the next few years we will see more intelligence professionals transitioning from the public sector to those teams, and more of those teams expanding their scope. In the bigger picture, every company is a miniature society, and being able to inform that society’s decision makers of upcoming threats in a timely manner is a crucial component. This is where I see intelligence teams fitting into the bigger picture.

Written by xorl

December 30, 2019 at 09:40