xorl %eax, %eax

Archive for the ‘security’ Category

Ideas for Software Supply-Chain Attacks Simulation by Red Teams


The purpose of red teams is to simulate real adversaries to test both the technical security controls and the non-technical ones (e.g. response procedures, DFIR playbooks, and so on) of an organisation. Four years ago I posted a proposal on how red teams could/should deploy multi-stage C2 infrastructures. Now I’ll highlight another increasing threat for most companies.

Whether it involves nation-state actors, compromised shared libraries, hacktivists, or anything else in between, software supply-chain intrusions are getting a lot of attention. So, if you are a red teamer looking for ways to simulate those, here are a couple of ideas.

Why bother? Well, to provide more value to your customers through a practical assessment of whether they can effectively protect against (or at least detect) supply-chain threats.

Internal Code Repositories

Assuming you have gained access to an endpoint of a developer, administrator, engineer, etc., modify code or configurations in internal code repositories in order to propagate to more systems/networks. For instance, check if your “user” can access: Git repositories, CI/CD pipelines, configuration management (Saltstack, Ansible, Terraform, Puppet, Cfengine, JAMF, Rudder, Chef, SCCM, etc.), cloud deployment tools, container images, etc. and try to push implants or expand access via those means.
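As an illustration of the first check, here is a minimal sketch in Python (the repository URLs are hypothetical) that tests whether the credentials available on the compromised endpoint allow pushing to a set of internal Git remotes, without actually updating any refs. On most Git hosting platforms a dry-run push still contacts the server, so a permission failure shows up in the return code.

    import subprocess

    # Hypothetical internal repositories reachable from the compromised endpoint.
    REPOS = [
        "git@git.internal.example.com:platform/deploy-pipelines.git",
        "https://git.internal.example.com/infra/ansible-roles.git",
    ]

    def can_push(repo: str) -> bool:
        """Dry-run a push of the current HEAD to a scratch branch; no refs are actually updated."""
        result = subprocess.run(
            ["git", "push", "--dry-run", repo, "HEAD:refs/heads/supply-chain-sim-check"],
            capture_output=True, text=True, timeout=30,
        )
        return result.returncode == 0

    # Run this from inside any local clone so that HEAD resolves to a commit.
    for repo in REPOS:
        print(f"{repo}: {'push allowed' if can_push(repo) else 'no push access'}")

The same idea extends to the CI/CD and configuration management systems: a read-only API token tells a very different story than one that can trigger a pipeline or change a deployment manifest.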

Waterhole-enabled Supply Chain

At some point it is almost certain that you’ll obtain access to something beyond an endpoint. It could be a fileserver, an internal web application server, a cloud/3rd party service, a container running some small service, or anything else. Well, instead of trying to pivot via “traditional means”, why not modify the service offered by that system to push an implant to, or take an action against, anyone that uses it in order to increase your access?

In-house Packages/Software

It is possible that you might stumble across some (open source or proprietary) software that is either mirrored internally in some repository or customised for whatever business reason, and hosted on something like a fileserver, a package repository, or something along those lines. Here you could try to trojanize it and wait for it to propagate.

Software Update Solutions

It’s not uncommon for organisations to have automated or semi-automated solutions for performing software updates. If you can modify those updates to include an implant, you can very effectively emulate supply-chain propagation. Hint: some of those systems rely on inherently insecure protocols (e.g. FTP, TFTP, SMTP, HTTP, etc.), so you could even hijack/MiTM/trojanize them at the network level if you have access to the links they traverse.
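If you want to quickly verify the “insecure protocols” part, a minimal passive sketch like the following (using scapy, and assuming you already have a capture-capable position on the relevant network segment) can highlight traffic travelling over cleartext ports; any host/port pairs it prints are candidates for the hijack/MiTM scenario described above.

    from scapy.all import sniff, IP, TCP, UDP  # requires scapy and packet-capture privileges

    # Ports of common cleartext distribution protocols (HTTP, FTP control, TFTP).
    CLEARTEXT_PORTS = {80, 21, 69}

    def report(pkt):
        """Print the endpoints of any packet seen on a cleartext port."""
        if IP not in pkt:
            return
        layer4 = pkt[TCP] if TCP in pkt else (pkt[UDP] if UDP in pkt else None)
        if layer4 and (layer4.dport in CLEARTEXT_PORTS or layer4.sport in CLEARTEXT_PORTS):
            print(f"{pkt[IP].src}:{layer4.sport} -> {pkt[IP].dst}:{layer4.dport}")

    # Sample 500 packets and stop; adjust the interface/filter to the link you have access to.
    sniff(prn=report, store=False, count=500)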

Fleet Management

Similar to the previous case, even small organisations rely on one (or more) fleet management solutions, and if you manage to get access there it’s, more or less, the same as having a nice C2 preconfigured for you. So, why not use that to expand your access?

Pre-agreed Access

It is possible that as part of your Statement of Work (SoW) you will be given some limited access. If you aim to evaluate the supply-chain capabilities of that organisation, you could ask for access to some internal application or, even better, a 3rd party system/application/service the organisation relies on and use that as your starting point. Meaning, the engagement starts under the assumption that this 3rd party is compromised.

I’m pretty sure there are tons more concepts that a red team could take advantage of depending on the organisation they target. But hopefully the above gives you some ideas on how to evaluate supply-chain threats in a relatively controlled but realistic manner.

Written by xorl

April 7, 2022 at 15:21

Posted in security

Guide on Offensive Operations for Companies


I’ve been thinking of writing this post for some time, but I decided to finally do it. Everything I wrote here heavily depends on what you are legally allowed to do, which, in turn, depends on the country of your legal entity/company, regional laws and regulations, international laws affecting you, as well as the business itself (for instance, a cyber-security firm would have way more freedom compared to a retail business). This is why, if you decide to move into offensive operations against your adversaries, you MUST first check your objectives with your legal advisor and get their sign-off.

That being said, there are many levels between doing nothing and hacking back an adversary. Some of them are pretty common, and others are only employed by nation-state actors. To simplify the structure, I created a diagram that tries to put them into a framework that will help you decide which offensive operations you are legally allowed and technically capable of performing. Feel free to use this as a starting point if you aren’t sure where to begin, but do not limit yourself to what’s mentioned there; develop it further based on your needs and capabilities. Under the diagram you’ll find a brief explanation of what each mentioned name means.

Starting from low complexity and low business risk and moving upwards, we have:

  • Local Deception Operations: All the cyber deception that can be implemented internally in a company’s environment such as honeytokens, honeypots, honey networks, canary tokens, deception/fake networks, etc. in order to lure the adversary into a highly controlled environment and monitor their activities, and/or to quickly detect and deny/disrupt their operation.
    • Offensive action: Tricking the adversary into actions that will give you the detection and response tactical advantage.
    • Complexity: Low/medium
    • Business risk: Minor (mostly the need to keep those deception operations confidential, which could be perceived negatively by employees, as well as the more complex processes required within the security team(s))
  • Infrastructure Takedowns: That is reporting and requesting takedowns of malicious infrastructure either through service providers or directly via the hosting companies. This includes things like requesting takedowns of phishing domains, malware hosting servers, email accounts, etc.
    • Offensive action: Depending on the takedown, this could be a degradation, denial, disruption, or destruction operation against an adversary’s infrastructure, imposing on them the cost of re-establishing it.
    • Complexity: Low/medium
    • Business risk: Medium. The process needs to be well defined to avoid issues such as requesting takedowns of legitimate infrastructure, legal disputes with the affected companies, leaking sensitive information in the takedown requests, etc.
  • Indirect Public Disclosure: Several threat intelligence vendors and national CERTs allow for anonymized reporting/public disclosure of intelligence reports. This capability allows a company to publicly disclose details that would otherwise have the risks of the “Public Disclosure” operations mentioned later.
    • Offensive action: Forcing the adversaries to change their TTPs (thus inducing cost and delays to their operations), making it globally known what the adversary does and how, which could enable nation-state actors or other companies to use this public material as supportive evidence in more aggressive offensive actions.
    • Complexity: Low
    • Business risk: Minor when the anonymization is done carefully.
  • Active Darkweb Monitoring: By that term I mean any sort of operation to obtain access to and monitor your adversaries’ communication channels (e.g. Telegram groups, darkweb forums, etc.) in order to learn as early as possible of any offensive actions targeting your business and take appropriate measures. Most companies typically implement this via threat intelligence vendor(s).
    • Offensive action: Infiltrating into the adversaries’ communication platforms and collecting intelligence on their activities.
    • Complexity: Low/medium
    • Business risk: Minor when done via a vendor. Medium when developed in-house as it requires high discipline, processes, OPSEC measures, legal and privacy sign-offs, etc.
  • Collaboration with Authorities: That is proactively reaching out to law enforcement and/or intelligence agencies related to cyber operations to help them in an operation against a specific adversary. For instance, providing them with evidence, information only your company has, etc.
    • Offensive action: Potential for nation-state action against your adversary(-ies) such as prosecution, diplomatic/external actions, sanctions, covert actions, etc.
    • Complexity: Medium
    • Business risk: This imposes a noticeable risk of the business becoming affiliated with a specific government and/or political party, getting accidentally involved in unrelated government issues, becoming an “agent” of the authorities you worked with, being seen as a nation-state proxy by other countries/governments, etc.
  • Legal Actions: This involves any sort of legal action your company might take against an adversary, such as cease and desist letters, seizure of malicious infrastructure, criminal complaints against specific adversaries, sanctions, etc.
    • Offensive action: Active and overt approaches to disrupt and destroy adversarial activities through legal means.
    • Complexity: Medium/high
    • Business risk: Medium/high. This will require experienced investigators, digital forensics experts with practical legal/prosecution experience, processes for building a criminal case and managing the evidence, experienced legal resources, appetite for public exposure, and of course, acceptance that now your adversaries know what you know, and there is always a chance that you might lose the case when it gets to the court.
  • Public Disclosure: This is a foreign policy tool of many nation-state actors which can also be employed by private companies. By making it known to the entire world who targeted you, especially when it is a nation-state actor, you give ammunition to any other nation-state to use this information against your adversaries, without your direct involvement.
    • Offensive action: Revealing an operation that was meant to stay covert, forcing the adversaries to change their tactics, and giving their own adversaries the opportunity to use the disclosed material against them.
    • Complexity: Medium/high
    • Business risk: Medium/high. This disclosure might bring a lot of negative press, and will also reveal what you know. This means that those adversaries are likely to use more advanced techniques the next time they’ll go after your company. Furthermore, nation-states might request your support in legal actions. For a less risky approach, check the “Indirect Public Disclosure” operations.
  • Remote Passive SIGINT (Signals Intelligence): This means obtaining signals (typically raw network traffic or raw communications) by third parties such as data brokers or threat intelligence vendors, which can help you proactively discover adversarial activities.
    • Offensive action: Inspecting data collected outside your organization to proactively discover and deny any adversarial activity against your company.
    • Complexity: Medium
    • Business risk: Minor. The main risk is using illegal or shady services; rely on industry standards and well-known vendors instead.
  • Remote Deception Operations: Such operations include the creation of fake profiles of your company, fake publicly exposed services, fake leaked documents with tracking tokens, etc. This is a lighter version of the “Sting Operations” discussed later.
    • Offensive action: Hunt adversaries by luring them with fake targets so that you can catch them before they target the real assets of the company.
    • Complexity: Medium
    • Business risk: Minor. Mostly around having strong processes to avoid mistakes that would jeopardise your security posture, keeping those assets well managed, and operating on a need-to-know basis.
  • Data Breach Data Exploitation: This means getting access to data from data breaches and using them to uncover adversarial activities or intelligence which will help you proactively protect your company. Examples include proactively discovering infrastructure used for malicious purposes, accounts used by adversaries, deanonymization, etc.
    • Offensive action: Exploiting data which would otherwise be confidential to the organization that had them, in order to get more insights on your adversaries.
    • Complexity: Medium
    • Business risk: Medium. There is a lot of legal and ethical debate over the data breach data exploitation and that could have some business impact for a company. Additionally, the handling of such data involves some complexity in terms of access management, auditability of who did what and why, retention policies, etc. which means additional resources, technology, and processes will likely be needed.
  • False Flag Operations: An advanced offensive technique that tricks your adversaries into a particular line of thinking so that you can take advantage of their reactions. For instance, make it look like a rival adversary leaked information about them, or have them believe that a rival adversary has already compromised the systems they are in.
    • Offensive action: Dynamically and actively change your adversaries’ TTPs by forcing them into believing that something other than what they see is happening.
    • Complexity: Medium/high
    • Business risk: Medium/high. Those operations need very careful planning and discipline, and could easily backfire in a variety of ways, including negative media attention, making your adversaries switch to more advanced techniques, legal actions from government bodies whose operations you might have interfered with, having the opposite effect, etc.
  • CNA Operations: Computer Network Attack (CNA) operations are any activities that will cause degradation, disruption, or destruction of the adversaries’ infrastructure and resources. Examples include denial of service attacks, seizure of their resources, flooding their resources (e.g. mass mailers, automated calls, etc.), making countless fake accounts on their platforms, spam, feeding them with fake data, etc.
    • Offensive action: Causing the adversaries to focus their efforts on responding to the CNA operation instead of performing their intended malicious activity.
    • Complexity: High
    • Business risk: High. This is a very grey area which might result in the company being treated as a criminal entity. There needs to be very thorough legal and business alignment on how, why, who, when, and where those activities will happen, and in most cases it is (legally) impossible for most companies to perform such operations.
  • Sting Operations: Here the defenders could try to pose as criminals to infiltrate a group, or set up a fake website to recruit cyber-criminals, and other similar operations with the end goal to infiltrate the adversary’s entities.
    • Offensive action: Proactively identifying adversarial plans and denying them by applying the appropriate security controls.
    • Complexity: High
    • Business risk: High. For the vast majority of companies out there, it would be impossible to legally do this. However, some might be able to pull this off in collaboration with the authorities. The risk is high and on multiple levels, from public relations, to impacting law enforcement operations, to privacy and legal issues, etc.
  • Takeover: In takeover operations the private company uses their knowledge and resources to take control of infrastructure operated by the adversary. This will not only induce costs to the adversaries for new infrastructure, but it can also reveal details of their TTPs, identifiable information connecting them to their real identities, and so on.
    • Offensive action: Denying access to the adversaries, disrupting or degrading their operations, and collecting a significant part of their digital capabilities and information.
    • Complexity: High
    • Business risk: High. Back in the day, those were a common occurrence but as cyber is becoming more and more of a regulated and controlled space, conducting a takeover could result in very serious legal and PR implications for a business. Nowadays, those are typically limited to specific companies operating in this space and government entities. They can still be performed by others, but it is a complex process with many moving parts.
  • Online HUMINT (Human Intelligence): The purpose of those operations is both to understand and infiltrate adversarial groups/networks by exploiting human weaknesses (e.g. social engineering, recruiting insiders, etc.), and to disrupt their operations from the inside. For example, recruit (or become) an influential member and create tensions in the group, make arguments to shift the group’s focus from operations to internal conflicts, create division among members, etc.
    • Offensive action: Depending on the level it could be anything from collecting intelligence on the TTPs of the adversary to proactively protect your assets, all the way to creating internal conflicts that will result in disrupting or destroying a group entirely. In some cases, those tensions could go as far as members reporting each other to the authorities.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It is not unheard of for a private company to support them, but the risk is quite high due to the potential impact on the business from both the adversaries and the involved authorities.
  • 3rd/4th Party Collection: In simple terms this can be considered a step up from the “Takeover” operations discussed earlier. Here the operation doesn’t involve only taking over the adversary’s infrastructure, but using it to collect data from where this infrastructure has access to. For example, you might have taken over a Command & Control server and in there found some VPN connections for a server the threat actors use. You use them to access and collect intelligence and/or disrupt their operations. That could go in multiple levels on the other side too. For instance, use the C&C to send commands on the infected hosts (if an adversary system is infected) and collect data (or perform other actions) there too.
    • Offensive action: Exploitation of adversarial infrastructure in multiple levels, masking your activities using the taken over system(s). This could be used for anything from intelligence collection to disruption, degradation, denial, etc.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It would be very complex and risky for any private company to try to conduct this since it involves breaking into systems in multiple levels.
  • CNE Operations: This is the research to identify and exploit vulnerabilities in order to execute a Computer Network Exploitation (CNE) operation against an adversary. For instance, find a software vulnerability in their malware allowing you to take over their C&C, or identify a misconfiguration on their operational hosts allowing you to infiltrate them, etc. This is what is commonly known as hacking back.
    • Offensive action: Exploitation of adversarial infrastructure. This could be used for anything from intelligence collection to disruption, degradation, denial, etc.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It would be complex and risky for any private company to try to conduct this since it involves breaking into systems.
  • Automated CNE: That is scaling-up the “CNE Operations” by automating the exploitation step. That is, developing the ability not only to take advantage of the identified vulnerabilities in adversarial infrastructure, but automatically (or on-demand via automation) exploiting all existing and newly deployed adversarial infrastructure with no (or minimal) human interaction.
    • Offensive action: Exploitation of adversarial infrastructure. This could be used for anything from intelligence collection to disruption, degradation, denial, etc.
    • Complexity: High
    • Business risk: High. Those operations are typically limited to nation-state actors that have dedicated resources for such covert activities. It would be complex and risky for any private company to try to conduct this since it involves breaking into systems.

Written by xorl

December 28, 2021 at 15:28

Posted in security

Overview of 0days seen in the wild the last 7 years


Starting in 2019 Google’s Project Zero (P0) team published a tracker for all 0days that were discovered to be exploited in-the-wild (either by Google or others). The data from this tracker start from August 2014 and continue to this day (for this post that is October 2021).

Using this data source, I created some simple graphs to better understand the 0day threat landscape for all of us tasked with cybersecurity responsibilities.

There are, however, a couple of assumptions I’m making by using this data set:

  • This is a representative sample of all the 0days used in the wild
  • The data collected in this tracker are accurate

So, let’s start with the trend, which is one of the least useful graphs but the one people are regularly interested in. The reason it is generally useless is that the dates are when those 0days were officially patched, not when they were initially acquired by the threat actors or first used operationally. Additionally, those patching dates rely on the cybersecurity industry and community actively working to discover and remediate 0days in the wild. So, there might have been periods when those companies/people had different priorities and didn’t dedicate as much time to threat hunting and remediation of 0days.

For all those reasons, I do not consider the following graph particularly useful but added it here for completeness. If there is one conclusion to draw from it, it is that the end of the year and early winter is consistently the period when the fewest 0day discoveries and mitigations happen.

A far more interesting statistic is which vendors are the top targets. That can help us focus on what is more likely to get hit with 0days in the future. Looking at the statistics for the last 7 years (2014-2021) it is clear that by far the top target is Microsoft (50%), followed by Adobe (13.9%), Google (11.9%), and then Apple (10.3%).
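These percentages are straightforward to reproduce. The sketch below assumes a local CSV export of the P0 tracker with a hypothetical “Vendor” column name; adjust it to match the actual spreadsheet headers.

    import pandas as pd

    # Hypothetical local export of Project Zero's "0day In the Wild" tracker.
    df = pd.read_csv("p0_itw_0days.csv")

    # Share of in-the-wild 0days per vendor, as a percentage of all entries.
    shares = df["Vendor"].value_counts(normalize=True).mul(100).round(1)
    print(shares.head(5))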

Later in this post I’ll go into more detail on each of the most commonly targeted products of those vendors, which can help us drive our priorities (e.g. where to hunt for more, what to protect more, which are the highest risk vendors) or even understand what most threat actors are interested in acquiring.

The second high-level overview graph that is particularly interesting is the type of vulnerability exploited, where we see that 71.1% were memory corruption bugs and 16% logic/design flaws. This is another interesting metric since it can help us prioritize our efforts on those issues.

Interestingly, memory corruption bugs are also typically among the hardest to audit thoroughly (especially in an automated manner), and this might be another factor in why they were the most frequently used: they are hard to discover via the automated means most vendors use, and they typically enable a wide variety of exploitation avenues.

Now in the next part I’ll be going through each of the top 5 vendors affected by 0days in the wild and see which of their products were targeted the most. Again, to help us prioritize our efforts accordingly.

#1 Microsoft (50% of all discovered 0days were for this vendor)

  • Windows (46.4%)
  • Internet Explorer (22.7%)
  • Office (13.4%)
  • Windows kernel (8.2%)
  • All the rest: Exchange, VBScript, XML Core Services, Defender (9.2%)

#2 Adobe (13.9% of all discovered 0days were for this vendor)

  • Flash (85.2%)
  • Reader (14.8%)

#3 Google (11.9% of all discovered 0days were for this vendor)

  • Chrome (95.7%)
  • Android (4.3%)

#4 Apple (10.3% of all discovered 0days were for this vendor)

  • iOS (55%)
  • WebKit (40%)
  • MacOS (5%)

#5 Mozilla (3.6% of all discovered 0days were for this vendor)

  • Firefox (100%)

The final graph that I found particularly useful for understanding the threat landscape is the average patching time for each of those top 5 affected vendors. For this one, please note that the Google tracker does not have data for all entries. Specifically, 94.85% of the entries include both dates (discovery and patching), so this is what was used for the following calculations.

In total the average patching time was 22.67 days from the discovery time to the patch being publicly available, but below you can also see this metric per vendor.

The average patching times (from discovery to patch being publicly available) for the top 5 affected vendors are listed below; a short sketch of how to reproduce such numbers from the tracker follows the list:

  1. Microsoft: Average of 41.2 days between discovery and patching
  2. Adobe: Average of 8.1 days between discovery and patching
  3. Google: Average of 5.3 days between discovery and patching
  4. Apple: Average of 9 days between discovery and patching
  5. Mozilla: Average of 6.2 days between discovery and patching
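For reference, here is a minimal sketch of how those averages can be computed from a local export of the tracker. The column names (“Vendor”, “Date Discovered”, “Date Patched”) are assumptions; rename them to whatever the spreadsheet actually uses.

    import pandas as pd

    df = pd.read_csv("p0_itw_0days.csv", parse_dates=["Date Discovered", "Date Patched"])

    # Keep only the entries that include both dates (~95% of the data set, per the post).
    both = df.dropna(subset=["Date Discovered", "Date Patched"]).copy()
    both["days_to_patch"] = (both["Date Patched"] - both["Date Discovered"]).dt.days

    print(f"Overall average: {both['days_to_patch'].mean():.2f} days")
    print(both.groupby("Vendor")["days_to_patch"].mean().round(1).sort_values(ascending=False))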

Based on the above data, Microsoft is the highest risk vendor if we combine the amount of 0days found in the wild and the average patching time of 41 days. So, maybe your strategy should involve minimizing the use of those products and services, or spending more resources in security controls around them.

Another valuable insight that we can derive is that the software most targeted by adversaries that employ 0day exploits is web browsers and mobile phones. The latter (mobile phones) in particular is an area where most organizations do not pay sufficient attention to securing, at least not at the same level as their core infrastructure services.

I’m certain there are many more assessments that can be made using the above data but hopefully that gives you a starting point and a source of inspiration on how to get strategic value from tactical information such as 0day exploits discovered in the wild.

Written by xorl

October 19, 2021 at 16:26

Dumpster diving is still alive


I would like to use a recent example to demonstrate that this threat is still valid and that companies should be considering it in their security policies. Especially in the lockdown/remote-working situation that many companies are in at the moment, this is an even bigger threat.

Dumpster diving is nothing more than going through someone’s (a company’s or an individual’s) trash/dumpster to discover proprietary information. Some of the most high-profile cyber-espionage cases that I am aware of used this technique very effectively. But there is also the cyber-criminal aspect of it. For example, recently someone in Nizhny Novgorod found 10 folders with confidential information from Vozrozhdenie Bank.



This happened literally days ago in a large organization (around 45k employees) and it shows that this threat is still relevant even for mature companies. Even more so now that many people work from home which implies that they might not have access to the facilities they had in their normal office environments. Here are some recommendations if you don’t already do that:

  • Have a clear policy for document handling/lifecycle
  • Have data classification that aligns with the policy and treats different classifications with proportional measures
  • Provide document destruction/disposal procedures
  • Provide the required equipment/facilities
  • Train, train, and train your employees some more on this threat
  • Use watermarks to identify the source of leaks (a minimal sketch follows this list)
  • Continuously monitor for leaks not only for digital goods, but also for physical ones (like documents, corporate devices, etc.)
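On the watermarking point, even something as simple as a per-recipient code printed in the footer of each issued copy makes a dumped document attributable. A minimal sketch follows; the key and identifiers are purely illustrative.

    import hashlib
    import hmac

    # Illustrative secret; in practice this stays with the team issuing the documents.
    SECRET_KEY = b"replace-with-a-real-secret"

    def watermark(recipient: str, document_id: str) -> str:
        """Short, per-recipient code to print on each issued copy of a document."""
        msg = f"{recipient}:{document_id}".encode()
        return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:8].upper()

    # If this copy later shows up in a dumpster, the code maps it back to the recipient.
    print(watermark("j.doe@example.com", "2020-Q2-board-minutes"))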

Written by xorl

June 15, 2020 at 12:17

Posted in security

The 2018 NSA Cyber Exercise (NCX) Module 2 tabletop board game


Yesterday YouTube suggested to me this video from a 2018 event in Maryland, USA by the NSA. It was called the NSA Cyber Exercise (NCX) and it had three different modules using a gamification approach. The first was about writing a cyber security policy and was titled Legal & Policy tabletop exercise, the second was a tabletop blue/red team exercise called Red versus Blue, and the third a typical CTF game named Live Fire Operation Gongoro Dawn. Due to the pandemic I have some extra spare time, so I decided to analyse this video and “reverse engineer” the board game used for the tabletop exercise since it seemed quite nice.



The board game has the red team on the left side and the blue team on the right side. Apart from the two teams, each table also had a third person who is probably the one that keeps track of the score and acts as the guide/narrator and/or observer/judge for the board game. From the video I managed to make a digital copy of the board game which looks like this.



Each square also has an icon representing the equivalent item, but I didn’t want to spend time adding those. Then, you have some decks of cards which are split into the following types.

  • Mapped (black color)
  • Enumerated (blue color)
  • Compromised (red color)
  • Bonus (green color)
  • Blue team cards (white back color with NCX logo)
  • Red team cards (white back color with NCX logo)

As you can guess, the black (mapped) cards are placed on top of an item on the board if that item is considered mapped. The same also happens with the blue (enumerated) and red (compromised) cards which are also self-explanatory. Now, the blue and red team cards are different capabilities that each team can take advantage of to execute their strategy. Those cards look like the following where the top part describes the condition, the middle part the capability, and the lower part the impact.



The team cards are pretty simple in terms of their capabilities and it appears that depending on the scenario, the judge/observer is able to provide or take away specific capability cards from the teams. The following capture shows nicely how the teams, judge/observer, and board are placed in the game. On the left side it’s the blue team, in the middle the judge/observer, and on the right it’s the red team.



Although those are kind of self-explanatory too, here are some of the blue team capability cards that were visible in the video. Please note that most of the blue team cards had no condition and their impact was Persistent. Also, note that this is not the complete deck, it’s mostly to give you an idea of what kind of capabilities we are talking about.

  • Security training refresher
  • Internet whitelisting
  • OPSEC review program
  • Rebuild host
  • Password reset policy
  • System log audit
  • Firewall access control updates
  • Conceal server banners
  • Incident response team
  • Patch management program
  • Intrusion detection system
  • Strong passwords
  • Anti-malware controls
  • IP Security (IPSec)
  • Input validation controls
  • Strong encryption
  • Anomaly detection software
  • Web proxy
  • Deploy log server
  • Configuration management review



The red team cards had many more conditions and more variety in their impact. Some of the conditions were things like: Play if the workstations are compromised, Play on mapped only hosts, Play on any compromised host, Play on the internal zone if it is accessible, etc. The same also applies to the impact, where it is mostly Persistent but some cards were also Instant. Here are examples of the red team capability cards.

  • Ping scan
  • Vulnerability scan
  • Sniffing
  • Reduce operational tempo
  • Port scan
  • Software vulnerability exploit
  • Data injection exploit
  • Pass the hash exploit
  • Cover tracks
  • Cache poisoning exploit
  • Phishing exploit
  • Stolen passwords
  • Cross-Site Scripting (XSS) exploit
  • Broken authentication exploit
  • Server banner grab
  • Build botnet
  • Virus/Worm exploit
  • Open source research
  • Install backdoor and rootkit
  • Zero-Day vulnerability exploit



And of course, there is also a pair of dice, which I assume was used to determine the result of the proposed action and potentially for score counting in each round.
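To give an idea of how such a mechanic might work, here is a purely speculative sketch (none of this is confirmed by the video): the judge assigns a difficulty to the proposed capability card and a 2d6 roll decides whether it succeeds.

    import random

    def resolve(card: str, difficulty: int) -> bool:
        """Speculative resolution of a capability card: roll 2d6 against a judge-set difficulty."""
        roll = random.randint(1, 6) + random.randint(1, 6)
        success = roll >= difficulty
        print(f"{card}: rolled {roll} against difficulty {difficulty} -> {'success' if success else 'failure'}")
        return success

    resolve("Pass the hash exploit", difficulty=8)
    resolve("Rebuild host", difficulty=5)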



Overall it looks like a very nice way to gamify tabletop exercises for blue/red team engagements, and it could potentially be improved further by, for example, using ATT&CK framework TTPs as red team capabilities and the NIST Cybersecurity Framework as blue team capabilities. Nevertheless, this is just a suggestion with a potential implementation approach based on what the NSA did in the 2018 NSA Cyber Exercise (NCX).

Written by xorl

March 28, 2020 at 14:48

Posted in security