xorl %eax, %eax

Archive for the ‘security’ Category

Dumpster diving is still alive

I would like to use a recent example to demonstrate that this threat is still valid and that companies should consider it in their security policies. Especially with the lockdown/remote-working arrangements many companies have in place at the moment, it is an even bigger threat.

Dumpster diving is nothing more than going through someone’s (a company’s or an individual’s) trash/dumpster to discover proprietary information. Some of the most high-profile cyber-espionage cases that I am aware of used this technique very effectively. But there is also a cyber-criminal aspect to it. For example, someone in Nizhny Novgorod recently found 10 folders with confidential information from the Vozrozhdenie Bank.



This happened literally days ago at a large organization (around 45k employees) and it shows that this threat is still relevant even for mature companies. Even more so now that many people work from home, which means they might not have access to the facilities they had in their normal office environments. Here are some recommendations if you don’t already do the following:

  • Have a clear policy for document handling/lifecycle
  • Have data classification that aligns with the policy and treats different classifications with proportional measures
  • Provide document destruction/disposal procedures
  • Provide the required equipment/facilities
  • Train, train, and train your employees some more on this threat
  • Use watermarks to identify the source of the leaks
  • Continuously monitor for leaks, not only of digital goods but also of physical ones (like documents, corporate devices, etc.)

Written by xorl

June 15, 2020 at 12:17

Posted in security

The 2018 NSA Cyber Exercise (NCX) Module 2 tabletop board game

Yesterday YouTube suggested this video to me, from a 2018 event in Maryland, USA by the NSA. It was called the NSA Cyber Exercise (NCX) and it had three different modules using a gamification approach. The first was about writing a cyber security policy and was titled Legal & Policy tabletop exercise, the second was a tabletop blue/red team exercise called Red versus Blue, and the third was a typical CTF game named Live Fire Operation Gongoro Dawn. Due to the pandemic I have some extra spare time, so I decided to analyse this video and “reverse engineer” the board game used for the tabletop exercise since it seemed quite nice.



The board game has the red team on the left side and the blue team on the right side. Apart from the two teams, each table also had a third person who is probably the one that keeps track of the score and acts as the guide/narrator and/or observer/judge for the board game. From the video I managed to make a digital copy of the board game which looks like this.



Each square also has an icon representing the equivalent item, but I didn’t want to spend time adding those. Then, you have some decks of cards which are split into the following types.

  • Mapped (black color)
  • Enumerated (blue color)
  • Compromised (red color)
  • Bonus (green color)
  • Blue team cards (white back color with NCX logo)
  • Red team cards (white back color with NCX logo)

As you can guess, the black (mapped) cards are placed on top of an item on the board if that item is considered mapped. The same also happens with the blue (enumerated) and red (compromised) cards which are also self-explanatory. Now, the blue and red team cards are different capabilities that each team can take advantage of to execute their strategy. Those cards look like the following where the top part describes the condition, the middle part the capability, and the lower part the impact.



The team cards are pretty simple in terms of their capabilities and it appears that depending on the scenario, the judge/observer is able to provide or take away specific capability cards from the teams. The following capture shows nicely how the teams, judge/observer, and board are placed in the game. On the left side it’s the blue team, in the middle the judge/observer, and on the right it’s the red team.



Although those are kind of self-explanatory too, here are some of the blue team capability cards that were visible in the video. Please note that most of the blue team cards had no condition and their impact was Persistent. Also, note that this is not the complete deck, it’s mostly to give you an idea of what kind of capabilities we are talking about.

  • Security training refresher
  • Internet whitelisting
  • OPSEC review program
  • Rebuild host
  • Password reset policy
  • System log audit
  • Firewall access control updates
  • Conceal server banners
  • Incident response team
  • Patch management program
  • Intrusion detection system
  • Strong passwords
  • Anti-malware controls
  • IP Security (IPSec)
  • Input validation controls
  • Strong encryption
  • Anomaly detection software
  • Web proxy
  • Deploy log server
  • Configuration management review



The red team had many more conditions and more variety in the impact. Some of the conditions were things like: Play if the workstations are compromised, Play on mapped only hosts, Play on any compromised host, Play on the internal zone if it is accessible, etc. The same applies to impact: it is mostly Persistent, but some cards were Instant. Here are some examples of the red team capability cards.

  • Ping scan
  • Vulnerability scan
  • Sniffing
  • Reduce operational tempo
  • Port scan
  • Software vulnerability exploit
  • Data injection exploit
  • Pass the hash exploit
  • Cover tracks
  • Cache poisoning exploit
  • Phishing exploit
  • Stolen passwords
  • Cross-Site Scripting (XSS) exploit
  • Broken authentication exploit
  • Server banner grab
  • Build botnet
  • Virus/Worm exploit
  • Open source research
  • Install backdoor and rootkit
  • Zero-Day vulnerability exploit



And of course, there is also a pair of dice which I assume was used to determine the result of the proposed action and potentially for score counting in each round.



Overall it looks like a very nice way to gamify tabletop exercises for blue/red team engagements, and it could potentially be improved by, for example, using MITRE ATT&CK framework TTPs as red team capabilities and the NIST Cybersecurity Framework as blue team capabilities. Nevertheless, that is just a suggestion for a potential implementation approach based on what the NSA did in the 2018 NSA Cyber Exercise (NCX).

Written by xorl

March 28, 2020 at 14:48

Posted in security

Strategies to protect against JavaScript skimmers

The Macy’s JavaScript/digital skimmer attack brought some new developments to the web application security threat landscape. Apart from being very targeted, like the majority of those attacks, it was also stealing a very diverse set of data: from credit card details to account credentials and wallet information. I find it strange that even large corporations fail to protect their assets against the threat of JavaScript skimmers, so here is a quick write-up based on my experience. It’s a quick collection of strategies you can follow as a business to thwart this attack vector.

Obviously, not all of them can be applied to all businesses, and not all of them provide the same level of security. Following the defense-in-depth principle, the more layers of security you add, the higher your chances of thwarting a threat. That being said, I fully understand that some of them might not be feasible for legitimate business reasons. Lastly, this is not a complete list of ALL the potential protections, and none of the below acts as a replacement for well-known security controls such as WAF, SDLC, vulnerability scanning, code audits, IPS, etc. So, take this as a list of recommended defense controls against digital skimmers.



CSP: If your online presence allows it, this is one of the best countermeasures against almost all forms of this type of web-based supply-chain attack. Content Security Policies (CSP) in blocking/enforcing mode can protect you against the most common delivery tactic for JS/digital skimmers, which is modifying a 3rd party script. Basically, using CSP you can block communications to untrusted domains. My recommendation is: if you can implement this, do it. Even if that means doing it only for your payment/account related pages, do it.
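To illustrate the idea, here is a hedged sketch of what such a policy header could look like (the CDN domain is a placeholder, not a recommendation for any specific setup):

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted-cdn.example.com; connect-src 'self'
```

With a policy like this, a skimmer injected from, or exfiltrating to, any domain outside the allow-list would be blocked by the browser; adding a `report-to`/`report-uri` directive additionally gives you telemetry on violation attempts.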

Limit 3rd party attack surface: You can easily set up a CI/CD pipeline which fetches the various 3rd party JS scripts that you use, scans them, audits them, ideally with some form of peer review process, and then deploys them on your web hosting platform. Just like any countermeasure, this cannot be applied to everything, but the less external 3rd party JS you serve, the less exposure you have to this kind of threat. This is also a detective/early-warning control as it allows you to quickly identify a potentially malicious JS script, which in turn allows you to inform that partner and protect even more people. In short, do an assessment to figure out your posture on externally hosted scripts. Then see what percentage of those can work if hosted on your side. Based on that you can make a risk-based decision on whether it is worth building and adopting this pipeline and business process or not.
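A minimal sketch of the verification step in such a pipeline could pin a SHA-256 digest for every reviewed 3rd party script and refuse to deploy anything that does not match (script names and contents here are hypothetical; in practice the pinned digests come out of your audit/peer-review process):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex SHA-256 digest of a script's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Digests of scripts that passed review. Computed inline here for the
# demo; in a real pipeline they would be stored with the review record.
reviewed_script = b"console.log('analytics v1');"
APPROVED_DIGESTS = {"analytics.js": sha256_hex(reviewed_script)}

def is_approved(name: str, fetched: bytes) -> bool:
    """Deploy a freshly fetched script only if it matches its pinned digest."""
    return APPROVED_DIGESTS.get(name) == sha256_hex(fetched)
```

Any upstream modification of the script then fails the check, turning a silent supply-chain change into an alert before deployment.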

Deep scans: Do frequent deep scans of your web applications and look for new content. In a few cases a simple hash comparison might work, but in today’s dynamic world you probably need something smarter than that. Thankfully, there are so many web analytics and profiling solutions out there that you can easily find something that can identify differences between the old and the new state of specific content. In this case what we care about is JS scripts and whether those scripts: a) were recently modified, or b) have any form of callouts to domains they didn’t have before.
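For the callout part specifically, a naive sketch could diff the domains referenced by the old and new versions of a script (the regex is deliberately simplistic and the domains are placeholders; real tooling should also handle dynamically built URLs):

```python
import re

# Very simple matcher for absolute http(s) URLs inside a script.
DOMAIN_RE = re.compile(r"https?://([a-z0-9.-]+)", re.IGNORECASE)

def callout_domains(script: str) -> set:
    """All domains referenced by a script via absolute URLs."""
    return {m.lower() for m in DOMAIN_RE.findall(script)}

def new_callouts(old: str, new: str) -> set:
    """Domains the new version calls out to that the old one did not."""
    return callout_domains(new) - callout_domains(old)
```

A non-empty result for a script that was not supposed to change is exactly the kind of early-warning signal described above.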

Client side validation: This is more of a grey area, and if you decide to do it, I would highly advise you to get approvals from your legal, privacy, ethics, and any other regulatory-related team or department in your company, and make it crystal clear in your terms & conditions and statements. So, process aside, this is about doing validation of your website’s content on the client side. Some businesses do it, especially in the financial sector, but it’s a grey area as it implies executing non-website related code on the client side. In short, there are many ways to do this independently, as well as off-the-shelf options, which include hash comparison between the files on the server and the client, identification of DOM changes, detection of popular malware families that implement form grabbing or web injections, etc.

3rd party assessments: Simple as it may sound, it is not that common. To summarize, how do you even know that the service/partner you are about to include in your website is not already compromised and serving scripts with digital skimmers? This is not the most common tactic for this specific threat, but it does happen occasionally. On the business side it’s relatively easy to add an extra step to your risk assessment before onboarding a new partner: a code audit (ideally both dynamic and static) looking for indicators of digital skimmers. This can also catch other potentially malicious behaviours, or even issues that could have reputational impact such as unsolicited advertisements, unauthorized user tracking, etc.

IOC based scans: Another very useful method is to use your tactical threat intelligence IOCs to look for matches in the JS scripts your website serves. Again, on the practical side this is a relatively trivial task which I suggest you do from both sides (backend and frontend). Meaning, on the backend side you scan all your web content against your IOCs, and on the frontend you crawl and fetch as many of the files of interest as possible and do a similar scan. Since those attacks are usually very targeted, the chances of having IOCs before the attack occurs are low, but it’s so easy to implement that it is usually worth it.
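On the backend side this can be as trivial as a substring scan of every served script against your indicator list. A hedged sketch (the indicators and script contents below are placeholders, not real IOCs):

```python
# Hypothetical indicators from a tactical threat intelligence feed:
# exfiltration domains, known skimmer strings, etc.
IOCS = ["skimmer-cdn.example.net", "window.__grabber"]

def scan_scripts(scripts: dict, iocs: list) -> dict:
    """Map each script name to the list of indicators found in its content."""
    hits = {}
    for name, content in scripts.items():
        matched = [ioc for ioc in iocs if ioc in content]
        if matched:
            hits[name] = matched
    return hits
```

The frontend variant is the same scan applied to whatever your crawler fetched from the live site instead of to the files in your repository.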

Simulation-based threat detection: This is one of my personal favourites, but it is more on the advanced side of the spectrum for most businesses. Basically, you have some sort of continuous simulated visitor activity (yes, bots) on your production/externally facing web applications which profiles the interactions. This could trivially detect any new calls to previously unknown domains, additional JavaScript scripts being invoked or loaded in forms, etc. Obviously, this requires more resources, but if you have the capacity it is probably something you want to investigate as an option.

Clean up your shit: My last one is unfortunately one of the most neglected, and it’s not even a security control; just good software engineering practice. Ensure that you have some sort of profiling/analytics platform that informs the relevant development team(s) when certain scripts have stopped being accessed/used for a specified amount of time. Ideally, after a while you can optimise this further by automatically removing/disabling those scripts from your web platform(s).

One might argue that the Macy’s attack was due to a compromise of their systems or something along these lines. That, however, is a case for industry-standard security controls that protect against data breaches/intrusions. I will not spend any time writing about those since they are a relatively well-understood challenge for most security people out there; there are decades of work on frameworks, processes, and pretty well defined steps to get to a really mature level without excessive costs. On the other hand, there is not much around for digital skimmers, so this post focused explicitly on that: recommendations for protection against digital skimmers. Those controls of course provide additional capabilities that can be leveraged for other purposes too, but the goal behind all the recommendations is the same: protection against digital skimmer attacks. Hopefully you’ll find something practical for your business in here.

Written by xorl

December 27, 2019 at 11:19

Posted in security

A short story around WannaCry

A couple of weeks ago a friend of mine messaged me about the following recently released documentary by Tomorrow Unlocked. The reason was a mention of one of my Tweets (for something that I would least expect, to be honest). Although this has happened many times since I started participating in the security community almost 15 years ago, it was a nice reminder of how small things that you consider silly or unimportant can sometimes make a difference.



Long story short, the documentary implies that Marcus Hutchins, the guy that registered the killswitch domain for WannaCry, realized the impact of him registering that domain from that Tweet. I am not sure if this is really true, but if it is, I’m glad that I helped even a tiny bit in stopping this global threat in this very indirect, kind of silly, way.



This was interesting to me since for a long time I considered my greatest contribution to the WannaCry case to be just helping with the discovery of the mutex killswitch (MsWinZoneCacheCounterMutexA). Nevertheless, that incident passed, and a couple of months later, in July 2017, along with a colleague (@IISResetMe) we presented some blue team related learnings regarding WannaCry at an event in Amsterdam. The slides for that are available here.

To summarize, you never know when something you published or shared is going to help thwart a real threat. So, never stop sharing, because if there is one thing that makes the security community great, it is this. We are all dealing with the same threats. Whether it is a cyber-criminal or a nation-state, even small hints can really help in building the bigger picture and protecting our assets. So, yeah… the security community: crowdsourcing challenges before it was cool. :)

Written by xorl

December 23, 2019 at 15:44

Posted in security

Supply chain attacks, 2018 and the future

It’s been a while since my last post, but I thought there is some value in this topic… Supply chain attacks are nothing new; they have been around for a very long time. What is different, though, is the rise of this trend this year outside the usual nation-state and cyber-espionage world. Here is a high-level overview of the implicit trust connections most organizations have today.



And it gets even more complicated because every single instance in the above diagram is also another organization. There are some unforgettable supply-chain attacks such as the RSA SecurID breach in March 2011, which was later used to compromise many organizations, the MeDoc compromise in June 2017, the NSA hardware backdoors, etc. However, almost all of them were part of cyber-espionage operations, typically by nation-states. Some threat actors are more active than others, but overall it was a nation-state game.

What has changed recently is the rise of the supply-chain attacks outside the cyber-espionage world. We see non-enterprise Linux distributions being targeted such as Gentoo and Arch. Widely used open source projects such as npm modules, Docker images from official public registries, cyber-criminals hacking 3rd parties to eventually attack corporations such as Ticketmaster, backdoored WordPress plugins, browser extensions, and many more examples like these just in 2018.

And the question is, are organizations ready for this type of intrusion? Unfortunately, for the majority of the cases the answer is no. Most organizations implicitly trust what is installed on their devices, what libraries their software utilizes, what browser extensions and tools their employees install and use, etc. How can this be fixed?

The answer is not simple or easy. Everyone likes to say “trust but verify” but to what degree? Some might argue that solutions such as Google’s Titan is the way to go, but most organizations don’t have the resources and capacity to implement this. Should they fall back to the built-in TPM? Identifying all suppliers and doing risk assessments? Detecting anomalies and modifications on 3rd party components? All of them are valid options depending on the organization’s threat model.

But the answer starts with a mindset change. Organizations need to realize and accept that this is a real threat and yes, there are technical and non-technical ways to combat it but it requires conscious effort.

Based on my experience and observations, I predict that this will become one of the most popular attack vectors in the near future. The reason is simple. Organizations are focusing more and more on improving THEIR security, but as we have known for thousands of years, security is only as strong as its weakest link. You can have the best security in the world, but if you include random JavaScript files from 3rd parties without any control, this is what adversaries will use. Remember this Sun Tzu quote from The Art of War?

In war, the way is to avoid what is strong, and strike at what is weak

So, identify your weak points and don’t limit your view to your organization as a standalone instance. Locate where all the components used are coming from and expand your threat model to cover those as well. As Sun Tzu said, you have to know your enemy, but you also have to know yourself.

If you know the enemy and know yourself,
you need not fear the result of a hundred battles.

If you know yourself but not the enemy,
for every victory gained you will also suffer a defeat.

If you know neither the enemy nor yourself,
you will succumb in every battle.

Written by xorl

July 15, 2018 at 14:20

Posted in security