xorl %eax, %eax

Per country internet traffic as an early warning indicator

Over the last few years there has been a visible trend of governments shutting down the internet as a means of reducing the impact of, or controlling, outbound communications during major incidents such as riots, conflicts, civil unrest, and other rapidly evolving situations that could pose a threat to national security. This makes monitoring internet traffic a key input for early warning of high-impact incidents.

In 2019 there were several examples of this behaviour. To reference just a few: India, Iran, Indonesia, Iraq, Myanmar, Chad, Sudan, Kazakhstan, Ethiopia, etc. Typically, most of those shutdowns occurred anywhere from hours to days before the actual incident unfolded. This means that in all of those cases tracking internet traffic per country could have helped you proactively detect an upcoming situation and, together with other enriching sources, have sufficient time to make an informed decision.

Similarly, per-country internet traffic is also visibly impacted in other widespread crisis situations such as earthquakes, volcanic eruptions, and other natural disasters. Below you can see an example of the recent (7 January 2020) earthquake in Puerto Rico. The most common patterns in natural disasters are either a significant traffic drop due to infrastructure damage, as in the case of Puerto Rico, or an increase in traffic due to heightened outbound communications by the majority of the people in the affected geographical region. So, although this by itself may not result in immediate action, it can be automatically enriched by other means such as social media, local sensors and reporting, etc. to provide timely and actionable intelligence.
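The traffic drop pattern described above lends itself to simple automated detection. Below is a toy sketch that flags samples deviating sharply from a rolling baseline; the window size, threshold, and traffic figures are all illustrative assumptions, not tuned values:

```python
# Flag per-country traffic samples that deviate sharply from a rolling baseline.
from statistics import mean, stdev

def traffic_alerts(samples, window=6, z_threshold=3.0):
    """Return (index, z-score) pairs where traffic deviates from the recent baseline."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # perfectly flat baseline, skip to avoid division by zero
        z = (samples[i] - mu) / sigma
        if abs(z) >= z_threshold:
            alerts.append((i, round(z, 1)))
    return alerts

# Stable traffic (hypothetical Gbps figures), then an abrupt drop such as an
# ordered shutdown or infrastructure failure would cause.
series = [98, 101, 99, 100, 102, 100, 99, 101, 12]
print(traffic_alerts(series))  # the final sample is flagged with a large negative z-score
```

In practice the same logic would run continuously against a live per-country traffic feed, with each alert enriched by the other sources mentioned above before it reaches an analyst.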

So, per-country internet traffic monitoring can serve your intelligence team as an additional data point for generating actionable and timely intelligence products that help you protect your assets proactively, usually before an incident reaches public media.

Written by xorl

January 14, 2020 at 09:51

Left of boom: Do we actually do this?

I decided to end this year’s blog posts with something slightly different. So, what does “left of boom” really mean? The phrase became increasingly popular in the Intelligence Community after the 9/11 terrorist attacks to describe the mission of the counter-terrorism units within the IC: everything we do has to happen before a bomb explodes, that is, on the left side of the timeline of the events about to unfold. So, why is this so important for all intelligence teams, and do we actually do it?

The first and foremost goal of any (cyber or other) intelligence team is to provide a proactive, unbiased, and as accurate as possible assessment of an upcoming event, which is then used as key input to the decision making process. That word, proactive, encompasses the “left of boom” mentality. However, it happens more rarely than most businesses would like to admit.

For example, is taking down phishing domains quickly after they go live proactive? Not really. Proactive would have been knowing that those domains were going to be used for phishing and taking action before they were even up and running.

Is finding leaked credentials or user accounts on some forum proactive? Not really. Proactive would have been knowing that those were leaked before someone shared them in a forum.

Or on the non-cyber side, is reporting that a tornado just hit the location of one of your offices proactive? No. Proactive would have been briefing the relevant staff in advance that this was going to happen.

Some might argue that all of the above are proactive and actionable intelligence products, and I could go on with countless more examples trying to counter that argument, but this is not what this post is about. It’s about answering the question, are we “left of boom” or not?

In my opinion, we always have to be moving to the left side of the boom as much as legally and humanly possible. Obviously, as a private business intelligence team you cannot run CNE operations against a threat actor that operates phishing domains targeting your company. However, you can monitor for new registrations from that threat actor and new TLS certificates, understand their TTPs, and monitor/track them closely. For example, do they use specific hosting provider(s)? Is there a pattern in the targets? An operating timezone? Habits? etc.
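As one illustrative way to monitor new registrations, the sketch below scores candidate domains (e.g. from a registration feed or certificate transparency stream) for similarity to your own brand; the brand name and all domains shown are hypothetical placeholders:

```python
# Flag newly seen domains whose registrable label is a close edit of the brand name.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suspicious(candidates, brand="examplecorp", max_distance=2):
    """Return candidate domains whose first label is within max_distance edits of the brand."""
    hits = []
    for domain in candidates:
        label = domain.split(".")[0].lower()
        if label != brand and edit_distance(label, brand) <= max_distance:
            hits.append(domain)
    return hits

print(suspicious(["examp1ecorp.com", "examplec0rp.net", "unrelated.org"]))
# → ['examp1ecorp.com', 'examplec0rp.net']
```

A real tracker would combine this with the other signals mentioned above (hosting providers, certificate issuers, registration timing) rather than rely on edit distance alone.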

For all of the above, there are many proprietary and open source solutions that can assist you with the data collection, the processing, or in some cases even the information production. But turning that data and information into timely and actionable intelligence is something only a team of skilled individuals can do.

By now the “left of boom” concept and its importance are probably very clear to the reader. But what about the title’s question: do we actually do it? The answer is no, because you can never be far enough to the left of the boom. As long as you are striving every single day to get a little bit further left, you are on the right path. If you can already identify a new phishing domain the moment it is registered, can you identify it even before that? After a while under this operating model, you will realize it leads you to the actual intelligence that can assist in disrupting those threats once and for all. You will start looking for answers to questions/intelligence requirements such as: Who (as in physical person(s)) is behind this? What is the end-to-end operation they are running? What is required to get this threat actor criminally prosecuted? etc.

And with this in mind, I am wishing everyone a happy New Year’s Eve, and let’s all work harder to make sure we get further to the “left of boom” in 2020. :)

Written by xorl

December 31, 2019 at 09:50

Growing your intelligence team beyond cyber

During Recorded Future’s RFUN: Predict 2019 conference in Washington, D.C., Stuart Shevlin, a colleague of mine, and I recorded a podcast with CyberWire on this topic. Here I would like to expand a little bit more on this area. Note that all of the below are my personal views and do not represent my employer.

Several years ago, most businesses with sufficient security resources started internal CTI (Cyber Threat Intelligence) programs. Slowly but steadily this space grew and became more formalized. A good example is the SANS CTI course, which until a couple of years ago was still in an experimental phase. When I completed it in 2018, it was the first year you could actually sit the GIAC exam to get certified.

On the bright side, intelligence is nothing new. It has been around for thousands of years, and because of that it was easy to adapt much of the preexisting knowledge, and many of the frameworks and methodologies, to the cyber space. On top of that, many people from the IC moved to the private sector, which acted as a force multiplier for cyber threat intelligence teams.

In the beginning, CTI teams were exclusively focused on the cyber aspect. In most cases they were even embedded in SOC, CSIRT, and CERT functions. What has changed in the last few years is that more and more of the mature teams have started providing their services outside the cyber area.

It’s a natural progression, and those of us that started behind the curtain don’t question it. We know that an attacker will not think twice about switching from a spear-phishing campaign to physically installing a rogue AP, from stealing credit cards to stealing and selling PII to a foreign nation-state, or from running an aggressive DDoS attack to sending a fake bomb threat letter to an office.

As threat intelligence teams in businesses mature, they become more intelligence teams and less CTI teams. Nowadays, many such teams are responsible for a wide variety of intelligence products, ranging from strategic intelligence to operational and, of course, tactical. To give you an idea, here are a few examples of areas where I have lately been seeing more and more intelligence teams contribute.

  • M&A background checks for potential threats (reputational, security, fraud, etc.)
  • Physical security team(s) support with timely and actionable intelligence on upcoming riots, natural disasters, geopolitical crisis, location-specific threats, etc.
  • Threat landscape reports for business initiatives
  • Liaison with law enforcement and potentially other government agencies for intelligence sharing when appropriate, approved and legal (terrorism, organized crime, child abuse, etc.)
  • Uncovering links between threat actors that operate in multiple domains (not only cyber)
  • Strategic geopolitical risk analysis that could have significant business impact
  • Supporting various security teams by providing reports on threat actor TTPs as per context (for example, threats in a specific country to support executive travel protection)

Although those were some very generic and overly simplified examples, they paint a clearer picture of the direction in which CTI teams are moving. I expect that in the next few years we will see more and more intelligence professionals transitioning from the public sector to those teams, and more of those teams will keep expanding their scope. In the bigger picture, every company is a miniature society, and being able to inform that society’s decision makers of upcoming threats in a timely manner is a crucial component. This is where I see intelligence teams fitting into the bigger picture.

Written by xorl

December 30, 2019 at 09:40

Strategies to protect against JavaScript skimmers

The Macy’s JavaScript/digital skimmer attack brought some new developments in the web application security threat landscape. Apart from being very targeted, like the majority of those attacks, it also stole a very diverse set of data: credit card details, account credentials, and wallet information. I find it strange that even large corporations fail to protect their assets against the threat of JavaScript skimmers, so here is a quick write-up based on my experience. It’s a collection of strategies you can follow as a business to thwart this attack vector.

Obviously, not all of them can be applied to all businesses, and not all of them provide the same level of security. Following the defense in depth principle, the more layers of security you add, the higher your chances of thwarting a threat. That being said, I fully understand that some of them might not be feasible for legitimate business reasons. Lastly, this is not a complete list of ALL the potential protections, and none of the below acts as a replacement for well-known security controls such as WAF, SDLC, vulnerability scanning, code audits, IPS, etc. So, take this as a list of recommended defense controls against digital skimmers.

CSP: If your online presence allows it, this is one of the best countermeasures against almost all forms of this type of web-based supply-chain attack. A Content Security Policy (CSP) in blocking/enforcing mode can protect you against the most common delivery tactic of JS/digital skimmers, which is modifying a 3rd-party script. Basically, using CSP you can block communications to untrusted domains. My recommendation is: if you can implement this, do it. Even if that means doing it only for your payment/account related pages, do it.
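For illustration, a blocking-mode policy for a payment page might look like the following; the CDN domain is a placeholder, and a real policy needs to list every origin your page legitimately uses:

```
Content-Security-Policy: default-src 'self'; script-src 'self' https://trusted-cdn.example.com; connect-src 'self'; form-action 'self'
```

Here `script-src` restricts where scripts can load from, while `connect-src` and `form-action` limit where injected code could exfiltrate form data to, which is exactly what a skimmer needs to do.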

Limit 3rd-party attack surface: You can easily set up a CI/CD pipeline which fetches the various 3rd-party JS files that you use, scans them, audits them, ideally passes them through some form of peer review, and then deploys them to your web hosting platform. Just like any countermeasure, this cannot be applied to everything, but the less external 3rd-party JS you serve, the less exposure you have to this kind of threat. It is also a detective/early-warning control, as it allows you to quickly identify a potentially malicious JS file, which in turn allows you to inform that partner and protect even more people. In short, do an assessment to figure out your posture on externally hosted scripts. Then see what percentage of those can work if hosted on your side. Based on that, you can make a risk-based decision on whether it is worth building and adopting this pipeline and business process.
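One lightweight way to implement the pinning step of such a pipeline is a Subresource Integrity (SRI) style hash computed at review time and re-checked on every fetch. A minimal sketch, with the vendor script content as a made-up placeholder:

```python
# Compute an SRI-style integrity value for a vendored 3rd-party script so the
# pipeline can fail the build if the fetched copy differs from the reviewed one.
import base64
import hashlib

def sri_hash(script_bytes: bytes) -> str:
    """Return an integrity value in the SRI format (sha384 algorithm)."""
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

reviewed = sri_hash(b"console.log('vendor widget v1.2.3');")  # recorded at review time
fetched = sri_hash(b"console.log('vendor widget v1.2.3');")   # recomputed on each fetch
assert fetched == reviewed, "3rd-party script changed since last review"
```

The same value can also go into the `integrity` attribute of the `<script>` tag, so browsers themselves refuse to execute a tampered copy of a script you must keep externally hosted.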

Deep scans: Do frequent deep scans of your web applications and look for new content. In a few cases a simple hash comparison might work, but in today’s dynamic world you probably need something smarter than that. Thankfully, there are so many web analytics and profiling solutions out there that you can easily find something that can identify differences between the old and the new state for specific content. In this case, what we care about is JS scripts and whether those scripts: a) were recently modified, or b) have any form of callouts to domains they didn’t have before.
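A minimal sketch of check (b), using a deliberately naive regex-based extractor rather than a full URL parser; all host names shown are hypothetical:

```python
# Diff the external hosts a script references against the previous scan's baseline.
import re

HOST_RE = re.compile(r"https?://([a-z0-9.-]+)", re.IGNORECASE)

def external_hosts(js_source: str) -> set:
    """Extract the hosts of absolute URLs referenced in the script."""
    return {m.lower() for m in HOST_RE.findall(js_source)}

def new_callouts(js_source: str, baseline: set) -> set:
    """Hosts referenced by the script that the last scan did not record."""
    return external_hosts(js_source) - baseline

baseline = {"cdn.example.com", "api.example.com"}
script = 'fetch("https://api.example.com/v1"); img.src = "https://evil-collector.example/c.gif";'
print(sorted(new_callouts(script, baseline)))  # → ['evil-collector.example']
```

Real skimmers often build their exfiltration URLs dynamically, so in practice this static pass would complement, not replace, the behavioural profiling solutions mentioned above.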

Client-side validation: This is more of a grey area, and if you decide to do it, I would highly advise you to get approval from your legal, privacy, ethics, and any other regulatory-related team or department in your company, and make it crystal clear in your terms & conditions and statements. So, process aside, this is about validating your website’s content on the client side. Some businesses do it, especially in the financial sector, but it’s a grey area as it implies executing non-website-related code on the client side. In short, there are many ways to do this independently, as well as off-the-shelf options, which include hash comparison between the files on the server and the client, identification of DOM changes, detection of popular malware families that implement form grabbing or web injections, etc.

3rd-party assessments: Simple as it may sound, it is not that common. To summarize: how do you even know that the service/partner you are about to include in your website is not already compromised and serving scripts with digital skimmers? This is not the most common tactic for this specific threat, but it does happen occasionally. On the business side, it’s relatively easy to add an extra step to your risk assessment before onboarding a new partner: a code audit (ideally both dynamic and static) looking for indicators of digital skimmers. This can also catch other potentially malicious behaviours, or even issues that could have reputational impact such as unsolicited advertisements, unauthorized user tracking, etc.

IOC-based scans: Another very useful method is to use your tactical threat intelligence IOCs to look for matches in the JS scripts your website serves. On the practical side this is a relatively trivial task, which I suggest you do from both sides (backend and frontend). Meaning, on the backend you scan all your web content against your IOCs, and on the frontend you crawl and fetch as much as possible of the files of interest and do a similar scan. Since those attacks are usually very targeted, the chances of having IOCs before the attack occurs are low, but it’s so easy to implement that it is usually worth it.
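A rough sketch of the backend-side version, matching served scripts against known-bad hostnames and file hashes; the IOC values below are made-up placeholders, not real indicators:

```python
# Match served JS files against tactical IOCs (hostnames and SHA-256 hashes).
import hashlib

IOC_DOMAINS = {"skimmer-drop.example", "exfil.example.net"}
IOC_SHA256 = {"0" * 64}  # placeholder hash entry

def ioc_matches(name: str, content: bytes):
    """Return (file, indicator type, value) tuples for every IOC hit."""
    findings = []
    digest = hashlib.sha256(content).hexdigest()
    if digest in IOC_SHA256:
        findings.append((name, "hash", digest))
    text = content.decode(errors="replace").lower()
    for domain in sorted(IOC_DOMAINS):
        if domain in text:
            findings.append((name, "domain", domain))
    return findings

print(ioc_matches("checkout.js", b'post("https://exfil.example.net/x")'))
# → [('checkout.js', 'domain', 'exfil.example.net')]
```

The frontend-side scan would be the same matching logic applied to content fetched by a crawler, which also catches scripts assembled or rewritten at serve time.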

Simulation-based threat detection: This is one of my personal favourites, but it is on the more advanced side of the spectrum for most businesses. Basically, you run some sort of continuous simulated visitor activity (yes, bots) against your production/externally facing web applications and profile the interactions. This can trivially detect any new calls to previously unknown domains, additional JavaScript being invoked or loaded in forms, etc. Obviously, this requires more resources, but if you have the capacity it is probably something you want to investigate as an option.

Clean up your shit: My last one is unfortunately one of the most neglected, and it’s not even a security control, just good software engineering practice. Ensure that you have some sort of profiling/analytics platform that informs the relevant development team(s) when certain scripts have stopped being accessed/used for a specified amount of time. Ideally, after a while you can optimise this further by automatically removing/disabling those scripts from your web platform(s).

One might argue that the Macy’s attack was due to a compromise of their systems or something along those lines. That, however, is a case for industry-standard security controls that protect against data breaches/intrusions. I will not spend any time writing about those, since they are a relatively trivial challenge for most security people out there; there are decades of work on frameworks, processes, and pretty well defined steps to reach a really mature level without excessive costs. On the other hand, there is not much out there for digital skimmers, so this post focused explicitly on that: recommendations for protection against digital skimmers. Those controls of course provide additional capabilities that can be leveraged for other purposes too, but the goal behind all the recommendations is the same: protection against digital skimmer attacks. Hopefully you’ll find something practical for your business in here.

Written by xorl

December 27, 2019 at 11:19

Posted in security

A short story around WannaCry

A couple of weeks ago a friend of mine messaged me about a recently released documentary by Tomorrow Unlocked. The reason was a mention of one of my Tweets (for something I would least expect, to be honest). Although this has happened many times since I started participating in the security community almost 15 years ago, it was a nice reminder of how small things you consider silly or unimportant can sometimes make a difference.

Long story short, the documentary implies that Marcus Hutchins, the guy who registered the killswitch domain for WannaCry, realized the impact of registering that domain from that Tweet. I am not sure if this is really true, but if it is, I’m glad that I helped even a tiny bit in stopping this global threat in this very indirect, kind of silly, way.

This was interesting to me since for a long time I considered that my greatest contribution to the WannaCry case was helping with the mutex killswitch (MsWinZoneCacheCounterMutexA) discovery. Nevertheless, that incident passed, and a couple of months later, in July 2017, together with a colleague (@IISResetMe) I presented some blue-team learnings from WannaCry at an event in Amsterdam. The slides for that are available here.

To summarize, you never know when something you published or shared is going to help thwart a real threat. So never stop sharing, because if there is one thing that makes the security community great, it is this: we are all dealing with the same threats. Whether it is a cyber-criminal or a nation-state, even small hints can really help in building the bigger picture and protecting our assets. So, yeah… the security community: crowdsourcing challenges before it was cool. :)

Written by xorl

December 23, 2019 at 15:44

Posted in security

FIRST TC (Amsterdam 2019): Incident response in the age of serverless

This year has been super busy, with many new challenges, achievements, and learnings. One of my personal highlights was contributing to the Forum of Incident Response and Security Teams (FIRST) Technical Colloquium in Amsterdam with a presentation about serverless security.

The presentation was titled “Incident response in the age of serverless: A case study on GCP” and it was presented together with one of the smartest security professionals I know, Willem Gerber (@adrellias).

You can find the slidedeck here.

Written by xorl

December 17, 2019 at 13:04

BSides Cyprus 2019: Beyond phishing emails

This year, the first ever BSides Cyprus security conference took place at the Cyprus University of Technology (CUT) in the heart of Limassol, Cyprus. It was a great honour that my talk was accepted. The whole conference was an amazing, well-organized event with a great atmosphere and lots of great talks. Thanks for everything! Hopefully I’ll see you again next year!

My talk was about spear-phishing delivery techniques beyond email: anything from mobile messaging platforms, to popular cloud services, QR codes, all the way to my personal favourite, targeted advertisements on social media platforms.

The slidedeck is available here.

Written by xorl

December 16, 2019 at 17:12