Intelligence requirements: Moving from concept to practice

By Michael DeBolt, VP of Intelligence at Intel 471.

Our industry talks a lot about intelligence requirements. Yet I’ve noticed over the years a lack of practical advice being shared about how to actually work with or implement intelligence requirements as a fundamental component of a cyber threat intelligence (CTI) program. In a future blog, I’ll share how we do things at Intel 471, hopefully to help address this gap.

But for now, let’s tackle the disconnect between the concept and practice of intelligence requirements by looking at a few key benefits and challenges.

I'll go out on a limb and predict that most of the CTI industry is totally on board with the concept of intelligence requirements. There is a ton of really great material out there that covers it extremely well. Thanks to these resources and others, over the last five years our CTI industry has evolved to appreciate intelligence requirements as fundamental to what we do. This is an exciting and positive step that should be celebrated. Now more than ever, we understand that our overall success as intelligence professionals is measured by our ability to consistently satisfy the requirements of our stakeholders and, ultimately, to inform the decisions and actions that protect our organizations.

We know intelligence requirements are important. Here are three key reasons why:

Benefit 1: Maximized resources 

Most of us operate in an environment where resources and funding are scarce. A requirements-driven program maximizes our limited time, money and effort by trimming the fat. When done correctly, our human capital and data sources are synchronized, focused and aligned to meet the requirements of our stakeholders. We know exactly what we need to collect, produce, and deliver, and who needs it.

A simplified collection plan showing synchronization between deliverables, sources, stakeholders and intelligence requirements.
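In code form, such a plan is essentially a mapping from each priority intelligence requirement (PIR) to the sources, deliverables and stakeholders aligned to it. A minimal sketch follows; the PIRs, sources and stakeholders here are invented for illustration:

```python
# Toy collection plan: each priority intelligence requirement (PIR)
# is explicitly tied to the sources used to satisfy it, the deliverables
# produced from it, and the stakeholders who consume those deliverables.
COLLECTION_PLAN = {
    "PIR-1: Ransomware targeting our sector": {
        "sources": ["underground forums", "malware sandbox"],
        "deliverables": ["weekly ransomware report"],
        "stakeholders": ["SOC", "CISO"],
    },
    "PIR-2: Credential theft against our brand": {
        "sources": ["marketplace monitoring", "paste sites"],
        "deliverables": ["credential exposure alerts"],
        "stakeholders": ["IAM team"],
    },
}

def deliverables_for(stakeholder):
    """Return every deliverable aligned to a stakeholder, so anything
    we produce can be traced back to an agreed requirement."""
    return sorted(
        d
        for pir in COLLECTION_PLAN.values()
        if stakeholder in pir["stakeholders"]
        for d in pir["deliverables"]
    )

print(deliverables_for("CISO"))  # ['weekly ransomware report']
```

The point of the structure is traceability in both directions: any deliverable maps back to a PIR, and any PIR maps forward to the sources funded to satisfy it.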

Benefit 2: Measured success criteria

There is no ambiguity in what we collect or produce. Each data source, report and deliverable is aimed at satisfying Priority Intelligence Requirements (PIRs) agreed upon by you and your stakeholders. Requirements are frequently revisited with stakeholders to ensure alignment, and any deliverable that regularly falls outside the scope of those requirements requires heavy scrutiny, gap analysis, and justification.

Benefit 3: Demonstrated CTI return on investment

An intelligence program grounded in stakeholder requirements enables objective measurement of intelligence production and impact over time. This helps us confidently answer the inevitable question from senior management: "How does our CTI capability provide value to the organization?"

So the concept of and justification for requirements is crystal clear: intelligence requirements are the lifeblood of any CTI program.

Which brings us to the question at hand: How do we build a requirements-driven intelligence program? We appreciate intelligence requirements, but historically, moving from concept to practice has been challenging for a number of reasons, namely:

Challenge 1: Prioritization nightmare

Requirements and expectations differ greatly across the various stakeholders our intelligence teams support. Some are vastly different yet overlapping; others are opaque, confusing or too broad to align specific resources against. This makes it daunting for intelligence teams to prioritize effort against stakeholder needs, and it muddles what an intelligence function ultimately collects and produces. A proper requirements-driven program aligns stakeholder priorities to intelligence production.

Challenge 2: “Whack-a-mole” game

Intelligence teams often are inundated with ad hoc, "block and tackle" requests from stakeholders, leaving them out of position not only to respond to unfolding events in a timely and accurate manner, but also to give key decision makers advance situational awareness of the ever-evolving methods of those who seek to harm the organization. Without a requirements-driven program, intelligence teams are destined to be reactive and shortsighted, and they will continuously struggle to provide intelligence that informs proactive decision-making, such as threat assessments across industry, supply chain and geographic areas of interest.

Challenge 3: Same goal, multiple languages

While the concept and benefits of a requirements-driven intelligence program are clear, putting it into practice can be very difficult. Fundamentally, achieving this is a team sport that requires synchronization across at least four components — the CTI team, executive leadership, stakeholders and vendors. Yet, each operates differently, speaks their own language and uses various definitions to achieve their end goals. To help bridge this gap, our CTI industry needs a commonly understood and accepted intelligence requirements framework for defining, managing, processing, tracking and producing intelligence aligned to our stakeholders’ needs.

In the next blog, I will detail our practical approach to meet these challenges using our “General Intelligence Requirements” (or “GIR”) framework.

Melting the ‘deep and dark web’ myth and why we hate the phrase

By Michael DeBolt, VP of Intelligence at Intel 471.

It’s not deep. It’s not dark. It’s not the ominous underside of an iceberg.

The deep and dark web, or simply the “underground,” as we like to call it at Intel 471, is an organized ecosystem of products, services and goods consisting of real life suppliers and consumers who can be mapped, tracked, understood and exposed.

As an industry, we do ourselves a disservice by mystifying and overhyping otherwise straightforward concepts such as the "deep and dark web" in pursuit of catchy sales pitches and taglines. At first glance this may seem harmless and nitpicky, but the reality is that sensationalizing the underground causes confusion and disorientation, putting our industry at risk of never fully understanding and exploiting it for intelligence purposes.

So let’s cut through the jargon and look at four essential characteristics of the underground.

The cybercrime underground is…


… not an iceberg

The popular iceberg metaphor often (and overly) used to illustrate the vastness of the deep and dark web is, well, flat-out wrong. As Recorded Future correctly points out, the deep and dark web actually is a small fraction of the entire web, and it's not as dark and mysterious as we are led to believe.

So, instead of hyping the "clear, deep and dark" buzzwords, we simply break down the different areas of the web in terms of accessibility and the barrier to entry needed to collect against and exploit them for intelligence purposes.

Open sources

Open sources are any content openly indexed by conventional search engines and publicly accessible via conventional means. In other words, you don't need specialized software or authorization to access them. Think open-source news articles, public social media posts and open directories.

Closed sources

This is any nonindexed content gated from public view. Access to closed sources depends on the sensitivity of the underlying content and the privilege needed. Some closed sources require standard username and password access, while others require specialized software and vouches by trusted members of the community. We refer to closed sources where cybercriminals operate as simply the “underground,” which consists of forums, marketplaces, instant messaging platforms, private social media groups and more.


… organized

The "deep and dark web" myth would have you believe it is random and chaotic, but that is not so. In fact, the underground is an organized ecosystem of interdependent suppliers, vendors, consumers, facilitators and active participants. Suppliers offer specialty products and services that prop up the cybercriminal marketplace (i.e., malware, infrastructure and more), and they use carefully constructed public relations and advertising campaigns to promote their offerings. Any successful supplier worth their salt carries a positive, trackable reputation, while fakers and scammers are identified quickly. These observable aspects of the underground let us lower the noise and laser-focus our intelligence collection efforts on high-value actors.


… finite

Contrary to the popular belief perpetuated by the "deep and dark web" myth, the underground is a finite space that can be mapped and explained. It consists of hundreds of closed sources, such as forums, marketplaces and instant messaging platforms, where real humans plan attacks, share ideas and do business exchanging data and information. Sure, there are millions of actor handles and indicators in the underground, but only a finite number actually perpetrate and enable the majority of cybercriminal activity. Our job as intelligence professionals is to understand how the underground is organized so we can locate, prioritize and collect from the sources and actors that matter.


… human

The underground is organized, finite and propped up by real people with natural human instincts and motivations. Our job is to understand those motivations, whether financial gain, political cause or personal fame, and exploit them for intelligence purposes using a combination of automated collection and human engagement. Automation is useful to help sift through and bubble up content. But because we are dealing with the nuances of human behavior in vetted, closed sources, relying solely on a fully automated collection capability will surface only a fraction of the intelligence available in the underground. Skilled, native-speaking researchers and sensitive, well-placed sources must be used to gain, maintain and elevate access to top-tier actors and elicit valuable information that only can be obtained through human-to-human interaction.

Don’t believe the hype

As our CEO Mark Arena previously outlined in detail, the cybercriminal underground is available for us as intelligence professionals to map and exploit proactively. Actors operating in this environment are not wearing black hoodies or Guy Fawkes masks. These are real people with natural tendencies, motivations, reputations and brands, and they can be categorized and tiered according to their history and background. When we intimately understand their business models, processes, enablers and pain points, we are ready to take action to counter the threat proactively.

No, the criminal underground isn’t dropping its use of Bitcoin anytime soon

By Mark Arena, CEO of Intel 471.

I recently read an article claiming the "criminal underworld" was dropping its use of Bitcoin. Over the past month, Intel 471 has looked closely at the criminal underground to determine whether Bitcoin remains as widely used as ever and whether any up-and-coming cryptocurrencies are gaining traction or might eventually overtake Bitcoin's current usage levels.

Overall, Bitcoin still appears to be the most popular cryptocurrency in the underground by far. Given the recent problems with Bitcoin (high fees, slow transactions, ability to track transactions), one would expect a growth of other cryptocurrencies. However, alternative cryptocurrencies still are not widely used as a payment method, at least in part because the payment and escrow systems of most of the criminal marketplaces mainly support Bitcoin only.

Anecdotally, it appears Monero is becoming more popular because:

1. It provides full anonymity; and

2. It is easier to embed a Monero miner into hidden malware for all platforms, including mobile.

We counted the number of mentions of each cryptocurrency on our platform. Roughly, the mentions are:

· 50,000 to 85,000 for Bitcoin;

· 2,000 to 14,000 for Ethereum;

· 1,000 to 2,500 for Monero; and

· 1,000 to 2,000 for Litecoin.

Our analysis of criminal underground forum posting timelines shows that mentions of Bitcoin grow steadily; Ethereum's mentions are fewer but growing slightly; Monero shows some ups and downs but seems to be declining slightly; and Litecoin suddenly appeared in March 2017 but has been declining since.

On another note, one of the top sellers of credit cards in the underground recently wrote that he was seeking to add Dash as an alternative to Bitcoin, which currently is the only cryptocurrency his shop supports.

In summary, Bitcoin still is by far the top cryptocurrency used by criminals in the underground. More importantly, it is unclear whether any cryptocurrency will overtake Bitcoin, let alone which one that might be.

Naming malware: What’s in a name?

By Mark Arena, CEO of Intel 471.

This week’s incident with Petya/NotPetya/GoldenEye/Nyetya/Petrwrap has reignited the debate about how security companies name malware. In my opinion, the security industry’s use of different names for the same thing isn’t good for either customers or the industry at large, and it’s something that could be solved without too much effort.

Why do we need consistent naming?

This week I was on a WebEx regarding Petya during which there were numerous questions about the naming of the malware, which clearly confused a lot of the people on the call. When we get into more advanced cyber threat intelligence analysis and tying tools to campaigns and groups, it gets even more complex. As an example, let's look at "Carbanak", which is referenced a lot by the security community. The word "Carbanak" is a combination of "Carberp" and "Anunak", which are both trojans used by a variety of different actors and groups. Numerous security companies refer to some malware samples as "Carbanak", and I don't even know what that means.

What names should we use?

When I think of how we currently name malware, I immediately think of how each language names countries. Germany, for example, is Deutschland to Germans, Allemagne to the French and Germania to Italians: one country, many names.

When it comes to naming countries, I'm of the opinion that everyone should call a country by the name the locals use. And when it comes to malware, I'm of the opinion that we should call the malware what the bad guys call it, hence my preference for the name Petya. On the (numerous) occasions when we don't know what the bad guys call the malware, I'd suggest it be named by the first security company that found it.

An independent and central repository for malware naming

Security companies (Intel 471 included) also need an independent adjudicator to keep us from assigning different names to the same malware. Something like how Mitre handles CVEs could work: set up a counterpart to "The Standard for Information Security Vulnerability Names" and call it "The Standard for Information Security Malware Names."
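Such a registry would do for malware names what CVE does for vulnerabilities: one canonical identifier per family, with vendor-assigned names recorded as aliases. A toy sketch of the lookup logic, not any real registry's API:

```python
# Toy malware-name registry, mirroring the CVE model: one canonical
# name per family, vendor names stored as resolvable aliases.
REGISTRY = {}     # canonical name -> set of aliases
ALIAS_INDEX = {}  # any known name (lowercased) -> canonical name

def register(canonical, aliases=()):
    """Record a family under its canonical name plus vendor aliases."""
    REGISTRY[canonical] = set(aliases)
    ALIAS_INDEX[canonical.lower()] = canonical
    for alias in aliases:
        ALIAS_INDEX[alias.lower()] = canonical

def resolve(name):
    """Map any vendor name back to the canonical one (None if unknown)."""
    return ALIAS_INDEX.get(name.lower())

# The bad guys call it Petya, so that becomes the canonical name;
# the assorted vendor names become aliases.
register("Petya", aliases=["NotPetya", "GoldenEye", "Nyetya", "Petrwrap"])

print(resolve("Nyetya"))  # Petya
```

An adjudicator body would own the `register` step; everyone else only resolves.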

Being a cyber threat intelligence analyst and operating in the fog of uncertainty

By Mark Arena, CEO of Intel 471.

A lot has been said, blogged and marketed about WannaCry ransomware, with many pointing fingers at who they think might be behind it. The objective of this blog isn't to critique, support or disprove any specific hypothesis. The goal is to highlight what it means to be a cyber threat intelligence professional, who almost certainly will be faced with incomplete information and varying levels of uncertainty. The ability to operate and make assessments under a fog of uncertainty is what intelligence analysts do; it's a core competency!

Ultimately, analysts willing to stand up publicly or within their respective organizations and make a reasoned, well-explained assessment should be applauded. My hat is off to those intelligence analysts.

The value of attribution

There is a continuous debate in the information security community about the usefulness of attribution. First, attribution is attainable and happens at various levels all the time: a specific person, a group, a nation-state, a general effort and so on. Attribution absolutely can provide valuable insight that is relevant to decision-making at all levels.

As an example, let's assume we are on the information security team of an organization that had several machines infected with WannaCry, and those machines contained business-critical files that are now encrypted. Initially, we believe the ransomware was deployed by financially motivated cybercriminals. Conventional wisdom suggests that if you pay a criminal to decrypt your ransomed files, there's a high probability they will be decrypted. At this point you advise your executives that if you pay the ransom of less than $1,000, your files likely will be decrypted.

You later learn, having not paid for your files to be decrypted yet, that North Korea is the most likely culprit for the ransomware on your systems. Your organization is based in South Korea and is involved with economic policy research. How would the likely attribution of WannaCry to North Korea change how your organization would respond to the ransomed systems?

Dealing with uncertain answers is the name of the game

As cyber threat intelligence professionals, it's up to us to use our skills and experience to help our intelligence consumers understand the who, what, why, when, how and what else. A lot of that involves dealing with uncertainty and a lack of information where, at the time, there is no provably right or wrong answer. At the end of the day, the intelligence game is like criminal profiling: you hope to be right 99.9% of the time, but you operate as though you are right 100% of the time, because the benefits of doing so greatly outweigh the disadvantages.

With regard to WannaCry and the companies that publicly expressed an opinion on who was behind it: congratulations. I was very impressed with the reasoned arguments and confidence levels expressed. Props especially to Neel Mehta from Google, Kaspersky, Symantec, Digital Shadows (big props!) and, of course, the Intel 471 team. To those who ridiculed the organizations that had the expertise and courage to make and explain their assessments: hopefully you will be part of the cyber threat intelligence community in the future.

Express your opinion, the reasons behind it, your level of confidence and don’t be afraid to be wrong

Don't be afraid to express your opinion, the reasoning behind it, your level of confidence and multiple hypotheses, and even to highlight your information gaps. On the Intel 471 side, despite publicly saying we believed North Korea was the most likely culprit behind WannaCry, we spent, and continue to spend, time and resources researching financially motivated cybercriminals who might be responsible. We effectively are researching avenues that run counter to what we said publicly, because our confidence in our assessment was low.

If you just publish facts, you are more a journalist or police officer, not a cyber threat intelligence analyst

Former CIA Director John Brennan summed up the intelligence game versus evidence gathering well during a recent hearing:

A US Congressman asked if Brennan had "evidence" of collusion between Trump and Russia.

“I don’t do evidence,” Brennan replied.

If you sat on the fence of the WannaCry attribution debate and said there wasn't enough information to form an opinion on the likely culprit, take a close look at whether in the past you have simply been reporting facts, like a reporter or a police officer conducting an investigation. A true intelligence analyst must be able to deal with information gaps and uncertainty, and to effectively fight their biases. It's a reality of the space we live in.


Who hacked the Democratic National Committee?

By Mark Arena, CEO of Intel 471.


I'll preface this post by saying that I possess no information on this incident beyond what has been mentioned in open sources. This post is my personal opinion, based on my experience researching and tracking both state and non-state cyber threat actors. I'll also add that Intel 471 does not actively research and track threat actors involved in espionage; we focus on financially motivated cybercriminals and hacktivists/politically motivated threat actors.

On June 14, the Washington Post published a story that indicated Russian government hackers had hacked into the Democratic National Committee (DNC). The specific information linking the hack to the Russian government came from the cyber security company CrowdStrike:

One group, which CrowdStrike had dubbed Cozy Bear, had gained access last summer and was monitoring the DNC’s email and chat communications, Alperovitch said.

The other, which the firm had named Fancy Bear, broke into the network in late April and targeted the opposition research files.

I personally know a number of smart people at CrowdStrike, and I trust them when they say a specific intrusion or incident is linked to a specific hacking group. CrowdStrike uses an animal naming scheme to tie intrusion activity and intrusion sets to groups and countries; in this case, CrowdStrike said it observed intrusions at the DNC tied to the groups it calls Cozy Bear and Fancy Bear, where "Bear" signifies Russia. I have no doubt that CrowdStrike indeed observed intrusion-set activity within the DNC's environment linked to groups it had identified and almost certainly was actively tracking.

Guccifer 2.0: A spanner in the works

On June 15, an actor calling himself Guccifer 2.0 created a WordPress blog where he posted a number of claimed confidential DNC reports, including one on Donald Trump. The blog post appears designed to emphasize how easy it was to hack the DNC, and it calls into question CrowdStrike's linking of the two DNC intrusions to the Russian government.

For those who aren't aware, the handle Guccifer was used by a Romanian hacker who was recently extradited to the United States. That actor hacked high-profile people, such as politicians and celebrities, and publicly released their emails. Guccifer currently sits in a Virginia jail awaiting sentencing.

Russia? Attribution is hard, right?

When it comes to attribution of intrusions to groups or specific people, we are really talking about two things:

  • Attribution of the observed intrusion sets (malware, exploits, etc.) to known intrusion groups. This is where CrowdStrike tied this activity to the groups it calls Cozy Bear and Fancy Bear.
  • Attribution of the threat grouping to a specific person, group/organization or nation-state; in this case, CrowdStrike clearly has singled out Russia. This is a lot harder than the previous point.

On the first point, I have complete confidence that CrowdStrike is able to track and link specific intrusion tools to known groups it actively tracks.

On the second point, it is less clear whether this activity is tied to the Russian government, and I can't really comment on that, as I don't have information that supports or refutes it. This type of attribution is done in a number of ways, including but not limited to:

  • Tracking of specific targets/target sectors over a long period of time and mapping that against nation-state objectives. Confidential and internal information within the DNC would be of clear interest to the Russian government and other governments. It might also be of interest to a politically motivated hacker who would want to discredit the DNC by publishing its sensitive information.
  • Researching intrusion activity and identifying operational security failures by the intrusion operators. A good example is where iSIGHT Partners was able to tie a claimed Islamic State (ISIL) hacking group to a Russian group it tracks as APT28.

Guccifer 2.0 did it!

One thing is for sure with Guccifer 2.0: he clearly has demonstrated access to internal DNC documents. Given that, I believe there are two possibilities:

  • One or both of the groups identified by CrowdStrike is tied to Guccifer 2.0 and this is a disinformation campaign against CrowdStrike and the DNC.
  • Guccifer 2.0 is a distinct threat actor who had access to the DNC's systems at some point. In no way does this mean CrowdStrike was wrong in linking the activity it saw in the DNC's environment. I've seen numerous occasions where organizations were compromised by multiple intrusion groups; evidence of one intrusion group being active in a victim's environment doesn't mean another can't be active in the same environment at the same time.

On Guccifer 2.0 being a possible disinformation operation, I recommend looking closely at Guccifer 2.0's writing. Based on the style, it looks like it was written by someone who doesn't speak English as a first language and uses mannerisms common to people based in Eastern Europe, or it was purposely written that way. I also recommend reading the Twitter timeline of pwnallthethings, which covers claimed operational security (OPSEC) failures by Guccifer 2.0 and the various files uploaded online. I'll add that initially I was surprised by how quickly a disinformation operation could have been executed after the Washington Post article.

Final Remarks

I’ll finish things off by repeating that in my opinion the emergence of Guccifer 2.0 does not at all conflict with CrowdStrike’s findings. Guccifer 2.0 may be a separate actor or may be tied to one or both of the intrusion groups CrowdStrike claims were active inside the DNC.

Cyber Threat Intelligence: Comparing the incident-centric and actor-centric approaches

By Mark Arena, CEO of Intel 471.

When it comes to cyber threat intelligence, the security industry mostly appears to take the view that indicators of compromise (IOCs) are the best approach to initiate/drive the intelligence process. If we take a step back and look at traditional intelligence concepts, we will find the following definition of intelligence:

“Simply defined, intelligence is information that has been analyzed and refined so that it is useful to policymakers in making decisions — specifically, decisions about potential threats to our national security.”

Consumers of indicators of compromise within an enterprise typically are on-the-ground network defenders, yet the definition above describes intelligence as being useful to policymakers or executives. Based on this definition, we will make the case that an actor-centric approach to cyber threat intelligence enables predictive analysis and hence is useful to executives within your organization. I'll preface this blog post by saying that while Intel 471 provides actor-centric cyber threat intelligence collection and information, we are not favoring one approach over the other, nor are we implying these are the only approaches to building a threat intelligence program. Rather, we believe any threat intelligence program should include both an incident-centric and an actor-centric approach.

Brian Krebs recently wrote an article illustrating that there is real value in adversary- or actor-centric intelligence collection when assessing cyber threats and the risk they pose. The article also highlighted the efficiency gains to be had through understanding threat actors and groups. Brian sums it up nicely with the following quote from ThreatConnect:

“Now if we consider for a moment the man hours and ad hoc reprioritization for many security teams globally who were queried or tasked to determine if their organization was at risk to Rombertik — had the organizations also had adversary intelligence of Ogundokun’s rudimentary technical and operational sophistication, they would have seen a clearer comparison of the functional capabilities of the Rombertik/Carbon Grabber contrasted against the operator’s (Ogundokun) intent, and could have more effectively determined the level of risk.”

An incident-centric approach

The incident-centric (or IOC-centric) approach typically begins with the detection of an event such as reconnaissance or compromise. In practice, we're operating in an incident-centric mode any time the intelligence process is initiated and/or driven by indicators of compromise (IOCs). For example, a response effort might identify the following, which kicks off the intelligence process:

  • Files (filenames, hashes, etc) that are dropped onto the system;
  • Registry keys added/changed;
  • Command and control (C2) server information (domains, URI paths, IP addresses, etc).

Using these IOCs we want to build out an understanding of the tactics, techniques and procedures (TTPs) and the higher-level campaign associated with this event. We are effectively trying to understand:

  • How did the malicious files end up on the compromised computer? An exploit kit hit while an innocent user browsed websites? A targeted spear-phish sent to the compromised user?
  • What exploit or exploit method was used to compromise the system?
  • What malware family was dropped onto the compromised system and what was its functionality?
  • What would the malware and associated access have allowed the threat actor to do on the system or network?
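Mechanically, the pivot from raw IOCs to a higher-level campaign is a matching exercise: compare the indicators from the incident against indicator sets already tied to known campaigns. A simplified sketch, with invented hashes and campaign names:

```python
# Indicator sets previously tied to known campaigns (invented data).
KNOWN_CAMPAIGNS = {
    "CampaignA": {"hash:aaa111", "domain:evil-cdn.example", "regkey:Run\\updater"},
    "CampaignB": {"hash:bbb222", "ip:198.51.100.7"},
}

def likely_campaigns(incident_iocs, min_overlap=2):
    """Rank known campaigns by how many IOCs they share with the
    incident; require a minimum overlap to avoid weak matches."""
    scores = {
        name: len(iocs & incident_iocs)
        for name, iocs in KNOWN_CAMPAIGNS.items()
    }
    return sorted(
        (name for name, score in scores.items() if score >= min_overlap),
        key=lambda name: -scores[name],
    )

# IOCs pulled from the response effort described above.
incident = {"hash:aaa111", "domain:evil-cdn.example", "ip:203.0.113.9"}
print(likely_campaigns(incident))  # ['CampaignA']
```

Real campaign attribution weighs indicator quality (a shared hash means more than a shared IP) rather than treating every overlap equally; the sketch ignores that for brevity.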

Pros of the incident-centric approach:

  • Direct relevance is established, as the intelligence effort flows from the response to an incident that already has impacted your organization;
  • Potentially allows identification of the threat actors and groups that are targeting your organization;
  • Provides IOCs that can be used to aid in the identification of compromise from the same threat actor, campaign and incidents across an organization.

Cons of the incident-centric approach:

  • Reactive approach initiated after your organization has already been impacted to some degree;
  • Focuses primarily on the attack surface and doesn't reflect the process a threat actor goes through to impact your organization. For example, it doesn't cover a threat actor seeking exploits to purchase or malware to purchase;
  • Difficult to be predictive.

An actor-centric approach

There is continuous debate in the information security community about the usefulness of attribution of threat actors and groups, but we believe that attribution to various levels (person, group, nation-state, etc.) provides valuable insights that support decision-making at all levels.

The actor-centric approach starts with threat actors or groups, the reverse of the incident-centric approach. Note that by focusing solely on threat actors who have mentioned your organization, you lose the ability to be proactive. Brand monitoring serves a valuable purpose, but we do not believe it is an effective approach in isolation for collecting proactively against threat actors: a number of threat actors may be attempting to impact your organization without ever mentioning it by name. Therefore, we believe it is best to focus on all actors, including enabling actors, that might impact your sector or vertical.

Starting with the threat actors themselves, we want to understand:

  • Who are they?
  • What are their associations with enabling actors and partners?
  • What are their motivations?
  • What are their technical skills and abilities?
  • What are their TTPs?

Once we understand this actor-centric information, we want to fuse it, through analysis and correlation, with other intelligence information. Ideally, we could then tie actors' TTPs and campaigns to specific IOCs as well.

Pros of the actor-centric approach:

  • Enables your organization to be proactive and predictive;
  • Provides context around an actor’s motivations and their abilities before an incident occurs;
  • Focused on adversary’s business process rather than just the elements that (could) impact an organization’s attack surface.

Cons of the actor-centric approach:

  • Relevance to your organization might not be readily apparent;
  • It is challenging to gain and maintain accesses where threat actors and groups operate;
  • Requires analytical effort to fuse with your other sources of information;
  • Requires regularly updated prioritization of threat actors to focus on;
  • May be missing IOCs to look for within your organization.


The incident-centric approach is a required aspect of any mature threat intelligence program, but it is not sufficient on its own. Relying on it alone would be the equivalent of the United States government monitoring Russia’s missile program solely by watching Russian soldiers firing missiles in and at Ukraine. You can be sure the US government also monitors the defense contractors, enablers and developers behind Russia’s missile program, down to the individual person and organization.

With regard to the actor-centric approach, one could argue whether it is actionable or not. On its own and in isolation it probably isn’t, but when fused, stored and correlated with your own organization’s data and other sources of information, it can be both predictive and actionable. Within the security industry, feeds of IOCs are frequently and incorrectly referred to as actionable cyber threat intelligence, when they are simply raw data and one more source of information.

If your organization simply takes external feeds of IOCs and automatically blocks them, you do not have an intelligence program. If you analyze (with a person) multiple sources of information in order to produce an output that is timely, relevant to your organization, and based on predetermined requirements, then you have an intelligence program.
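The distinction can be illustrated with a toy triage step. This is a hedged sketch, not a real product integration: the feed entries, requirement names and 30-day freshness window are all invented for illustration. The point is that raw indicators get filtered for timeliness and relevance against predetermined requirements before anyone acts on them, rather than being blocked blindly.

```python
from datetime import datetime, timedelta

def triage(feed, requirements, now, max_age_days=30):
    """Keep only indicators that are timely and map to a stated requirement.

    A human analyst would then review the survivors; blindly blocking the
    whole raw feed is the anti-pattern described above.
    """
    fresh_cutoff = now - timedelta(days=max_age_days)
    return [
        entry for entry in feed
        if entry["first_seen"] >= fresh_cutoff and entry["threat"] in requirements
    ]

# Hypothetical raw feed entries.
feed = [
    {"ioc": "203.0.113.7", "threat": "ransomware", "first_seen": datetime(2024, 5, 1)},
    {"ioc": "198.51.100.9", "threat": "adware", "first_seen": datetime(2023, 1, 1)},
]
relevant = triage(feed, requirements={"ransomware"}, now=datetime(2024, 5, 10))
print([entry["ioc"] for entry in relevant])  # only the timely, requirement-relevant IOC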

Cyber threat intelligence requirements: What are they, what are they for and how do they fit in the…

By Mark Arena, CEO of Intel 471.

There are many definitions of an intelligence requirement, but the one I find most accurate is:

“Any subject, general or specific, upon which there is a need for the collection of information, or the production of intelligence.”


With the above definition I want to highlight that an intelligence requirement can be one of two things: something where there is a need for the collection of information, or something where there is a need for the production of intelligence. Given this, the best approach is to break these two differing types of intelligence requirements into separate lists.

Let’s take an example:
The CISO/CSO (Chief Information Security Officer) of your organization wants to know of any vulnerabilities that are being exploited in the wild that your organization can’t defend against or detect.

As the above example shows, what the intelligence consumer needs is a finished product. Breaking this out and renaming it a production requirement makes clear that it is what is required for the production of intelligence, therefore:

  • Production requirements (what the intelligence consumer needs)

-> Intelligence requirements (what questions do we need our intelligence collection to answer to meet our production requirements)

Now we have two separate tables: what our intelligence consumers need versus what we need to collect. Next, we need to identify what observables or data inputs we would need to answer our intelligence requirements, which we’ll call our collection requirements. Let’s take another example:

In the above example what we have is:

  • Intelligence requirements (what questions do we need our intelligence collection to answer to meet our production requirements)

-> Collection requirements (what observables/data inputs do we need to answer our intelligence requirements)

By breaking out your organization’s collection requirements this way, you can assign responsibility for each collection requirement to a team, capability or external provider, and regularly assess how each contributes to meeting your organization’s intelligence requirements. When it comes to external providers of intelligence, a number of them sit between collection and dissemination in the intelligence cycle. Intel 471 sits primarily in the collection part of the intelligence cycle and works with organizations to collect more effectively against their externally focused collection requirements. Collection requirements can be either internally focused or externally focused: internally focused collection requirements require visibility into the subject organization’s attack surface, while externally focused collection requirements are adversary/cyber threat actor focused.
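One illustrative way to keep the three tiers linked and assignable is a simple traceable mapping. Everything below (requirement text, IDs, owners, focus labels) is a hypothetical sketch, not a prescribed format; the useful property is that any collection requirement can be traced back up to the consumer need it ultimately serves.

```python
# Production requirements: what the intelligence consumer needs.
production_requirements = {
    "PR-1": "CISO: vulnerabilities exploited in the wild that we "
            "cannot defend against or detect",
}

# Intelligence requirements: questions collection must answer to meet a PR.
intelligence_requirements = {
    "IR-1": {"supports": "PR-1",
             "question": "Which vulnerabilities are being actively exploited?"},
}

# Collection requirements: observables/data inputs, each assigned to a
# team, capability or external provider, and internally or externally focused.
collection_requirements = {
    "CR-1": {"answers": "IR-1",
             "observable": "exploit chatter on underground forums",
             "assigned_to": "external provider",
             "focus": "external"},   # adversary-focused
    "CR-2": {"answers": "IR-1",
             "observable": "patch status of internet-facing assets",
             "assigned_to": "vulnerability management team",
             "focus": "internal"},   # attack-surface-focused
}

# Trace a collection requirement back up to the consumer need it serves.
ir = intelligence_requirements[collection_requirements["CR-1"]["answers"]]
print(production_requirements[ir["supports"]])
```

Because each tier points at the one above it, assessing a team or provider against "its" collection requirements automatically shows how it contributes to the stakeholder's original need.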

The holy grail of cyber threat intelligence prioritization?

The holy grail of cyber threat intelligence prioritization is a single long-term prioritized list of production requirements that is updated twice a year. The production requirements should be broad enough to encompass short-term requirements that immediately head to the top of the priority list but are very narrowly focused and last at most 30 days, for example a requirement to assist with incident response to a security breach. In the absence of a single prioritized list, breaking intelligence priorities into high, medium and low is acceptable and common practice, with most resources going into satisfying high-priority items.
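The single-list model with expiring short-term items can be sketched as follows. The requirement names, priorities and dates are invented for illustration; the mechanics shown are simply that short-term items jump to the top of the list and fall off once their (at most 30-day) window passes.

```python
from datetime import date, timedelta

def active_requirements(reqs, today):
    """Drop expired short-term items, then sort the survivors.

    Sort key: items with an expiry (short-term) come first, then by priority.
    """
    live = [r for r in reqs if r["expires"] is None or r["expires"] >= today]
    return sorted(live, key=lambda r: (r["expires"] is None, r["priority"]))

today = date(2024, 6, 1)
reqs = [
    {"name": "long-term: ransomware targeting our sector",
     "priority": 1, "expires": None},
    {"name": "short-term: support IR for current breach",
     "priority": 1, "expires": today + timedelta(days=30)},  # 30-day maximum
    {"name": "short-term: expired exercise support",
     "priority": 2, "expires": date(2024, 5, 1)},            # already lapsed
]
for r in active_requirements(reqs, today):
    print(r["name"])
```

The expired short-term item is dropped automatically, the live short-term item heads the list, and the long-term requirements remain underneath in priority order.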

Intelligence requirements: Attack surface or adversary focused?

A common question cyber threat intelligence professionals encounter is whether their organization’s intelligence requirements should be attack surface focused or adversary focused. A single intelligence requirement can be either, but not both: in the context described above, each requirement needs to relate to a specific observable that is eventually tasked to a team, capability or external provider, so it must be one or the other.

Actionable intelligence — Is it a capability problem or does your intelligence provider suck?

By Mark Arena, CEO of Intel 471.

Significant numbers of security and threat intelligence vendors spruik their intelligence or data as being the most actionable, but is it? In this post I hope to make the argument that whether intelligence is actionable or not is really up to the consumer of said intelligence, not the producer.

Let’s start with an example:

Intelligence is passed to XYZ Corporation, a United States-based bank, warning that tomorrow it will be targeted in an attack by a well-known cyber espionage group that has previously targeted executives of other banks. The attacker group, known as Team Panda and APT Group 10, is known to use the PlugX trojan, delivered via spear-phished emails with .xls (Microsoft Excel) attachments. The attack will use a specific domain as its malware command and control server.

In the above example, there are a number of elements that could make this intelligence actionable for most organizations in the banking sector today. Let’s list some of those elements:

  • Time of attack (tomorrow)
  • Likely targets (executives)
  • Trojan to be used (PlugX)
  • Method of dropping the trojan (spear-phished .xls file)
  • Command and control server used

Now let’s rewind the clock ten years to 2006 and change the target organization. Using the same example, the target is now ABC Corporation, an Italian e-commerce company, and the warning was sent in 2006 for an attack that was to occur the following day. At that time, ABC Corporation had no capability to block IP addresses, let alone the ability to block suspicious .xls attachments in emails. What we effectively have here is great intelligence but little to no capability to act upon it.

If at this point we accept that intelligence is actionable based on the organization consuming it, it brings us to the next issue: how does one measure the effectiveness of vendors that provide intelligence or intelligence information/data? If you recall a previous blog post I wrote on writing intelligence requirements for your cyber threat intelligence program, you will see that the success of your intelligence program is directly linked to how well it supports the priorities of your business and the risks against it.

Based on the above points, evaluation of threat intelligence providers should be based on your intelligence requirements and how each provider measures up against them. Some providers may simply be delivering intelligence or intelligence information that you are not yet able to action. Others may be geared towards supporting the requirements of verticals different from your own.

Whether intelligence is actionable or not is really a reflection of an organization’s capability, not a quantitative metric for an intelligence producer.


Cyber threat intelligence: Why should I be worried about threats that aren’t specifically…

By Mark Arena, CEO of Intel 471.

When it comes to cyber threat intelligence, the big question when evaluating intelligence or intelligence collection, whether external from a vendor or internally generated, is whether it is relevant to me and my organization. If you’ve read my previous posts, you’ll have seen that I measure relevance by whether it satisfies established intelligence requirements; simply put, actionability is a reflection of internal capability. This post, however, is about explaining the benefits of focusing on threat actors that could impact your organization, not just the threat actors that are already impacting it.

One of the common issues I see in the cyber threat intelligence industry is a myopic view whereby cyber threats are not seen as relevant to an organization unless that organization is being impacted right now. On that point, I’d like to step back to the overall objective of an intelligence program, which is to reduce risk, where risk is the probability of an event occurring multiplied by the impact of that event.


We are really trying to reduce one of two elements of a risk being realized: the probability of the event occurring or its subsequent impact. There are only two ways we can reduce risk:

  • Block, stop or reduce the probability of an event occurring
  • Reduce the impact of an event if it occurs or has occurred
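As a worked example of the risk equation above (the probabilities and dollar figures are invented for illustration), reducing either factor reduces the overall risk:

```python
# risk = probability of an event occurring x impact of that event
def risk(probability, impact):
    return probability * impact

baseline = risk(0.5, 1_000_000)          # 50% chance of a $1M-impact event
after_blocking = risk(0.25, 1_000_000)   # proactive controls halve the probability
after_mitigation = risk(0.5, 400_000)    # response planning limits the impact

print(baseline, after_blocking, after_mitigation)
```

Either lever works: cutting the probability from 0.5 to 0.25 halves the expected loss, and shrinking the impact from $1M to $400k reduces it even further, which is why both bullet points above count as risk reduction.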

If we solely focus on cyber threats where our organization has already been impacted, we have already missed the opportunity to stop the event from occurring. Examples of this include:

  • Looking in your network for known indicators of compromise
  • Monitoring Pastebin for data dumped from your organization
  • Identifying and recovering compromised customer account credentials

The above activities can be valuable in reducing the impact of events, but done in isolation they will not provide your organization with the full benefits of an intelligence program.

At this point you may be wondering how to tackle the probability part of the risk equation. We can do that in a couple of ways, but mainly I like to keep in mind these two assumptions:

  • The threat actors that are impacting me are also impacting other organizations like me.
  • The threat actors that are impacting other organizations like me will likely impact me at some point.

At a basic level, this means proactively examining threat activity against other organizations in your vertical or sector. If you can look into this activity and obtain enough detail, you will be able to proactively block or detect it through policy or security control changes.

I sometimes describe intelligence as a field similar to profiling in the criminal world. A criminal profiler looks at the available information and evidence and deduces the likely profile of the perpetrator of a crime. It isn’t an exact science, but on the balance of numbers, a criminal profiler should pay off more often than not. Our intelligence program is similar: it isn’t an exact science, but it doesn’t take a big leap to see that a threat actor affecting an organization in your sector is likely to turn their sights on your organization at some point. This is how we make our intelligence products predictive, and we can’t do that if we only act once our organization has already been impacted.