Thursday, March 27, 2014

The value of IP and Domain information


Start with the end in mind: make the data you have consumable (and thus actionable), make sense of it, use it to drive risk decisions, share it with trusted partners, and shift from routine monitoring to internal threat intelligence. Begin with what you can easily work with, like IP addresses and domains.

A debate is raging within enclaves of the internet about the value and accuracy of the APT1 report.  The criticism centers on the tacit link between the victims and the numerous sources traced to a single region of the world: Shanghai, China.  The APT1 report shared its indicators of compromise, allowing me and others to compare indicators, signatures, and so on, and evaluate the conclusions.

I set out to evaluate and compare file names, hashes, IP addresses, domains, and all the other atomic indicators from the report against data I had collected.  The overlap confirmed quite a bit of what I already knew. How many organizations could compare notes at that level?  Not enough.

Any time a new report is issued with unmasked indicators, each of us should evaluate the findings.  Sharing and tracking starts with internally sourced threat intelligence, and I would argue that every organization needs the capability, starting with a simple tracking system.  Any atomic indicator, such as a hash, IP address, domain, or filename, has a half-life of sorts.  As its effectiveness decreases, it is less likely that you will continue to see it in any specific remote edge of the internet.  If you mined the APT1 report for indicators, most are useless by now.

The net effect of the APT1 report was higher salaries for those with TI on their resumes, and more business for Mandiant.  It made a bunch of security vendors shift positions and consider how to capitalize on threats.  How does one capitalize on any large list of IPs and domains like the one in the APT1 report?  If the plan is to toss it all into a SIEM, you might as well stop reading here.

Threats are associated with IP addresses and domains, but focusing on IPs and domains alone is pointless: the threat will move, leaving you with a stale list and time wasted on false positives.  At what point does an IP address stop being a threat?  Domains, IP addresses, and AS numbers are only part of threat intelligence.  As a simple example, any point of concentration of IPs or domains tells you that nearby IPs and domains are worth examining during the fleeting time the space is in use.

Why internal threat intelligence?
Internal threat intelligence was first leveraged by government and the defense industrial base, at least the smarter ones. Then came the telecoms and large internet service providers, and now the energy and financial sectors are making a play for top talent to consume internal TI. How far does it go?   Can we ‘rent’ the skills from TI brokers or commission specialized reports?

Anton Chuvakin explored the difficulties and objectives of internally sourced threat intelligence in a recent blog post, and it is worth the read: http://blogs.gartner.com/anton-chuvakin/2014/03/20/on-internally-sourced-threat-intelligence  My short take is that internal threat intelligence evaluates the same information, sourced from incidents, as anyone doing monitoring. It takes a depth-and-breadth analytical approach with the available information, and it sits at the split between detection and response, providing threat suppression.

Internal threat intelligence is the hedge against the immediate threat landscape and what is over the horizon.  In my personal view, threat intelligence is not grounded in large sets of IPs and domains with poor reputations, but in context, history, and the narrative of objectives and actors.  One cannot reach that stratum without a solid foundation to collect and analyze the local information, compare it against rational and trusted resources, postulate and test hypotheses, and eventually point fingers.  The APT1 report was a confirmation of findings, not a revelation.

I think internal threat intelligence will become a required part of monitoring.  All analysts should seek to track IP addresses and domains locally. It is not enough to consume external indicators alone, nor to purchase a set of sensors and plug in data looking for matches.  Monitoring won’t go away, but it will become more automated, making it easier to match events, provide entity-specific information, and use large data sources to evaluate relationships, measure impact, and finally drive the incident response.

Linking events, actions, and incidents through to actors will continue to be in demand, with the cost reduced by emerging platforms that synthesize internal information and rational findings from the outside world.  Fusion is the by-product of the most useful and reliable sources, measured to bring value in understanding threats.  TI is predictive in nature and about the only way to divest from a Maginot line (popular commentary from the RSAC ‘C’-level speaking collective, yet useful in my thinking as well).

Passive DNS


I was introduced to passive DNS as a useful analytical tool a few years ago, when someone I once worked with wrote his own variant using Python and MySQL.  What I learned is that the value of tracking just the right amount of information exceeds the value of tracking all of it, and that using only public DNS information is futile.  The instant utility of passive DNS for each enterprise seemed evident to me; however, this particular implementation suffered performance issues that became unbearable within a few days. Ultimately, it was scrapped after a month or so.

Before I go further: exact matching of IPs and domains is nearly futile and won’t compare to the value of computed and behavioral indicators.  Understanding why a stream between hosts contains 'system32' with a file write operation of a ‘rar’ file or batch file has a better chance of detecting a pivot, for example.  If you intend to track, choose to do it well.  The ability to shop and explore a known IP address or domain and get a sense of when the query was first made, and of what neighboring domains and addresses were doing, is valuable.


Passive DNS is not new, but I don’t think its general utility to analysts is well understood.  In my view, SecurityOnion should have the capability.  Tracking means a composite of domains and IP addresses within your own environment, instantly searchable; at higher levels, a potential threat detection system with room to innovate.  Florian Weimer introduced passive DNS many years ago (paper: http://www.enyo.de/fw/software/dnslogger/first2005-paper.pdf) and it is well worth the read.  Plenty of commercial services exist for analyzing your DNS records for potential threats, like OpenDNS’s Umbrella and Damballa.  With the services, you get the value of analytics at scale, and each has invested in exploring and improving detection, with the most virulent malicious actions noticed and suppressed with speed.

However, targeted attacks may not have enough concurrency to get noticed; that is to say, a single domain that does not fall into a set of characteristics learned by an SVM, or is not newly registered, falls below the threshold used in large-scale detection.  This is where your own passive DNS tracking comes in handy, and it can be complementary to any network security monitoring service, network monitoring, and especially locally sourced threat intelligence.

Passive DNS version 2


Like my former coworker, I wrote scripts to collect and analyze DNS information, but made some design decisions that require a bit of explaining up front.  Before getting into the specifics, it is important to know that passive DNS, or pDNS, is not logging.  That is to say, each record is not appended to a file.  However, each DNS query and response is tracked.  If a domain and IP pair has never been seen before, a new record is created; if it is already in the database, the count is incremented and only the date field is overwritten, reducing the amount of data stored.
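The create-or-increment logic might look like the sketch below, with a plain dict standing in for the Redis store so it runs anywhere; the field names are illustrative, not the actual pDNS2 schema.

```python
import time

# In-memory stand-in for the Redis store; production code would use a
# redis.Redis() client with HINCRBY/HSET over the same field layout.
pdns = {}

def observe(domain, ip, ttl, now=None):
    """Track one DNS answer: create a record on first sight, otherwise
    bump the hit count and refresh only the last-seen date."""
    now = now or time.time()
    key = f"{domain}|{ip}"
    rec = pdns.get(key)
    if rec is None:
        pdns[key] = {"first_seen": now, "last_seen": now,
                     "ttl": ttl, "count": 1}
    else:
        rec["count"] += 1        # increment instead of appending a log line
        rec["last_seen"] = now   # only the date field is overwritten

observe("example.com", "93.184.216.34", 86400)
observe("example.com", "93.184.216.34", 86400)
```

The second call leaves a single record with a count of two, which is the whole point: repeated observations cost a counter bump, not a new row.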

MySQL was replaced with Redis for speed.  In my home-spun version, the same fields parsed from any DNS query and response are present, including keys for threatening IP addresses and domains.  Time and again, analysts visit the web, dig and look up names, and use reputation systems to enumerate all the bad in the world.  That is useful and important for validating against the outside world and getting a sense of risk, but a step closer to home is far more useful first.  The worst part about this sort of query is the lack of accounting: an analyst will most likely make the same query time and again, or several analysts may make the same query, and only confirmed threats are tracked.  Lost is the idea of possible threats and of sites that are trusted.

In my own variation of pDNS (I call it pDNS2), the basic properties allow analysts to find the most useful information quickly:
·      Seek all domains that end with, start with, or contain a particular word
·      Seek a specific TTL or very low TTLs
·      List all known threatening IPs or domains
·      Contrast threat information against other IPs and domains
·      Export the threatening IPs and domains
·      Return a count of specific subdomains for a given domain
·      Count the top ‘hits' for domains in order
·      Query a range of IP addresses
·      Find locally resolved IP addresses for parked domains, like 127.0.0.1
·      Locate all the domains that point to a single IP
·      Locate all the IPs associated with any domain
·      Tag a domain or IP address with a notion of trust, threat, or interest
·      Search by date
·      Count how many new domains show up each day
·      Return the Euclidean distance from a queried IP to a tagged threat
·      Find the most unanswered queries by count
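Many of these queries reduce to simple scans over the key space. A minimal sketch of a few of them, again with a dict standing in for Redis and invented field names:

```python
# Toy records in the same "domain|ip" key layout; fields are illustrative.
pdns = {
    "login-paypa1.com|203.0.113.9": {"ttl": 30,    "count": 4,   "tag": "threat"},
    "example.com|93.184.216.34":    {"ttl": 86400, "count": 900, "tag": "trust"},
    "cdn.example.net|198.51.100.7": {"ttl": 60,    "count": 55,  "tag": None},
}

def domains_ending_with(suffix):
    """Seek all domains that end with a particular word."""
    return [k.split("|")[0] for k in pdns if k.split("|")[0].endswith(suffix)]

def low_ttl(max_ttl=60):
    """Seek very low TTLs, a common fast-flux tell."""
    return [k for k, r in pdns.items() if r["ttl"] <= max_ttl]

def tagged(tag):
    """List everything carrying a given trust/threat/interest tag."""
    return [k for k, r in pdns.items() if r["tag"] == tag]
```

In the real tool these become Redis key scans and set lookups, but the shape of each query is the same.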

For the most part the code is useful as a means to explore and track, initiating a home-grown threat intelligence effort. You can find the code on GitHub here: https://github.com/bez0r/pDNS2  (specifically, the query tool ‘pdns2_query_api.py’ was released in support of this post)


Advanced pDNS2


With the basics of passive DNS covered, a separate analytical script was developed to explore specific information and calculate a concept of risk.  New domains are checked against a corpus of known good and bad domains using simple Bayesian machine learning, and another check does a random forest walk (a concept from a 2013 talk by EndGame Systems).  In the Bayesian example below, a check of unknowns was pulled from old ‘conficker’ domains to get a sense of how well it works (source: http://blogs.technet.com/b/msrc/archive/2009/02/12/conficker-domain-information.aspx)
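A toy version of the Bayesian check might score a domain's character bigrams against good and bad training corpora. The training lists, smoothing, and function names here are illustrative, not the actual pDNS2 model:

```python
import math
from collections import Counter

def bigrams(domain):
    """Character bigrams of the first label, e.g. 'abc.com' -> ['ab', 'bc']."""
    s = domain.split(".")[0]
    return [s[i:i + 2] for i in range(len(s) - 1)]

def train(domains):
    counts = Counter(bg for d in domains for bg in bigrams(d))
    return counts, sum(counts.values())

def loglik(domain, model, vocab):
    counts, total = model
    # Laplace-smoothed log-likelihood of the domain's bigrams
    return sum(math.log((counts[bg] + 1) / (total + len(vocab)))
               for bg in bigrams(domain))

good = ["google.com", "windows.com", "amazon.com", "netflix.com"]
bad = ["xzqvkw.biz", "qwkxzj.info", "zxqjwv.org"]  # conficker-style gibberish
vocab = {bg for d in good + bad for bg in bigrams(d)}
good_model, bad_model = train(good), train(bad)

def classify(domain):
    """Pick whichever corpus makes the domain's bigrams more likely."""
    return ("bad" if loglik(domain, bad_model, vocab) >
            loglik(domain, good_model, vocab) else "good")
```

With real corpora of thousands of domains the same structure holds; only the training lists grow.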

Queries can assist an analyst in finding the domains a site is most likely trying to squat on or mimic.  Any local domain suspected as a potential threat can be submitted to any of the top reputation sites, with the returned results used to help score the potential threat.
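One way to spot likely squats is a simple string-similarity check against a watch list, sketched here with the standard library's difflib; the watch list and threshold are invented for illustration:

```python
import difflib

# Hypothetical watch list of brands worth protecting.
WATCHED = ["paypal.com", "google.com", "microsoft.com"]

def likely_squats(domain, threshold=0.8):
    """Return (watched domain, similarity) pairs this domain resembles."""
    name = domain.split(".")[0]
    hits = []
    for w in WATCHED:
        ratio = difflib.SequenceMatcher(None, name, w.split(".")[0]).ratio()
        if ratio >= threshold and domain != w:
            hits.append((w, round(ratio, 2)))
    return hits
```

For example, `likely_squats("paypa1.com")` flags paypal.com, while an unrelated domain returns an empty list.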


Additional static sources were included to support domain queries from Project Sonar (https://community.rapid7.com/community/infosec/sonar/blog/2013/09/26/welcome-to-project-sonar), so a query returns information about the reverse IPv4, SSL certs, and the usual data around regions and registration.  Scoring threats by known properties, such as how recently a domain was registered, a low TTL, and resource record types like TXT that can be used as a command-and-control channel, lets analysts start with the most probable threats.  One little script goes out and scrapes sites for IPs and domains, and another handles import/export for STIX files.
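Such a scoring pass might look like the following sketch; the weights and field names are assumptions for illustration, not the actual pDNS2 scoring:

```python
from datetime import datetime, timedelta

def threat_score(rec, now=None):
    """Sum simple risk weights; higher scores get looked at first.
    Field names and weights are illustrative, not the pDNS2 schema."""
    now = now or datetime.now()
    score = 0
    if now - rec["registered"] < timedelta(days=30):
        score += 3   # newly registered domain
    if rec["ttl"] <= 60:
        score += 2   # very low TTL, common in fast-flux setups
    if "TXT" in rec["rrtypes"]:
        score += 1   # TXT records can carry a C2 channel
    return score

# A three-day-old domain with a 30-second TTL and a TXT record
# trips all three checks.
rec = {"registered": datetime.now() - timedelta(days=3),
       "ttl": 30, "rrtypes": {"A", "TXT"}}
```

Sorting candidates by this score is what puts the most probable threats at the top of an analyst's queue.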

Over time, the pDNS2 scripts were tied to each sensor so a simple right click would provide context. The pDNS2 tools started out as a way to quickly make sense of the IP and domain space and later became a support system for local threat intelligence and a driver of analytics. After all this, I converted the basic scripts into glorious functions and tossed the entire tool set into an IPython Notebook, where analysts can save and share notebooks.

Conclusions


You won’t find any ‘ground truth’ data or research above, but you should be thinking about elevating monitoring and using the information already at your fingertips to capitalize on ‘internal threat intelligence’.  I did not attend RSAC this year, but the undercurrent of the talks, a wide range of hub-and-spoke information-sharing collectives offered as a service, is intriguing.  You can share in the trading of useful information, or you can buy a service that will do it on your behalf.  Most organizations want to consume indicators, yet lack the ability to organize the right information, especially when information not directly involved in incidents is too sensitive to share.  This is where Mandiant comes in for the win, sharing IOCs while offering a veil of protection for some victims.

I contend that pDNS2 is trivial to initiate and work into a workflow, but it does not stand alone. Other interesting tools like ‘malcom’ (https://github.com/jipegit/malcom) overlap with it, do a far better job at presentation, and incorporate several feeds against internal ‘live’ sensors.

This post argues that internal threat intelligence is worth the effort, that tracking data such as IPs and domains is not futile, and that now is the time for analysts and monitors to effectively become internal TI.

Tuesday, March 11, 2014

Detecting Malicious Beacons



Overview


I recently read the book ‘Network Security through Data Analysis: Building Situational Awareness’ by Michael Collins and found it a useful and great way to carve and explore threats, one of my main interests.   The book provides a good overview of ‘beaconing’ and offers solutions to detect and alarm on it.  The book has both breadth and depth, but I thought addressing beaconing in detail was worth exploring, especially for finding those persistent threats.


Beaconing, in the broad sense, is an effort by an entity to contact another entity repeatedly, either to provide status or to request the establishment of a communications channel.  The Mars Rover uses the Deep Space Network satellite communications system to beacon and communicate.  Cell phones, when turned on, beacon to nearby cell towers, and your WiFi-enabled devices send beacon packets that reveal a lot of information.  Beaconing is also how malware initiates communications. The issue is that the average network is awash in non-malicious beacons, each of which has to be ruled out in some way in order to detect potentially threatening beacons.

Network beaconing is unidirectional, repeated over time, can communicate from one host to another or to many other hosts, and can use any protocol that conveys a message. A malicious beacon stems from malicious code, and its behavior can be consistent, such as every five minutes, or transient or conditional, making it hard to find.  Luckily, most attackers don’t want to get too creative: they depend on the beacon to phone home, and they know detecting beacons is hard.

Detecting beacons is useful but not ideal.  It would be far better to detect the malware prior to execution, and better still to have a solid prevention strategy. As of yet, malware remains elusive to most forms of detection. Enough information is available to show that the most insidious, targeted threats persist for years, despite so much effort placed on malware-download solutions.  The fact is, malware still infects hosts, and infected hosts will beacon to establish a connection.  Therefore, the discovery of malicious beacons is critical, and unless you have a signature, the probability of detection remains low.



What is a beacon

A beacon, for the most part, is the ‘sleep’ or ‘wait’ state the malware finds itself in when executed. Sometimes it is a programmable variable; other times it is static.  In some cases it may have variance or a range, sleeping for 900 seconds and then changing to 3600 seconds.  The more consistent and limited the sleeping done by any malware, the better the odds of detection.  Malware may have different ‘sleep’ states for various processes, such as one for ‘phone home’ and one for ‘self-update’, and might even use a different external host for each process.

When you look at a single beacon in a graph, it appears self-evident; it is sometimes called a ‘heartbeat’ for obvious reasons, and it demonstrates a consistent interval, as shown below. This beacon fired every 1800 seconds (30 minutes) and used TCP/IP with port 443.  The consistent factors were the destination port, protocol, and source and destination IP addresses.

Figure 1 Wireshark IO graph of a malicious TCP beacon





In the simple example above, each peak represents the beacon, in this case a single TCP packet with the SYN flag, every 30 minutes.  Packets are adjusted to align in a set of bins, or buckets of sorts, centered around time, input size, count, or some combination of the three.  Visualizing the data based on any of these factors is useful, but for now we will stick to time.  Viewing multiple beacons in a single graphic becomes confusing quickly and requires bins to sort the information.  Depending on resolution, the multiple beacons below can quickly look like a puzzle.  Even if broken into host pairs, a large set of images takes time to review.
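Underneath any such plot, a beacon candidate reduces to the gaps between its connection times: a mean sleep and a jitter. A minimal sketch of that reduction:

```python
from statistics import mean, pstdev

def beacon_profile(timestamps):
    """Mean sleep and jitter (in seconds) from a sorted list of
    connection timestamps for one host pair."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return mean(gaps), pstdev(gaps)

# A 30-minute heartbeat with a couple of seconds of jitter
ts = [0, 1800, 3601, 5399, 7200]
sleep, jitter = beacon_profile(ts)
```

A tight jitter around a stable mean is exactly the heartbeat signature shown in the graph; benign traffic tends to smear across many different gaps.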

Figure 2 Multiple beacons, single plot FAIL
 

It is difficult, but not impossible, to identify malicious beacons within a large network that has dozens of protocols that beacon, such as NTP, or services like Twitter and anti-virus updates.  First, you have to track as much of the network traffic as possible and use the most common properties to eliminate heavily beaconed sites.

In evaluating beacon traffic, look for the timing and variance, and start with a reasonable tolerance for both.  In the ‘spectral’ plot below, each blue circle is centered on the mean time in seconds, or sleep time; each circle represents a beacon. The variance represents a simplified allowance for deviation within the timing itself, and the count of instances increases the size of each blue circle.  In this case, the bigger blue circles need the most attention.

Figure 3 Top beacons in a single plot success (without labels)


Most of the beacons shown above fall below 60 seconds, and the blue dot low to the right is at 7220 seconds, almost exactly 2 hours.  The test data was limited to 100,000 TCP SYN connections from a network containing 1,500 hosts over a period of three days.  The traffic was known to contain actual malware, each sample attempting external connections.  The lower the sleep, the higher the tolerance for variance.
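The points behind such a plot can be computed by grouping flows by host pair and port, then profiling each group's gaps; the tuple layout here is an assumption, not the released tool's format:

```python
from collections import defaultdict
from statistics import mean, pstdev

def spectral_points(flows):
    """flows: (src, dst, dport, ts) tuples.  Returns one point per
    host pair: (mean sleep, deviation, hit count) for the bubble plot."""
    by_pair = defaultdict(list)
    for src, dst, dport, ts in flows:
        by_pair[(src, dst, dport)].append(ts)
    points = {}
    for pair, times in by_pair.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if gaps:
            points[pair] = (mean(gaps), pstdev(gaps), len(times))
    return points
```

Each returned triple maps directly onto the plot: x is the mean sleep, y the deviation, and the count sets the circle size.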

Taking a closer look at a region, overlapping beacons that share the same characteristics can be seen.  The labels have randomized IP addresses but give a good indication that multiple beacons can be reviewed in a single graph, and that malicious beacons with the same characteristics are grouped by time.

Figure 4 Malicious beacons

The above shows beacons at seven seconds and at eight seconds; malware attempted to reach out using rotating ports and different addresses.  Two different internal hosts were involved.   While the display shows a randomized IP destination, the domain name could be displayed instead, depending on preference. It is possible that beacons exhibiting the same sleep, variance, and destination port are the same malware infecting different internal hosts.

Features of beacons

In order to produce a reasonable list of beacons, a number of filters have to be applied to the dataset.  Decide the minimum and maximum number of connections that can qualify; in this case the minimum was set to 12 and the maximum to 5000.  The next filter is based on the time between the first packet and the last, ignoring anything less than 15 minutes.
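Those two filters can be sketched as a single predicate, with the thresholds taken from the text (15 minutes is 900 seconds):

```python
def qualifies(times, min_conn=12, max_conn=5000, min_span=900):
    """Keep a host pair only if its sorted connection timestamps fall
    inside the count thresholds and span at least min_span seconds."""
    return (min_conn <= len(times) <= max_conn
            and times[-1] - times[0] >= min_span)
```

Anything failing the predicate is dropped before the more expensive timing analysis runs.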

One other variable tracked is the number of internal hosts that visit any single external host.  Malware tends to affect only a few hosts, probably five or fewer, while hundreds visit a site like Twitter.  Removing the most popular sites increases performance and keeps analysts from chasing the obvious.
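A sketch of that popularity filter, keeping only external hosts visited by a handful of internal hosts (the cutoff of five comes from the text):

```python
from collections import defaultdict

def rare_destinations(flows, max_internal=5):
    """flows: (internal_src, external_dst) pairs.  Drop external hosts
    visited by many internal hosts; malware usually touches a handful,
    popular sites touch hundreds."""
    visitors = defaultdict(set)
    for src, dst in flows:
        visitors[dst].add(src)
    return {dst for dst, srcs in visitors.items() if len(srcs) <= max_internal}
```

Popular destinations fall out of the candidate set entirely, which is where most of the performance gain comes from.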

The remaining filters control the maximum variance and minimum sleep time, and remove select destination ports, such as port 25 for email.

If the goal is to support continuous beacon detection, the next logical step is to remove anything trusted or found to be benign in some way and avoid storing unnecessary data.  Analysts who inspect traffic don’t want to see it again, and a 'white list' can be appended with inspected beacons.
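The 'white list' bookkeeping can be as simple as a set that remembers what an analyst has already seen; a minimal sketch:

```python
inspected = set()  # the 'white list' of already-reviewed beacon candidates

def triage(candidates):
    """Show an analyst only beacons not yet ruled benign, then remember
    everything shown so it is never reviewed twice."""
    fresh = [c for c in candidates if c not in inspected]
    inspected.update(fresh)
    return fresh
```

On a second run, only previously unseen candidates come back, so the analyst's queue shrinks instead of repeating.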

Beacon analytical strategy

Environmental conditions drive the analytical strategy. Consider what is allowed to traverse the network and how much control users have.  Environments vary from heavy oversight and strict policies to networks that resemble an unsupervised daycare.  The gain in detection on one frequently targeted network was considerable: an average of six infected hosts were found through beacon detection per week. Yet another network had four positive detections in six months.

Depending on the network, detecting beacons is worth a try, and with success it should become standard practice for analysts.

Collection

Collection is handled by a script that parses a network source or flow files. Detection starts with collecting specific network properties from flow and storing them in a database.  At a minimum, three days of traffic is probably enough to evaluate for beacons, and a week is ideal.  After a week, it is best to wipe the database and start again.

The more you collect, the more time it takes to evaluate. Collection should be strategic, targeting the types of traffic known to be malicious by applying filters to the flow capture in advance.  In addition, a virtual ‘cleanlist’ can be stored in a key and checked during collection.

Start with TCP packets with the SYN flag set, then try other protocols or specific ports to get a sense of what beacons.  UDP is difficult, as most of its traffic is beacon-like in some way, from time checks using NTP to database ‘keep alive’ messages.


Analysis

Analysis is driven by a simple script that parses each flow, evaluates it for the characteristics previously described, and presents the findings in a tabulated text view and in the graph previously shown.
For any beacon, one has a sense of when it started, how long it ran, and how consistent it was, and thus a starting point for analysis.

Analysts use the list or graph of suspect beacon traffic by evaluating the risk factors of both the internal and external hosts.  The history of the associated full packet capture between the host pairs remains a great way to identify threats.  A more advanced approach is to inspect the host itself, specifically recent log events and involved users. The most important analysis is a memory sample from the involved host, examined for the presence of malware.  In some enterprises, it is worthwhile to sinkhole or block any suspected traffic if the means is available.


If you fear more dormant beacons, you can consider a simple means to parse and store all the low-interval traffic as part of the ‘arctic vortex’, a simple and untested capability for the most paranoid and targeted among us.
Consider a beacon that sleeps for a month before connecting. It seems somewhat mythical and would require a very patient attacker at the helm, with long-term objectives or a backup to other connections.  Traffic that infrequent would normally be filtered out, and most threats are immediate; but if you have significant coverage, or are bored, you can store the right data and hunt for the arctic vortex of malware, lying in cold storage waiting for activation.


Beacon Bits

I wrote and released the basic beacon detection scripts a few years ago but made some improvements last summer, including graphing the data.  The next post will cover the tools in detail and offer some test data to get started.
Link: https://github.com/bez0r/BeaconBits

I fully expect to move the variables into a configuration file, add more guidance, and release a new version soon enough.

 

Conclusion

The book by Michael Collins, ‘Network Security through Data Analysis: Building Situational Awareness’, started this blog post, and I highly recommend it to anyone exploring network security.  The book is a great place to get a sense of how to use the concepts presented in this article and to evaluate other complementary analytics.

Edited on 14Mar2014 to correct spelling errors.