Tuesday, October 13, 2015

Defending against CEO Fraud


CEO fraud has accelerated, emerging as a key revenue generation technique for attackers, and it may be more profitable than ransomware.  Going after $40K to $500K at a time beats the roughly $300, plus administrative headache, per person in a typical PII theft attack.  As we tell our clients, the Internet Crime Complaint Center (IC3) estimates that well over 2,000 companies have been victimized by this type of attack and that the attackers have absconded with more than $200 million.  Brian Krebs has been tracking the attacks in greater detail: http://krebsonsecurity.com/tag/ceo-fraud/

In about an hour you can inoculate your organization against this growing threat.  Defending against CEO fraud means understanding what makes the attack successful: it exploits human weakness, where those with the authority to wire money act on what they believe to be a legitimate authorization from leadership.  In a policy-driven, non-technical way, the fix is validation: call the CEO, or whoever is demanding the money, directly.  Do not use the phone number in the email signature; use the phone number on record.  More importantly, recognize the attempt and have a course of action to defend against it, because the attacker will come back a few months later hoping the validation process has become relaxed and the next attempt works.

A simple defense is straightforward: do what the attacker does and research.  Suppose your company is ACME MONEY and you own the domain ACMEMONEY.COM.  Anyone can buy a domain near the target domain, something that may fool someone on your finance team into making a wire transfer.  An attacker ideally only needs three pieces of information: the target domain, who the CEO is, and who in finance will do the bidding.  Start shopping for domains that look like ACMEMONEY.COM.

The attackers run a script much the same way I would: mutate the domain name by removing and adding characters, then quickly check the registrars to see what can be bought. This is hard to do by hand and the permutations are extensive, so focus on the domains that easily fool the human brain, such as extra letters, missing letters, and transposed letters, just like the attackers, who probably score the candidates or visually inspect them. A sketch of this approach follows the list below.


  • acmemmoney.com
  • acmemonney.com
  • amcemoney.com
  • acmemoneey.com
  • acmemoneyacmemone.com
  • acmemooney.com
  • acmmemoney.com
  • nacmemoney.com
  • acmmoney.com
  • acmemony.com

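Here is a minimal Python sketch of that mutation step, limited to the three classes above (extra, missing, and transposed letters). It is illustrative only; tools such as dnstwist, linked at the end of this post, cover many more permutation types, which is how the counts climb into the tens of thousands.

import string

def lookalikes(name, tld="com"):
    """Generate typo-style look-alike domains: added, removed, and transposed letters."""
    name = name.lower()
    candidates = set()
    # extra letter inserted at every position
    for i in range(len(name) + 1):
        for ch in string.ascii_lowercase:
            candidates.add(name[:i] + ch + name[i:])
    # missing letter
    for i in range(len(name)):
        candidates.add(name[:i] + name[i + 1:])
    # two neighboring letters transposed
    for i in range(len(name) - 1):
        swapped = list(name)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        candidates.add("".join(swapped))
    candidates.discard(name)
    return sorted(c + "." + tld for c in candidates)

for domain in lookalikes("acmemoney"):
    print(domain)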

The extension might be important as well: acmemoney could have 30 or more other listings under different TLDs that might fool a human who checks the name but not the extension.

acmemoney.band
acmemoney.associates

Without going too deep into the analysis and making you think this is a futile effort: some action must be taken against those domains. Big corporations might go out and buy them, but everyone should at least block them at the email gateway.

  1. Write down a list of all the domains you own that support email services.
  2. For each one, create a realistic permutation list.
    1. Add letters (cheaper for the attacker, and most common).
    2. Remove letters.
    3. Transpose two letters that trick the eye, M and N for example.
  3. At your email gateway, add those domains as drop or block.
    1. Clever administrators direct that mail to an unattended inbox instead.
    2. Review the inbox periodically and see what attackers are attempting.
  4. If you are so inclined, schedule a script to check for registration of the domains in question, or changes in ownership (a sketch follows this list).
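As a rough starting point for step 4, the sketch below treats a look-alike that resolves as at least parked or registered and worth a closer look. A WHOIS query is more authoritative; this only does a quick DNS check, and the watchlist is a hypothetical slice of the permutation list above.

import socket

def appears_registered(domain):
    """Rough first pass: a domain that resolves is at least parked or registered."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

watchlist = ["acmemonney.com", "amcemoney.com", "acmemooney.com"]  # from the permutation list above
for d in watchlist:
    print(d, "resolves" if appears_registered(d) else "no A record")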

To look at the possible permutations of domains, a simple Python script generated 27K possibilities for acmemoney.  Again, the goal is to defend against the ones that fool humans, and the actually useful count is probably around 30 domains or so.  It is worthwhile to query every one of those domains and see whether they are parked or purchased.  Tracking the generated domains over time also improves the defense.  Consider involving law enforcement when you detect attempted fraud; a formal investigation is useful when combined with other investigations, especially in tracing where the wire transfers end up.

All domain registrars will take action and seize fraudulent domains, but only a few prevent the registration of domains that might be used for fraud.  In most cases the registrars don’t care, and in a couple of cases they enable it by advertising tools that enable the attack.  Expert services aid the attacker by providing a listing of available look-alike domains with an associated score.  One advertisement even shows a link to harvested emails and signatures, usually gathered by calling the company and asking how to wire money, or pretending to offer services, anything that will generally yield an email address.

Threat modeling would suggest another possibility: the person in accounting who authorized the wire transfer could be colluding with the outside party for part of the money.  I cannot say I have seen this, but people are devious.

EDIT:  You will find this code very useful in the discovery of the domains mentioned above.  Run it against all your domains and, ideally, block the results at the perimeter or create a set of signatures (Snort rules) for each:  https://github.com/elceef/dnstwist
 

Wednesday, December 24, 2014

Practical DNS covert channel detection


Immunity makes outstanding offensive security products and one of my favorites is Innuendo.  Immunity recently demonstrated, in an email linked to a video titled ‘INNUENDO DNS CHANNEL Video’ (https://vimeo.com/115206626), an effective method of exfiltration over a DNS channel; as pointed out in the video, DNS is not ideal for exfiltration but works well for command and control. Last year I saw that Cobalt Strike had incorporated similar capabilities, which started me thinking about detection: http://www.advancedpentest.com/help-beacon

 

Several papers provide excellent examples of the different DNS covert channels available today and offer concepts or proofs of concept for detection. I am going to cover only the approaches that have proven effective and efficient in my own implementation.

 

Let me start with the end in mind: the detection engine I use today pulls all new domains from the previous or current day that are not linked to a trusted DNS record by the last two zones, that share a common IP address (linked by the count of unique domains), and that exceed a preset entropy threshold.

 

Not everyone is going to code up a solution, and very few commercial products can demonstrate effective detection unless the attackers are banging on DNS.  Probably the quickest and cheapest detection is to count, daily, the number of unique domains that point to a single IP address.  If you have the means to calculate this through a SIEM, for example, you are already ahead of most.
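A minimal sketch of that daily count, assuming you can export a day of resolver or passive DNS data as (domain, answered IP) pairs; the field layout here is an assumption, not any particular product’s schema.

from collections import defaultdict

def domains_per_ip(records):
    """records: iterable of (domain, ip) pairs observed today."""
    fan_in = defaultdict(set)
    for domain, ip in records:
        fan_in[ip].add(domain)
    # largest fan-in first; CDNs and content providers dominate until tagged as trusted
    return sorted(((ip, len(doms)) for ip, doms in fan_in.items()),
                  key=lambda pair: pair[1], reverse=True)

todays_log = [("a.example.net", "203.0.113.7"), ("b.example.org", "203.0.113.7")]
for ip, count in domains_per_ip(todays_log)[:20]:
    print(ip, count)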

 

Having buckets of domains that all point to a single IP address is the most valuable indication of a highly resilient domain or a covert channel.  What is the cost of having a human look at it every day?  Expensive, but imperative if you are the Sony Entertainment Group of a few months ago, along with locating beacons and other signature-free detection.  If you establish a nuanced set of conditions and leverage automation and notification, you can avoid wasted analytical effort.

 

Improved detection starts with the means to collect, and I generally use the custom implementation of passive DNS discussed previously. The point is to have a local understanding of DNS transactions over time.  One very important detail: IP addresses and domains should be stored in an unlinked way. Waste the space and write two records, one for the domain and its resolved IP, and one for the IP and its reverse DNS. If a new domain appears and it points to a different IP, the domain record changes while the IP record might also be updated independently.
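A toy sketch of the two-record idea, using plain dictionaries as stand-ins for whatever store you use; the field names are placeholders, not my actual schema.

import time

domain_records = {}   # keyed by domain -> {"ip": ..., "last_seen": ...}
ip_records = {}       # keyed by IP     -> {"rdns": ..., "last_seen": ...}

def observe(domain, ip, rdns=None):
    """Write two deliberately unlinked records: one keyed by the domain, one by the IP."""
    now = time.time()
    domain_records[domain] = {"ip": ip, "last_seen": now}
    ip_records[ip] = {"rdns": rdns, "last_seen": now}

observe("update.example.net", "198.51.100.20", rdns="vps-20.hosting.example")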

 

The need is to track over days or weeks; it is then easy to peruse the domains, count the IP addresses and reverse domain queries, and come up with a summary count. Content providers and CDNs can have thousands of domains pointing to a single IP address, so it is very important to have the means to exclude or tag them.  Correlation around timing could be included, but I prefer to tag domains as ‘trusted’. Some examples of false positives that would otherwise trigger include the following.

 

  • 00000000.r.msn.com
  • googlehosted.l.googleusercontent.com
  • gizmodo.com
  • tumblr.com

 

Reviewing all domain and IP relationships is expensive for expansive enterprises but extremely useful. Decision trees and filters ensure you don’t examine every record; it is best to look at new domains not previously seen and correlate them back against the full set.

 

I don’t have the desire to explain entropy here, but I do use a bit of Python code to calculate the entropy of new domains and alert on it.  If a domain exceeds the entropy threshold of ‘5’ (based on the score I use) and is not already trusted based on some factors, it is flagged for review.  As a quick example, ‘google.com’ has a score of ‘2.646439’ and ‘http://deviantpackets.blogspot.com’ has a score of ‘4.113362’ because it contains more distinct characters.
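A minimal Shannon-entropy sketch; it reproduces the two scores quoted above, though the exact scoring in my own code may add other factors.

import math
from collections import Counter

def entropy(text):
    """Shannon entropy (bits per character) over the character frequencies of text."""
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy("google.com"))                          # ~2.646439
print(entropy("http://deviantpackets.blogspot.com"))  # ~4.113362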

 

Entropy alone will lead to a large list of false positives, or to that one covert channel you have no other way of detecting. Many sites use excessively long, random-looking, high-entropy strings in the third zone of the DNS name and beyond; even security firms like Tenable use them with the Nessus product. In finding covert channels, entropy is one part of a decision tree or a correlation factor.

 

  • Entropy score: 5.06236611109
    • Domain: xyz32322-ADSLogge-6Z9K99JF3441-377B035B8.us-west-1.elb.amazonaws.com
  • Entropy score: 5.01349441779
    • Domain: ZZ-TAC-WebServerho-ZA8QRTMQS29O-1968B81422.us-east-1.elb.amazonaws.com
  • Entropy score: 5.05400630712
    • Domain: trk327-us-ADSLogge-1MD7A4JGBS8FF-803889B11.us-east-1.elb.amazonaws.com

 

Evaluation of the DNS record type is also useful, with TXT records having the ability to convey the most information.  Browsing TXT records once in a while is part of being vigilant, along with watching for low TTLs, but I don’t find either useful alone.  Stumbling on a DNS covert channel is not as valuable as a repeatable workflow.

 

I know that most of the larger security firms have solutions similar to the above that are used in much the same way, but I don’t see solutions in the managed security service space.  If you know of a comparable commercial solution I would love to hear about it.

Thursday, March 27, 2014

The value of IP and Domain information


Starting with the end in mind: make the data you have consumable (and thus actionable), make sense of the data, drive risk decisions, share data with trusted partners, and shift from routine monitoring to internal threat intelligence. Start with what you can easily work with, like IP addresses and domains.

Some debate is raging within enclaves of the internet about the value and accuracy of the APT1 report.  The criticism is of the tacit link between the victims and the numerous sources from a single region of the world, Shanghai, China.  The APT1 report shared the indicators of compromise, allowing me and others to compare indicators, signatures, and so on, and evaluate the conclusions.

I set out to evaluate and compare file names, hashes, IP addresses, domains, and all the other atomic indicators from the report against data I had collected.  The overlap confirmed quite a bit of what I knew prior. How many organizations could go and compare notes at that level?  Not enough.

Any time a new report is issued with unmasked indicators, each of us should evaluate the findings.  Sharing and tracking starts with internally sourced threat intelligence, and I would argue that every organization needs the capability, starting with a simple tracking system.  Any atomic indicator, such as a hash, IP address, domain, or filename, has a half-life of sorts.  As the effectiveness decreases, it is less likely that you will continue to see it in any specific remote edge of the internet.  If you mined the APT1 report for indicators, most are useless by now.

The net effect of the APT1 report is higher salaries for those with TI on their resumes, and more business for Mandiant.  It made a bunch of security vendors shift positions and consider how to capitalize on threats.  How does one capitalize on any large list of IPs and domains such as the one found in the APT1 report?  If the thinking is to toss it all into a SIEM, you might as well stop reading here.

Threats are associated with IP addresses and domains, but focusing on IPs and domains alone is pointless because the threat will move, leaving you with a stale list and wasted time on false positives.  At what point does an IP address stop being a threat?  The domains, IP addresses, and AS numbers are part of threat intelligence.  As a simple example, any point of concentration of IPs or domains tells you that nearby IPs and domains are worthy of examination during the fleeting time the space is being used.

Why internal threat intelligence?
Internal threat intelligence was initially leveraged by government and the defense industrial base, at least the smarter ones. Then it was the telecoms and large internet service providers, and now the energy and financial sectors are making a play for top talent to consume internal TI. How far does it go?   Can we ‘rent’ the skills from TI brokers or commission specialized reports?

Anton Chuvakin, in a recent blog post, explored the difficulties and objectives of internally sourced threat intelligence; it is worth the read:  http://blogs.gartner.com/anton-chuvakin/2014/03/20/on-internally-sourced-threat-intelligence  My short take is that internal threat intelligence evaluates the same information, sourced from incidents, as anyone doing monitoring. It takes a depth-and-breadth analytical approach with the available information, and it sits in the split between detection and response, providing threat suppression.

Internal threat intelligence is the hedge against the immediate threat landscape and what is over the horizon.  In my personal view, threat intelligence is not grounded in large sets of IPs and domains with poor reputations, but in context, history, and the narrative of objectives and actors.  One cannot reach that stratum without a solid foundation to collect and analyze the local information, compare it against rational and trusted resources, postulate and test hypotheses, and eventually point fingers.  The APT1 report was a confirmation of findings, not a revelation.

I think internal threat intelligence will become a required part of monitoring.  All analysts should seek to track IPs and domains locally. It is not enough to consume external indicators alone, and it is not enough to purchase a set of sensors and plug in data looking for matches.  Monitoring won’t go away, but it will become more automated, making it easier to match events, provide entity-specific information, and use large data sources to evaluate relationships, measure impact and, finally, drive the incident response.

Linking events, actions and incidents through to actors will continue to be in demand, with the cost being reduced by emerging platforms that synthesize internal information and rational findings from the outside world.  Fusion is the by-product of the most useful and reliable sources, measured to bring value in understanding threats.  TI is predictive in nature and about the only way to divest from a Maginot line (popular commentary made by the RSAC ‘C’-level speaking collective, yet useful in my thinking as well).

Passive DNS


I was introduced to passive DNS as a useful analytical tool a few years ago, when someone I once worked with wrote his own variant using Python and MySQL.  What I learned is that the value of tracking just the right amount of information exceeds the value of tracking all of the information, and that using only public DNS information is futile.  The instant utility of passive DNS for each enterprise seemed evident to me; however, this particular implementation suffered performance issues that became unbearable in just a few days. Ultimately, it was scrapped after a month or so.

Before I go further: exact matching of IPs and domains is nearly futile and won’t compare to the value of computed and behavioral indicators.  Understanding why a stream between hosts contains 'system32' with a file write operation of a ‘rar’ or batch file has a better chance of detecting a pivot, for example.  If you intend to track, choose to do it well.  The idea that you could shop and explore a known IP address or domain and get a sense of when the query was first made, and of how many and which neighboring domains and addresses were active, is valuable.


Passive DNS is not new, but I don’t think its general utility to analysts is well understood.  In my view, SecurityOnion should have the capability.  Tracking means a composite of domains and IP addresses within your own environment, instantly searchable; at higher levels, a potential threat detection system with room to innovate.  Florian Weimer introduced passive DNS many years ago (paper: http://www.enyo.de/fw/software/dnslogger/first2005-paper.pdf) and it is well worth the read.  Plenty of commercial services exist for analyzing your DNS records for potential threats, like OpenDNS’s Umbrella and Damballa.  With the services, you get the value of analytics at scale, and each has invested in exploring and improving detection, with the most virulent malicious actions being noticed and suppressed with speed.

However, targeted attacks may not have enough concurrency to get noticed; that is to say, a single domain that does not fall into the characteristic set used by an SVM, or is not newly registered, falls below the noticeable threshold used in large-scale detection.  This is where your own passive DNS tracking comes in handy and can be complementary to any network security monitoring service, network monitoring, and especially locally sourced threat intelligence.

Passive DNS version 2


Like my former coworker, I wrote scripts to collect and analyze DNS information, but I made some design decisions that require a bit of explaining up front.  Before getting into the specifics, it is important to know that passive DNS, or pDNS, is not logging; that is to say, each record is not being appended to a file.  However, each DNS query and response is being tracked.  If a domain and IP pair has never been seen before, a new record is created; if it is already in the database, the count is incremented and only the date field is overwritten, reducing the amount of data stored.

MySQL was replaced with Redis for speed improvements. In my home-spun version, the same fields parsed from any DNS query and response are present, including keys for threatening IP addresses and domains.  Time and time again, analysts visit the web, dig and look up names, and use reputation systems to enumerate all the bad in the world.  That is useful and important for validating against the outside world and getting a sense of risk, but a step closer to home is far more useful first.  The worst part about this sort of query is the lack of accounting: analysts will most likely make the same query time and again, or several analysts may make the same query, and only confirmed threats get tracked.  Lost is the idea of possible threats and of sites that are trusted.
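A minimal sketch of the create-or-increment bookkeeping with redis-py; the key layout here is illustrative and not pDNS2’s actual schema.

import datetime
import redis

r = redis.Redis()  # assumes a local Redis instance and a recent redis-py

def observe(domain, ip):
    """Create a record on first sight; otherwise bump the count and overwrite only the date."""
    key = "pdns:{}:{}".format(domain, ip)
    today = datetime.date.today().isoformat()
    if r.hsetnx(key, "first_seen", today):
        r.hset(key, mapping={"count": 1, "last_seen": today})
    else:
        r.hincrby(key, "count", 1)
        r.hset(key, "last_seen", today)

observe("update.example.net", "198.51.100.20")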

In my own variation of pDNS (I call it pDNS2), the basic properties allow analysts to find the most useful information quickly:
·      Seek all domains that end with, start with, or contain a particular word
·      Seek a specific TTL or very low TTLs
·      List all the known threatening IPs or domains
·      Contrast threat information against other IPs and domains
·      Export the threatening IPs and domains
·      Return a count of specific subdomains for a given domain
·      Count the top ‘hits' for domains in order
·      Query a range of IP addresses
·      Find locally resolved IP addresses for parked domains, like 127.0.0.1
·      Locate all the domains that point to a single IP
·      Locate all the IPs associated with any domain
·      Tag a domain or IP address with a notion of trust, threat, or interest
·      Search by date
·      Count how many new domains show up each day
·      Return the Euclidean distance from a queried IP to a tagged threat
·      Find the most unanswered queries by count

For the most part the code is useful as a means to explore and track, initiating a home-grown threat intelligence effort. You can find the code on GitHub here: https://github.com/bez0r/pDNS2  (specifically, the query tool ‘pdns2_query_api.py’ was released in support of this post)


Advanced pDNS2


With the basics of passive DNS covered, a separate analytical script was developed to explore specific information and calculate a concept of risk.  New domains are checked against a corpus of known good and bad domains using simple Bayesian machine learning, and another check does a random forest walk (the concept came from a 2013 talk by Endgame Systems).  For the Bayesian check, a set of unknowns was pulled from old ‘conficker’ domains to get a sense of how well it works (source: http://blogs.technet.com/b/msrc/archive/2009/02/12/conficker-domain-information.aspx)
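This is not the exact model used above, but a minimal character n-gram naive Bayes sketch with scikit-learn gives a sense of the approach; the toy corpus is obviously made up and far too small to be meaningful.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# toy corpus; in practice use large lists of known-good and known-bad domains
good = ["google.com", "microsoft.com", "wikipedia.org", "github.com"]
bad = ["xjkq3vz.info", "a8f2kd0qpz.biz", "zzkqwmvb.ru", "q0d9f2lkaj.cn"]

model = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    MultinomialNB(),
)
model.fit(good + bad, ["good"] * len(good) + ["bad"] * len(bad))

for domain in ["acmemoney.com", "kq0zv8dj2f.info"]:
    print(domain, model.predict([domain])[0])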

Queries can assist an analyst in finding the domains a suspect site is most likely trying to squat on or mimic.  Any local domain suspected as a potential threat can be submitted to any of the top reputation sites, with the returned results used to help score the potential threat.


Additional static sources were included to support domain queries from Project Sonar (https://community.rapid7.com/community/infosec/sonar/blog/2013/09/26/welcome-to-project-sonar), so a query returns information about the reverse IPv4, SSL certs, and the usual data around regions and registration.  Scoring threats by known properties, such as how recently a domain was registered, low TTLs, and resource record types like TXT that can be used as a command and control channel, lets analysts start with the most probable threats.  One little script goes out and scrapes sites for IPs and domains, and another handles import/export of STIX files.

Over time the pDNS2 scripts were tied to each sensor, so a simple right click would provide context. The pDNS2 tools started out as a way to quickly make sense of the IP and domain space and later became a support system for local threat intelligence and a driver of analytics. After all this, I converted the basic scripts into glorious functions and tossed the entire tool set into an IPython Notebook where analysts can save and share notebooks.

Conclusions


You won’t find any ‘ground truth’ data or research above, but you should be thinking about elevating monitoring and using the information already at your fingertips to capitalize on ‘internal threat intelligence’.  I did not attend RSAC this year, but the undercurrent of talks, a wide range of hub-and-spoke information sharing collectives offered as a service, is intriguing.  You can share in the trading of useful information or you can buy a service that will do it on your behalf.  Most organizations want to consume indicators, yet lack the ability to organize the right information, especially when information not directly involved in incidents is too sensitive to share.  This is where Mandiant comes in for the win, sharing IOCs while offering a veil of protection for some victims.

I contend that pDNS2 is trivial to initiate and to get into a workflow, but it does not stand alone. Other interesting tools like ‘malcom’ (https://github.com/jipegit/malcom) overlap, do a far better job at presentation, and incorporate several feeds against internal ‘live’ sensors.

This post argues that internal threat intelligence is worth the effort, that tracking data such as IPs and domains is not futile, and that now is the time for analysts and monitors to effectively become internal TI.

Tuesday, March 11, 2014

Detecting Malicious Beacons



Overview


I recently read the book ‘Network Security through Data Analysis: Building Situational Awareness’ by Michael Collins and found it to be a useful and great way to carve and explore threats, one of my main interests.   The book provides a good overview of ‘beaconing’ and offers solutions to detect and alarm on it.  The book has both breadth and depth, but I thought addressing ‘beaconing’ in detail is worth exploring, especially for finding those persistent threats.


Beaconing, in the broad sense, is an effort by an entity to contact another entity repeatedly, either to provide status or to request the establishment of a communications channel.  The Mars rover uses the Deep Space Network satellite communications system to beacon and communicate.  Cell phones, when turned on, beacon to nearby cell towers, and your WiFi-enabled devices send beacon packets that provide a lot of information.  Beaconing is also how malware initiates communications. The issue is that the average network is awash in non-malicious beacons, and each has to be ruled out in some way in order to detect potentially threatening beacons.

Network beaconing is unidirectional, repeated over time, can communicate from one host to another or to many other hosts, and can use any protocol that conveys a message. A malicious beacon stems from malicious code, and its behavior can be consistent, such as every five minutes, or it can be transient or conditional, making it hard to find.  Luckily most attackers don’t want to get too creative, as they are dependent on the beacon to phone home, and they know detecting beacons is hard.

Detecting beacons is useful but not ideal.  It would be far better to detect the malware prior to execution, and even better to have a solid prevention strategy. As of yet, malware remains elusive to most forms of detection. Enough information is available to show that the most insidious, targeted threats persist for years, even with so much effort placed on malware download solutions.  The fact is, malware still infects hosts, and infected hosts will beacon to establish a connection. Therefore, the discovery of malicious beacons is critical, and unless you have a signature, the probability of detection remains low.



What is a beacon

A beacon, for the most part, is the ‘sleep’ or ‘wait’ state the malware finds itself in when executed. Sometimes it is a programmable variable, other times it is static.  In some cases it may have variance or a range, sleeping for 900 seconds and then changing to 3600 seconds.  The more consistent and limited the sleeping done by any malware, the better the odds of detection.  Malware may have different ‘sleep’ states for various processes, such as one for ‘phone home’ and one for ‘self-update’, and might even use a different external host for each process.

When you look at a single beacon in a graph, it appears self-evident, sometimes called a ‘heartbeat’ for obvious reasons, and it demonstrates the consistent interval shown below. This beacon fired every 1800 seconds (30 minutes) and used TCP on port 443.  The consistent factors were the destination port, protocol, and source and destination IP addresses.

Figure 1 Wireshark IO graph of a malicious TCP beacon





In the simple example above, each peak represents the beacon, in this case a single TCP packet with the SYN flag, every 30 minutes.  Adjust packets so they align in a set of bins, or buckets of sorts, centered around time, input size, count, or some combination of the three.  Visualizing the data based on any of these factors is useful, but for now we will stick to time.  Viewing multiple beacons in a single graphic becomes confusing quickly and requires the use of bins to sort the information.  Depending on resolution, the multiple beacons below can quickly look like a puzzle, and even if broken into host pairs, a large set of images takes time to review.

Figure 2 Multiple beacons, single plot FAIL
 

It is difficult, but not impossible, to identify malicious beacons within a large network that has dozens of protocols that beacon, such as NTP, or services like Twitter and anti-virus updates.  First you have to track as much of the network traffic as you can and use the most common properties to eliminate heavily beaconed sites.
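A minimal sketch of that tracking step, assuming flow records reduced to (timestamp, source, destination, destination port) tuples for SYN-only packets; the field names are placeholders, not the Beacon Bits implementation.

import statistics
from collections import defaultdict

def interval_stats(flows):
    """flows: iterable of (timestamp, src_ip, dst_ip, dst_port) tuples for SYN-only packets."""
    times = defaultdict(list)
    for ts, src, dst, dport in flows:
        times[(src, dst, dport)].append(ts)
    results = []
    for pair, ts_list in times.items():
        ts_list.sort()
        deltas = [b - a for a, b in zip(ts_list, ts_list[1:])]
        if len(deltas) < 2:
            continue
        results.append({
            "pair": pair,
            "count": len(ts_list),
            "duration": ts_list[-1] - ts_list[0],
            "mean_sleep": statistics.mean(deltas),   # the candidate 'sleep' interval
            "variance": statistics.variance(deltas), # how consistent the interval is
        })
    return results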

In the evaluation of beacon traffic, look at the timing and the variance, and start with a reasonable tolerance for both.  In the ‘spectral’ plot below, each blue circle is centered on the mean time in seconds, or sleep time; each circle represents a beacon. The variance represents a simplified allowance for deviation within the timing itself, and the count of instances increases the size of each blue circle.  In this case, the bigger blue circles need the most attention.

Figure 3 Top beacons in a single plot success (without labels)


Most of the beacons shown above fall below 60 seconds, and the blue dot low on the right is at 7220 seconds, roughly 2 hours.  The test data was limited to 100,000 TCP SYN connections from a network containing 1,500 hosts over a period of three days.  The traffic was known to contain actual malware, each sample attempting external connections.  The lower the sleep, the higher the tolerance for variance.
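One way to draw a plot in the spirit of Figure 3, feeding it the output of the interval_stats() sketch above; the original tooling’s plotting almost certainly differs.

import matplotlib.pyplot as plt

def spectral_plot(stats):
    """stats: output of interval_stats(); circle size scales with connection count."""
    x = [s["mean_sleep"] for s in stats]
    y = [s["variance"] for s in stats]
    sizes = [10 * s["count"] for s in stats]
    plt.scatter(x, y, s=sizes, alpha=0.4, color="blue")
    plt.xlabel("mean sleep (seconds)")
    plt.ylabel("variance")
    plt.title("Candidate beacons (bigger circles deserve attention first)")
    plt.show()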

Taking a closer look at a region, overlapping beacons that have the same characteristics can be seen.  The labels have randomized IP addresses, but they give a good indication that multiple beacons can be reviewed in a single graph and that malicious beacons with the same characteristics are grouped by time.

Figure 4 Malicious beacons

The above shows beacons at seven seconds and at eight seconds, where malware attempted to reach out using rotating ports and different addresses.  Two different internal hosts were involved.   While the display shows a randomized IP destination, the domain name could be displayed depending on preference. It is possible that beacons exhibiting the same sleep, variance, and destination port are the same malware infecting different internal hosts.

Features of beacons

In order to have a reasonable list of beacons, a number of filters have to be applied to the dataset.  Decide the minimum and maximum number of connections that can qualify; in this case the minimum was set to 12 and the maximum to 5000.  The next filter is based on the time between the first packet and the last, ignoring anything less than 15 minutes.

One other variable tracked is the number of internal hosts that visit any single external host.  Malware tends to affect only a few hosts, probably five or fewer, while hundreds visit a site like Twitter.  Removing the most popular sites increases performance and keeps analysts from chasing the obvious.

The remaining filters control the maximum variance and minimal sleep time, and remove certain destination ports, such as port 25 for email.

If the goal is to support continuous beacon detection, the next logical step is to remove anything trusted or found to be benign in some way and avoid storing unnecessary data.  Analysts that inspect traffic don’t want to see it again, and a 'white list' can be appended with inspected beacons.
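A sketch of those filters combined, applied to the interval_stats() output extended with an internal-host count and a destination name; the thresholds mirror the numbers above, while the extra fields and constants are assumptions you would tune.

MAX_VARIANCE = 30.0      # tunable; tolerance grows as sleep shrinks
MIN_SLEEP = 5.0          # seconds
EXCLUDED_PORTS = {25}    # e.g. email
TRUSTED = {"twitter.com", "pool.ntp.org"}   # appended as analysts clear beacons

def keep(candidate):
    """candidate: one interval_stats() entry plus 'internal_hosts' and 'dst_name' fields."""
    if candidate.get("dst_name") in TRUSTED:
        return False
    if not 12 <= candidate["count"] <= 5000:      # minimum/maximum connections
        return False
    if candidate["duration"] < 15 * 60:           # less than 15 minutes first-to-last
        return False
    if candidate.get("internal_hosts", 1) > 5:    # popular destinations are rarely C2
        return False
    if candidate["pair"][2] in EXCLUDED_PORTS:
        return False
    return candidate["variance"] <= MAX_VARIANCE and candidate["mean_sleep"] >= MIN_SLEEP

# suspects = [c for c in interval_stats(flows) if keep(c)]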

Beacon analytical strategy

Environmental conditions drive the analytical strategy. Consider what is allowed to traverse the network and how much control users have; environments vary from heavy oversight and strict policies to networks that resemble an unsupervised daycare.  The gain in detection in one frequently targeted network was considerable: an average of six infected hosts were found through beacon detection per week. Yet another network had four positive detections in six months.

Depending on the network, detecting beacons is worth a try and, with success, should become standard practice for analysts.

Collection

Collection is handled by a script that parses a live network source or flow files. Detection starts with the collection of specific network properties from flow, stored in a database.  At a minimum, three days of traffic is probably enough to evaluate for beacons, and a week is ideal.  After a week, it is best to wipe the database and start again.

The more you collect, the more time it takes to evaluate. Collection should be strategic to the type of traffic known to be malicious, by applying filters to the flow capture in advance.  A virtual ‘cleanlist’ can also be stored in a key and checked during collection.

Start with TCP packets with the SYN flag set, and try other protocols or specific ports to get a sense of what beacons.  UDP is difficult, as most of the traffic is beacon-like in some way, from time checks using NTP to ‘keep alives’ for databases.
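A minimal live-capture sketch with Scapy for that SYN-only starting point; the actual Beacon Bits collector parses flow records instead, and this needs capture privileges.

from scapy.all import sniff, IP, TCP

observations = []   # (timestamp, src, dst, dport) tuples, ready for interval_stats()

def handle(pkt):
    observations.append((float(pkt.time), pkt[IP].src, pkt[IP].dst, pkt[TCP].dport))

# BPF filter keeps only packets with SYN set and ACK clear
sniff(filter="tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn", prn=handle, store=0)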


Analysis

Analysis is driven by a simple script that parses each flow, evaluates it for the characteristics previously described, and presents findings in a tabulated text view and in the graph shown previously.
For any beacon, you then have a sense of when it started, how long it ran, and how consistent it was, and you have a starting point for analysis.

Analysts use the list or graph of suspect beacon traffic by evaluating the risk factors of both the internal and external hosts.  The history of the associated full packet capture between the host pairs remains a great way to identify threats.  A more advanced approach is to inspect the host itself, specifically recent log events and the involved users. The most important analysis is a memory sample from the involved host, looking for the presence of malware.  In some enterprises, it is worthwhile to sinkhole or block any suspect traffic if the means is available.


If you fear more dormant beacons, you can consider a simple means to parse and store all the low-interval traffic as part of the ‘arctic vortex’, a simple and untested capability available for the most paranoid and targeted among us.
Consider a beacon that sleeps for a month before connecting. It seems somewhat mythical and would require a very patient attacker at the helm with long-term objectives, or a backup to other connections.  Traffic that infrequent would normally be filtered out, and most threats are immediate anyway; but if you have significant coverage, or are bored, you can store the right data and hunt for the arctic vortex of malware, lying in cold storage waiting for activation.


Beacon Bits

I wrote and released the basic beacon detection scripts a few years ago but made some improvements last summer, including graphing the data.  The next post will cover the tools in detail and offer some test data to get started.
Link: https://github.com/bez0r/BeaconBits

I fully expect to move the variables into a configuration file with more guidance and release a new version soon enough.

 

Conclusion

The book by Michael Collins, ‘Network Security through Data Analysis: Building Situational Awareness’, started this blog post, and I highly recommend it to anyone exploring network security.  The book is a great place both to get a sense of how to use the concepts presented in this article and to evaluate other complementary analytics.

Edited on 14Mar2014 to correct spelling errors.