Tom Olzak

Posts Tagged ‘intrusion’

The Internet is Broken, Part II: NetFlow Analysis

In Application Security, Computers and Internet, Cybercrime, Data Leak Prevention, Data Security, Forensics, Insider risk, Log Management, NetFlow, Network Security, Policy-based access control, Risk Management, Security Management on January 13, 2013 at 21:52

Last week, I introduced the broken Internet and presented SIEM technology as a way to help identify bad things happening on your network.  This week, I continue the theme by looking at a technology often deployed alongside SIEM: NetFlow analysis.

NetFlow is a protocol developed by Cisco.  Its original purpose was to provide visibility into traffic flow for network performance and design analysis.  Today, however, NetFlow has become a de facto industry standard for both performance and security analysis.

Over time, security analysts found that event correlation alone might not be enough to quickly detect anomalous behavior.  NetFlow, used alongside a SIEM portal, provides quick insight into traffic flow and helps detect network behavior outside the expected norms for a specific network.

NetFlow-compatible devices, as shown in Figure 1, collect information about packets traveling through one or more ports.  The collected information is aggregated and analyzed.  If supported, alerts are sent to security personnel when traffic flow through a switch port, for example, exceeds a defined threshold (see Figure 2 for a portal example).  This is a good way to detect large data transfers, or transfers between a database server and a system with which the server doesn't usually communicate.

Figure 1: Cisco NetFlow Configuration

Figure 2: NfSen Screen Shot (Retrieved from http://www.networkuptime.com/tools/netflow/nfsen_ss.html)
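To make the threshold idea above concrete, here is a minimal sketch of the kind of per-source volume check a NetFlow analyzer performs.  It assumes flow records have already been exported and parsed into simple Python dictionaries; the field names, addresses, and threshold are illustrative only, not taken from any specific product.

```python
from collections import defaultdict

# Illustrative flow records, as a collector might expose them after parsing
# NetFlow exports; all values below are hypothetical.
flows = [
    {"src": "10.1.5.20", "dst": "203.0.113.7", "bytes": 52_000_000},
    {"src": "10.1.5.20", "dst": "10.1.9.3",    "bytes": 1_200_000},
    {"src": "10.1.7.14", "dst": "10.1.9.3",    "bytes": 80_000},
]

# Alert when total bytes sent by a single source exceeds a defined threshold.
THRESHOLD_BYTES = 50_000_000  # tune to what is "normal" for your network

totals = defaultdict(int)
for flow in flows:
    totals[flow["src"]] += flow["bytes"]

for src, total in totals.items():
    if total > THRESHOLD_BYTES:
        print(f"ALERT: {src} sent {total} bytes, exceeding {THRESHOLD_BYTES}")
```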

For example, assume an attacker gains control of a database administrator's (DBA) desktop computer.  All access by the DBA's system will likely look normal, until a NetFlow analysis alert reports large amounts of data passing from a production database server, through the DBA system, and out to the Internet.  (Granted, other controls might prevent this altogether… humor me.)  The alert allows us to react quickly and mitigate business impact by simply shutting down the DBA computer.
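The same flow records also support the second kind of check mentioned earlier: flagging traffic between the database server and hosts it doesn't usually talk to.  The sketch below is a toy illustration of that idea; the addresses and the baseline peer set are made up for the example.

```python
# Hypothetical check: flag flows that involve the production database server
# and a host outside its normal set of peers.
DB_SERVER = "10.1.9.3"
EXPECTED_PEERS = {"10.1.5.20", "10.1.5.21"}  # app servers that normally query the DB

def unexpected_peers(flows):
    """Return (peer, bytes) pairs for flows touching the DB server
    that involve a host outside the expected baseline."""
    findings = []
    for flow in flows:
        if DB_SERVER in (flow["src"], flow["dst"]):
            peer = flow["dst"] if flow["src"] == DB_SERVER else flow["src"]
            if peer not in EXPECTED_PEERS:
                findings.append((peer, flow["bytes"]))
    return findings

# Example: a DBA workstation pulling far more data than usual.
sample = [{"src": "10.1.9.3", "dst": "10.1.6.44", "bytes": 9_000_000_000}]
for peer, nbytes in unexpected_peers(sample):
    print(f"ALERT: {nbytes} bytes exchanged with unexpected host {peer}")
```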

It isn’t just external attackers NetFlow helps detect.  The infamous disgruntled employee is also detectable when large numbers of intellectual property documents begin making their way from the storage area network to an engineer’s laptop located in his or her home office.  NetFlow analysis can be particularly useful when two or more employees collude to steal company information.

NetFlow analysis is a good detection tool.  It complements the preventive controls we rely on to block connections to unknown external systems.  In addition, NetFlow alerting can call our attention to an employee deviating from policy and violating management trust.

Next week, I conclude this series by examining incident response in support of SIEM and NetFlow analysis.

Three controls to deal with a broken Internet…

In Application Security, Business Continuity, Computers and Internet, Cybercrime, Data Leak Prevention, Data Security, Log Management, Network Security, Risk Management, Security Management, SIEM on January 4, 2013 at 17:24

The Internet is broken.  Browsers are gaping holes in our security frameworks.  Certificates are becoming a liability as cyber-criminal activity and certificate authority negligence weaken our trust in the process.  If we continue to see defense only in terms of preventing the bad guys from getting to our end-point devices, we will surely lose the security war.  The answer is to shift perspective.

First, it’s important we assume that every end user device is potentially infected.  Further, we must assume that one or more of the servers in our data center are infected at any point in time.  This might not be true for all organizations, but it is a smart baseline assumption.  Once we accept that we are vulnerable and likely infected, it is easier to begin supporting preventive controls with comprehensive methods to identify, contain, and manage inevitable security breaches: SIEM, NetFlow, and response.

Over this and the next two articles, I will take a high-level look at each of these breach-control methods.  Further, I will provide links to resources providing detailed information about how to design and deploy them.

SIEM

SIEM (security information and event management) is a comprehensive approach to assessing system and network behavior.  It requires collecting logs from various devices across the network, including firewalls, IPS/IDS sensors, servers, and switches.  The graphic below depicts a very simple SIEM architecture.  Logs collected by each device are sent in near-real time to a syslog server.  “Syslog is a standard for computer data logging. It separates the software that generates messages from the system that stores them and the software that reports and analyzes them” (“Syslog”, 2013).  This is known as log aggregation.

SIEM Architecture
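As a small illustration of log aggregation, the sketch below uses Python's standard SysLogHandler to forward application events to a central syslog server.  The server name, port, and log message are placeholders; in practice most devices and operating systems ship with their own syslog forwarding.

```python
import logging
from logging.handlers import SysLogHandler

# Forward events to a central syslog server for aggregation.
# "syslog.example.local" and UDP port 514 are placeholder values.
handler = SysLogHandler(address=("syslog.example.local", 514))
handler.setFormatter(logging.Formatter("webapp01: %(levelname)s %(message)s"))

logger = logging.getLogger("webapp01")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Events from this host now land on the central server alongside
# firewall, IDS/IPS, and switch logs, ready for correlation.
logger.warning("5 failed logins for user admin from 198.51.100.23")
```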

Aggregated logs are sent to a correlation server for analysis.  The correlation server looks at all events received from across the network and attempts to mine attack patterns or other anomalous behavior.  Anomalous behavior identification is only effective if the SIEM solution is properly tuned.  In other words, the correlation server must know what patterns are normal for your network and which fall outside alert thresholds you set.  For more information about correlation in general, see event correlation at wikipedia.org.
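To show what “properly tuned” means in practice, here is a toy example of the kind of rule a correlation engine might apply to aggregated events.  The event records, the five-minute window, and the failed-login limit are illustrative values you would tune to your own network, not settings from any particular SIEM product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Parsed, aggregated events; in a real SIEM these come from the syslog store.
events = [
    {"time": datetime(2013, 1, 4, 17, 0, 5),  "type": "failed_login", "src": "198.51.100.23"},
    {"time": datetime(2013, 1, 4, 17, 0, 9),  "type": "failed_login", "src": "198.51.100.23"},
    {"time": datetime(2013, 1, 4, 17, 0, 12), "type": "failed_login", "src": "198.51.100.23"},
    {"time": datetime(2013, 1, 4, 17, 1, 30), "type": "firewall_deny", "src": "198.51.100.23"},
]

# Tunable rule: N failed logins from one source within a short window.
WINDOW = timedelta(minutes=5)
FAILED_LOGIN_LIMIT = 3  # adjust to what is normal for your environment

by_source = defaultdict(list)
for e in events:
    if e["type"] == "failed_login":
        by_source[e["src"]].append(e["time"])

for src, times in by_source.items():
    times.sort()
    if len(times) >= FAILED_LOGIN_LIMIT and times[-1] - times[0] <= WINDOW:
        print(f"ALERT: {len(times)} failed logins from {src} within {WINDOW}")
```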

All relevant information is usually available via a portal.  For example, a SIEM management server might post updated correlated results every five to ten minutes.  Events meeting criteria you set can also trigger alerts to administrators and security personnel via SMS, email, etc.
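Commercial SIEM products provide alert delivery natively, but as a bare-bones illustration, the sketch below pushes a correlated-event alert out by email using Python's standard smtplib.  The SMTP host and addresses are placeholders.

```python
import smtplib
from email.message import EmailMessage

def send_alert(subject: str, body: str) -> None:
    """Email a correlated-event alert to the security team.
    Host and addresses are placeholders for illustration."""
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "siem@example.local"
    msg["To"] = "security-team@example.local"
    msg.set_content(body)

    with smtplib.SMTP("mail.example.local") as smtp:
        smtp.send_message(msg)

# Example use after a correlation rule fires:
# send_alert("SIEM alert: repeated failed logins",
#            "3 failed logins from 198.51.100.23 within 5 minutes")
```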

Logs can tell us a lot about behavior, but they fall short of providing insight into how data is actually moving across the data center or across our network segment boundaries.  This is the topic of the next article in this series: NetFlow (IPFIX).

References

Syslog. (2013). Wikipedia.org. Retrieved January 4, 2013, from http://en.wikipedia.org/wiki/Syslog

Security Tip: Patching must include ALL applications

In Cybercrime, Hacking, Patching on October 6, 2009 at 07:14

Once again, patching isn’t just about plugging holes in Windows.  Most if not all applications have security vulnerabilities if someone looks hard enough.  Until now, however, finding those vulnerabilities was harder than just whacking the OS.  But Microsoft has settled into a patch release routine that, when followed, pretty well hardens servers and user workstations.  And although there are still vulnerabilities, the effort required to find and exploit them has grown to the point where shifting focus to widely installed user applications is the easier path.

Adobe is experiencing attacker-love now.  The company is a good target because its Reader is everywhere.

Adobe’s software has increasingly come under attack in recent years as hackers have come to realize that it can be easier to find flaws in popular software that runs on top of Windows than to dig up new vulnerabilities in the operating system itself.

That’s led to a round of new attacks that exploit bugs in products such as Adobe’s Reader, Apple’s QuickTime, and the Mozilla Firefox browser, for example.

It’s a reality that Adobe Chief Technology Officer Kevin Lynch freely acknowledged Monday in a press conference at the company’s annual Adobe MAX developer conference, held in Los Angeles.

Source:  After attacks, Adobe patches now come faster, Robert McMillan, Computerworld, 6 October 2009

But Adobe Reader isn’t the only end-user application on your endpoints.  It’s critical to get ahead of the attack curve by developing an overall patch process today, BEFORE that new user productivity tool becomes a target.
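One small piece of such a process is simply knowing which third-party applications on your endpoints are below the versions you have approved.  The sketch below compares a software inventory export against a minimum-version list; the inventory file format, product names, and versions are made up for illustration and would come from your own inventory tool.

```python
import csv

# Hypothetical minimum approved versions for third-party applications.
MINIMUM_VERSIONS = {
    "Adobe Reader": (9, 2, 0),
    "Mozilla Firefox": (3, 5, 3),
    "Apple QuickTime": (7, 6, 4),
}

def parse_version(text: str) -> tuple:
    """Turn '9.1.3' into (9, 1, 3) for simple comparison."""
    return tuple(int(part) for part in text.split("."))

# inventory.csv (illustrative layout): hostname,application,version
with open("inventory.csv", newline="") as f:
    for row in csv.DictReader(f):
        required = MINIMUM_VERSIONS.get(row["application"])
        if required and parse_version(row["version"]) < required:
            print(f'{row["hostname"]}: {row["application"]} {row["version"]} '
                  f'is below required {".".join(map(str, required))}')
```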

Security Risk Extends Beyond Simple Loss of Data

In Business Continuity, Data Security, Government, Insider risk, Mobile Device Security, Network Security, Patching, Risk Management on June 7, 2009 at 14:52

Laptop encryption as a security control has become an expectation rather than an option.  Organizations worried about data breaches and their possible business impact are spending exorbitant percentages of IT budgets to avoid having to tell customers or employees they’ve lost their personal information.  Couple this with regulatory requirements to report certain types of breaches, and laptop encryption becomes as common on mobile systems as Notepad.  But not everyone agrees with this movement to protect laptop data at all costs.

Even the big picture suggests that spending is poorly allocated. “Thieves got 99.9 percent of their data from servers and 0.01 percent from end user systems, but enterprises spend about 50 percent of their security budget on endpoint security,” [Dr. Peter Tippett, founder of ICSA Labs] said. “They should spend more of it on server security.”

“The cause is a problem I call WIBHI, for Wouldn’t It Be Horrible If,” he said.

He added that it explains laptop encryption. He said that we encrypt laptops not because it will protect them better (passwords are good enough for that) but because we don’t have to report a breach if the laptop was encrypted.

Source: Enterprise Security Should Be Better and Cheaper, Alex Goldman, Internetnews.com, 6 June 2009

I make a habit of reading as much as possible about actual breaches, and I agree that we may be overdoing it a bit when we put multiple layers of security on devices that are not typically the primary target of attackers.  But I have three questions for Dr. Tippett.  What about botnets?  What about loss of access to critical systems due to malware-caused enterprise network shutdowns?  And what about the impact on a business if the public discovers that encryption—a security control they’ve been told must be implemented or a business is negligent—was not used on a lost laptop containing personal information?

Business risk extends beyond a simple breach.  Its scope must include all possible negative-impact scenarios that might be caused by weak endpoint security.  Yes, it is all about the data, including its availability and the public’s perception—not necessarily based on a scientific assessment of actual risk—of how well it is protected.  So until potential victims, potential customers, careless employees, and knee-jerk-driven politicians are removed from the risk formula, we will likely continue to spend more than might be reasonable and appropriate in a perfect world.

AVSIM: Real world example of the value of offsite backups

In Backup, Disaster Recovery, Hacking on May 18, 2009 at 08:00

The owners of AVSIM, an important resource for Microsoft Flight Simulator users, worked for 13 years to build a well-respected site.  Using two servers, they conscientiously backed up one to the other, confident they were protected.  That confidence was shattered this month when a hacker destroyed the site, including both servers.  Since no offsite backup (or even an off-server backup) was available, recovery was impossible.

There is a lesson here for all organizations.  If you have a server or other storage containing critical business information, make sure it is backed up to an offsite location.  Even if the probability is low that fire, tornadoes, hurricanes, or another natural threat will take out your facility, there is always the hacker community, forever looking for a new challenge.
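An offsite copy does not have to be elaborate.  Here is a minimal sketch of a nightly job that archives a directory and pushes it to a host in another facility with rsync over SSH; the paths, hostname, and schedule are placeholders, and a real deployment would add retention, verification, and restore testing.

```python
import subprocess
import tarfile
from datetime import date

# Paths and remote host are placeholders; adjust for your environment.
SOURCE_DIR = "/var/www/site"                       # data worth protecting
ARCHIVE = f"/var/backups/site-{date.today():%Y%m%d}.tar.gz"
OFFSITE = "backup@offsite.example.net:/backups/"   # server in another facility

# 1. Create a dated, compressed archive of the site.
with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(SOURCE_DIR, arcname="site")

# 2. Push it offsite over SSH; rsync transfers only what changed.
subprocess.run(["rsync", "-az", ARCHIVE, OFFSITE], check=True)
```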

We always talk about the importance of offsite backups, but sometimes it takes an actual example to make managers sign a check.  Maybe that is the proverbial silver lining in this story.
