Tom Olzak

Posts Tagged ‘extrusion’

The Internet is Broken, Part II: NetFlow Analysis

In Application Security, Computers and Internet, Cybercrime, Data Leak Prevention, Data Security, Forensics, Insider risk, Log Management, NetFlow, Network Security, Policy-based access control, Risk Management, Security Management on January 13, 2013 at 21:52

Last week, I introduced the broken Internet, with SIEM technology as a way to help identify bad things happening on your network.  This week, I continue this theme by looking at a technology often deployed with SIEM: NetFlow analysis.

NetFlow is a protocol developed by Cisco.  Its original purpose was to provide transparency into traffic flow for network performance and design analysis.  Today, however, NetFlow has become a de facto industry standard for both performance and security analysis.
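To make the protocol concrete: NetFlow v5 is the most widely deployed fixed-format version, exporting a 24-byte header followed by 48-byte flow records. A minimal parser sketch (field layout per Cisco's documented v5 format; this is an illustration, not production collector code) might look like this:

```python
import struct

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes)
V5_HEADER = struct.Struct("!HHIIIIBBH")
# NetFlow v5 flow record (48 bytes): src/dst/next-hop IPs, SNMP ifaces,
# packet and byte counts, start/end uptimes, ports, flags, protocol,
# ToS, AS numbers, masks, padding
V5_RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")

def ip_str(n: int) -> str:
    """Render a 32-bit integer as dotted-quad notation."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

def parse_v5(datagram: bytes):
    """Yield (src_ip, dst_ip, src_port, dst_port, protocol, octets) per flow."""
    hdr = V5_HEADER.unpack_from(datagram, 0)
    version, count = hdr[0], hdr[1]
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    for i in range(count):
        rec = V5_RECORD.unpack_from(datagram, V5_HEADER.size + i * V5_RECORD.size)
        octets = rec[6]  # dOctets: total bytes observed in the flow
        src_port, dst_port, proto = rec[9], rec[10], rec[13]
        yield ip_str(rec[0]), ip_str(rec[1]), src_port, dst_port, proto, octets
```

In practice you would receive these datagrams on the UDP port your exporter is configured to send to, then hand the decoded tuples to whatever aggregation or alerting logic sits behind your collector.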

Over time, security analysts found that event correlation alone might not be enough to quickly detect anomalous behavior.  NetFlow, used alongside a SIEM portal, provides quick insight into traffic flow.  It helps detect network behavior outside the expected norms for a specific network.

NetFlow-compatible devices, as shown in Figure 1, collect information about packets traveling through one or more ports.  The collected information is aggregated and analyzed.  If supported, alerts are sent to security personnel when traffic flow through a switch port, for example, exceeds a defined threshold.  (See Figure 2 for a portal example.)  This is a good way to detect large data transfers, or transfers between a database server and a system with which the server doesn’t usually communicate.
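The threshold check described above is simple to sketch: sum the bytes reported for each source/destination pair over a collection interval and flag any pair that exceeds a limit. The 500 MB threshold and the flow-tuple shape below are illustrative assumptions, not values from any particular product:

```python
from collections import defaultdict

# Hypothetical threshold: flag any host pair moving more than 500 MB
# of data within a single collection interval.
THRESHOLD_BYTES = 500 * 1024 * 1024

def check_thresholds(flows, threshold=THRESHOLD_BYTES):
    """flows: iterable of (src_ip, dst_ip, octets) tuples for one interval.
    Returns a list of ((src, dst), total_bytes) pairs exceeding the threshold."""
    totals = defaultdict(int)
    for src, dst, octets in flows:
        totals[(src, dst)] += octets
    return [(pair, total) for pair, total in totals.items() if total > threshold]
```

Real NetFlow analyzers do essentially this at scale, with baselining and per-port or per-protocol thresholds rather than a single fixed number.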

Figure 1: Cisco NetFlow Configuration


Figure 2: NfSen Screen Shot (Retrieved from http://www.networkuptime.com/tools/netflow/nfsen_ss.html)


For example, assume an attacker gains control of a database administrator’s (DBA) desktop computer.  All access by the DBA’s system will likely look normal, until a NetFlow analysis alert reports large amounts of data passing from a production database server, through the DBA system, and out to the Internet.  (Granted, other controls might prevent this altogether… humor me.)  The alert allows us to react quickly and mitigate business impact by simply shutting down the DBA computer.
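The other detection pattern mentioned earlier, a server talking to a peer it doesn't normally talk to, can be sketched as a baseline check. The server and peer addresses below are hypothetical, and a real deployment would learn the baseline from historical flow data rather than hard-coding it:

```python
# Hypothetical baseline: hosts the production database server normally
# communicates with (e.g., application servers). Addresses are assumed.
DB_SERVER = "10.0.1.10"
KNOWN_PEERS = {"10.0.1.20", "10.0.1.21"}

def unexpected_peers(flows, db=DB_SERVER, known=KNOWN_PEERS):
    """flows: iterable of (src_ip, dst_ip, octets).
    Returns (peer, octets) for traffic from the DB server to unknown hosts."""
    alerts = []
    for src, dst, octets in flows:
        if src == db and dst not in known:
            alerts.append((dst, octets))
    return alerts
```

In the DBA scenario above, the compromised desktop would show up here the moment the database server started streaming data to it in volumes it never had before.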

It isn’t just external attackers NetFlow helps detect.  The infamous disgruntled employee is also detectable when large numbers of intellectual property documents begin making their way from the storage area network to an engineer’s laptop located in his or her home office.  NetFlow analysis can be particularly useful when two or more employees collude to steal company information.

NetFlow analysis is a good detection tool.  It complements the prevention controls we rely on to block connections to unknown external systems.  In addition, NetFlow alerting can call our attention to an employee deviating from policy compliance and violating management trust.

Next week, I conclude this series by examining incident response in support of SIEM and NetFlow analysis.

Fear, Trust, and Desire: Fertile ground for social engineers

In Business Continuity, Content Filtering, Cybercrime, Data Security, HIPAA, Network Security, PCI DSS, Risk Management on April 10, 2009 at 09:42

According to the recently released Microsoft Security Intelligence Report (2H2008), social engineering is taking the lead as the preferred method of network and end-user device malware infection.  Since operating system vulnerabilities are slowly disappearing and more organizations are implementing basic network controls, the easiest way to a target system is via the end-user.

Fear, Trust, and Desire (FTD)

According to the Microsoft SIR, users fall prey to social engineering attacks because of three common modes of human behavior: fear, trust, and desire.  As depicted in Figure 1, each of these behaviors is targeted by specially crafted attacks.


Figure 1 (Microsoft SIR)

Read the rest of this entry »

Small botnets more effective at stealing your data?

In Business Continuity, Cybercrime, Data Security on April 1, 2009 at 11:35

Botnets are often viewed as large networks of infected computers, with thousands or millions of compromised systems, across multiple locations, responding to commands from a central command center.  These massive nets still exist, but it might be their smaller cousins you should be more interested in.

Many organizations have gotten smarter about preventing large amounts of information from moving out of their networks.  Anomalous behavior associated with such activities is reasonably easy to see and respond to.  Further, database servers and other devices in the data center are typically hardened and located behind layers of security controls.  So attackers need a better way to steal your data.

Infecting a workstation is not as hard as compromising a server.  After all, many users still help attackers by clicking on links, opening attachments, or downloading free—or pirated—applications.  If the right malware is placed on a computer, it becomes a platform that can be used to filter for and capture pieces of information as they pass through.  It can also send smaller uploads to the attacker’s system, which can easily slip under security’s radar.  Recruiting hundreds of systems like this in an organization can result in a breach on a large scale.
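This "low and slow" pattern is exactly why per-transfer thresholds miss small-botnet exfiltration: no single upload is large enough to trip an alert. One countermeasure is to sum outbound bytes per host over a long window. The 50 MB daily limit below is an illustrative assumption, and a real tool would baseline per-host norms instead:

```python
from collections import defaultdict

# A single large upload is easy to spot; many small uploads are not.
# Summing per-host outbound bytes over a long window (e.g., 24 hours)
# surfaces "low and slow" exfiltration. The limit here is assumed.
DAILY_LIMIT = 50 * 1024 * 1024  # 50 MB outbound per host per day

def slow_exfil_hosts(uploads, limit=DAILY_LIMIT):
    """uploads: iterable of (host_ip, timestamp, octets) within one window.
    Returns the sorted list of hosts whose outbound total exceeds the limit."""
    totals = defaultdict(int)
    for host, _ts, octets in uploads:
        totals[host] += octets
    return sorted(h for h, total in totals.items() if total > limit)
```

A workstation trickling out 5 MB a dozen times a day never crosses a per-transfer threshold, but its daily total stands out immediately.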

Read the rest of this entry »

Compliance requires people-supported technical solutions

In Business Continuity, Cybercrime, Data Security, Hacking, Risk Management on March 28, 2009 at 11:19

Although I agree that reliance on human behavior is not a good way to ensure information security policy compliance, it will always be a factor.

Technology is not a panacea for fraud or executive-level “cooking the books.”  A certain amount of human oversight is necessary to verify that application controls work properly, enterprising employees haven’t found a way around them, and the layered security infrastructure is working as expected.  Further, relying on a 100 percent technical response to an external attack is too costly and prone to being hacked.  So I don’t completely agree with comments recently attributed to Charles Cresson Wood, in which he appears to assert that people must be completely removed from the compliance process.

During last week’s SecureWorld Boston, Charles Cresson Wood discussed the need to go beyond development of policy when implementing information security.  In his keynote address, he described the need for systems that ensure compliance.

A huge problem is that security policies are still too reliant on people, Cresson Wood said.

“If you want a high level of compliance do not rely on humans to get the job done,” he said.

“Things are going too fast in information security. A manual response to distributed denial-of-service attacks, for example, is inconceivable,” he added.

Scripted and automated compliance enforcement needs to be put in place, supported by intrusion detection, intrusion prevention and other tools, Cresson Wood said. Security appliances will be documenting and vouching for policies, producing admissible evidence that can be used if disaster strikes and legal issues ensue. “Something like a black box when an airplane goes down,” he said.

Source: Expert Cites Big Problem with Security Policy Compliance, Bob Brown, Network World, 25 March 2009

I agree that writing policies and training employees on what is and is not acceptable behavior is not enough.  I also agree that layered technical controls are absolutely necessary to achieve the business objectives defined in the policies.  However, relying completely on technology to safeguard information assets is a poor business decision.

Read the rest of this entry »

You Just Have to Run Faster than the Bear

In Business Continuity, Cybercrime, Data Security, Hacking, Risk Management on March 23, 2009 at 09:49

For years, large businesses have spent millions to improve information security.  Much of this expense was driven by regulation or fear of public relations issues.  As security around large networks and data repositories improved, however, many small and medium business (SMB) managers didn’t feel the need to spend money on security.  After all, only large targets get hit.  Why should they care?

The reason SMBs should care is simple.  They are typically softer targets than their big brothers.

As large organizations–once easy pickings for business-minded cyber-criminals–strengthened their defenses, the cost associated with unlawfully obtaining valuable information from them increased.  Along with rising costs came a growing probability of being detected and arrested.  So criminals had to look for less expensive targets with lower personal risk.  They often found them among SMBs.

Read the rest of this entry »
