Tom Olzak

Archive for the ‘Business Continuity’ Category

Home users create security gaps: Fill them

In Access Controls, Application Security, Business Continuity, Cloud Computing, Computers and Internet, Insider risk, iPad, Mobile Device Security, Network Security, Policies and Processes, Policy-based access control, Risk Management on February 13, 2013 at 20:13

In Phishing attacks target home workers as easy ‘back door’ – Techworld.com, John Dunn writes that users fear becoming targets when working at home.  This should surprise no one.  With the rapid growth of BYOD (bring your own device), organizations struggle to close security gaps as they attempt to meet new business requirements of anywhere/anytime delivery of information and business processes. (See The BYOD Trend.)

Smartphones, tablets, and privately owned laptops are not adequately controlled in most organizations.  Traditional access controls, especially authorization constraints, fail to mitigate risk sufficiently.  One important change organizations can make is to shift to context- or policy-based access controls.  (See Securing Remote Access.)
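To make the idea concrete, here is a minimal sketch of what a context-aware access decision might look like.  The attributes and rules (device posture, network, role, time of day) are my own illustrative assumptions, not a prescription for any particular product.

```python
# Minimal sketch of a context-aware access decision.
# Attribute names and policy rules are illustrative assumptions only.
from datetime import time

def allow_access(user_role: str, device_managed: bool, network: str, now: time) -> bool:
    """Return True only when role, device posture, network, and time of day all pass policy."""
    business_hours = time(7, 0) <= now <= time(19, 0)
    if not device_managed and network != "corporate_vpn":
        return False                      # unmanaged BYOD devices must come in over the VPN
    if user_role == "contractor" and not business_hours:
        return False                      # contractors limited to business hours
    return True

# Example: a contractor on a personal tablet over public Wi-Fi at 22:30 is denied.
print(allow_access("contractor", device_managed=False, network="public_wifi", now=time(22, 30)))
```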


YAWN!!!!

In Application Security, Business Continuity, Cyber Espionage, Cyber-warfare, Cybercrime, Government, Network Security, Regulation, Security Management on February 10, 2013 at 19:44

Another article from the AP today discusses U.S. vulnerability to cyber attacks.  No longer news, this kind of information is simply depressing.  Mike Rogers, a member of the House of Representatives, believes that 95% “of private sector networks are vulnerable and most have already been hit.”  Maybe, but nowhere does the article offer actual statistics or source research.  Further, no mention is made of the porous security protecting government agencies.  Figures…

Rogers contends that all the government has to do is share classified threat information and all will be well.  What is he smoking?  Everyone already knows what is needed to protect our national infrastructure.  This looks like a good copout by Republicans: protecting business by doing something useless while convincing the gullible they are doing something worthwhile.  Compromising national security isn’t necessary; all we have to do is start forcing the slackers to meet minimal security requirements.  The Feds should start with their own minimal security guidelines included in FIPS PUB 200.

In my opinion, this grandstanding by legislators needing another law passed to prove their value (God knows something has to) is not helpful.  What is helpful is applying meaningful effort to identify weaknesses (can anyone say public utilities?) and applying the necessary pressure to remove them.  This must happen without whining about cost to affected businesses and industries.  My MBA helps me understand the business side, but my common sense and sense of insecurity drive me to scream, “ENOUGH!!”

The Internet is Broken, Part III: Response

In Application Security, Business Continuity, Disaster Recovery, Hacking, Log Management, malware, NetFlow, Network Security, Policies and Processes, Risk Management, Security Management, SIEM on January 20, 2013 at 23:12

This is the final post in a series about the broken Internet.  In the first, we looked at SIEM.  Last week, we explored the value of NetFlow analysis.  This week, we close with an overview of incident response.

When evaluating risk, I like to use the following formula as a reference:

[Figure: Basic Risk Formula. Risk = (Threats x Vulnerabilities) x Business Impact, mitigated by controls.]

Probability of occurrence, broken into threats x vulnerabilities, helps us determine how likely it is that a specific threat might reach our information resources.  Business impact is a measure of the negative effects if a threat is able to exploit a vulnerability.  The product of Probability of Occurrence and Business Impact is mitigated by the reasonable and appropriate use of administrative, technical, and physical controls.  One such control is a documented and practiced incident response plan.
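As a rough illustration of the formula, here is a simple calculation using an assumed 1-to-5 ordinal scale for threats, vulnerabilities, and business impact, and a 0-to-1 estimate of control effectiveness.  The numbers and scale are hypothetical, not part of any standard.

```python
# Illustrative only: risk scored on an assumed 1-5 ordinal scale.
def residual_risk(threat: int, vulnerability: int, business_impact: int,
                  control_effectiveness: float) -> float:
    """Probability of occurrence (threat x vulnerability) times impact,
    reduced by controls (effectiveness expressed as 0.0-1.0)."""
    probability_of_occurrence = threat * vulnerability
    inherent_risk = probability_of_occurrence * business_impact
    return inherent_risk * (1.0 - control_effectiveness)

# Example: high threat (4), moderate vulnerability (3), severe impact (5),
# with controls judged 60% effective.
print(residual_risk(4, 3, 5, 0.60))   # 24.0
```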

The purpose of incident response is to mitigate business impact when we detect an exploited vulnerability.  The steps in this process are shown in the following graphic.  Following the detection of an incident (using SIEM, NetFlow, or some other monitoring control), the first step is to contain it before it can spread or cause more business impact.  Containment is easier in a segmented network; segments under attack are quickly segregated from the rest of the network and isolated from external attackers.

[Figure: Response Process]
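As one concrete (and simplified) example of the containment step, the sketch below assumes a Linux gateway running iptables and blocks all forwarding to and from a compromised segment.  The subnet is a placeholder; in your environment the isolation mechanism might be a switch ACL, a VLAN change, or a firewall policy instead.

```python
# Sketch of one containment step on a Linux gateway (requires root):
# drop forwarded traffic to and from a compromised segment.
# The subnet value is a placeholder for your environment.
import subprocess

COMPROMISED_SUBNET = "10.20.30.0/24"   # example segment under attack

def isolate_segment(subnet: str) -> None:
    """Insert firewall rules that block forwarding to and from the segment."""
    for direction in ("-s", "-d"):
        subprocess.run(
            ["iptables", "-I", "FORWARD", direction, subnet, "-j", "DROP"],
            check=True,
        )

if __name__ == "__main__":
    isolate_segment(COMPROMISED_SUBNET)
```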

Following containment, the nature of the attack is assessed.  Failing to follow this step can result in incorrectly identifying the threat, the threat agent, the attack vector, or the target.  Missing any of these can make the following steps less effective.

Once we understand the who, what, when, where, how, and why of an attack, we can eradicate it.  Eradication often takes the form of applying a patch, running updated anti-malware, or system or network reconfiguration.  When we’re certain the threat agent is neutralized, we recover all business processes.

Business process restoration requires a documented and up-to-date business continuity/disaster recovery plan.  Some incidents might require server rebuilds.  Business impact increases as a function of the time required to restore business operation.  Without the right documentation, the restoration time can easily exceed the maximum tolerable downtime: the time a process can be down without causing irreparable harm to the business.
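A simple planning check makes the point.  The process names and hours below are hypothetical; the idea is to compare each process's estimated restoration time against its maximum tolerable downtime before an incident forces the issue.

```python
# Planning check with illustrative values, assuming hours as the unit:
# flag any process whose estimated restoration time exceeds its
# maximum tolerable downtime (MTD).
processes = {
    # process name: (estimated_restore_hours, mtd_hours)
    "order_entry": (6, 4),
    "payroll": (24, 72),
}

for name, (restore_hours, mtd_hours) in processes.items():
    if restore_hours > mtd_hours:
        print(f"WARNING: {name} restore estimate ({restore_hours}h) exceeds MTD ({mtd_hours}h)")
```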

Finally, we perform root cause analysis.  This involves two assessments.  One determines what was supposed to happen during incident response, what actually happened, and how we can improve.  The second assessment targets the attack itself.  We must understand what broken control or process allowed the threat agent to get as far as it did into our network.  Both assessments result in an action plan for remediation and improvement.

The Internet is broken.  We must assume that one or more devices on our network are compromised.  Can you detect anomalous behavior and effectively react to it when the inevitable attack happens?

Three controls to deal with a broken Internet…

In Application Security, Business Continuity, Computers and Internet, Cybercrime, Data Leak Prevention, Data Security, Log Management, Network Security, Risk Management, Security Management, SIEM on January 4, 2013 at 17:24

The Internet is broken.  Browsers are gaping holes in our security frameworks.  Certificates are becoming a liability as cyber-criminals or certificate authority negligence weakens our trust in the process.  If we continue to see defense only in terms of preventing the bad guys from getting to our end-point devices, we will surely lose the security war.  The answer is to shift perspective.

First, it’s important we assume that every end user device is potentially infected.  Further, we must assume that one or more of the servers in our data center are infected at any point in time.  This might not be true for all organizations, but it is a smart baseline assumption.  Once we accept that we are vulnerable and likely infected, it is easier to begin supporting preventive controls with comprehensive methods to identify, contain, and manage inevitable breaches of security: SIEM, NetFlow, and response.

Over this and the next two articles, I will take a high-level look at each of these breach-control methods.  Further, I will provide links to resources providing detailed information about how to design and deploy them.

SIEM

SIEM (security information and event management) is a comprehensive approach to assessing system and network behavior.  It requires collection of logs from various devices across the network, including firewalls, IPS/IDS, servers, and switches.  The graphic below depicts a very simple SIEM architecture.  Logs collected by each device are sent in near-real time to a syslog server.  “Syslog is a standard for computer data logging. It separates the software that generates messages from the system that stores them and the software that reports and analyzes them” (“Syslog”, 2013).  This is known as log aggregation.

[Figure: SIEM Architecture]
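From an application's point of view, log aggregation can be as simple as pointing a standard logging handler at the central syslog server.  The sketch below uses Python's built-in SysLogHandler; the server name and port are placeholders.

```python
# Minimal sketch of log aggregation from an application's point of view:
# forward events to a central syslog server (hostname and port are placeholders).
import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.warning("5 failed logins for user jdoe from 203.0.113.7")  # lands on the aggregation server
```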

Aggregated logs are sent to a correlation server for analysis.  The correlation server looks at all events received from across the network and attempts to mine attack patterns or other anomalous behavior.  Anomalous behavior identification is only effective if the SIEM solution is properly tuned.  In other words, the correlation server must know what patterns are normal for your network and which fall outside alert thresholds you set.  For more information about correlation in general, see event correlation at wikipedia.org.
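To show what a correlation rule looks like at its simplest, here is a toy example (not any vendor's feature) that alerts when a single source IP generates more than a threshold number of failed logins within a sliding window.  Real correlation engines apply many such rules, tuned to your network, across many event types.

```python
# Toy correlation rule: alert when one source IP generates more than
# THRESHOLD failed logins, across any hosts, inside a sliding window.
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 10
WINDOW = timedelta(minutes=5)

def correlate(events):
    """events: iterable of (timestamp, source_ip, event_type) tuples, time-ordered."""
    recent = defaultdict(list)                 # source_ip -> timestamps of failed logins
    for ts, src, etype in events:
        if etype != "failed_login":
            continue
        recent[src] = [t for t in recent[src] if ts - t <= WINDOW] + [ts]
        if len(recent[src]) > THRESHOLD:
            yield f"ALERT: {len(recent[src])} failed logins from {src} within {WINDOW}"

sample = [(datetime(2013, 1, 20, 23, 0, i), "203.0.113.7", "failed_login") for i in range(12)]
for alert in correlate(sample):
    print(alert)
```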

All relevant information is usually available via a portal.  For example, a SIEM management server might post updated correlation results every five to ten minutes.  Events meeting criteria you set can also trigger alerts to administrators and security personnel via SMS, email, etc.
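The alerting step itself can be as plain as an email pushed to an on-call address (many SMS gateways accept email as well).  A hedged sketch, with placeholder server and addresses:

```python
# Sketch of the alerting step: push a correlated event to an on-call address
# over SMTP. Server name and addresses are placeholders.
import smtplib
from email.message import EmailMessage

def send_alert(summary: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"SIEM alert: {summary}"
    msg["From"] = "siem@example.com"
    msg["To"] = "oncall@example.com"       # many SMS gateways also accept mail to a phone-number address
    msg.set_content(summary)
    with smtplib.SMTP("mail.example.com") as server:
        server.send_message(msg)

send_alert("12 failed logins from 203.0.113.7 within 5 minutes")
```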

Logs can tell us a lot about behavior, but they fall short of providing insight into how data is actually moving across the data center or across our network segment boundaries.  This is the topic of the next article in this series: NetFlow (IPFIX).

References

Syslog. (2013). Wikipedia.org. Retrieved January 4, 2013, from http://en.wikipedia.org/wiki/Syslog

Cloud Security Standards Excuse

In Application Security, Business Continuity, Cybercrime, Project Management, security, Windows 7 on March 23, 2012 at 15:03

I keep reading articles about how the lack of cloud security standards keeps companies away from cloud services. Isn’t this just an excuse? We have security standards for our own organizations… or we should. We also know what is and is not considered best practice. Further, we should by this time understand how trust works and the controls to implement, monitor, segregate, and secure various trust zones. Isn’t the cloud just another trust zone?

Securing the cloud requires the same diligence we use when securing our data centers. The difference lies in oversight requirements. How do we ensure the service provider is achieving the security outcomes we expect? There are cloud service providers that do get it, providing mechanisms for customer oversight, audits, etc. If the provider in your conference room trying to sell her proposal can’t provide the necessary security assurance methods, find someone else.

Don’t use lack of cloud standards to prevent the potential business benefit of hosted infrastructure or applications.
