Tom Olzak

Posts Tagged ‘Disaster Recovery’

The Internet is Broken, Part III: Response

In Application Security, Business Continuity, Disaster Recovery, Hacking, Log Management, malware, NetFlow, Network Security, Policies and Processes, Risk Management, Security Management, SIEM on January 20, 2013 at 23:12

This is the final post in a series about the broken Internet.  In the first, we looked at SIEM.  Last week, we explored the value of NetFlow analysis.  This week, we close with an overview of incident response.

When evaluating risk, I like to use the following formula as a reference:

Basic Risk Formula

Probability of occurrence, broken into threats x vulnerabilities, helps us determine how likely it is that a specific threat might reach our information resources.  Business impact is a measure of the negative effects if a threat is able to exploit a vulnerability.  The product of Probability of Occurrence and Business Impact is mitigated by the reasonable and appropriate use of administrative, technical, and physical controls.  One such control is a documented and practiced incident response plan.
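To make the relationship concrete, here is a minimal sketch of the calculation in Python.  The ordinal scales, the specific scores, and the control-effectiveness figure are invented for illustration; they are not part of any standard scoring scheme.

```python
# A rough sketch of the basic risk formula using made-up ordinal scores (1-5).
# All values below are illustrative assumptions, not recommended ratings.

threats = 4               # how active or capable the relevant threats are
vulnerabilities = 3       # how exposed our information resources are
business_impact = 5       # damage if a threat successfully exploits a vulnerability
control_mitigation = 0.6  # fraction of risk removed by administrative, technical, and physical controls

probability_of_occurrence = threats * vulnerabilities
raw_risk = probability_of_occurrence * business_impact
residual_risk = raw_risk * (1 - control_mitigation)

print(f"Raw risk score: {raw_risk}")                      # 60
print(f"Residual risk after controls: {residual_risk}")   # 24.0
```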

The purpose of incident response is to mitigate business impact when we detect an exploited vulnerability.  The steps in this process are shown in the following graphic.  Following the detection of an incident (using SIEM, NetFlow, or some other monitoring control), the first step is to contain it before it can spread or cause more business impact.  Containment is easier in a segmented network; segments under attack are quickly segregated from the rest of the network and isolated from external attackers.
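As a rough illustration of the containment step in a segmented network, here is a hedged sketch in Python.  The FirewallClient class, its block_segment method, and the segment name are hypothetical placeholders, not a real vendor API.

```python
# Hypothetical containment sketch: when monitoring flags a compromised segment,
# isolate it from the rest of the network and from external attackers.
# FirewallClient is a placeholder for whatever management interface your
# firewall or NAC product actually exposes.

class FirewallClient:
    def block_segment(self, segment: str) -> None:
        # A real implementation would push deny rules via the vendor's API.
        print(f"Deny all traffic to and from segment {segment}")

def contain_incident(segment: str, firewall: FirewallClient) -> None:
    """First response step: cut the affected segment off before the attack spreads."""
    firewall.block_segment(segment)

if __name__ == "__main__":
    contain_incident("VLAN-210-finance", FirewallClient())
```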

Response Process

Following containment, the nature of the attack is assessed.  Failing to follow this step can result in incorrectly identifying the threat, the threat agent, the attack vector, or the target.  Missing any of these can make the following steps less effective.

Once we understand the who, what, when, where, how, and why of an attack, we can eradicate it.  Eradication often takes the form of applying a patch, running updated anti-malware tools, or reconfiguring systems or the network.  When we’re certain the threat agent is neutralized, we recover all business processes.

Business process restoration requires a documented and up-to-date business continuity/disaster recovery plan.  Some incidents might require server rebuilds.  Business impact increases as a function of the time required to restore business operations.  Without the right documentation, the restoration time can easily exceed the maximum tolerable downtime: the time a process can be down without causing irreparable harm to the business.
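A back-of-the-envelope way to express that relationship is sketched below; the hour figures are invented for the example and would come from your own business impact analysis.

```python
# Illustrative comparison of estimated restore time against maximum tolerable
# downtime (MTD). All hour values are assumptions for the example.

max_tolerable_downtime_hours = 24      # longest the process can be down without irreparable harm
estimated_restore_hours = 8 + 12 + 4   # rebuild server + restore data + validate the business process

if estimated_restore_hours > max_tolerable_downtime_hours:
    print("Estimated restore time exceeds the MTD; revisit documentation and recovery procedures.")
else:
    print("Estimated restore time fits within the maximum tolerable downtime.")
```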

Finally, we perform root cause analysis.  This involves two assessments.  The first determines what was supposed to happen during incident response, what actually happened, and how we can improve.  The second assessment targets the attack itself.  We must understand what broken control or process allowed the threat agent to get as far as it did into our network.  Both assessments result in an action plan for remediation and improvement.

The Internet is broken.  We must assume that one or more devices on our network are compromised.  Can you detect anomalous behavior and effectively react to it when the inevitable attack happens?

Review of the ioSafe Solo Backup/DR Drive

In Backup, Business Continuity, Data Security, Disaster Recovery, Physical Security, Risk Management on July 4, 2009 at 17:56

I don’t get excited about technology very much anymore.  After almost 30 years in this business, I’ve become rather jaded to most emerging technology.  So I have one thing to say about the ioSafe Solo drive—WOW!!

I received an evaluation unit from ioSafe a couple of days ago.  It came in a plain white box, but it weighed quite a bit.  Big piece of iron I have to spend an afternoon configuring, I thought.  So I waited until the weekend.  Removing the drive from the box I found the drive unit, a USB cable (which closely resembles the cable I use on my USB printer), and a power cable. The drive unit is about the size of a toaster.  But unlike my toaster, it weighs about 15 pounds. 

The manual wasn’t much.  Since I was connecting the drive to my laptop running Windows XP SP2, the installation instructions pretty much consisted of: 1) plug the drive into an outlet, 2) plug the USB cable into the drive and into the computer, and 3) turn on the drive.  This was good.  I like simple.

I followed the directions, and 20 seconds after I turned on the drive I had a new 500 GB drive connected and ready for action.  According to the manual, Apple computer users will have to do some formatting work before they can use the unit.

Now you might be asking, “so what?”  Well, there is more to this drive than meets the eye.  Within 5 minutes of unpacking the gear, I had a backup drive which provides the following:

  • Fire protection for temperatures reaching 1550 degrees Fahrenheit for 30 minutes (tested per the ASTM E119 protocol)
  • Water protection, tested for immersion up to 10 feet for 72 hours
  • FloSafe air cooling, which provides forced-air cooling through plastic vents that melt shut to protect the unit when the ambient temperature reaches 200 degrees Fahrenheit
  • Metal case which can be easily bolted to the floor or secured with a cable lock
  • A three year warranty and ioSafe’s data recovery services for one year

Additional features include 7200 rpm drives and USB 1.0 and 2.0 support, with data transfer rates up to 480 Mb/s.

I was pretty interested in this drive by this time.  It’s a perfect backup solution for my home office and the restaurant we own.  So I looked up the price.  I was not disappointed.  The ioSafe Solo can be ordered with one of three data capacities, as listed below:

  • 500 GB at $149
  • 1 TB at $229
  • 1.5 TB at $299

You can upgrade the data recovery service from one year to up to five years, adding up to $100 to each of the prices listed.  These are retail prices.  A quick look at Amazon.com shows discounted pricing.  If you are an Amazon Prime customer with free shipping, you can also save the $25 or so it takes to get it to your door.

So my Solo unit sits next to my laptop, quietly protecting my data.  Quiet is relative, but it emits a very, very low hum which is almost undetectable in a quiet room and absolutely absent when listening to Slacker.com.  It looks pretty good, too, with blue lights on the front indicating a power on state. 

This is an excellent drive at an affordable price.  If you currently pay monthly fees to support over-the-Web backups, if you still use backup tapes, or if you have simply decided it’s too much trouble to look for and implement the right backup solution, you should definitely take a look at the ioSafe Solo.  I highly recommend it.

A model for vendor due diligence

In Cloud Computing, Data Security, HIPAA, Policies and Processes, Risk Management, Vendor Management on May 19, 2009 at 03:01

Many organizations today rely on third parties for varying levels of information processing.  This is especially true where hosted services provide core applications required for a critical business process.  Sharing business process implementation with outside entities may require not only sharing sensitive information; it may also require relying on the integrity of financial data derived from vendor systems and imported into an organization’s financial reporting applications.  Although there are countless ways to structure such relationships, one factor remains unchanged across them all: the responsibility for protecting sensitive or regulated information rests on the shoulders of the organization that collected it from customers and patients, or protects it on behalf of investors (i.e., intellectual property).

The steps necessary to practice due diligence are simple.  When followed, they provide reasonable and appropriate protection.  Figure 1, from a recent ISACA Journal article, depicts a simple model built upon six basic activities, extending from before contract signing through the life of the business relationship (Bayuk, 2009).  Note the recommended organizational entities involved with each activity.

Figure 1

1. Identify data.  There is no reason to provide an entire database to a vendor when a few fields will suffice.  Define the process you expect the vendor to perform and document the minimum data elements required.  Include only these elements in any transfer of data.  Since your data is already classified (I’m making an assumption here), internal policies dictate how it is to be handled.  Use these policies as the basis for contractual wording that compels the vendor to handle shared information in the way you expect.

2.  Implement internal controls.  Just because you agree not to provide more information than necessary doesn’t mean your staff will comply.  First, they have to know what information is allowed to pass.  Second, controls must exist to monitor for mistakes.

3.  Specify requirements.  Requirements include not only what data is exchanged.  They must also specify how the data is protected while it is moving between networks or at rest.  The requirements should adhere to the data classification policies identified in the Identify Data activity.  Identify any additional controls and include them in the contract.

4.  Identify vendor processes.  Up to this point, most of the work revolves around your internal processes and expectations.  Now it’s time to see whether the vendor can meet management’s requirements for safe handling of its information.  Ask questions about basic security controls in place.  Make sure you understand how access is controlled and whether a good disaster recovery plan is in place and tested.  Overall, make sure the security framework, including operating processes, will adequately protect your information.  Will the vendor be able to meet your requirements?  Again, make sure current acceptable controls are included in the contract as well as steps to fill gaps discovered during the process review.

5.  Map 3 and 4.  At this point, you want to identify any issues that might elevate risk to an uncomfortable level.  Verify that the controls claimed by the vendor actually exist.  Then map the results of activities 3 and 4; a small sketch of this mapping appears after step 6.  Are there any gaps the vendor is either unwilling or unable to remedy?  Report these potential vulnerabilities to management for a risk review.

6.  Make assessment.  Perform this activity at the point at which the vendor and you contractually agreed that all controls were to be in place.  Repeat this assessment periodically during the life of the contract.  Assessments should be performed by your internal audit team or by a disinterested third party.
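As promised in step 5, here is a rough sketch of the requirements-to-controls mapping in Python.  The requirement names and the vendor’s claimed controls are invented for illustration; in practice they would come from your contract requirements and the vendor process review.

```python
# Illustrative mapping of stated requirements (activity 3) against controls the
# vendor can demonstrate (activity 4). Requirement names and statuses are invented.

requirements = [
    "encrypt data in transit",
    "encrypt data at rest",
    "tested disaster recovery plan",
    "role-based access control",
]

vendor_controls = {
    "encrypt data in transit": True,
    "encrypt data at rest": False,      # vendor cannot meet this requirement today
    "tested disaster recovery plan": True,
    "role-based access control": True,
}

gaps = [req for req in requirements if not vendor_controls.get(req, False)]

print("Gaps to report to management for risk review:", gaps)
```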

Bayuk’s model is simple, and it provides a framework upon which to build a vendor due diligence process which works for your organization. 

Works Cited

Bayuk, J. (2009, April).  Vendor due diligence.  ISACA Journal, vol. 3, p. 34.

AVSIM: Real world example of the value of offsite backups

In Backup, Disaster Recovery, Hacking on May 18, 2009 at 08:00

The owners of AVSIM, an important resource for Microsoft Flight Simulator users, worked for 13 years to build a well-respected site.  Using two servers, they conscientiously backed up one to the other, confident they were protected.  That confidence was shattered this month when a hacker destroyed the site, including both servers.  Since no offsite backup (or even an off-server backup) was available, recovery was impossible.

There is a lesson here for all organizations.  If you have a server or other storage containing critical business information, make sure it is backed up to an offsite location.  Even if the probability is low that fire, tornadoes, hurricanes, or another natural threat will take out your facility, there is always the hacker community, which is constantly looking for a new challenge.
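A minimal sketch of the idea follows, in Python.  The paths below are placeholders, and a real implementation would use whatever backup tooling and offsite target you already have.

```python
# Illustrative offsite backup step: copy the latest backup archive to a second,
# physically separate location and verify the copy with a checksum.
# Both paths are placeholders for the example.

import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

local_archive = Path("/backups/site-backup.tar.gz")      # on-server backup
offsite_copy = Path("/mnt/offsite/site-backup.tar.gz")   # mounted offsite target

shutil.copy2(local_archive, offsite_copy)
assert sha256(local_archive) == sha256(offsite_copy), "Offsite copy failed verification"
print("Offsite copy verified.")
```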

We always talk about the importance of offsite backups, but sometimes it takes an actual example to make managers sign a check.  Maybe that is the proverbial silver lining in this story.

Windows Azure: Solving cloud computing issues?

In Business Continuity, Cloud Computing on April 17, 2009 at 13:04

Cloud computing promises to reduce costs as well as improve scalability and availability.  However, there are still challenges to be met, challenges Microsoft is tackling head-on.

Microsoft’s Azure is a cloud computing “operating system” that appears to address most, if not all, of the reasons not to transition critical systems to the cloud.  The video recording of the Azure presentation at PDC2008 is a great introduction.

