Tom Olzak

Archive for May 2009

System physical security should include mobile device asset management

In Access Controls, HIPAA, Physical Security, Piracy Legislation on May 27, 2009 at 21:43

Some organizations spend a lot of time worrying about administrative (policy) and logical (application and system) access controls without much concern for physical security.  I don’t mean the kind of physical security where you make sure your data center is locked.  I mean the kind of security which allows you to track who has your resources and ensures your organization takes the right steps to quickly mitigate the impact when something goes missing.

For example, it doesn’t make much sense to lock the data center when unencrypted, unmanaged mobile devices travel across the country.  The sensitive information stored safely in the data center might as well be in the lobby.  This might seem a basic principle, but many organizations still don’t get it.  Take the US Department of the Interior, for example.  According to a report completed last month by the department’s inspector general, Western Region,

…13 computers were missing and… nearly 20 percent of more than 2,500 computers sampled could not be specifically located.  Compounded by the Department’s lack of computer accountability, its absence of encryption requirements leaves the Department vulnerable to sensitive and personally identifiable information being lost, stolen, or misused.

Source: Evaluation of the Department of the Interior’s Accountability of Desktop and Laptop Computers and their Sensitive Data, U.S. Department of the Interior, Office of the Inspector General, 24 April 2009.

So the IG could verify the loss of 13 unencrypted computers, but about 500 more were simply unaccounted for.  The reason? Several of the agencies within the department had no process to track computer inventory.  The following is from a related InformationWeek article:

Despite policies mandated by the Federal Information Systems Management Act and other regulations, including rules that say computers should not be left unattended in plain view and that organizations should establish policies to protect their systems from unauthorized access, the Department of the Interior doesn’t require that any hardware that costs less than $5,000 — that would cover most PCs — be tracked in an asset management system, and the current tracking system doesn’t have proper backing, according to the report.

Source: Department Of The Interior Can’t Locate Many PCs, J. Nicholas Hoover, InformationWeek, 27 April 2009

Most of us agree that encryption is a necessary part of any mobile device security strategy.  But why worry about tracking laptops?  Isn’t encryption enough to render the data on a lost or stolen laptop inaccessible?  Well, it depends.

Many organizations do not use strong passwords.  The reasons vary, including:

  • Users tend to write complex passwords down, leaving them easily accessible
  • Password reset calls constitute a high percentage of help desk calls, and the volume climbs as password complexity increases

In other words, strong passwords are often seen as weaker and more costly to the business than simple passwords.  And password complexity tends to remain the same when an organization implements full disk encryption, raising concern about how effectively sensitive information is really being protected.  The complexity of the password and the configuration of the login policy (e.g., password history, failed login attempt lockouts) are factors in the strength of any encryption solution.  In any case, encryption solutions should be supplemented to some degree, depending on the organization, by a mobile device physical management process, including the elements below (a rough sketch of such a process follows the list):

  • Mobile device assignment process which includes recording employee name and date of assignment
  • Clearly documented mobile device usage and protection policy signed by each employee before he or she receives a mobile device
  • Periodic, random verification that the assigned user still has physical control of the device
  • Strict employee termination process which includes return of assigned devices
  • Documented device end-of-life process, including
    • recording receipt of device
    • recording of device disposition, in accordance with the organization’s media sanitation and reuse policy
  • Tested and documented device loss process, including
    • process for reporting a mobile device lost or stolen
    • assessment of the probability of sensitive data breach and notification of affected individuals
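None of this requires expensive tooling to get started.  As a rough illustration only, here is a minimal Python sketch of the kind of register such a process implies, covering assignment, periodic verification, and loss reporting.  The class, field, and status names are my own and not drawn from any particular product or standard.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class MobileDevice:
        """Minimal asset record for a laptop or other mobile device."""
        asset_tag: str
        serial_number: str
        assigned_to: Optional[str] = None      # employee name
        assigned_on: Optional[date] = None
        policy_signed: bool = False            # usage/protection policy on file
        last_verified: Optional[date] = None   # last random physical verification
        status: str = "in_stock"               # in_stock, assigned, lost, retired

    class DeviceRegister:
        def __init__(self) -> None:
            self._devices = {}                 # asset_tag -> MobileDevice

        def add(self, device: MobileDevice) -> None:
            self._devices[device.asset_tag] = device

        def assign(self, asset_tag: str, employee: str, policy_signed: bool) -> None:
            # The usage and protection policy must be signed before handover.
            if not policy_signed:
                raise ValueError("usage/protection policy must be signed first")
            device = self._devices[asset_tag]
            device.assigned_to = employee
            device.assigned_on = date.today()
            device.policy_signed = True
            device.status = "assigned"

        def verify(self, asset_tag: str) -> None:
            # Record a periodic, random check that the user still has the device.
            self._devices[asset_tag].last_verified = date.today()

        def report_lost(self, asset_tag: str) -> MobileDevice:
            # Flag the device so a breach assessment and any notifications can follow.
            device = self._devices[asset_tag]
            device.status = "lost"
            return device

        def unaccounted_for(self, stale_after_days: int = 180) -> list:
            # Assigned devices with no recent physical verification.
            today = date.today()
            return [
                d for d in self._devices.values()
                if d.status == "assigned"
                and (d.last_verified is None
                     or (today - d.last_verified).days > stale_after_days)
            ]

A periodic report like unaccounted_for is exactly the figure the Interior IG could not produce for roughly 500 machines.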

A model for vendor due diligence

In Cloud Computing, Data Security, HIPAA, Policies and Processes, Risk Management, Vendor Management on May 19, 2009 at 03:01

Many organizations today rely on third parties for varying levels of information processing.  This is especially true where hosted services provide core applications required for a critical business process.  Sharing business process implementation with outside entities may require not only sharing sensitive information; it may also require relying on the integrity of financial data derived from vendor systems and imported into an organization’s financial reporting applications.  Although there are countless ways to structure such relationships, one factor remains unchanged across them all: the responsibility for protecting sensitive or regulated information rests on the shoulders of the organization which collected it from customers and patients, or which protects it on behalf of investors (e.g., intellectual property).

The steps necessary to practice due diligence are simple.  When followed, they provide reasonable and appropriate protection.  Figure 1, from a recent ISACA Journal article, depicts a simple model built upon six basic activities, extending from before contract signing through the life of the business relationship (Bayuk, 2009).  Note the recommended organizational entities involved with each activity.

Figure 1

1. Identify data.  There is no reason to provide an entire database to a vendor when a few fields will suffice.  Define the process you expect the vendor to perform and document the minimum data elements required.  Include only these elements in any transfer of data.  Since your data is already classified (I’m making an assumption here), internal policies dictate how it is to be handled.  Use these policies as the basis for contractual wording which compels the vendor to handle shared information the way you expect.
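As a concrete illustration of transferring only the documented minimum, here is a short Python sketch that strips a record down to an agreed field list before it leaves your environment.  The field names are hypothetical; your data classification policy and the contract drive the real list.

    # Fields the vendor actually needs for the outsourced process (illustrative only).
    APPROVED_FIELDS = {"account_id", "first_name", "last_name", "zip_code"}

    def minimize(record: dict, approved: set = APPROVED_FIELDS) -> dict:
        """Return only the approved fields; everything else stays home."""
        return {k: v for k, v in record.items() if k in approved}

    customer = {
        "account_id": "A-1001",
        "first_name": "Pat",
        "last_name": "Doe",
        "ssn": "999-99-9999",     # sensitive: never leaves the organization
        "zip_code": "43604",
    }

    outbound = minimize(customer)   # ssn is dropped before any transfer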

2.  Implement internal controls.  Just because you agree not to provide more information than necessary doesn’t mean your staff will comply.  First, they have to know what information is allowed to pass.  Second, controls must exist to monitor for mistakes.
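Building on the sketch above, a simple preventive control might refuse any outbound payload containing fields beyond the approved list.  This is only a sketch of the idea, not a substitute for proper monitoring or data loss prevention tooling.

    def check_outbound(payload: dict, approved: set = APPROVED_FIELDS) -> None:
        """Block any payload carrying fields the vendor is not approved to receive."""
        extra = set(payload) - approved
        if extra:
            raise ValueError(f"blocked transfer: unapproved fields {sorted(extra)}")

    check_outbound(outbound)    # passes
    check_outbound(customer)    # raises: 'ssn' is not an approved field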

3.  Specify requirements.  Requirements include not only what data is exchanged.  They also have to specify how the data is protected, both while it moves between networks and while it is at rest.  The requirements should adhere to the data classification policies identified in the Identify Data activity.  Identify any additional controls and include them in the contract.

4.  Identify vendor processes.  Up to this point, most of the work revolves around your internal processes and expectations.  Now it’s time to see whether the vendor can meet management’s requirements for safe handling of its information.  Ask questions about basic security controls in place.  Make sure you understand how access is controlled and whether a good disaster recovery plan is in place and tested.  Overall, make sure the security framework, including operating processes, will adequately protect your information.  Will the vendor be able to meet your requirements?  Again, make sure current acceptable controls are included in the contract as well as steps to fill gaps discovered during the process review.

5.  Map 3 and 4.  At this point, you want to identify any issues which might elevate risk to an uncomfortable level.  Verify that the controls claimed by the vendor actually exist.  Then map the results of steps 3 and 4.  Are there any gaps which the vendor is either unwilling or unable to remedy?  Report these potential vulnerabilities to management for a risk review.
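The mapping can be as simple as comparing the requirements documented in step 3 against the vendor controls you actually verified in step 4.  A rough Python sketch follows; the control names are made up for illustration.

    # Step 3: controls your requirements (and the contract) call for.
    required_controls = {
        "encryption_in_transit",
        "encryption_at_rest",
        "role_based_access",
        "tested_dr_plan",
    }

    # Step 4: controls the vendor claims, marked True only if you verified them.
    vendor_controls = {
        "encryption_in_transit": True,
        "encryption_at_rest": False,    # claimed, but not demonstrated
        "role_based_access": True,
    }

    verified = {name for name, ok in vendor_controls.items() if ok}
    gaps = required_controls - verified

    # Report gaps to management for a risk review before signing or renewing.
    print(sorted(gaps))   # ['encryption_at_rest', 'tested_dr_plan']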

6.  Make assessment.  Perform this activity at the point at which you and the vendor contractually agreed all controls would be in place.  Repeat the assessment periodically during the life of the contract.  Assessments should be performed by your internal audit team or by a disinterested third party.

Bayuk’s model is simple, and it provides a framework upon which to build a vendor due diligence process which works for your organization. 

Works Cited

Bayuk, J. (2009, April).  Vendor due diligence.  ISACA Journal, vol. 3, p. 34.

AVSIM: Real world example of the value of offsite backups

In Backup, Disaster Recovery, Hacking on May 18, 2009 at 08:00

The owners of AVSIM, an important resource for Microsoft Flight Simulator users, worked for 13 years to build a well-respected site.  Using two servers, they conscientiously backed up one to the other, confident they were protected.  That confidence was shattered this month when a hacker destroyed the site, including both servers.  Since no offsite backup (or even an off-server backup) was available, recovery was impossible.

There is a lesson here for all organizations.  If you have a server or other storage containing critical business information, make sure it is backed up to an offsite location.  Even if the probability is low that fire, tornadoes, hurricanes, or some other natural threat will take out your facility, the hacker community is always looking for a new challenge.
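For readers looking for a starting point, the following is a minimal Python sketch of a nightly offsite copy using rsync over SSH.  The host name and paths are placeholders, and a real job would also need authentication, retention, and restore testing; the point is only that the copy lands on hardware that does not share the fate of the source server.

    import subprocess
    from datetime import date, datetime

    SOURCE = "/srv/site-data/"                               # placeholder path
    DEST_ROOT = "backup@offsite.example.com:/backups/site"   # placeholder host; parent dir must exist

    def offsite_backup() -> None:
        # Copy into a dated directory so an attacker who wipes the live server
        # cannot also silently overwrite or delete older offsite copies.
        dest = f"{DEST_ROOT}/{date.today().isoformat()}/"
        result = subprocess.run(
            ["rsync", "-az", SOURCE, dest],
            capture_output=True,
            text=True,
        )
        stamp = datetime.now().isoformat(timespec="seconds")
        if result.returncode != 0:
            # A backup job that fails silently is no backup at all.
            print(f"{stamp} offsite backup FAILED: {result.stderr.strip()}")
        else:
            print(f"{stamp} offsite backup completed")

    if __name__ == "__main__":
        offsite_backup()   # schedule nightly via cron or another scheduler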

We always talk about the importance of offsite backups, but sometimes it takes an actual example to make managers sign a check.  Maybe that is the proverbial silver lining in this story.

Biometrics slipping as a viable access control technology?

In Biometrics on May 17, 2009 at 14:07

Looking for a way to implement a second factor of authentication, many organizations have boarded the good ship Biometrics, only to find the vessel adrift due to user, application, and functionality issues.  And this is before they try to integrate their solution into a single sign-on (SSO) environment.  So it’s no surprise that biometrics was given honorable mention in a list of the Top 10 Disappointing Technologies.

Biometrics was supposed to be the magic bullet that solved all our security needs. Look in any film where they are trying to be futuristic or high tech and you’ll see people getting their body scanned as a security measure.

However, the reality has proved less than we were promised. Fingerprint readers are in wide circulation but they are easily fooled these days with cheap materials, or by more direct means. Taiwanese robbers reportedly cut the finger of a man whose car had a fingerprint ignition, something that led scanner manufacturers to install a temperature sensor in future models to prevent a repeat.

Facial scanning was also touted as foolproof, and then quickly found to be anything but. Even DNA fingerprinting is now being questioned, either because the chemistry is defective or the lingering possibility that an individual’s DNA may not be unique. Hell, they still haven’t proved that fingerprints are even unique.

Maybe one day we’ll come up with the ultimate biometric solution but I have my doubts.

Source: Top 10 disappointing technologies, Iain Thomson and Shaun Nichols, vnunet.com, 16 May 2009

Most users will agree that biometrics doesn’t work all the time.  Logging in to a computer once a day with a troublesome biometric sensor isn’t a huge problem.  But when the problem sensor is attached to a shared device (e.g., a nurses’ station computer) or a time clock, user patience and business productivity both take a hit.

Moving beyond user issues, we arrive at problems integrating with applications.  The biggest problem I’ve found to date is getting a single solution that works across all business applications.  I don’t want multiple fingerprint hash repositories—created by multiple enrollment processes—scattered across the enterprise.

Another application problem is the failure of vendors to understand a fundamental requirement.  Biometrics isn’t just about security.  It’s also about making life easier for the  user population.  For example, shared workstations should allow for a network-level, generic login (with a password from Hades that only Security knows) to eliminate the need for user network logins.  Users should then be able to walk up to a workstation, scan a fingerprint, and access an application session unique to their account.  This should happen even if another user is logged in to the system.  There are products which support this.  However, they don’t always work across all applications, and they are very expensive for organizations with thousands of workstations to support.

Finally, there is the issue of getting the sensors to work reliably without turning down the sensitivity to the point at which false acceptances are so frequent that only password access makes any sense.  Functionality is affected by the operating environment and the quality of the sensors used.  In many cases, the cost of getting the right sensor for the environment is too high.

So biometrics languishes, even while many managers rail against using smart cards and other token-based solutions (which, to be fair, aren’t much better at solving the functional issues).  The usual objection is that users will forget their tokens; they don’t want to be bothered with something else to remember.  This argument only stands up when users don’t already need a card to enter the building or another secure area.  Management is also often unwilling to sanction users who forget to bring their tokens to the office.

While biometrics promises to solve the world’s authentication and identity verification problems, the reality is that the technology tends to fall short of expectations.  I don’t believe, however, that it is a lost cause.  Reviving it will take vendor focus on value beyond security and a willingness to work with others to develop standards to meet business requirements for a fast, simple, user-acceptable, secure access method.  But it will take a lot of pushing by users to move this damaged ship to port.

Wobbly Security Frameworks are Often Fixed by Turning a Few Screws

In Risk Management, Security Management on May 15, 2009 at 14:00

As security management becomes more integrated into business processes, it’s commonly seen as closely related to risk management.  This is an accurate perspective, as security professionals position controls as ways to mitigate negative business events.  But risk viewed this way is often wielded as a monolithic tool to hammer home reasons why executive management should spend more money on security.  Risk is actually an aggregate of many smaller factors which must be addressed if the business is to be adequately protected.  Many of these smaller factors cost little or nothing in real dollars to fix, and fixing them is a prerequisite for implementing more advanced controls.

Risk Defined

My take on risk is a little different from what you might be used to seeing.  I start with a standard formulaic model and expand it a little, as shown below.

Formula 

Threats are pretty easy to understand when viewed in terms of all the ways people, malware in the wild, and nature can ruin a perfectly peaceful afternoon.  We’ll cover vulnerabilities later.  Target Value is defined in terms of either its criticality to the business or its sensitivity.  Sensitive systems and data typically include intellectual property, PII, or ePHI.  Finally, Response is a measure of how well an organization can detect, identify, and contain a threat and recover from a security incident.  As shown in the formula, the effectiveness of an organization’s response directly impacts its overall risk.
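One way to write the relationship described above is Risk = (Threats x Vulnerabilities x Target Value) / Response, with Response in the denominator because a stronger response capability lowers overall risk.  Treat that as my reading of the factors rather than a quotation of the original formula.  The toy Python calculation below uses 1-to-5 scores made up purely for illustration.

    def risk_score(threats: float, vulnerabilities: float,
                   target_value: float, response: float) -> float:
        """Toy calculation: threats x vulnerabilities approximates probability of
        occurrence, target value scales the impact, and a stronger response
        capability (a higher score) reduces the overall risk."""
        return (threats * vulnerabilities * target_value) / response

    # Illustrative scores for an unmanaged laptop fleet holding ePHI,
    # before and after patching and an improved incident response capability.
    baseline = risk_score(threats=4, vulnerabilities=4, target_value=5, response=2)
    improved = risk_score(threats=4, vulnerabilities=2, target_value=5, response=4)

    print(baseline, improved)   # 40.0 10.0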

This is all very interesting, and it should be pretty familiar to most of you.  But there is another way of looking at risk which helps identify fundamental weaknesses in a security framework.

The Layered Risk Model

The layered risk model is something I use to identify the small things I may have overlooked.  It’s important to fix all the little things, which taken together can lower the ROI gained from implementing sophisticated layered controls.

Layered Risk Model

In this diagram, risk is depicted as an aggregate of factors contained within four layers.  Each layer has its own level of risk, depending on how well elements within it are managed and what controls might be in place.  Although all are important, I’m focusing on the second layer (from the bottom) for the rest of this article.  For more information about the other risk factors, see A Practical Approach to Managing Information System Risk.

The Little Stuff

Since threats and vulnerabilities together comprise Probability of Occurrence, reducing either one reduces the possibility of a successful attack.  We have little control over threats, so vulnerability management is our best option.  As you can see from this example, vulnerabilities exist in many forms.

In this particular model, I listed some basic security holes which I call the “little stuff.”  Little stuff in the sense that each, by itself, may be a small vulnerability and is easily addressed.  Together, however, they form a formidable vulnerability layer, easily exploitable by the right attacker.  They are also easily avoided by following fundamental security best practices.

As the title of this piece implies, tightening a few screws (paying attention to the little stuff) can strengthen your overall control framework.  Once the wobbling ends, you can achieve a better understanding of the actual gaps.