Tom Olzak

Archive for the ‘HIPAA’ Category

Health Care Information Security Challenge

In Data Security, HIPAA, Regulation, Security Management on December 27, 2012 at 15:27

In the last week, I’ve read several articles claiming that health care information is a prime target for cyber-criminals in 2013.  While I agree with this, I don’t agree with one of the reasons given.

Some bloggers and journalists claim that HIPAA has not kept up with technology, and that this is the reason health care is at risk today.  I disagree.  HIPAA is strongly aligned with ISO/IEC 27002:2005.  General compliance with that standard of best practice brings a covered entity into compliance with the HIPAA Security Rule.  Add to this HITECH, Subtitle B, and a covered entity has everything it needs to keep information safe.  In my view, the problem isn’t with HIPAA; the problem is with perspective.

Compliance is not security: it is not effective risk management.  When I was director of security for a national health care organization, compliance initially went down this path.  C-level management began to ask why risk still existed after we were judged “HIPAA compliant.”  Putting the need in terms of bottom-line risk helped to turn perspectives; it made management look at HIPAA as a starting point, not an endpoint.

Today, many health care organizations are HIPAA compliant, but that does not mean risk has been sufficiently mitigated.  This is also true of publicly traded companies that pass SOX audits.  One of the biggest mistakes we as security professionals can make is allowing our employers or clients to believe they are secure simply because they are compliant with a regulation.

So this raises the question… Is the current health care information security challenge a problem with the regulation or a problem with how we view compliance and risk?

Beware Regulatory Hysteria

In Data Security, Government, HIPAA, Policies and Processes, Privacy on June 13, 2009 at 09:18

Regulatory Hysteria: Knee-jerk overreaction to new regulations, often placing individual privacy at risk.

For years, since before HIPAA and SOX, organizations have often overreacted to government mandates.  Some of the blame falls on accountants and security consultants who don’t understand the law, are trying to make a few extra bucks, or are simply covering their own butts. In other cases, organizations simply suffer from what I call regulatory hysteria.  Whatever the reason, overreacting to regulatory requirements can sometimes put customers and employees at greater risk.

Sherri Davidoff writes about a recent incident in which she appears to have been personally involved.  Her post describes the effects of FACTA and its Red Flag Rules on patient privacy.

Sherri was apparently confronted with a notice of a new requirement to produce a photo ID when she visited her doctor.  Since she didn’t have one, the office staff wouldn’t process her for her appointment.  While she stood there, Sherri observed staff scanning patient driver’s licenses for filing in their computer system.  Sherri was upset both by the inconvenience and by her doctor’s demand for additional personal information.  Was she justified?  Maybe.

First, the Red Flag Rules are designed to protect us from criminals who seek to steal our identities for financial gain, including using our health insurance.  Health insurance theft is a big and growing problem.  The rules also help ensure someone can’t receive care under your name and have those results placed in your records, with the possible result of you receiving harmful care based on invalid assumptions about your health.  They are a good idea, and Sherri should simply get a photo ID—although there are other ways to verify identity, and the doctor might try to be a little more flexible.

Scanning of licenses or other photo IDs, however, is another matter.  There is no requirement to scan and store proof of identity.  The requirement is to demonstrate documented processes to:

  • Verify a potential patient’s identity
  • Report possible identity theft

This particular case looks like butt-covering rather than reasonable and appropriate compliance with the law.  And even if Sherri did produce a photo ID, how much effort is actually taken by the office staff to verify the ID itself?  What training did the staff receive to help them identify fraudulent documents?  Do they even compare the photo—I mean actually look at it—with the person standing in the reception window?  These are more important considerations than getting a scanned copy of a photo ID.  Finally, does the office staff simply accept verbal confirmation of identity for future visits once a scanned ID is in the system?  I hope their scanner is better than most, or picture quality will be close to worthless.

The other issue Sherri wrote about was her concern about the office potentially storing additional information about her in their computer system.  If the office is HIPAA compliant, and ePHI is protected in accordance with the security rule, this shouldn’t be an issue.  If it isn’t, Sherri has bigger problems than not having a photo ID or having an ID scanned.

My problem with Sherri’s visit is different from hers.  There is apparent compliance with the Red Flag Rules.  However, compliance extends far beyond a simple scan of an ID.  If the office manager simply uses the scans as evidence that an ID was produced without requiring trained employees to follow an actual identity verification process, then there is no compliance—just the appearance of compliance.  I think Sherri should be more concerned with how the office staff verifies her identity during each visit, and whether they are actually compliant with the HIPAA security rule, than whether they require a photo ID.

System physical security should include mobile device asset management

In Access Controls, HIPAA, Physical Security, Piracy Legislation on May 27, 2009 at 21:43

Some organizations spend a lot of time worrying about administrative (policy) and logical (application and system) access controls without much concern for physical security.  I don’t mean the kind of physical security where you make sure your data center is locked.  I mean the kind of security which allows you to track who has your resources and ensures your organization takes the right steps to quickly mitigate impact when a device goes missing.

For example, it doesn’t make much sense to lock the data center when unencrypted, unmanaged mobile devices travel across the country.  The sensitive information stored safely in the data center might as well be in the lobby.  This might seem a basic principle, but many organizations still don’t get it.  Take the US Department of the Interior, for example.  According to a report completed last month by the department’s Office of Inspector General, Western Region:

…13 computers were missing and… nearly 20 percent of more than 2,500 computers sampled could not be specifically located.  Compounded by the Department’s lack of computer accountability, its absence of encryption requirements leaves the Department vulnerable to sensitive and personally identifiable information being lost, stolen, or misused.

Source: Evaluation of the Department of the Interior’s Accountability of Desktop and Laptop Computers and their Sensitive Data, U.S. Department of the Interior, Office of the Inspector General, 24 April 2009.

So the IG could verify the loss of 13 unencrypted computers, but about 500 were simply unaccounted for.  The reason?  Several of the agencies within the department had no process to track computer inventory.  The following is from a related InformationWeek article:

Despite policies mandated by the Federal Information Systems Management Act and other regulations, including rules that say computers should not be left unattended in plain view and that organizations should establish policies to protect their systems from unauthorized access, the Department of the Interior doesn’t require that any hardware that costs less than $5,000 — that would cover most PCs — be tracked in an asset management system, and the current tracking system doesn’t have proper backing, according to the report.

Source: Department Of The Interior Can’t Locate Many PCs, J. Nicholas Hoover, InformationWeek, 27 April 2009

Most of us agree that encryption is a necessary part of any mobile device security strategy.  But why worry about tracking laptops?  Isn’t encryption enough to render the data on a lost or stolen laptop inaccessible?  Well, it depends.

Many organizations do not use strong passwords.  The reasons vary, including:

  • Users tend to write complex passwords down, leaving them easily accessible
  • Password reset calls constitute a high percentage of help desk calls, and their volume rises as password complexity increases

In other words, strong passwords are often seen as weaker and more costly to the business than simple passwords.  And password complexity tends to remain the same when an organization implements full disk encryption, raising concern about the real effectiveness of scrambling sensitive information.  The complexity of the password and the configuration of the login policy (e.g., password history, failed login attempt lockout) are factors in the strength of any encryption solution.  In any case, encryption solutions should be supplemented to some degree—depending on the organization—by a mobile device physical management process, including:

  • Mobile device assignment process which includes recording the employee’s name and the date of assignment
  • Clearly documented mobile device usage and protection policy signed by each employee before he or she receives a mobile device
  • Periodic, random verification that the assigned user still has physical control of the device
  • Strict employee termination process which includes receipt of assigned devices
  • Documented device end-of-life process, including
    • recording receipt of device
    • recording of device disposition, in accordance with the organization’s media sanitization and reuse policy
  • Tested and documented device loss process, including
    • process for reporting a mobile device lost or stolen
    • assessment of the probability of sensitive data breach and notification of affected individuals
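The process steps above can be modeled in software.  The following is a minimal sketch of an asset registry that enforces policy sign-off before issue, supports random verification, and records loss and end-of-life events; all class, field, and status names are illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class DeviceStatus(Enum):
    ASSIGNED = "assigned"
    RETURNED = "returned"
    LOST = "lost"
    DISPOSED = "disposed"

@dataclass
class DeviceRecord:
    serial: str
    employee: str
    assigned_on: datetime
    policy_signed: bool                      # usage/protection policy signed before issue
    status: DeviceStatus = DeviceStatus.ASSIGNED
    history: list = field(default_factory=list)

class AssetRegistry:
    def __init__(self):
        self.devices = {}

    def assign(self, serial, employee, policy_signed):
        # Policy must be signed before the employee receives a device.
        if not policy_signed:
            raise ValueError("usage policy must be signed before device is issued")
        self.devices[serial] = DeviceRecord(serial, employee, datetime.now(), True)

    def verify(self, serial, holder):
        # Periodic, random verification that the assigned user still holds the device.
        rec = self.devices[serial]
        ok = rec.employee == holder and rec.status is DeviceStatus.ASSIGNED
        rec.history.append((datetime.now(), "verify", ok))
        return ok

    def report_lost(self, serial):
        # Entry point for the documented device-loss process.
        rec = self.devices[serial]
        rec.status = DeviceStatus.LOST
        rec.history.append((datetime.now(), "reported lost"))

    def retire(self, serial, sanitized):
        # End-of-life: record receipt and disposition per sanitization policy.
        rec = self.devices[serial]
        rec.status = DeviceStatus.DISPOSED if sanitized else DeviceStatus.RETURNED
        rec.history.append((datetime.now(), "retired", sanitized))
```

Even a sketch this small shows the audit value: every touch of a device leaves a timestamped history entry, which is exactly what the Interior IG could not produce.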

A model for vendor due diligence

In Cloud Computing, Data Security, HIPAA, Policies and Processes, Risk Management, Vendor Management on May 19, 2009 at 03:01

Many organizations today rely on third parties for varying levels of information processing.  This is especially true where hosted services provide core applications required for a critical business process.  Sharing business process implementation with outside entities may require not only sharing sensitive information; it may also require reliance on the integrity of financial data derived from vendor systems and imported into an organization’s financial reporting applications.  Although there are countless ways to structure such relationships, one factor remains unchanged across them all: the responsibility for protecting sensitive or regulated information rests on the shoulders of the organization that collected it from customers and patients, or that protects it on behalf of investors (e.g., intellectual property).

The steps necessary to practice due diligence are simple.  When followed, they provide reasonable and appropriate protection.  Figure 1, from a recent ISACA Journal article, depicts a simple model built upon six basic activities, extending from before contract signing through the life of the business relationship (Bayuk, 2009).  Note the recommended organizational entities involved with each activity.

Figure 1

1. Identify data.  There is no reason to provide an entire database to a vendor when a few fields will suffice.  Define the process you expect the vendor to perform and document the minimum data elements required.  Include only these elements in any transfer of data.  Since your data is already classified (I’m making an assumption here), internal policies dictate how it is to be handled.  Use these policies as the basis for contractual wording which compels the vendor to handle shared information in the way you expect.

2.  Implement internal controls.  Just because you agree not to provide more information than necessary doesn’t mean your staff will comply.  First, they have to know what information is allowed to pass.  Second, controls must exist to monitor for mistakes.

3.  Specify requirements.  Requirements include not only what data is exchanged.  They also have to specify how the data is protected, both while it is moving between networks and at rest.  The requirements should adhere to the data classification policies identified in the Identify Data activity.  Identify any additional controls and include them in the contract.

4.  Identify vendor processes.  Up to this point, most of the work revolves around your internal processes and expectations.  Now it’s time to see whether the vendor can meet management’s requirements for safe handling of its information.  Ask questions about basic security controls in place.  Make sure you understand how access is controlled and whether a good disaster recovery plan is in place and tested.  Overall, make sure the security framework, including operating processes, will adequately protect your information.  Will the vendor be able to meet your requirements?  Again, make sure current acceptable controls are included in the contract as well as steps to fill gaps discovered during the process review.

5.  Map 3 and 4.  At this point, you want to identify any issues which might elevate risk to an uncomfortable level.  Verify that controls claimed by the vendor actually exist.  Then map the results of steps 3 and 4 against each other.  Are there any gaps the vendor is either unwilling or unable to remedy?  Report these potential vulnerabilities to management for a risk review.

6.  Make assessment.  Perform this activity at the point at which the vendor and you contractually agreed that all controls were to be in place.  Repeat this assessment periodically during the life of the contract.  Assessments should be performed by your internal audit team or by a disinterested third party.
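The mapping in step 5 is essentially a gap analysis: your specified requirements on one side, verified vendor controls on the other, and everything unmatched reported to management.  The sketch below illustrates the idea; the requirement names and control states are hypothetical examples, not from Bayuk’s article.

```python
# Hypothetical requirements (step 3) and verified vendor controls (step 4).
requirements = {
    "encrypt_in_transit": "TLS for all data exchanges",
    "encrypt_at_rest": "Strong encryption on stored sensitive data",
    "access_review": "Quarterly access recertification",
    "dr_tested": "Disaster recovery plan tested annually",
}

vendor_controls = {
    "encrypt_in_transit": True,   # verified during process review
    "encrypt_at_rest": True,
    "access_review": False,       # vendor unwilling to commit
    "dr_tested": False,           # plan exists but untested
}

def map_gaps(requirements, vendor_controls):
    """Return the requirements the vendor cannot (or will not) satisfy."""
    return {
        key: desc
        for key, desc in requirements.items()
        if not vendor_controls.get(key, False)   # missing controls count as gaps
    }

gaps = map_gaps(requirements, vendor_controls)
for key, desc in sorted(gaps.items()):
    print(f"GAP: {key} -> {desc}")   # report to management for risk review
```

Keeping the mapping explicit like this also gives the step 6 assessment a concrete checklist: each gap either closes by the contractually agreed date or stays on the risk register.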

Bayuk’s model is simple, and it provides a framework upon which to build a vendor due diligence process which works for your organization. 

Works Cited

Bayuk, J. (2009, April).  Vendor Due Diligence.  ISACA Journal, vol. 3, p. 34.

Server Virtualization and Control Context

In Access Controls, Data Security, HIPAA, Insider risk, Risk Management on May 6, 2009 at 13:50

Traditional database servers are relatively easy to track.  You stand up a physical box and place the database on it.  Because standing up a physical system carries real costs and other constraints, the process is monitored closely by business and change managers.  This constraint is typically missing from virtualized environments.  Because network infrastructure engineers can bring up a virtual server without much effort, they typically respond quickly to business or IS requests for additional server resources.  Risk due to virtualization is easily managed with a little planning, a few processes and policies, and a network segmentation plan which enables engineers to ensure data security without introducing another layer of complexity.  The result is a set of control contexts into which database servers are placed based on the classification of the data they store or process.

Control Context Defined

The term “security context” is typically used to describe the framework governing user or application authentication and authorization. It is closely related to the framework of controls used to secure data in a datacenter, but not close enough. This is where a control context fills the gap. A control context is a collection of infrastructure controls which both harden and monitor critical resources and the paths leading to and from them. To better understand this concept, let’s look at Figure 1.
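The core idea can be sketched without the figure: each data classification maps to a control context, and a server is placed into the context matching the most sensitive data it handles.  The classification labels, segment names, and control sets below are illustrative assumptions only.

```python
# Illustrative mapping of data classification to a control context;
# labels and control sets are examples, not taken from the article.
CONTROL_CONTEXTS = {
    "public": {"segment": "general", "controls": {"patching"}},
    "internal": {"segment": "internal", "controls": {"patching", "log-monitoring"}},
    "restricted": {
        "segment": "restricted-vlan",
        "controls": {"patching", "log-monitoring", "encryption-at-rest", "ids"},
    },
}

def place_server(classification):
    """Select the control context for a (virtual) database server
    based on the classification of the data it stores or processes."""
    try:
        return CONTROL_CONTEXTS[classification]
    except KeyError:
        # Fail safe: unknown classifications get the most restrictive context.
        return CONTROL_CONTEXTS["restricted"]

ctx = place_server("restricted")
print(ctx["segment"])  # restricted-vlan
```

The point is that the decision is made by policy, not by whichever engineer happens to spin up the virtual machine; the segmentation plan does the security work.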

Read the rest of this article at CSO online…
