Tom Olzak

Archive for August, 2009

Yes, sensitive data on QA and Development servers is still sensitive

In Access Controls, Business Continuity, Data Security, Network Security, Security Management on August 18, 2009 at 11:48

Any organization with an effective software development lifecycle (SDLC) builds QA and development environments to test new or upgraded systems.  Testing, whether unit (developer) or user acceptance (UAT), requires that the application have access to data that looks very close to production data, including all data dependencies.  The fastest way to make this happen is to copy production data into the test and development databases.  However, perception of the sensitivity of data in these non-production environments is often… well… wrong.

I like to practice data-centric security.  This means security controls are built around protecting sensitive data and governing how critical systems access that data.  So if someone moves a customer database, for example, to a development server, the data should be protected with the same controls used to protect it in production.  Organizations often take a system-centric approach instead, assuming that servers, workstations, and data outside the production environment don’t require the same level of trustworthiness.

Research commissioned by enterprise applications vendor Micro Focus and carried out by the Ponemon Institute surveyed 1,350 application development staff at UK and US firms with turnover between $10m (£6.1m) and $20bn-plus.

The past 12 months have seen data breaches at 79 per cent of respondents, with the same amount using live production data in application development and testing. But just 30 per cent of firms mask this data during the process.

Application testing takes place on at least a weekly basis at 64 per cent of companies, with 90 per cent claiming it happens once a month or more. A mere seven per cent of respondents said data protection procedures were more rigorous during development and testing than during normal production.

Source: Lax data masking hits four in five firms, Sam Trendall, CRN, 18 August 2009

Granted, the purpose of the study was ostensibly to promote a data masking solution.  But it demonstrates the need for better focus on non-production data stores.  In other words, data in QA and development systems must be managed with the same rigor as that residing in production.  And if extending security controls to these systems is not feasible, then data masking is necessary.
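For teams without a commercial masking product, even a simple scripted pass over the refreshed copy beats copying raw production data.  Below is a minimal sketch in Python; the table and column names (customers, name, email, ssn) are hypothetical, and a real masking job must cover every table sharing those identifiers.  Deterministic pseudonyms (same input, same token) keep joins intact across dependent tables.

```python
import hashlib
import sqlite3

SALT = "dev-refresh-2009"  # rotate per refresh; illustration only, not a key-management scheme

def pseudonym(value: str) -> str:
    """Deterministic replacement: the same input always maps to the same
    token, so foreign-key relationships survive the masking pass."""
    digest = hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()
    return "cust_" + digest[:10]

def mask_ssn(ssn: str) -> str:
    """Preserve the format and the last four digits; blank out the rest."""
    return "XXX-XX-" + ssn[-4:]

def mask_customers(db_path: str) -> None:
    """Scrub direct identifiers in a copied (non-production) database."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT id, name, email, ssn FROM customers").fetchall()
    for cust_id, name, _email, ssn in rows:
        token = pseudonym(name)
        conn.execute(
            "UPDATE customers SET name = ?, email = ?, ssn = ? WHERE id = ?",
            (token, token + "@example.test", mask_ssn(ssn), cust_id),
        )
    conn.commit()
    conn.close()
```

Run something like this against the refreshed QA or development copy, never against production; the goal is that the copy is already scrubbed before any developer or tester touches it.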

Internet Security Threats Short-lived?

In malware on August 17, 2009 at 08:24

During my daily review of security RSS feeds, I stumbled upon a PCWorld article entitled Internet Security Threats: Swift and Short-Lived.  The first paragraph read,

Internet security threats such as worms and trojans last for just 24 hours, says Panda Security.

Wow!  Somebody must have figured out how to cleanse the millions of infected machines connected to the Web, because that is the only way an Internet threat is eliminated.  However, that was not the case.  Instead, this was apparently a statement about the effectiveness of certain AV solutions.

To make a blanket statement about killing worms and viruses, rendering them impotent, is a little misleading.  Worms and other nasties released into the wild have a life of their own, infesting unprotected systems and waiting for the opportunity to infest the computers of users who don’t patch, don’t keep their AV systems up-to-date, or don’t connect to the Web from behind a firewall/router.

If you want to test just how much bad stuff is still out there, simply attach an unprotected Windows PC directly to your ISP (connected straight into your cable modem, DSL modem, etc.) and let it cook for a few days.  Then, do some surfing and downloading of “free” stuff.  (No, don’t use it to check your bank balance.) Finally, install your favorite AV software, start a scan, and stand back.  After you’ve had your fun, remember to wipe the hard drive before using the machine for anything serious.

Yes, anti-malware defense is rather mature.  Yes, a well-managed system and network can repel most old threats.  But don’t assume they’re not still out there.

Blame the auditors: What a concept!

In Business Continuity, Data Security, Network Security, PCI DSS, Risk Management, Security Management on August 13, 2009 at 08:02

I had never thought of this.  After a breach, just blame the auditors.  Wait.  The reason I hadn’t thought of it is that passing a compliance audit IS NOT ASSURANCE OF SECURITY.  But some still don’t get it.

In an interview with CSO’s Bill Brenner, Heartland Payment Systems’ CEO, Robert Carr, blamed his QSA auditors for a recent (huge) breach.  Because they said his organization was PCI compliant, he felt secure.  Wow.  Security by checklist once again.

Rich Mogull, in an open letter to Carr, makes several excellent points about reliance on compliance instead of solid security practices.  He concludes his letter with,

But, based on your prior public statements and this interview, you appear to be shifting the blame to the card companies, your QSA, and the PCI Council. From what’s been released, your organization was breached using known attack techniques that were preventable using well-understood security controls.

As the senior corporate officer for Heartland, that responsibility was yours.

Source: An Open Letter to Robert Carr, CEO of Heartland Payment Systems, Rich Mogull, 12 August 2009

Rich’s letter is a good read, and it should be circulated widely among security professionals and senior executives. 

Among other things, this is another case where an organization is falling back on a completed checklist representing compliance with the PCI standard, a bare minimum set of security requirements.  But whether you are HIPAA, GLBA, or PCI compliant, checking off on recommended practices doesn’t equal security.

Each of us is responsible for placing compliance activities within the proper context: guidelines within a broader security program.  No regulatory or industry standards can protect our critical infrastructure or sensitive data.  Only an aware, thinking human who actually cares about security—and understands how standards apply within his or her unique environment—can do that.

Hardware Hacking Defense: Can you say physical security?

In Access Controls, Cybercrime, Data Security, Hacking, Security Management on August 5, 2009 at 11:30

I’ve been sort of stuck in the land of physical security lately.  The reason I can’t seem to extricate my brain relates to the dismal facility security many organizations employ.  It’s this lack of good physical security, including employees’ reluctance to challenge strangers browsing the work area, that makes hardware hacks a real possibility.

Unlike software keystroke loggers and other nasty malware typically obtained via poor user habits—combined with a lack of Web browsing controls—hardware hacks are virtually invisible to AV software.  (See the vendor-agnostic whitepaper, Keystroke Logging, at http://ow.ly/jaeU.)  For example, a firmware hack for Apple keyboards was demonstrated at DEFCON 2009.  A related video (http://ow.ly/jahK) shows security researcher K. Chen gathering keystrokes from a laptop via a compromised keyboard.  What sets this hack apart is the ability to take over the hardware without taking the keyboard apart to install a logging component.  The prerequisite, however, is the same as for any hardware logger: physical access to hardware by an attacker means game over.

This hack and others like it require physical access to your computers.  How do you keep bad people away from your information resources?

  • Lock your doors.  Only authorized personnel should have access to your business office.  (If you aren’t securing your datacenter, this bullet is meaningless…)
  • Train your employees to notify security—or management if on-site security personnel aren’t available—when someone they don’t recognize is in the office area without a guest badge.  (This assumes your organization actually makes employees wear employee badges and guests wear guest badges.)
  • Make sure your employee training includes social engineering issues.  For example, an employee should know that when a stranger tells him or her that they are replacing the widget control on the computer’s frazzilator, there may be something amiss.  In any case, strangers unaccompanied by regular employees—even if carrying a tool bag—are to be considered suspicious and reportable.
  • Even if a person has a guest badge, unexplained lingering around cubicles or use of an employee system should be reported.  If unexplained access was gained to a workstation, consider replacing it.  At least ensure:
    • The keyboard is standard company issue.  (You might consider marking keyboards so they are identifiable as yours; a scripted check against an approved-device list, sketched just after this list, can help as well.)
    • There are no unusual components connected to the keyboard cable.
    • There is no unexplained hardware anywhere in the cubicle.
    • The Event Logs show no trace of an attack.  (Any attacker worth his or her fee will, given enough time, eradicate any traces of unusual activity.)
    • Your intrusion detection/prevention logs don’t indicate the PC is sending/receiving unusual traffic.
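As a concrete companion to the hardware checks above, here is the small sketch referenced in the list.  It assumes a Linux workstation with lsusb available, and the approved-device list is hypothetical: you would populate it from your own asset inventory.  It flags any attached USB device whose vendor:product ID isn’t on the list.

```python
import re
import subprocess

# Hypothetical allow-list of vendor:product IDs for company-issued
# peripherals; populate it from your actual asset inventory.
APPROVED_IDS = {"05ac:0221", "046d:c31c"}

def attached_usb_ids() -> set:
    """Parse lsusb output lines such as:
    Bus 001 Device 003: ID 05ac:0221 Apple, Inc. Aluminum Keyboard
    """
    out = subprocess.run(["lsusb"], capture_output=True, text=True, check=True)
    return set(re.findall(r"ID ([0-9a-f]{4}:[0-9a-f]{4})", out.stdout))

def report_unexpected() -> None:
    """Print anything attached that the inventory doesn't recognize."""
    for dev_id in sorted(attached_usb_ids() - APPROVED_IDS):
        print(f"Unexpected USB device {dev_id}: verify against asset inventory")

if __name__ == "__main__":
    report_unexpected()
```

Keep the limits in mind: a compromised keyboard that reports an approved ID will sail right past a check like this, which is exactly why it supplements, rather than replaces, the physical controls above.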