Tom Olzak

Archive for April, 2009|Monthly archive page

Implementing biometrics requires a little thought

In Access Controls, Biometrics on April 30, 2009 at 13:52

Implementing the right biometrics solution is not an easy task.  There are several considerations which, if analyzed carefully, might even result in a decision to look at other identity verification methods.  I’ve written about this in the past, but a recent post to The Daily WTF about an implementation-gone-wrong provides an opportunity to drive home some basic points once again… and apparently it’s needed.

Problem 1, failing to analyze the operating environment

The fingerprint biometrics system described in the article was intended to perform various tasks in a workout facility, including member check-in and check-out. Check-ins seemed to work OK. Check-outs, however, were problematic. Members’ finger characteristics changed temporarily while they were in the facility, a result of normal gym conditions: exposure to water in the pool, sauna, or whirlpool, and contact with the lattice patterns on weight equipment. The sensor itself also quickly became unusable as it came in contact with a stream of unwashed hands.

Problem 2, failing to understand the technology

No allowance was made for sensor failure, so no manual workaround was implemented. If the check-out sensor failed, the only recourse was jumping over the turnstile, which remained locked until the system read a recognizable print. To reduce the number of “jumpers,” the technicians turned the recognition sensitivity down low. In other words, the biometrics system would accept data which fell far short of what it normally considered an effective print match. This resulted in a high number of false positives: members were identified as other members when they placed their digits on the sensor.
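To make the effect of that tuning concrete, here is a minimal, hypothetical sketch of threshold-based identification. The scoring model, member names, and threshold values are invented for illustration, not taken from the system in the article.

```python
# Hypothetical sketch of threshold-based biometric identification.
# A matcher scores a live scan against each enrolled template and accepts
# the best match only if it clears the threshold. Lowering the threshold
# admits poor-quality scans, and with them, wrong matches.

def identify(scores_by_member: dict[str, float], threshold: float) -> str | None:
    """Return the best-matching member, or None if no score clears the threshold."""
    member, best = max(scores_by_member.items(), key=lambda item: item[1])
    return member if best >= threshold else None

# A sweaty, smudged print scores poorly against every template.
scores = {"member_a": 0.41, "member_b": 0.38, "member_c": 0.35}
print(identify(scores, threshold=0.80))  # None: turnstile stays locked
print(identify(scores, threshold=0.30))  # 'member_a', even if it's member_b
```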

Problem 3, failing to understand how members would react to biometrics

It wasn’t clear from the article, but it appeared as if the gym manager jumped into biometrics without much thought, including thinking about whether his customers would take their memberships to a place where they didn’t have to provide personal information. Privacy concerns are a big reason why biometrics are rejected by employees and customers. Another is the fear of picking up a disease from sensors used by more than one person. In this case, it was clear many of the members chose to fall back on the old method of logging in with a touch screen.

Lessons to take away

The business made three common errors when implementing its new biometrics system. To help you avoid the same mistakes, here is a look at each.

  1. It’s important to understand the environment in which biometrics will be used.  In this case, sweat and grime made the sensors useless.  In manufacturing environments, it might be lubricants or other substances in the air; even the cleanest hands won’t solve this problem.  If the sensors fail often, employees or customers will become frustrated and reject the technology, resulting in employee turnover or lost revenue.  In cases where environmental conditions are not friendly to biometrics, consider tokens such as magnetic stripe cards.
  2. Many business managers don’t understand the pros and cons of biometrics. For example, I wonder if the vendor told the gym manager that no biometrics solution works on every print every time. There will be false negatives and false positives. Adjusting the system to reduce false positives will increase false negatives, and vice versa (see Figure 1). The gym manager, in tuning out false negatives, allowed false positives to increase. Some organizations, in the interest of tight security, go in the other direction, tuning their systems to eliminate false positives. This increases the false negative rate to the point where the solution might be more trouble than it’s worth. The crossover error rate (CER), shown in Figure 1, is the point at which the false positive and false negative rates are equal; it is the ideal setting for a biometrics system. The quality of a biometrics solution is often judged by its CER, usually expressed as a percentage of total scans, and a sketch of how it is calculated follows this list. In any case, errors will occur, and not having a manual workaround is a big oversight.
  Figure 1: The crossover error rate (CER)

  3. Finally, there is the user factor. Employees and customers may reject biometrics for a variety of reasons, including fear that the company is storing unique personal information and fear of contracting diseases through contact with publicly used sensors. Another big reason people reject biometrics is frustration. It probably wouldn’t take many jumps over the turnstile before gym members simply returned to the old way of logging in and out. The best way to deal with these issues is to hold open and honest discussions about how the systems work, the health risks involved, and how the organization plans to use the information. Remember, user acceptance doesn’t depend on how you perceive biometric identity verification. Rather, it depends on how your employees and customers perceive it.
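As a rough, hypothetical illustration of the trade-off in point 2, the sketch below sweeps a match threshold over made-up genuine and impostor score samples, computes the false accept and false reject rates at each setting, and locates the crossover point. The score distributions are invented; real systems publish measured rates.

```python
# Hypothetical sketch: locate the crossover error rate (CER) by sweeping
# a match threshold over invented score distributions.
import random

random.seed(1)
genuine  = [random.gauss(0.75, 0.10) for _ in range(10_000)]  # same-person scores
impostor = [random.gauss(0.45, 0.10) for _ in range(10_000)]  # different-person scores

def false_accept_rate(threshold: float) -> float:
    return sum(s >= threshold for s in impostor) / len(impostor)

def false_reject_rate(threshold: float) -> float:
    return sum(s < threshold for s in genuine) / len(genuine)

# The CER sits at the threshold where the two error rates are equal.
cer = min((t / 1000 for t in range(0, 1001, 10)),
          key=lambda t: abs(false_accept_rate(t) - false_reject_rate(t)))
print(f"threshold={cer:.2f}  "
      f"FAR={false_accept_rate(cer):.2%}  FRR={false_reject_rate(cer):.2%}")
```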

Another insider theft survey…

In Business Continuity, Cybercrime, Data Security, Employee Vetting, Network Security, Risk Management on April 24, 2009 at 10:45

Here is another survey which supports previous findings about employee breach risks.

You think you can trust your work colleagues? According to just-released research more than one third of employees would steal sensitive company information if they thought they could earn a decent price from the theft.

Read the rest of this TechWorld article…


Written Policy without Process and Oversight is Just Wasted Effort

In Business Continuity, Data Security, Policies and Processes on April 20, 2009 at 12:05

Whether prompted by regulations or by management intent to comply with security best practices, the first step after creating a security strategy is developing policies. However, some organizations treat policies as the endgame. Those taking this approach are not only misguided; they are potentially exposing their sensitive data and critical systems to greater risk.

The Policy Myth

Security policies are simple statements of intent. They communicate management’s expectations to employees regarding acceptable use, handling, and implementation of critical systems, as well as the confidentiality, integrity, and availability of sensitive information. However, they don’t provide for consistent application of management intent. Nor do they describe and mandate a system of oversight to ensure compliance and effectiveness.

David Aminzade addresses these issues in an article posted today on Help Net Security.  He begins with,

Most large organizations maintain a detailed corporate security policy document that spells out the “dos and don’ts” of information security. Once the policy is in place, the feeling is of having achieved ‘nine-tenths of the law’, that is, that the organization is in effect ‘covered’. This is a dangerous misconception. Because much like in the world of law and order, while creation of law is fundamental, implementation and enforcement of law is what prevents chaos.

Source: Is having a security policy in place really nine-tenths of the law?, David Aminzade, Help Net Security, 20 April 2009

Making Policies Real

To make policies ‘real’ to technical and business process employees, people must first be aware the policies exist. They must also follow documented processes intended to result in compliance. So the next step after policy approval is development of supporting processes.

Processes typically provide step-by-step instructions for how to perform a single task or set of tasks.  If an employee follows a process, he or she will automatically produce compliant outcomes—at least that’s the expectation.  Effective processes are developed in collaboration with all stakeholders, and then introduced to existing employees via training programs.  New hires should be provided with similar training.

This is another point in developing a security program where some organizations stop, believing they are now compliant. Stopping here strengthens their defenses beyond those of the policy-only managers, but it still leaves them short of a truly effective security program.

The final step in making policies real to employees and the organization is implementation of oversight tools and processes.  Aminzade included a good-start list in his article:

  • Continuously monitor firewall and other security device changes, compare them to the corporate security policy, and send out alerts if the policy has been violated.
  • Track and report all changes in a uniform, simple and straightforward style.
  • Provide a vendor-neutral, top-down view of all security infrastructure that an executive can understand.
  • Enable security administrators to test a change against security policy before it is implemented, to assess and avoid risk.
To this list I would add internal audits, which provide an outsider’s perspective of processes and outcomes. I would also add both internal and external vulnerability scans and penetration tests.

The first bullet should be part of an overall log management process. I have always recommended outsourcing this activity; it is tedious work which can be done at less cost by a managed security services provider (MSSP).
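As a rough sketch of what the first bullet could look like in practice, a monitoring job might diff a device’s current rules against the approved policy baseline and alert on drift. The rule format, the baseline, and the alert sink below are all invented for illustration.

```python
# Hypothetical sketch of continuous policy-compliance monitoring:
# diff a firewall's current rules against the approved baseline and
# alert on anything added or removed outside change control.

APPROVED_RULES = {
    "allow tcp any -> dmz:443",
    "allow tcp internal -> dmz:1433",
    "deny ip any -> any",
}

def send_alert(message: str) -> None:
    # In practice this would feed a SIEM or an MSSP's monitoring console.
    print(f"[POLICY ALERT] {message}")

def audit(current_rules: set[str]) -> None:
    for rule in sorted(current_rules - APPROVED_RULES):
        send_alert(f"rule not in approved policy: {rule}")
    for rule in sorted(APPROVED_RULES - current_rules):
        send_alert(f"approved rule missing from device: {rule}")

audit({
    "allow tcp any -> dmz:443",
    "allow tcp any -> internal:3389",  # unapproved RDP exposure
    "deny ip any -> any",
})
```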

Tasks associated with the second and fourth bullets are typically part of a change management process. Change management

… deals with how changes to the system are managed so they don’t degrade system performance and availability. Change management is especially critical in today’s highly decentralized, network-based environment where users themselves may be applying many changes. A key cause of high cost of ownership is the application of changes by those who don’t fully understand their implications across the operating environment.

Source: Implement change management with these six steps, Change Tech. Solutions, ZDNet, 8 January 2004
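In the same spirit, here is a hypothetical sketch of the fourth bullet: testing a proposed firewall change against written policy, and against change management requirements, before it is pushed to a device. The specific checks and the change-record format are invented for the example.

```python
# Hypothetical sketch of pre-implementation change testing: a proposed
# firewall rule is checked against written policy (and change-management
# requirements) before it reaches the device. The checks are invented.

FORBIDDEN_PORTS = {23, 135, 139, 445}  # e.g., telnet and SMB from outside

def policy_violations(change: dict) -> list[str]:
    problems = []
    if change["source"] == "any" and change["dest_zone"] == "internal":
        problems.append("no 'any' source may reach the internal zone")
    if change["port"] in FORBIDDEN_PORTS:
        problems.append(f"port {change['port']} is not permitted by policy")
    if not change.get("ticket"):
        problems.append("no approved change ticket attached")
    return problems

proposed = {"source": "any", "dest_zone": "internal", "port": 445, "ticket": None}
for problem in policy_violations(proposed):
    print("REJECTED:", problem)
```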

Along with oversight, sanctions should be imposed fairly, as close to the actual event as possible, and clearly stated in advance as possible consequences of not following approved procedures. Formal, documented investigations can help change employee behavior even if the risky actions are not cause for disciplinary action. Investigations also help raise management awareness of potential employee or process issues.

The final word

Getting compliant is about documenting management intent, building enforceable processes which produce consistent outcomes, and monitoring to ensure the network is as secure as expected. Assuming employees will act safely simply because a policy exists, or making assumptions about security outcomes, is a good way to end up as tomorrow’s media target because of a breach or a malware-induced network shutdown.

.NET-Sploit ‘rootkit’: Easy to install, hard to defend against

In Cybercrime, Hacking on April 17, 2009 at 13:57

Microsoft’s .NET framework apparently contains a weakness that allows a rootkit-like malware infection which is difficult to prevent and detect. The only solution appears to be removal of the weakness from .NET itself.

The tool, called .Net-Sploit 1.0, allows for modification of .Net, a piece of software installed on most Windows machines that allows the computers to execute certain types of applications.

Microsoft makes a suite of developer tools for programmers to write applications compatible with the framework. It offers developers the advantage of writing programs in several different high-level languages that will all run on a PC.

.Net-Sploit allows a hacker to modify the .Net framework on targeted machines, inserting rootkit-style malicious software in a place untouched by security software and where few security people would think to look, said Erez Metula, the software security engineer for 2BSecure who wrote the tool.

“You’ll be amazed at how easy it is to devise an attack,” Metula said during a presentation at the Black Hat security conference in Amsterdam on Friday.

.Net-Sploit essentially lets an attacker replace a legitimate piece of code within .Net with a malicious one. Since some applications depend on parts of the .Net framework in order to run, it means the malware can affect the function of many applications.

For example, an application that has an authentication mechanism could be attacked if the tampered .Net framework were to intercept user names and passwords and send them to a remote server, Metula said.

Source: Researcher offers tool to hide malware in .Net, Jeremy Kirk, IT World, 17 April 2009
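Detection is hard precisely because the tampering happens beneath the applications. One partial defensive measure, which I am sketching here on my own rather than drawing from the article, is to baseline cryptographic hashes of the framework’s assemblies and periodically re-check them for unexplained drift. The framework path and file selection below are illustrative.

```python
# Hypothetical defensive sketch: baseline SHA-256 hashes of .NET framework
# assemblies, then re-check for unexplained changes. The framework path is
# illustrative; a real deployment must also protect the baseline itself.
import hashlib
import json
import pathlib

FRAMEWORK_DIR = pathlib.Path(r"C:\Windows\Microsoft.NET\Framework\v2.0.50727")
BASELINE_FILE = pathlib.Path("dotnet_baseline.json")

def snapshot(directory: pathlib.Path) -> dict[str, str]:
    """Map each assembly name to the SHA-256 digest of its bytes."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in directory.glob("*.dll")}

if not BASELINE_FILE.exists():
    BASELINE_FILE.write_text(json.dumps(snapshot(FRAMEWORK_DIR)))
else:
    baseline = json.loads(BASELINE_FILE.read_text())
    for name, digest in snapshot(FRAMEWORK_DIR).items():
        if baseline.get(name) not in (None, digest):
            print(f"[TAMPER?] {name} hash differs from baseline")
```

Legitimate patches will change these hashes too, of course, so any such checker has to be re-baselined as part of change management.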

Windows Azure: Solving cloud computing issues?

In Business Continuity, Cloud Computing on April 17, 2009 at 13:04

Cloud computing promises to reduce costs as well as improve scalability and availability. However, there are still challenges to be met, challenges which Microsoft is taking head-on.

Microsoft’s Azure is a cloud computing “operating system” which appears to address most, if not all, of the reasons not to transition critical systems to the cloud. The video recording of the Azure presentation at PDC2008 is a great introduction.