Tom Olzak

Posts Tagged ‘breach’

Important Announcement for MySQL Users

In Application Security, Data Security on May 15, 2010 at 11:34

According to the article “Open MySQL security holes,” the newest MySQL upgrade plugs three important security holes, among other fixes.

Security Tip: It isn’t just about social security numbers anymore

In Access Controls, Cybercrime, Data Security, Hacking on October 2, 2009 at 09:19

A recent breach of a PayChoice Inc. server is evidence that organizations must provide overall controls, not just those targeting popular attack vectors. 

Chris Wysopal, chief technology officer at application security vendor Veracode Inc., said the breach is interesting because it shows that hackers are looking for targets other than credit card numbers and social security numbers to steal.

“The market is saturated with [stolen] credit card data,” Wysopal said. A credit card record that was worth $10 in the underground in 2007 today can be had for about 50 cents, he said.

As a result cybercrooks looking to monetize what they are doing are moving up to higher value attacks where possible, he said.

In this case, the hackers appear to have been trying to install keystroke loggers to get information that would have allowed them to access online banking accounts of PayChoice’s customers, he said. “That is where they would have got tens of thousands of dollars,” had they been able to pull it off.

Source: Large online payroll service hacked, Jaikumar Vijayan, Computerworld, 1 October 2009

This is an example of why security professionals must continue to protect ALL sensitive information, regardless of what pops up in the media.  Overall protection requires the security team to market it continuously to earn management buy-in at all levels.

Permissions Creep: The Bane of Tight Access Management

In Access Controls, Data Security, Insider risk, Risk Management on October 1, 2009 at 10:33

Organizational role changes are common.  People are promoted, move from one department to another, or responsibilities change for the roles they’re in.  The result over time, commonly known as permissions creep, is a bunch of user accounts for which least privilege and segregation of duties no longer apply.  The solution is a documented and aggressively followed job change process.

First, let’s look at the issue of job changes.  A job change process should use an authoritative source, such as your human resources system, to track role changes.  If you assign a job code to each employee based on his or her position, then this is pretty easy.  One approach is to compare a nightly extract, including employee ID and job code, to the previous night’s run.  A difference in job code indicates a change in position.  If your HR system produces a report listing job changes, then you already have what you need.
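Here is a rough sketch of what that nightly comparison could look like.  It assumes CSV extracts with employee_id and job_code columns; those names, and the file names, are placeholders for whatever your HR system actually exports.

```python
# Minimal sketch: diff two nightly HR extracts to find job-code changes.
# File names and column names are assumptions, not a real HR system's format.
import csv

def load_extract(path):
    """Map employee ID -> job code from a nightly HR extract."""
    with open(path, newline="") as f:
        return {row["employee_id"]: row["job_code"] for row in csv.DictReader(f)}

def find_job_changes(previous_path, current_path):
    """Return (employee_id, old_code, new_code) for every changed job code."""
    previous = load_extract(previous_path)
    current = load_extract(current_path)
    changes = []
    for emp_id, new_code in current.items():
        old_code = previous.get(emp_id)
        if old_code is not None and old_code != new_code:
            changes.append((emp_id, old_code, new_code))
    return changes

if __name__ == "__main__":
    for emp_id, old, new in find_job_changes("hr_extract_yesterday.csv", "hr_extract_today.csv"):
        print(f"Employee {emp_id}: job code changed {old} -> {new}")
```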

For organizations with an automated provisioning system, the next step is easy.  Feed the changes to the provisioning server and let it do its thing.  Otherwise, hand it off to a system administrator for manual changes to directory services and all relevant applications.  Whether automated or manual, the process is the same.  For each affected account, remove all current access and replace it with the approved access for the new job role.  This assumes you’ve defined access by application, AD group, etc. for each job code.  If you haven’t, this is a big job so you’d better get started…
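To make the “remove everything, then grant the new role’s baseline” step concrete, here is a minimal sketch.  The role_access table and the in-memory Directory class are illustrative stand-ins for your real role definitions and directory or provisioning API.

```python
# Sketch of reprovisioning on a job change: strip all current memberships,
# then grant only the approved baseline for the new job code.
from collections import defaultdict

# Approved group memberships per job code (illustrative values only).
role_access = {
    "FIN-ANALYST": {"AD-Finance-RO", "App-GL-Read"},
    "FIN-MANAGER": {"AD-Finance-RW", "App-GL-Approve"},
}

class Directory:
    """Stand-in for a directory service or provisioning system."""
    def __init__(self):
        self.memberships = defaultdict(set)

    def groups_for(self, employee_id):
        return set(self.memberships[employee_id])  # copy, safe to iterate

    def remove(self, employee_id, group):
        self.memberships[employee_id].discard(group)

    def add(self, employee_id, group):
        self.memberships[employee_id].add(group)

def reprovision(directory, employee_id, new_job_code):
    """Strip every current membership, then grant the new role's baseline."""
    for group in directory.groups_for(employee_id):
        directory.remove(employee_id, group)
    for group in role_access.get(new_job_code, set()):
        directory.add(employee_id, group)
```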

Some admins might simply reverse access based on the original role.  This is not effective, especially for an employee who’s been around a few years.  Exceptions to base access settings may have been added over time as the employee’s manager handed out additional responsibilities not commonly given to that role.  Those shifting responsibilities cause problems precisely because the employee’s job title never changes, so the job change process is never invoked.

If you have employees who have worked for your organization for many years, especially those who demonstrate the ability to perform a wide variety of tasks, they have probably been given special permissions in addition to those approved for their organizational role.  These exceptions were likely approved by a data owner and are on file for the auditors.  So far, so good.  However, the dynamic nature of business inevitably shifts these responsibilities around, removing the need for access but not the actual access itself. 

Dealing with permissions creep caused by variable responsibilities over time requires actual reviews of employee access.  Schedule periodic reviews by data owners, managers, etc.  Use the results of these reviews to adjust access to reflect employee job responsibilities today.
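A periodic review boils down to comparing what an employee actually has against what the role says he or she should have, and sending the difference to the data owner.  A small sketch, again with invented role and group names:

```python
# Sketch of an access review report: flag memberships that exceed the
# approved baseline for the employee's job code so the data owner can
# confirm or revoke them. Input structures are assumptions.

role_access = {
    "FIN-ANALYST": {"AD-Finance-RO", "App-GL-Read"},
}

def review_access(employee_id, job_code, actual_groups):
    """Return the memberships that exceed the role's approved baseline."""
    approved = role_access.get(job_code, set())
    return actual_groups - approved

if __name__ == "__main__":
    extra = review_access("E1001", "FIN-ANALYST",
                          {"AD-Finance-RO", "App-GL-Read", "App-Payroll-Admin"})
    for group in sorted(extra):
        print(f"E1001: review exception '{group}' with the data owner")
```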

Finally, there is the question of location.  For non-healthcare organizations (HIPAA free), this might not be a problem.  However, when you have to manage patient information visibility based on role and location, access reviews take on an additional dimension.  Make sure reviews and job changes take into account where the employee is working and adjust need-to-know controls accordingly.
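If need-to-know depends on location as well as role, the access decision has to consider both attributes.  A toy sketch with made-up roles and facilities illustrates the idea:

```python
# Sketch of a role-plus-location check for patient data visibility.
# Role and facility names are invented; the point is that the decision
# uses both attributes, not role alone.

allowed = {
    # (role, facility) pairs permitted to view patient records at that facility
    ("NURSE", "CLINIC-EAST"),
    ("NURSE", "CLINIC-WEST"),
    ("BILLING", "CLINIC-EAST"),
}

def may_view_patient_record(role, employee_facility, patient_facility):
    """Allow access only when the role is approved for the facility and the employee works there."""
    return (role, patient_facility) in allowed and employee_facility == patient_facility
```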

Managing permissions creep isn’t exciting, but it is a necessary part of securing information assets.

One-Time Passwords are Not Foolproof

In Access Controls, Cybercrime, Hacking, malware, Password Management on September 18, 2009 at 09:47
Credit: Technology Review

Many of us started using one-time password devices some time ago.  They typically take the form of “footballs” or smartcards and generate a random (or pseudorandom) string used as a password for a single login session.  This was considered to be “safe enough.”  But now we might have to rethink our approach.
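For the curious, the six-digit codes these devices display are typically derived from a shared secret and the current time, as in the TOTP scheme (RFC 6238).  The sketch below shows the idea in Python; the shared secret is, of course, a made-up example.

```python
# Minimal TOTP (RFC 6238) sketch: HMAC of a time-step counter, dynamically
# truncated to six digits. The secret here is an example value only.
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval           # time steps since the epoch
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

if __name__ == "__main__":
    print(totp(b"example-shared-secret"))            # a new code every 30 seconds
```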

In a recent article, Robert Lemos describes an actual theft carried out with a Trojan that rides along on one-time password sessions. 

The theft happened despite Ferma’s use of a one-time password, a six-digit code issued by a small electronic device every 30 or 60 seconds. Online thieves have adapted to this additional security by creating special programs–real-time Trojan horses–that can issue transactions to a bank while the account holder is online, turning the one-time password into a weak link in the financial security chain. “I think it’s a broken model,” Ferrari says.

Source: Real-Time Hackers Foil Two-Factor Security, Robert Lemos, Technology Review, 18 September 2009

The use of multiple factors of authentication is often viewed as a panacea for sensitive data access control challenges.  However, it was only a matter of time before attackers found a way to exploit these methods.  So what do we do?  How can we ensure our business and personal systems are protected when we perform online transactions, like banking or accessing strategic business data?  There are multiple answers to this question, which, implemented together, provide a layered approach.

  1. Continue to use multi-factor authentication.  This is still a good way to thwart the majority of attempts to get to your data, and it’s far better than using only a traditional password.
  2. Keep patching your systems and updating your AV solutions.  Patching is still one of the best ways to keep bad stuff off your endpoint devices.  Combined with AV (anti-malware) software, patching can smack down bad stuff crawling over the wire.
  3. Remove local admin access—even for you.  No one should browse the Web while logged in with an account that allows installation of anything on the desktop.  This is much easier with Windows Vista and Windows 7, but the large number of Windows XP machines still running at the office and at home requires some special effort to make this happen.
  4. Consider using a sandbox or virtual machine.  The best way to prevent unwanted software from making a home on your PC is to browse the Web with a browser running in a sandbox.  Products like Sandboxie provide a free solution for isolating any Internet activity to a work area with read-only access to the hard drive, system files, etc.  When finished, kill the sandbox and everything picked up along the way simply goes away.  Another approach is using virtual machines.  For home or home office, Sun’s VirtualBox is an excellent choice.  For larger businesses, VMware is an option.  However, beware of using the same sandbox or VM for casual browsing and for accessing your bank account.  Remember, anything installing itself in your VM or in your sandbox will function there as it would on your actual desktop.

Yes, sensitive data on QA and Development servers is still sensitive

In Access Controls, Business Continuity, Data Security, Network Security, Security Management on August 18, 2009 at 11:48

Any organization with an effective software development lifecycle (SDLC) builds QA and development environments to test new or upgraded systems.  Testing, whether unit (developer) or user acceptance (UAT), requires that the application have access to data which looks very close to production data, including construction of all data dependencies.  The fastest way to make this happen is to copy production data into the test and development databases.  However, perception of the sensitivity of data in these non-production environments is often… well… wrong.

I like to practice data-centric security.  This means security controls are about protecting sensitive data and access by critical systems to that data.  So if someone moves a customer database, for example, to a development server, the data should be protected with the same controls used to protect it in production.  Organizations often take a system-centric approach to security, assuming that servers, workstations, and data not in the production environment don’t require the same level of trustworthiness.

Research commissioned by enterprise applications vendor Micro Focus and carried out by the Ponemon Institute surveyed 1,350 application development staff at UK and US firms with turnover between $10m (£6.1m) and $20bn-plus.

The past 12 months have seen data breaches at 79 per cent of respondents, with the same amount using live production data in application development and testing. But just 30 per cent of firms mask this data during the process.

Application testing takes place on at least a weekly basis at 64 per cent of companies, with 90 per cent claiming it happens once a month or more. A mere seven per cent of respondents said data protection procedures were more rigorous during development and testing than during normal production.

Source: Lax data masking hits four in five firms, Sam Trendall, CRN, 18 August 2009

Granted, the purpose of the study was ostensibly to promote a data masking solution.  But it demonstrates the need for better focus on non-production data stores.  In other words, data in QA and development systems must be managed with the same rigor as that residing in production.  And if extending security controls to these systems is not feasible, then data masking is necessary.
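As an illustration of what masking on the way out of production can look like, here is a small sketch.  The table layout and column names (ssn, customer_name) are invented; the point is that the copy job strips or pseudonymizes sensitive values before developers and testers ever see them.

```python
# Sketch of masking sensitive columns while copying a production extract to
# a QA/dev copy. Column names are assumptions for illustration only.
import csv
import hashlib
import random

def mask_ssn(_value: str) -> str:
    """Replace a real SSN with a clearly fake, format-preserving value."""
    return f"900-{random.randint(10, 99)}-{random.randint(1000, 9999)}"

def pseudonymize(value: str) -> str:
    """Deterministic token so joins across tables still line up in test data."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_extract(src_path: str, dst_path: str) -> None:
    with open(src_path, newline="") as src, open(dst_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row["ssn"] = mask_ssn(row["ssn"])
            row["customer_name"] = pseudonymize(row["customer_name"])
            writer.writerow(row)
```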
