Tom Olzak

Archive for the ‘Content Filtering’ Category

They have the tools, just not the will…

In Application Security, Computers and Internet, Content Filtering, Cyber-warfare, Cybercrime, Data Security, Detection Controls on July 10, 2015 at 12:44

As the number of stolen government records climbs, we keep asking how so much data could be taken over the past year without detection.  The answer seems to lie in an article by Michael Cooney: the U.S. government has a detection tool called EINSTEIN, but it is only partially implemented across scattered government networks.

One of the weaknesses in the EINSTEIN implementation is the lack of any behavior analysis.  For the most part, the government is only using signature-based detection.  This is a huge controls vulnerability.
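
To make that distinction concrete, here is a minimal Python sketch; it is purely illustrative, not a description of how EINSTEIN works, and the indicator list, field names, and threshold are invented.  A signature check fires only on known-bad destinations, while even a crude behavioral baseline notices a host whose outbound volume suddenly dwarfs its history.

# Illustrative sketch: signature matching vs. a simple behavioral baseline.
# This is NOT how EINSTEIN works; names and thresholds are invented for clarity.
from statistics import mean, stdev

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}   # hypothetical signature list

def signature_alert(flow):
    """Fires only when a flow matches a known indicator."""
    return flow["dst_ip"] in KNOWN_BAD_IPS

def behavioral_alert(flow, history_bytes, sigma=3.0):
    """Fires when outbound volume deviates sharply from this host's baseline."""
    if len(history_bytes) < 10:          # not enough history to build a baseline
        return False
    mu, sd = mean(history_bytes), stdev(history_bytes)
    return flow["bytes_out"] > mu + sigma * max(sd, 1.0)

# A novel exfiltration flow to an IP that is not on any signature list:
flow = {"dst_ip": "192.0.2.50", "bytes_out": 5_000_000_000}
history = [20_000_000] * 30              # host normally sends ~20 MB per interval

print(signature_alert(flow))             # False -- signatures miss it
print(behavioral_alert(flow, history))   # True  -- the baseline deviation catches it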

What will it take for our bureaucratic quagmire of a government to implement the right controls?  Yes, all organizations are viable targets for attack.  However, detecting attacks (e.g., anomalous network/system behavior, unexpected movement of data, etc.) is paramount to a good defense.  It looks like much of the U.S. government either doesn’t get it or doesn’t care.

CryptoWall continues to spread

In Computers and Internet, Content Filtering, Cybercrime, Data Security, Ransomware on July 3, 2015 at 04:00

CryptoWall, a strain of ransomware, is a growing threat.  Attackers use it to hold an organization’s resources hostage until they get something of value.  This costs Americans millions… and it’s getting worse (FBI, 2015).

Ransomware like CryptoWall and Cryptolocker encrypts storage media on the infected machine and any media attached to it.  It then demands hundreds or thousands of dollars before the attackers agree to decrypt the hostage data.

Defense against this attack method is getting harder, as attackers find new ways to deploy CryptoWall and Cryptolocker.  Advanced attack techniques often leverage human vulnerabilities to bypass security controls.

The FBI provides a long list of defensive measures.  However, businesses should begin with a short list of controls that protect against all types of advanced malware, not just ransomware: Web filtering, spam filtering, email malware filtering, and (likely most important) denying users local administrator access.  These are in addition to best practices that should already be in place, including network segmentation with an application server abstraction layer (end-user devices to application servers to database servers) to help isolate critical data from infected end-user devices.
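
As an illustration of one item on that short list, email malware filtering, here is a minimal Python sketch; the extension blocklist is an assumed example, and a production mail gateway would pair a check like this with signature and sandbox scanning rather than rely on it alone.

# Minimal sketch of one listed control, email malware filtering:
# quarantine messages whose attachments carry extensions commonly used
# to deliver ransomware droppers. The extension list is an assumed
# example and, like any real blocklist, would need ongoing maintenance.
import email
from email import policy

RISKY_EXTENSIONS = {".exe", ".js", ".scr", ".vbs", ".jar", ".zip"}  # assumed list

def should_quarantine(raw_message: bytes) -> bool:
    msg = email.message_from_bytes(raw_message, policy=policy.default)
    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if any(filename.endswith(ext) for ext in RISKY_EXTENSIONS):
            return True
    return False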

Policies are not enough to protect mobile data…

In Access Controls, Application Security, Content Filtering, Data Leak Prevention, Data Security, Mobile Device Security, Policies and Processes, Policy-based access control, Risk Management, Security Management on December 29, 2012 at 12:27

Policy is not enough.  Ensuring sensitive information is handled in accordance with internal policy and regulatory constraints requires monitoring of all activities associated with it.  In other words, inspect what you expect… continuously.  Further, too much reliance on human behavior is a recipe for security disaster.

This week, we learned that the University of Michigan Health System, via a vendor, lost about 4,000 patient records.  The vendor, apparently authorized to access the data, copied patient records from a database to an unencrypted device.  The device, left unattended in a vehicle, was then stolen.  Sound familiar?  It should.  This scenario has appeared many times in news articles over the last several years.  While the players differed, the gaps leading to the losses were largely the same.

These conditions are growing more common, reinforced by the increasing number of devices serving as insecure mobile data storage as the BYOD (bring your own device) phenomenon tightens its hold on business operations.  Managers and business owners who believe they can simply write a policy, train employees, and move on to the next challenge are kidding themselves.

(For a detailed look at how competing interests apply pressure every day to employees trying to do the right thing, see Bruce Schneier’s Liars and Outliers.)

So what can we do to keep from becoming the subject of yet another article about mobile data loss?  Plenty.

For traditional access control environments…

First, ensure your policies have teeth.  For example, what are the sanctions for a vendor or employee who fails to follow policy?  Next, implement reasonable and appropriate technical controls to monitor traffic (e.g., IPFIX data) and aggregated logs (i.e., SIEM).  IPFIX, for example, provides near-real-time information about anomalous data flows, such as a vendor copying 4,000 records from a database.  Finally, implement a process whereby IPFIX and SIEM alerts prompt an immediate review of who did the copying, what they copied the data to, and whether the target device complies with the policies governing its device category.  For example, if security sees a data transfer to a mobile device, it should confirm that the device is encrypted and that the user is authorized to carry the data out of the building…
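
As a rough sketch of that monitoring-and-review step, the Python snippet below flags any client that pulls far more data from a database server than an assumed baseline.  The addresses, field names, and threshold are invented; a real deployment would consume IPFIX records from a flow collector and forward alerts to the SIEM for the review described above.

# Sketch of the flow-volume check described above. Addresses, field names,
# and the threshold are assumptions; real IPFIX records would arrive from a
# collector and alerts would be forwarded to the SIEM.
from collections import defaultdict

DB_SERVER = "10.0.20.15"          # hypothetical database server address
BASELINE_BYTES = 50_000_000       # assumed normal volume per client per interval
ALERT_FACTOR = 10                 # alert when a client pulls 10x the baseline

def review_flows(flow_records):
    """flow_records: iterable of dicts with 'src_ip', 'dst_ip', 'bytes' keys."""
    pulled = defaultdict(int)
    for rec in flow_records:
        if rec["src_ip"] == DB_SERVER:          # data leaving the database tier
            pulled[rec["dst_ip"]] += rec["bytes"]

    alerts = []
    for client, total in pulled.items():
        if total > BASELINE_BYTES * ALERT_FACTOR:
            alerts.append(
                f"Review: {client} pulled {total:,} bytes from {DB_SERVER} "
                "this interval -- confirm device encryption and authorization."
            )
    return alerts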

For policy-based organizations…

As BYOD expands the corporate attack surface, policy-based access controls augment the steps listed above.  By default, do not allow anyone to copy data to a mobile device that does not meet policy requirements for data protection.  Policy-based controls authorize access based on user role, the device used, the location of the user/device, the data and processes accessed, the day of the week and time at which access is requested, and the device’s compliance with security policy.  All of this is automated, removing reliance on human behavior to protect data.

(For more information on policy-based access controls, also known as context-based access controls, see Chapter 9: Securing Remote Access.)
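
To make the decision logic concrete, here is a minimal Python sketch of a context-based access check built from the factors listed above.  The attribute names and the example policy are invented for illustration and are not taken from any particular product.

# Minimal sketch of a context-based (policy-based) access decision using the
# factors listed above. Attribute names and the example policy are invented;
# real products evaluate centrally managed rules.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessRequest:
    role: str                 # e.g., "vendor", "clinician"
    device_encrypted: bool    # device compliance with security policy
    device_managed: bool
    location: str             # e.g., "on_network", "remote"
    data_sensitivity: str     # e.g., "phi", "public"
    requested_at: datetime

def allow_copy_to_device(req: AccessRequest) -> bool:
    """Deny by default; permit only when every contextual condition is met."""
    if req.data_sensitivity == "phi":
        if not (req.device_encrypted and req.device_managed):
            return False      # unencrypted or unmanaged device: never
        if req.role not in {"clinician", "records_admin"}:
            return False      # example rule: vendors cannot copy PHI to devices
        if req.location != "on_network":
            return False
        if not (req.requested_at.weekday() <= 4 and 7 <= req.requested_at.hour < 19):
            return False      # business days, 07:00-19:00 only
    return True

# Example: the scenario above -- a vendor copying records to an unencrypted device
req = AccessRequest("vendor", False, False, "on_network", "phi",
                    datetime(2012, 12, 27, 14, 0))
print(allow_copy_to_device(req))   # False: blocked automatically, with no reliance on memory of a policy memo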

Again, policies are not enough.  Without technical controls, they rely on human behavior to protect data.  This is a bad idea.  Instead, implement technical controls as far as is reasonable for your organization, and then monitor for compliance to ensure people, processes, and technology are producing expected security outcomes.

Fighting Unwanted Browsing: Web filtering is not always effective

In Access Controls, Business Continuity, Content Filtering, Data Leak Prevention, Insider risk, malware on September 23, 2009 at 12:22

Many organizations use Web filtering to block employee access to “unsuitable” sites.  Blocking usually takes the form of products like WebSense or services such as OpenDNS (with tiers ranging from free through SMB and Enterprise).  However, savvy employees will find a way around these controls.

Definitions of what constitutes an unsuitable site vary from business to business, but there is a general set of objectives which typically underlies them all.

  • Prevent viewing of pornography, hate sites, or any other material which may be interpreted as creating a hostile work environment
  • Prevent activities which may put the organization at risk, such as visiting sites
    • which present a known high risk of infecting the network with malware
    • which provide an easy way for employees to while away the workday focused on social networking, shopping, sports, or other non-business-related media

Whether an organization uses Web filtering to achieve one or all of these objectives, users will find a way around restrictions.  One of the best ways is to encrypt outgoing sessions with a client-based or hosted proxy.  Yes, most if not all Web filters allow you to block access to these sites.  And yes, restricting employee rights to install applications can help.  However, there are services which circumvent both controls.

Web filters rely on their ability to see destination information and compare it to a database of blocked sites, usually organized by category.  If a user connects to an external proxy service (not in the blocked sites list) via SSL/HTTPS, no traffic from the end-user device to the Internet is visible to the Web filter.  The result?  The user can browse to any and all sites on the Web.
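
A short Python sketch makes the gap concrete; the domains and categories below are invented examples, but the logic mirrors what a category-based filter can and cannot see once the proxy session is encrypted.

# Sketch of the gap described above: a category-based filter only sees the
# TLS destination (e.g., the CONNECT host), never the page fetched through
# an encrypted proxy. Domains and categories here are invented examples.
CATEGORY_DB = {
    "gotomypc.com": "remote-access",
    "playboy.com": "adult",
    "megaproxy.example": "anonymizing-proxy",   # known proxy URL: listed and blocked
}
BLOCKED_CATEGORIES = {"remote-access", "adult", "anonymizing-proxy"}

def filter_decision(tls_destination: str) -> str:
    category = CATEGORY_DB.get(tls_destination, "uncategorized")
    return "BLOCK" if category in BLOCKED_CATEGORIES else "ALLOW"

print(filter_decision("gotomypc.com"))          # BLOCK: direct access is filtered
print(filter_decision("megaproxy.example"))     # BLOCK: the known proxy URL is listed
print(filter_decision("xq7-rotating.example"))  # ALLOW: the proxy's new, unlisted URL
# Once the encrypted session to the unlisted host is up, gotomypc.com and
# playboy.com are fetched inside it, invisible to the filter.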

Take, for example, Megaproxy.  Figure 1 shows the message I receive on my test machine if I try to go directly to the Megaproxy site.  Why?  Because the site is categorized as a proxy site.  All proxy sites must be blocked, as they are on this network, or Web filtering becomes the proverbial exercise in futility.  But Megaproxy provides an easy way around this.

Figure 1: Megaproxy blocked

The Megaproxy service periodically changes the URL used to reach the proxy sign-on prompt shown in Figure 2, so Web filtering vendors have to play catch-up to block the current URL.  This rotating access is available only with the for-fee service, which a user can simply set up from home.  The fee is so low that any user with a strong desire to break out of IS constraints on browsing will quickly get out the credit card.  I’ve been testing the same URL for about three weeks now with no problem.

Figure 2: Megaproxy login

Once I log on, the service asks for the URL of the page I want to visit, as shown in Figure 3.  The Web filter system I’m testing blocks remote access services, such as GoToMyPC.  So I entered gotomypc.com.

Figure 3: Enter URL

Figure 4 shows the result; I easily access gotomypc.com with full functionality.  I could just as easily access playboy.com.  Note that I have to enter all addresses for sites I want to visit into the address bar provided by Megaproxy.  If I use the standard browser address bar, I will leave Megaproxy, and my traffic will once again be visible to the filtering solution.

Figure 4: gotomypc.com

Megaproxy is not malware.  Nor is it intended to make your life as a security professional miserable.  It is designed to provide safe browsing from hotels, airports, and other hotspots.  The changing URL allows secure browsing even if the hotspot tries to prevent it by blocking proxy access.

The bottom line? An organization cannot rely on Web filtering alone to prevent unsuitable Web behavior.  Rather, other controls—preventive and detective, administrative and technical—must support filtering.  For example, some organizations simply block all SSL traffic not explicitly approved for business purposes.  If your organization is using Web filtering, take a look at the gaps unique to your organization and plug them.
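
For example, the “block all SSL traffic not explicitly approved” approach amounts to a default-deny allowlist, the inverse of the category blocklist approach described above.  A minimal Python sketch, with invented destinations, looks like this:

# Sketch of the "block all SSL not explicitly approved" approach mentioned
# above: a default-deny allowlist rather than a category blocklist.
# The destinations below are invented examples of approved business sites.
APPROVED_TLS_DESTINATIONS = {
    "mail.example-corp.com",
    "crm.example-vendor.com",
    "www.bank-of-example.com",
}

def allow_tls(destination: str) -> bool:
    return destination in APPROVED_TLS_DESTINATIONS

print(allow_tls("crm.example-vendor.com"))   # True: explicitly approved
print(allow_tls("xq7-rotating.example"))     # False: unknown proxy URLs never get through

The trade-off is administrative overhead: every legitimate new HTTPS destination must be vetted and added before users can reach it.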

Fear, Trust, and Desire: Fertile ground for social engineers

In Business Continuity, Content Filtering, Cybercrime, Data Security, HIPAA, Network Security, PCI DSS, Risk Management on April 10, 2009 at 09:42

According to the recently released Microsoft Security Intelligence Report (2H2008), social engineering is taking the lead as the preferred method of infecting networks and end-user devices with malware.  Since operating system vulnerabilities are slowly disappearing and more organizations are implementing basic network controls, the easiest way to a target system is via the end user.

Fear, Trust, and Desire (FTD)

According to the Microsoft SIR, users fall prey to social engineering attacks because of three common modes of human behavior: fear, trust, and desire.  As depicted in Figure 1, each of these behaviors is targeted by specially crafted attacks.

Figure 1 (Microsoft SIR)

