Tom Olzak

Archive for the ‘malware’ Category

The Internet is Broken, Part III: Response

In Application Security, Business Continuity, Disaster Recovery, Hacking, Log Management, malware, NetFlow, Network Security, Policies and Processes, Risk Management, Security Management, SIEM on January 20, 2013 at 23:12

This is the final post in a series about the broken Internet.  In the first, we looked at SIEM.  Last week, we explored the value of NetFlow analysis.  This week, we close with an overview of incident response.

When evaluating risk, I like to use the following formula as a reference:

Basic Risk Formula

Probability of occurrence, broken into threats x vulnerabilities, helps us determine how likely it is that a specific threat might reach our information resources.  Business impact is a measure of the negative effects if a threat is able to exploit a vulnerability.  The product of Probability of Occurrence and Business Impact is mitigated by the reasonable and appropriate use of administrative, technical, and physical controls.  One such control is a documented and practiced incident response plan.
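As a rough illustration of how these factors interact (the function name and the 0-to-1 scales below are my own, not part of the formula itself), the relationship can be sketched as:

```python
def risk_score(threats: float, vulnerabilities: float,
               business_impact: float, controls: float) -> float:
    """Toy risk score on a 0-1 scale.

    Probability of occurrence = threats x vulnerabilities.
    Controls (0 = none, 1 = perfect) mitigate the product of
    probability of occurrence and business impact.
    """
    probability = threats * vulnerabilities
    return probability * business_impact * (1.0 - controls)

# A likely threat against a serious vulnerability, high impact, weak controls:
high = risk_score(0.9, 0.8, 0.9, 0.2)
# The same scenario after strengthening controls (e.g., a practiced
# incident response plan):
mitigated = risk_score(0.9, 0.8, 0.9, 0.7)
assert mitigated < high
```

The point of the sketch is only that better controls shrink residual risk; the inputs themselves still have to come from an honest threat and vulnerability assessment.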

The purpose of incident response is to mitigate business impact when we detect an exploited vulnerability.  The steps in this process are shown in the following graphic.  Following the detection of an incident (using SIEM, NetFlow, or some other monitoring control), the first step is to contain it before it can spread or cause more business impact.  Containment is easier in a segmented network; segments under attack are quickly segregated from the rest of the network and isolated from external attackers.

Response Process

Following containment, the nature of the attack is assessed.  Failing to follow this step can result in incorrectly identifying the threat, the threat agent, the attack vector, or the target.  Missing any of these can make the following steps less effective.

Once we understand the who, what, when, where, how, and why of an attack, we can eradicate it.  Eradication often takes the form of applying a patch, running updated anti-malware, or system or network reconfiguration.  When we’re certain the threat agent is neutralized, we recover all business processes.

Business process restoration requires a documented and up-to-date business continuity/disaster recovery plan.  Some incidents might require server rebuilds.  Business impact increases as a function of the time required to restore business operation.  Without the right documentation, the restoration time can easily exceed the maximum tolerable downtime: the time a process can be down without causing irreparable harm to the business.
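That arithmetic is worth doing before an incident, not during one. As a simple illustration (the task names and durations here are hypothetical), a documented restore plan can be sanity-checked against the maximum tolerable downtime:

```python
# Hypothetical restore tasks and their estimated durations, in hours.
restore_tasks = {
    "provision replacement server": 4.0,
    "rebuild from documented configuration": 6.0,
    "restore data from last backup": 3.0,
    "validate and resume business process": 1.0,
}

maximum_tolerable_downtime = 12.0  # hours this process can be down

total_restore_time = sum(restore_tasks.values())
if total_restore_time > maximum_tolerable_downtime:
    overrun = total_restore_time - maximum_tolerable_downtime
    print(f"Plan exceeds MTD by {overrun:.1f} hours; revisit the plan now.")
```

In this made-up example the plan comes up 2 hours short of the requirement, which is exactly the kind of gap a tabletop exercise should surface before a real incident does.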

Finally, we perform root cause analysis.  This involves two assessments.  One determines what was supposed to happen during incident response, what actually happened, and how can we improve.  The second assessment targets the attack itself.  We must understand what broken control or process allowed the threat agent to get as far as it did into our network.  Both assessments result in an action plan for remediation and improvement.

The Internet is broken.  We must assume that one or more devices on our network are compromised.  Can you detect anomalous behavior and react to it effectively when the inevitable attack happens?

Android security…?

In Application Security, Certificates, Cybercrime, Data Security, Hacking, malware, Mobile Device Security, security, Security Management on March 6, 2011 at 20:09

A recent post on the Frequency X Blog examines the latest Android malware, DroidDream.  The hole that allowed this is as big as they get.

Not all Windows XP security solutions meet expectations

In malware on April 14, 2010 at 08:03

This is one more example of why home users and organizations must assess the effectiveness of a solution before relying on it to protect against legacy and emerging threats.

See:  A third of Windows XP security solutions failed independent tests.

Trojan Defense: Configuring Your SOHO or Personal Infrastructure

In Business Continuity, malware, Patching, Security Management on April 10, 2010 at 08:46

Trojans continue to be a serious Internet threat and arguably the most insidious. As with any malware defense, making the right choices—and teaching users to do the same—is the only effective control. Further, continuous vigilance is required to detect and react to Trojan polymorphism.

The Challenge

Typically, Trojans gain access to a computer to collect data. The collected data are used by the Trojan’s distributor, directly or indirectly, for financial gain. To achieve those objectives, black hats go to great lengths to deliver their code surreptitiously and keep it hidden.

To prevent anti-malware (AM) software from detecting and eliminating Trojans during delivery or installation, developers have gone as far as encrypting their payloads. According to a recent Kaspersky Lab Threatpost article:

Once the malware is on the machine, anti-malware products may detect it as a malicious file. But this process is much more difficult if the Trojan itself is encrypted. Dmitry Bestuzhev, a malware analyst for Kaspersky Lab in Latin America, has been following the evolution of Brazilian banker Trojans, and has noted a recent change in their sophistication

A new (for Brazil) concept takes place between second and third stages when the Trojan.Downloader downloads and installs the Banker. On the one hand Brazilian coders obfuscate the download links using several techniques and on the other hand now they also crypt the Banker to be downloaded to the system.

It’s a crypted (specially packed) PE file. The coders from Brazil use this technique to prevent an automated malware analysis and monitoring mode by AV companies. This sample downloaded as it is on the server won’t be functional on the user machine unless it’s decrypted. The decryption mechanism in this case is included into the initial Trojan.Downloader, which first downloads malware, and then decrypts it to be able to infect the user machine (Fisher, 2010).

Once a Trojan successfully takes up residence on a computer, it begins collecting banking and other sensitive information for later transmission to its home server. And even if it is detected, cleaning steps short of a complete wipe and replace of all content will likely fail.
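A toy illustration makes clear why encrypting the payload defeats signature scanning. The XOR “packing” and the signature string below are stand-ins of my own, far simpler than the real PE crypters described above, but the principle is the same:

```python
SIGNATURE = b"steal_banking_credentials"  # stand-in for an AV byte signature

def xor_pack(payload: bytes, key: int) -> bytes:
    """Trivial stand-in for a crypter: XOR every byte with a one-byte key.

    Applying it twice with the same key restores the original bytes.
    """
    return bytes(b ^ key for b in payload)

payload = b"...steal_banking_credentials..."
packed = xor_pack(payload, 0x5A)

# A naive scanner matches the plain payload but not the packed file:
assert SIGNATURE in payload
assert SIGNATURE not in packed

# Only after the downloader decrypts it does the signature reappear:
assert SIGNATURE in xor_pack(packed, 0x5A)
```

The file sitting on the distribution server never contains the bytes the scanner is looking for; the recognizable code exists only after the downloader stage runs its decryption routine on the victim machine.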


Fighting Unwanted Browsing: Web filtering is not always effective

In Access Controls, Business Continuity, Content Filtering, Data Leak Prevention, Insider risk, malware on September 23, 2009 at 12:22

Many organizations use Web filtering to block employee access to “unsuitable” sites.  Blocking usually takes the form of products like WebSense and services such as OpenDNS (with tiers ranging from free through SMB and Enterprise).  However, savvy employees will find a way around these controls.

Definitions of what constitutes an unsuitable site vary from business to business, but there is a general set of objectives which typically underlies them all.

  • Prevent viewing of pornography, hate sites, or any other material which may be interpreted as creating a hostile work environment
  • Prevent activities which may put the organization at risk, such as visiting sites
    • which present a known high risk of infecting the network with malware
    • which provide an easy way for employees to wile away the workday focused on social networking, shopping, sports, or other non-business related media

Whether an organization uses Web filtering to achieve one or all of these objectives, users will find a way around restrictions.  One of the best ways is to encrypt outgoing sessions with a client-based or hosted proxy.  Yes, most if not all Web filters allow you to block access to these sites.  And yes, restricting employee rights to install applications can help.  However, there are services which circumvent both controls.

Web filters rely on their ability to see destination information and compare it to a database of blocked sites, usually organized by category.  If a user connects to an external proxy service (not in the blocked sites list) via SSL/HTTPS, no traffic from the end-user device to the Internet is visible to the Web filter.  The result?  The user can browse to any and all sites on the Web.
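That blind spot is easy to model. In the sketch below (the hostnames and the blocklist are made up), a category filter that acts only on the destination it can see blocks a listed site, but passes the same browsing once it is wrapped in an HTTPS session to a proxy the database has not caught up with:

```python
# Hypothetical category database of blocked destinations.
BLOCKED_SITES = {"socialsite.example", "knownproxy.example"}

def filter_allows(visible_destination: str) -> bool:
    """A Web filter can only act on the destination hostname it can see."""
    return visible_destination not in BLOCKED_SITES

# Direct request: the filter sees the real destination and blocks it.
assert not filter_allows("socialsite.example")

# Same browsing via an SSL/HTTPS proxy not yet in the database: the
# filter sees only the proxy's hostname; the real destination travels
# inside the encrypted tunnel, invisible to the filter.
assert filter_allows("fresh-proxy-url.example")
```

The filter is not malfunctioning here; it is doing exactly what it can with the only information available to it, which is why the fix has to come from additional controls rather than a better blocklist.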

Take, for example, Megaproxy.  Figure 1 is the message I receive on my test machine if I try to go directly to the Megaproxy site.  Why?  Because the site is considered a proxy site.  All proxy sites must be blocked—as they are on this network—or Web filtering is the proverbial exercise in futility.  But Megaproxy provides an easy way around this.

Figure 1: Megaproxy blocked

The Megaproxy service periodically changes the URL used to reach the proxy sign-on prompt shown in Figure 2, so Web filtering vendors have to play catch-up to block the current URL.  This rotating URL is a feature of the for-fee service, which a user can simply set up from home.  The fee is so low that any user with a strong desire to break out of IS constraints on browsing will quickly get out the credit card.  I’ve been testing the same URL for about three weeks now with no problem.

Figure 2: Megaproxy login

Once logged on, the service asks for the URL of the page I want to visit, as shown in Figure 3.  The Web filter system I’m testing blocks remote access services, such as GoToMyPC.  So, I entered the GoToMyPC address.

Figure 3: Enter URL

Figure 4 shows the result; I easily access the blocked site with full functionality.  I could just as easily access any other filtered destination.  Note that I have to enter all addresses for sites I want to visit into the address bar provided by Megaproxy.  If I use the standard browser address bar, I will leave Megaproxy, and my traffic will once again be visible to the filtering solution.

Figure 4

Megaproxy is not malware.  Nor is it intended to make your life as a security professional miserable.  It is designed to provide safe browsing from hotels, airports, and other hot spots.  The changing URL allows use of secure browsing even if the hotspot tries to prevent it by blocking proxy access.

The bottom line? An organization cannot rely on Web filtering alone to prevent unsuitable Web behavior.  Rather, other controls—preventive and detective, administrative and technical—must support filtering.  For example, some organizations simply block all SSL traffic not explicitly approved for business purposes.  If your organization is using Web filtering, take a look at the gaps unique to your organization and plug them.
