Tom Olzak

Posts Tagged ‘log management’

Three controls to deal with a broken Internet…

In Application Security, Business Continuity, Computers and Internet, Cybercrime, Data Leak Prevention, Data Security, Log Management, Network Security, Risk Management, Security Management, SIEM on January 4, 2013 at 17:24

The Internet is broken.  Browsers are gaping holes in our security frameworks.  Certificates are becoming a liability as cyber-criminal activity and certificate authority negligence weaken our trust in the process.  If we continue to see defense only in terms of preventing the bad guys from reaching our end-point devices, we will surely lose the security war.  The answer is to shift perspective.

First, it’s important we assume that every end-user device is potentially infected.  Further, we must assume that one or more of the servers in our data center are infected at any point in time.  This might not be true for all organizations, but it is a smart baseline assumption.  Once we accept that we are vulnerable and likely infected, it is easier to begin supporting preventive controls with comprehensive methods to identify, contain, and manage inevitable security breaches: SIEM, NetFlow, and incident response.

Over this and the next two articles, I will take a high-level look at each of these breach-control methods.  Further, I will provide links to resources providing detailed information about how to design and deploy them.


SIEM (security information and event management) is a comprehensive approach to assessing system and network behavior.  It requires collecting logs from various devices across the network, including firewalls, IPS/IDS, servers, and switches.  The graphic below depicts a very simple SIEM architecture.  Logs collected by each device are sent in near real time to a syslog server.  “Syslog is a standard for computer data logging. It separates the software that generates messages from the system that stores them and the software that reports and analyzes them” (“syslog”, 2013).  This is known as log aggregation.
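To make the aggregation step concrete, here is a minimal sketch in Python of parsing traditional BSD-style (RFC 3164) syslog lines from several devices into structured events.  The sample messages and field names are illustrative, not taken from any particular product:

```python
import re

# Simplified RFC 3164-style syslog line: "<PRI>Mon DD HH:MM:SS host program: message"
SYSLOG_RE = re.compile(
    r"<(?P<pri>\d+)>"
    r"(?P<timestamp>\w{3} +\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<program>[^:]+): "
    r"(?P<message>.*)"
)

def parse_syslog_line(line):
    """Parse one syslog line into a dict; return None if it doesn't match."""
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    event = m.groupdict()
    pri = int(event.pop("pri"))
    # PRI encodes facility and severity: PRI = facility * 8 + severity.
    event["facility"], event["severity"] = divmod(pri, 8)
    return event

def aggregate(lines):
    """Collect parsed events from many devices into one list (log aggregation)."""
    return [e for e in (parse_syslog_line(l) for l in lines) if e]

raw = [
    "<34>Jan  4 17:24:01 fw01 kernel: DROP IN=eth0 SRC=203.0.113.9",
    "<86>Jan  4 17:24:03 srv02 sshd: Failed password for root from 203.0.113.9",
]
events = aggregate(raw)
print(events[0]["host"], events[0]["severity"])  # fw01 2
```

In practice the aggregation point is a dedicated syslog server (Python’s standard library, for instance, ships a `logging.handlers.SysLogHandler` for sending to one); the value for SIEM comes from normalizing events from many device types into one comparable structure.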

SIEM Architecture


Aggregated logs are sent to a correlation server for analysis.  The correlation server examines all events received from across the network and attempts to mine attack patterns or other anomalous behavior.  Anomalous-behavior identification is effective only if the SIEM solution is properly tuned.  In other words, the correlation server must know which patterns are normal for your network and which fall outside the alert thresholds you set.
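As a sketch of what a single correlation rule might look like, the following Python fragment flags a brute-force pattern: a source IP generating several failed logins within a short window.  The threshold and window values are illustrative tuning parameters, and a real correlation engine is far more sophisticated:

```python
from collections import defaultdict

def correlate_failed_logins(events, threshold=5, window=60):
    """Flag source IPs with >= threshold failed logins within `window` seconds.

    `events` is a list of (epoch_seconds, source_ip) tuples for failed-login
    events already extracted from the aggregated logs.
    """
    by_source = defaultdict(list)
    for ts, src in sorted(events):
        by_source[src].append(ts)

    alerts = []
    for src, times in by_source.items():
        start = 0
        for end in range(len(times)):
            # Slide the window start forward until it spans <= `window` seconds.
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= threshold:
                alerts.append(src)
                break
    return alerts

events = [(t, "203.0.113.9") for t in range(0, 50, 10)]  # 5 failures in 40s
events.append((300, "198.51.100.7"))                     # a lone failure
print(correlate_failed_logins(events))  # ['203.0.113.9']
```

Tuning here means choosing `threshold` and `window` so that normal activity on your network (password typos, service retries) stays below the alert line while genuine attack patterns cross it.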

All relevant information is usually available via a portal.  For example, a SIEM management server might post updated correlation results every five to ten minutes.  Events meeting criteria you set can also trigger alerts to administrators and security personnel via SMS, email, etc.
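One simple way to picture those alert criteria is a routing table keyed on severity.  The channel names and thresholds below are assumptions for illustration, not any particular SIEM product’s configuration:

```python
# Illustrative alert-routing rules, checked from most to least severe.
ROUTES = [
    {"min_severity": 8, "channels": ["sms", "email"]},   # critical: page someone
    {"min_severity": 5, "channels": ["email"]},          # warning: mail the team
]

def route_alert(finding):
    """Return the notification channels for a correlated finding, if any."""
    for rule in ROUTES:
        if finding["severity"] >= rule["min_severity"]:
            return rule["channels"]
    return []  # below all thresholds: visible only in the portal

print(route_alert({"severity": 9, "summary": "possible brute force"}))  # ['sms', 'email']
print(route_alert({"severity": 3, "summary": "routine anomaly"}))       # []
```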

Logs can tell us a lot about behavior, but they fall short of providing insight into how data is actually moving across the data center or across our network segment boundaries.  This is the topic of the next article in this series: NetFlow (IPFIX).


Syslog. (2013)  Retrieved January 4, 2013 from

Written Policy without Process and Oversight is Just Wasted Effort

In Business Continuity, Data Security, Policies and Processes on April 20, 2009 at 12:05

Whether prompted by regulations or by management intent to comply with security best practices, the first step after creating a security strategy is development of policies.  However, some organizations treat policies as the endgame.  Those taking this approach are not only misguided; they are potentially exposing their sensitive data and critical systems to greater risk.

The Policy Myth

Security policies are simple statements of intent.  They communicate to employees management’s expectations regarding acceptable use, handling, and implementation of critical systems, as well as the confidentiality, integrity, and availability of sensitive information.  However, they don’t provide for consistent application of management intent.  Nor do they describe and mandate a system of oversight to ensure compliance and effectiveness.

David Aminzade addresses these issues in an article posted today on Help Net Security.  He begins with,

Most large organizations maintain a detailed corporate security policy document that spells out the “dos and don’ts” of information security. Once the policy is in place, the feeling is of having achieved ‘nine-tenths of the law’, that is, that the organization is in effect ‘covered’. This is a dangerous misconception. Because much like in the world of law and order, while creation of law is fundamental, implementation and enforcement of law is what prevents chaos.

Source: Is having a security policy in place really nine-tenths of the law?, David Aminzade, Help Net Security, 20 April 2009

Making Policies Real

To make policies ‘real’ to technical and business-process employees, those employees must first be aware the policies exist.  They must also follow documented processes intended to result in compliance.  So the next step after policy approval is development of supporting processes.

Processes typically provide step-by-step instructions for how to perform a single task or set of tasks.  If an employee follows a process, he or she will automatically produce compliant outcomes—at least that’s the expectation.  Effective processes are developed in collaboration with all stakeholders, and then introduced to existing employees via training programs.  New hires should be provided with similar training.

This is another point in developing a security program where some organizations stop, believing they are now compliant.  Stopping here strengthens their defenses beyond those of the policy-only managers, but they still fall short of a truly effective security program.

The final step in making policies real to employees and the organization is implementation of oversight tools and processes.  Aminzade included a good-start list in his article:

  • Continuously monitor firewall and other security device changes, compare them to the corporate security policy, and send out alerts if the policy has been violated.
  • Track and report all changes in a uniform, simple and straightforward style.
  • Provide a vendor-neutral, top-down view of all security infrastructure that an executive can understand.
  • Enable security administrators to test a change against security policy before it is implemented, to assess and avoid risk.
To this list I would add internal audits, which provide an outsider’s perspective of processes and outcomes.  I would also add both internal and external vulnerability scans and penetration tests.

The first bullet should be part of an overall log management process.  I always recommend outsourcing this activity.  It is tedious work which can be done at less cost by a managed security services provider (MSSP).
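As a rough illustration of that first bullet, a monitoring script might diff the deployed firewall rule set against a policy whitelist and flag anything not explicitly permitted.  The flat (source, destination, port) rule tuples here are a deliberate simplification of real firewall configuration:

```python
def violations(firewall_rules, policy):
    """Compare a deployed firewall rule set against corporate policy.

    `policy` lists (src, dst, port) tuples the policy explicitly permits;
    any deployed rule outside that list is flagged for an alert.
    """
    allowed = set(policy)
    return [rule for rule in firewall_rules if rule not in allowed]

policy = [("10.0.0.0/8", "dmz", 443), ("10.0.0.0/8", "dmz", 80)]
deployed = [
    ("10.0.0.0/8", "dmz", 443),       # permitted by policy
    ("any", "db-segment", 1433),      # undocumented change: flag it
]
print(violations(deployed, policy))  # [('any', 'db-segment', 1433)]
```

Run continuously against each configuration change, a check like this turns the written policy into something enforceable rather than aspirational.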

Tasks associated with the second and fourth bullets are typically part of a change management process.  Change management

… deals with how changes to the system are managed so they don’t degrade system performance and availability. Change management is especially critical in today’s highly decentralized, network-based environment where users themselves may be applying many changes. A key cause of high cost of ownership is the application of changes by those who don’t fully understand their implications across the operating environment.

Source: Implement change management with these six steps, Change Tech. Solutions, ZDNet, 8 January 2004

Along with oversight, sanctions should be imposed fairly, as close to the actual event as possible, and clearly stated in advance as possible consequences of not following approved procedures.  Formal, documented investigations can help change employee behavior even when risky actions are not cause for disciplinary action.  Investigations also help raise management awareness of potential employee or process issues.

The final word

Getting compliant is about documenting management intent, building enforceable processes which produce consistent outcomes, and monitoring to ensure the network is as secure as expected.  Assuming employees will act safely simply because a policy exists, or making assumptions about security outcomes, is a good way to end up as tomorrow’s media target because of a breach or a malware-induced network shutdown.
