Friday 28 January 2011

A few good metrics for continuous monitoring.

This article outlines a number of metrics that can facilitate a continuous security monitoring methodology.

Tuesday 11 January 2011

Psychological Screening

The fallout from the recent spate of WikiLeaks revelations seems to have brought a sharper focus on the role of the employee in an organisation, and on the risk of data breach or theft by the individuals you employ. Two recent articles focus on the need for pre-screening and ongoing monitoring of psychological issues. A seminar last year from an insider threat analyst at a large UK bank provided a fascinating insight into the factors that can tempt an employee to steal information, personal financial problems among them. It also looks like the federal government is screening for psychological factors such as despondence and grumpiness. If these are actual detection points, then I think I need to install some monitoring software on my own system to ensure that I don't steal anything...

The psychological factors are important, but only when allied with appropriate monitoring and controls. One example is the lack of architecture testing, which can lead to individuals gaining access rights where none should exist. Proper testing and change control need to be implemented, but consider the number of applications multiplied by the number of users multiplied by the number of permissions, then factor in the lack of proper monitoring and evaluation of actual permission needs, employees leaving, and so on, and we start to get into real difficulty with this form of assurance.
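The scale problem above is easy to underestimate, so here is a rough back-of-the-envelope sketch of why entitlement review gets out of hand. All figures are hypothetical, purely for illustration.

```python
# Hypothetical figures for a mid-sized organisation: the review workload
# grows with every application/user/permission combination.
apps = 200          # applications in use
users = 5_000       # active accounts
perms_per_app = 20  # distinct permissions per application

# Worst-case number of user/permission pairs a reviewer could face
pairs = apps * users * perms_per_app
print(pairs)  # 20,000,000 potential entitlements

# Even reviewing 0.1% of them per day, a single full pass takes years
reviewed_per_day = pairs // 1000
days_to_cover = pairs / reviewed_per_day
print(days_to_cover)  # 1000 days
```

The point is not the exact numbers, but that manual assurance over a multiplicative space like this cannot keep pace without automation.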


Tuesday 10 August 2010

Thunder Struck

Security experts have managed to DDoS systems with a $6 investment and their Thunder Clap program. I am surprised that it has taken this long for somebody to do this in an official capacity.

I wonder when stolen credit card details will be used to create EC2 accounts, and do something far more devastating.

Sunday 11 July 2010

Continuous Monitoring and an 85% drop in Risk

This article reminds me of the presentation I saw by Alan Paller at Infosec last year. Paller is the director of research at the SANS Institute. He presented testimony to Congress on work within the State Department, highlighting the development and use of a continuous monitoring methodology that has led to an 85% drop in measured risk. In part this was achieved by replacing the existing paper-based reporting system with an IT-driven continuous monitoring approach. What made the approach effective was the use of metrics that were Comparative, Numeric, Reliable and Authoritative. At Infosec, Paller went into a little more detail about these aspects.

Comparative: The metric needs to show the relative quality of the effort undertaken by different teams. This creates a healthy climate of competition and motivation.

Numeric: Existing FISMA standards dictate that the reporting systems produce a risk report every quarter or so. That is a substantial delay in terms of response time and the ability to gain instant situational awareness. One important factor in ensuring success was to automate the measurement of these controls; without it, the overhead would have been a barrier to the system being effective. The monitoring period was reduced to 72 hours, allowing a better response time as well as making gains visible.

Reliable: Based on repeatable tests, two or more evaluators would get the same results.

Authoritative: Getting consensus from an acknowledged group of experts allows you to get buy-in from the very individuals who will be assessed by the measurements.

It is an interesting approach, as it advocates a far more scientific methodology for monitoring. However, Paller also highlighted that the human element needs to be considered as equally important. Therefore, be fair when measuring metrics: if a team cannot change or affect something, it is not fair to measure it. Finally, celebrate success and make progress visible.
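To make the four properties concrete, here is a minimal sketch of the kind of comparative, numeric scoring described above: automated scan results are reduced to one number per team so results can be ranked and re-measured every cycle. The team names, finding counts and severity weights are all hypothetical.

```python
# Hypothetical findings per team from an automated scan
findings = {
    "team-a": {"critical": 2, "high": 5, "medium": 20},
    "team-b": {"critical": 0, "high": 3, "medium": 40},
    "team-c": {"critical": 1, "high": 1, "medium": 5},
}

# Fixed, agreed weights make the score repeatable: two evaluators
# running the same scan get the same number (the Reliable property).
weights = {"critical": 10, "high": 5, "medium": 1}

def risk_score(counts):
    """Reduce a team's findings to a single numeric risk score."""
    return sum(weights[sev] * n for sev, n in counts.items())

# Comparative: rank teams by score, lowest (best) first
ranking = sorted(findings, key=lambda t: risk_score(findings[t]))
for team in ranking:
    print(team, risk_score(findings[team]))
```

The Authoritative property lives outside the code: the weights only work if the people being measured accept where they came from.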

MESP and EaaS

I've just watched a fascinating documentary about the Rolls-Royce jet engine company. I had always thought of them as a Plain Old Engine Company. To my surprise and delight, it turns out that they're an EaaS and an MESP.

As part of their offering, they provide a service package with each engine. Airlines are guaranteed a working engine, no matter what, and pay only for the miles they clock up. This is, rather wonderfully, Engine as a Service (EaaS).

In addition, each serviced engine has a near real-time monitoring system (approx. 90-second delay), which allows engineering staff located at an operations centre to follow and respond to any issues flagged by the monitoring system. Any reported problems are then dealt with by ground crews at the airport where the plane lands. The data sent to the operations centre is analysed, and deviations from the norm are flagged and followed up. This, of course, sounds exactly like the Managed Security Service Provider (MSSP) model that has expanded a great deal in recent years; this instance should probably be called a Managed Engine Service Provider (MESP).
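The "deviations from the norm" flagging described above can be sketched very simply: compare each telemetry reading against a baseline, and flag anything more than a few standard deviations out for the operations centre to follow up. The readings and threshold below are invented for illustration; real engine telemetry is vastly richer.

```python
from statistics import mean, stdev

# Hypothetical baseline of normal exhaust-gas-temperature readings
baseline = [612, 608, 615, 610, 609, 611, 614, 607]
mu, sigma = mean(baseline), stdev(baseline)

def flag(reading, threshold=3.0):
    """Return True if a reading deviates more than `threshold`
    standard deviations from the baseline mean."""
    return abs(reading - mu) / sigma > threshold

print(flag(612))  # an in-family reading: not flagged
print(flag(680))  # well outside the baseline: flagged
```

The same shape of check, a learned baseline plus a deviation threshold, underlies a great deal of MSSP-style anomaly detection, which is why the two models rhyme so closely.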

What fascinates me about this model is the emphasis on an engineering approach, from the testing, building and deployment of each engine component right through to the assembly, commissioning and even in-air monitoring of the system. I wonder how this affects the false-positive rates that the operations centre employees must deal with, and whether the problem is as large as it is for an MSSP engineer monitoring the voluminous output of a computer system.

There's an old(ish) debate within the software engineering community which holds that there's not enough emphasis on the engineering side. The lack of engineering approaches has led to code defects, and therefore to the abundance of security issues that need to be dealt with. Combine that with the fact that we look for security events in stupendous volumes of data that are highly unlikely to yield useful information, and are often not fit for purpose, and you've got a serious problem when trying to hunt down genuine issues. Not all security problems stem from defects caused by a lack of standards, but it would be interesting to see how a modern MSSP would look if we had an operating-system equivalent of Rolls-Royce.

Thursday 8 July 2010

Top 10 reasons your security sucks

There's a great post over at Infosec Island, commenting on the cultural, procedural and technical problems that still appear to be present in infosec environments.

All of the reasons are pretty much spot on, but the following stood out from our technology perspective:

6. The tools you use are ineffective (they don’t really work) and inefficient (they cost way too much)

5. Your security vendor is lying to you and why shouldn’t they, you believe them

2. You're dealing with the exact same problems you dealt with a decade ago, only it seems so much worse today than back then

It seems to me that reasons 2, 5 and 6 are interlinked. After speaking with someone we know at a local data centre, as well as folks at a local MSSP, it's clear that the lack of innovation in this domain is stark, to say the least. As Niladri highlights, all of the trade shows he's been to in the past, in his other life in other industries, have had some form of innovation. Information security? Nope. Nada. Nothing. The path most of these vendors are on is BIGGER, FASTER, MORE of the same old stuff that's becoming increasingly obsolete.


Tuesday 22 June 2010

Centre of Excellence Launch

We have been invited to speak at the Symposium on Security and Cybercrime, which will also see the launch of the Centre of Excellence for Security and Cybercrime. The purpose of the Centre is to bring together business, law enforcement and academia in order to educate, inform and disseminate best practice through knowledge transfer and other placement activities. The Centre is an exciting prospect for Scotland, and we're very much looking forward to being part of it. We'll be talking about some of the threats facing the virtualisation technology being used to power the cloud.

The event is being held at Edinburgh Napier University's Craiglockhart campus, and registration is free at: