Tuesday, 10 August 2010

Thunder Struck

Security experts have managed to DDoS systems with a $6 investment and their Thunder Clap program. I am surprised that it has taken somebody this long to do it in an official capacity.

I wonder when stolen credit card details will be used to create EC2 accounts, and do something far more devastating.

Sunday, 11 July 2010

Continuous Monitoring and an 85% drop in Risk

This article reminds me of the presentation I saw by Alan Paller at Infosec last year. Paller is the director of research at SANS. He presented to Congress a testimonial of his work within the State Department, highlighting the development and use of a continuous monitoring methodology that has led to an 85% drop in measured risk. In part this was achieved by replacing the existing paper-based reporting system with an IT-driven, continuous monitoring approach. What made the approach effective was the use of metrics that were Comparative, Numeric, Reliable and Authoritative. At Infosec, Paller went into a little more detail about these aspects.

Comparative: The monitoring needs to be able to show the relative quality of the effort undertaken by different teams. This creates a healthy climate of competition and motivation.

Numeric: Existing FISMA standards dictate that the reporting systems produce a risk report every quarter or so. This is a considerable delay in terms of response time and the ability to gain instant situational awareness. One important factor in ensuring success was to automate the measurement of these controls; without automation, the overhead would have been a barrier to effectiveness. The monitoring period was reduced to 72 hours, allowing a better response time as well as illustrating the gains being made.

Reliable: Measurements are based on repeatable tests, so two or more evaluators would get the same results.

Authoritative: Getting consensus from an acknowledged group of experts allows you to get buy-in from the very individuals who will be assessed by the measurements.
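To make the idea concrete, here's a minimal sketch of a score with those properties. This is not Paller's actual system: the control names, weights and failure counts are all invented. The score is numeric and repeatable (same inputs, same result), comparable between teams, and the weights are where the expert consensus would come in.

```python
# Hypothetical control checks: each maps to the number of failing hosts
# for a team. Weights would come from expert consensus (the
# "authoritative" property).
CONTROL_WEIGHTS = {
    "missing_patches": 3.0,
    "weak_passwords": 5.0,
    "open_admin_ports": 4.0,
}

def risk_score(failures_per_control, weights=CONTROL_WEIGHTS):
    """Weighted sum of failing checks -- numeric and repeatable."""
    return sum(weights[c] * n for c, n in failures_per_control.items())

def league_table(team_results):
    """Rank teams by score, lowest (best) first -- the comparative part."""
    scored = {team: risk_score(r) for team, r in team_results.items()}
    return sorted(scored.items(), key=lambda kv: kv[1])

results = {
    "team_a": {"missing_patches": 4, "weak_passwords": 1, "open_admin_ports": 0},
    "team_b": {"missing_patches": 1, "weak_passwords": 0, "open_admin_ports": 2},
}
for team, score in league_table(results):
    print(team, score)
```

Run on a 72-hour cycle instead of quarterly, a table like this is what turns a paper exercise into something teams actually compete on.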

It is an interesting approach, as it advocates a far more scientific methodology for monitoring. However, Paller also highlighted that the human element needed to be considered as equally important. Therefore, be fair when measuring metrics: if a team cannot change or affect something, it is not fair to measure it. Finally, celebrate success and make progress visible.

MESP and EaaS

I've just watched a fascinating documentary about the Rolls Royce jet engine company. I had always thought of them as a Plain Old Engine Company. To my surprise and delight, it turns out that they're an EaaS and an MESP.

As part of their offering, they provide a service package with each engine. The airline companies are guaranteed an engine, no matter what, and only pay for the miles they clock up. This is, rather wonderfully, Engine as a Service (EaaS).

In addition, each serviced engine has a near real-time monitoring system (approx. 90-second delay), which allows engineering staff at an operations centre to follow and respond to any issues flagged by the system. Reported problems are then dealt with by ground crews at the airport where the plane will land. The data sent to the operations centre is analysed, and deviations from the norm are flagged and followed up. This, of course, sounds exactly like the Managed Security Service Provider (MSSP) model that has expanded a great deal in recent years. However, this instance should probably be called a Managed Engine Service Provider (MESP).
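The "deviations from the norm" step can be sketched very simply. This is a guess at the general principle, not Rolls Royce's actual analytics; the readings and the three-sigma threshold are made up for illustration.

```python
# Flag any telemetry reading more than k standard deviations from the
# mean of a known-good baseline.
import statistics

def flag_deviations(baseline, readings, k=3.0):
    """Return readings more than k standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return [r for r in readings if abs(r - mean) > k * sd]

# e.g. exhaust-gas temperature samples (made-up numbers)
baseline = [620, 618, 622, 619, 621, 620, 623, 617]
print(flag_deviations(baseline, [621, 648, 619]))
```

The interesting question is how tight a baseline like this can be made: the narrower the spread of "normal", the fewer false positives land on the operations centre's desk.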

What fascinates me about this model is the emphasis on an engineering approach, from the testing, build and deployment of each engine component right through to the assembly, commissioning and even in-air monitoring of the system. I wonder how this affects the false-positive rates that the operations centre employees must deal with, and whether it is as large a problem as it is for an MSSP engineer monitoring the voluminous output of a computer system.

There's an old(ish) debate within the software engineering community that there's not enough emphasis on the engineering side. The lack of engineering approaches has led to code defects, and therefore to the abundance of security issues that need to be dealt with. Combine that with the fact that we look for security events in stupendous volumes of data that are highly unlikely to yield useful information, and are often not fit for purpose, and you've got a serious problem when trying to hunt down serious problems. Not all security problems are based on defects caused by a lack of standards, but it would be interesting to see how a modern MSSP would look if we had an operating system equivalent of Rolls Royce.

Thursday, 8 July 2010

Top 10 reasons your security sucks

There's a great post over at Infosec Island commenting on the cultural, procedural and technical problems that still appear to be present in infosec environments.

All of the reasons are pretty much spot-on, but the following stuck out from our technology perspective:

6. The tools you use are ineffective (they don’t really work) and inefficient (they cost way too much)

5. Your security vendor is lying to you, and why shouldn't they? You believe them

2. You're dealing with the exact same problems you dealt with a decade ago, only it seems so much worse today than back then

It seems to me that reasons 2, 5 and 6 are interlinked. After speaking with someone we know at a local data centre, as well as folks at a local MSSP, it's clear that the lack of innovation in this domain is stark, to say the least. As Niladri highlights, the trade shows he attended in his other life, in other industries, all had some form of innovation. Information security? Nope. Nada. Nothing. The path most of these guys are on is BIGGER FASTER MORE of the same old stuff that's becoming increasingly obsolete.

Tuesday, 22 June 2010

Centre of Excellence Launch

We have been invited to speak at the Symposium on Security and Cybercrime, which will also see the launch of the Centre of Excellence for Security and Cybercrime. The purpose of the Centre is to bring together business, law enforcement and academia in order to educate, inform and disseminate best practice through knowledge transfer and other placement activities. The Centre is an exciting prospect for Scotland, and we're very much looking forward to being part of it. We'll be talking about some of the threats facing the virtualisation technology being used to power the cloud.

The event is being held at Edinburgh Napier University's Craiglockhart campus, and is free to register at:



Wednesday, 9 June 2010

Cloud Insecurity

After a disappointing perspective on cloud security from InfoSec this year, where statements like "Cloud is Outsourcing Mach 2" were made, I saw a ray of light at the e-crime Cloud Security Forum. It was the first conference this year that seemed not to be influenced by the economic benefits of moving to the Cloud (fast!).

My Neighbour is a Hacker

The dire consequences of not covering angles like multi-tenancy and cloud cleansing were highlighted. If I knew that my next-door neighbour was a serial killer, I probably wouldn't sleep at night. The same goes for the public Cloud: I share my resources with my neighbour, and how many attack vectors does that open! How can I be sure that the VM instance I created and then discarded was actually cleansed properly, with nothing left in slack memory? Verizon's answer to this is dedicated blades on demand, but how scalable or easy to use will that be? We will have to wait and find out.

Logs? What Logs?

If you share resources with your neighbours, surely you share logs too! How do we audit logs that have intermingled entries from your neighbour? Surely this affects data protection and privacy laws. This simple fact also affects every single SIM and SIEM tool out there. These tools already promise too much and deliver too little; this added handicap will make them totally inapplicable to the cloud, as they rely primarily on logs.
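To illustrate the attribution problem with a hypothetical sketch: about the best a tenant can do with intermingled provider logs is filter on whatever instance tag the provider chooses to emit, and trust that the tag is accurate. The log format and the "i-..." instance IDs below are entirely invented.

```python
# Instance IDs we believe belong to us (as reported by the provider).
MY_INSTANCES = {"i-0abc", "i-0def"}

def my_entries(log_lines, instances=MY_INSTANCES):
    """Keep only lines whose second field is one of our instance IDs."""
    return [line for line in log_lines
            if len(line.split()) > 1 and line.split()[1] in instances]

logs = [
    "2010-06-09T10:01:02 i-0abc login ok user=admin",
    "2010-06-09T10:01:03 i-0bad login FAILED user=root",  # neighbour's entry
    "2010-06-09T10:01:04 i-0def disk write err",
]
print(my_entries(logs))
```

Note what this depends on: the provider tagging every entry, tagging it correctly, and being willing to hand you the log at all. None of that is under the tenant's control, which is precisely the problem.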

Relying on logs to provide security and control never seemed right to me. Logs should really only be for operations and maintenance. I am sure that with enough duct tape we would still be able to rely on logs for security, but do we really want to repeat those mistakes in the Cloud?

IDS? What IDS?

Traditional NIDS cannot be applied in the cloud: hey, you share the same connection! Network virtualisation is something that is either not considered by your Cloud provider or is outright uneconomical. How can you apply IDS to shared traffic? You can't! The way forward is host-based, real-time, distributed intrusion detection systems harnessing the power of the cloud. Why not?

Michael Clark from Verizon stressed this point: when it comes to the Cloud, the need for HIDS and file integrity monitoring is far greater than the need for anti-virus.
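As a rough sketch of what file integrity monitoring boils down to (this is a toy for illustration, not any vendor's product, and the watched paths are examples only): hash a set of files at deploy time, then re-hash on a schedule and report anything that changed.

```python
# Minimal file integrity monitoring: SHA-256 baseline vs. current state.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed(baseline, current):
    """Paths whose digest differs from, or is missing in, the baseline."""
    return [p for p, digest in current.items() if baseline.get(p) != digest]

# Usage: take a baseline when the instance is commissioned,
# then re-check periodically from inside the guest.
# watched = ["/etc/passwd", "/etc/ssh/sshd_config"]
# baseline = snapshot(watched)
# ... later ...
# print(changed(baseline, snapshot(watched)))
```

The appeal for the cloud is that this runs inside the guest, so it needs nothing from the shared network and nothing from the neighbours.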


Real Time
I really enjoyed Verizon's forensic perspective on the cloud infrastructure. The inadequacies of existing forensic techniques were demonstrated: how do you image a cloud, and is it economically viable? No, it's not; all the economic benefits of moving to the cloud would be negated if it had to be done in a forensically sound manner. Moreover, the cloud is dynamic: one moment a server is there, the next it's gone! The only way to be forensically ready is to track the cloud in real time, in all its dynamic glory.

The Super Super Users

One of the unsolvable corner cases of security has always been the super user: how do you control the admin accounts? There is no simple way with existing tools, really! Internal fraud has always been the most expensive kind for companies and one of the hardest to control.
With the cloud this problem will increase many-fold, as the super user will now be external. I call these users THE SUPER SUPER USERS, employed by the Cloud Service Providers. They are like superheroes (or villains) who can do anything (to the servers running your VM). Not only that, they can blame your neighbour for it and get away with it.

Security By Design

A major paradigm shift is required to deal with Cloud insecurity. This is our opportunity to have security by design and not by necessity. Stop relying on logs, and instead design real-time events with security in mind, agnostic of the platform. Let's not design patchy, duct-taped-together security this time round! Can we? Please!




Friday, 30 April 2010

Too Big to Fail

In recent years, we have become accustomed to phrases such as "too big to fail". This is a notion that has been applied to once-august financial services organisations, and is a philosophy that drives, to a certain degree, the bailout that Greece has requested.

During our team field trip to this year's InfoSec, I started thinking about this idea in relation to the cloud, and whether it is a phrase we can look forward to hearing with respect to our industry. It seems like a pretty fanciful notion looking at the world from 2010, but I don't think it is quite as ludicrous as some of the cloud security panel members thought.

We're currently on the verge of the cloud revolution, with a great number of players trying to assert themselves as the platform/service/layer of choice for everyone to run their apps or store their data. The big guys in the ring are geared up for a massive bun-fight over position. It's a fairly logical conclusion that, as with previous IT technology markets, there will be dominant layer providers. It is therefore not unreasonable to conceive of a single company, which provisions for a massive amount of data, failing. This failure could be financial or otherwise. Now, imagine this provider holds the data and apps of numerous local government agencies, charities and businesses, the loss of which would have a lasting impact on national, or international, economies.

I agree that there are a lot of what-ifs for this to happen. However, I get the sense, from tapping on the wall of knowledge of the experts on the panel, that we're on a real frontier here, once again making it up as we go along.