Tuesday, 10 August 2010

Thunder Struck

Security researchers have managed to DDoS systems with a $6 investment and their Thunder Clap program. I am surprised it has taken this long for somebody to demonstrate this in an official capacity.

I wonder how long it will be before stolen credit card details are used to create EC2 accounts and do something far more devastating.

Sunday, 11 July 2010

Continuous Monitoring and an 85% drop in Risk

This article reminds me of the presentation I saw by Alan Paller at InfoSec last year. Paller is the director of research at SANS. He presented to Congress a testimonial of his work within the State Department, highlighting the development and use of a continuous monitoring methodology that has led to an 85% drop in measured risk. In part this was achieved by replacing the existing paper-based reporting system with an IT-driven continuous monitoring approach. What made the approach effective was the use of metrics that were Comparative, Numeric, Reliable and Authoritative. At InfoSec, Paller went into a little more detail about each of these.

Comparative: The metric must show the relative quality of the effort undertaken by different teams. This creates a healthy climate of competition and motivation.

Numeric: Existing FISMA standards dictate that reporting systems produce a risk report only every few months or each quarter. This is a considerable delay in terms of response time and the ability to gain instant situational awareness. One important factor in ensuring success was automating the measurement of these controls; without automation, the overhead would have been a barrier to effectiveness. The monitoring period was reduced to 72 hours, allowing a better response time as well as making the gains visible.

Reliable: The metric must be based on repeatable tests, so that two or more evaluators would get the same results.

Authoritative: Getting consensus from an acknowledged group of experts allows you to get buy-in from the very individuals who will be assessed by the measurements.
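
To make these four properties concrete, here is a minimal sketch of how such a score might be computed and compared across teams. To be clear, this is my own illustration, not the State Department's system: the control names and weights below are invented.

    # Hypothetical sketch: a comparative, numeric risk score per team, fed by
    # automated control checks. Control names and weights are illustrative only.

    CONTROL_WEIGHTS = {
        "unpatched_hosts": 3.0,   # weights set by expert consensus (Authoritative)
        "weak_passwords": 2.0,
        "open_ports": 1.0,
    }

    def team_risk_score(findings):
        """Score from automated scans (Reliable: same inputs give the same score)."""
        return sum(CONTROL_WEIGHTS[control] * count
                   for control, count in findings.items())

    def rank_teams(all_findings):
        """Comparative: rank teams so relative progress is visible to everyone."""
        scores = {team: team_risk_score(f) for team, f in all_findings.items()}
        return sorted(scores.items(), key=lambda item: item[1])

    # Numeric and timely: re-run every 72 hours rather than quarterly.
    findings = {
        "team_a": {"unpatched_hosts": 4, "weak_passwords": 1, "open_ports": 7},
        "team_b": {"unpatched_hosts": 1, "weak_passwords": 0, "open_ports": 2},
    }
    for team, score in rank_teams(findings):
        print(team, score)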

It is an interesting approach, as it advocates a far more scientific methodology for monitoring. However, Paller also highlighted that the human element needs to be considered as equally important. Therefore, be fair when measuring: if a team cannot change or affect something, it is not fair to measure them on it. Finally, celebrate success and make progress visible.

MESP and EaaS

I've just watched a fascinating documentary about the Rolls Royce jet engine company. I had always thought of them as a Plain Old Engine Company. To my surprise and delight, it turns out that they're an EaaS and an MESP.

As part of their offering, they provide a service package with each engine. Airlines are guaranteed a working engine, no matter what, and only pay for the miles they clock up. This is, rather wonderfully, Engine as a Service (EaaS).

In addition, each serviced engine has a near real-time monitoring system (approx. 90-second delay), which allows engineering staff at an operations centre to follow and respond to any issues flagged by the monitoring system. Any reported problems are then dealt with by ground crews at the airport where the plane will land. The data sent into the operations centre is analysed, and deviations from the norm are flagged and followed up. This, of course, sounds exactly like the Managed Security Service Provider (MSSP) model that has expanded a great deal in recent years. However, this instance should probably be called a Managed Engine Service Provider (MESP).
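
As an aside, the "flag deviations from the norm" step is conceptually simple, whatever the engineering sophistication behind it. Here is a toy sketch of the idea; it is entirely hypothetical, and the window size, threshold and sensor values are invented rather than anything Rolls Royce actually uses.

    # Toy deviation-from-norm flagging for a telemetry feed. The recent history
    # defines "normal"; anything several standard deviations away gets flagged.

    from collections import deque
    from statistics import mean, stdev

    class TelemetryMonitor:
        def __init__(self, window=100, threshold=3.0):
            self.readings = deque(maxlen=window)  # rolling window of recent values
            self.threshold = threshold            # flag beyond 3 standard deviations

        def observe(self, value):
            alert = False
            if len(self.readings) >= 10:          # need some history first
                mu, sigma = mean(self.readings), stdev(self.readings)
                if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                    alert = True                  # deviation: follow up on the ground
            self.readings.append(value)
            return alert

    monitor = TelemetryMonitor()
    for temp in (610, 612, 608, 611, 609, 613, 610, 612, 611, 609, 640):
        if monitor.observe(temp):
            print("flag for ground crew: turbine temp", temp)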

What fascinates me about this model is the emphasis on an engineering approach, from the testing, build and deployment of each engine component right through to the assembly, commissioning, and even in-air monitoring of the system. I wonder how this affects the false positive rates that the operations centre employees must deal with, and whether the problem is as large as it is for an MSSP engineer monitoring the voluminous output of a computer system.

There's an old(ish) debate within the software engineering community that says there's not enough emphasis on the engineering side. A lack of engineering rigour has led to code defects, and therefore to the abundance of security issues that need to be dealt with. Combine that with the fact that we look for security events in stupendous volumes of data that are highly unlikely to yield useful information, and are often not fit for purpose, and you've got a serious problem when trying to hunt down serious problems. Not all security problems stem from defects caused by a lack of standards, but it would be interesting to see what a modern MSSP would look like if we had an operating-system equivalent of Rolls Royce.

Thursday, 8 July 2010

Top 10 reasons your security sucks

There's a great post over at infosec-island commenting on the cultural, procedural and technical problems that still appear to be present in infosec environments.

All of the reasons are pretty much spot-on, but the following stuck out from our technology perspective:

6. The tools you use are ineffective (they don’t really work) and inefficient (they cost way too much)

5. Your security vendor is lying to you and why shouldn’t they, you believe them

2. Your dealing with the exact same problems you dealt with a decade ago, only it seems so much worse today then back then

It seems to me that reasons 2, 5 and 6 are interlinked. After speaking with someone we know at a local data centre, as well as with folks at a local MSSP, it's clear that the lack of innovation in this domain is stark, to say the least. As Niladri highlights, every trade show he attended in his previous life in other industries had some form of innovation on display. Information security? Nope. Nada. Nothing. The path most of these vendors are on is BIGGER, FASTER, MORE of the same old stuff that's becoming increasingly obsolete.

Tuesday, 22 June 2010

Centre of Excellence Launch

We have been invited to speak at the Symposium on Security and Cybercrime, which will also see the launch of the Centre of Excellence for Security and Cybercrime. The purpose of the Centre is to bring together business, law enforcement and academia in order to educate, inform and disseminate best practice through knowledge transfer and placement activities. The Centre is an exciting prospect for Scotland, and we're very much looking forward to being part of it. We'll be talking about some of the threats facing the virtualisation technology being used to power the cloud.

The event is being held at Edinburgh Napier University's Craiglockhart campus, and is free to register at:



Wednesday, 9 June 2010

Cloud Insecurity

After a disappointing perspective on cloud security from InfoSec this year, where statements like "Cloud is Outsourcing Mach 2" were being made, I saw a ray of light at the e-crime Cloud Security Forum. It was the first conference this year that seemed not to be swayed by the economic benefits of moving to the cloud (fast!).

My Neighbour is a Hacker
The dire consequences of not covering angles like multi-tenancy and cloud cleansing were highlighted. If I knew that my next-door neighbour was a serial killer, I probably wouldn't sleep at night. The same goes for the public cloud: I share my resources with my neighbour, and how many attack vectors does that open! How can I be sure that the VM instance I created and then discarded was actually cleaned properly, with nothing left behind in slack memory? Verizon's answer to this is dedicated blades on demand, but how scalable or easy to use will that be? We will have to wait and find out.
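
For what it's worth, cleansing at the storage layer is conceptually straightforward: the provider overwrites a released volume before reallocating it. A toy sketch of the idea follows, with a plain file standing in for a virtual disk; real hypervisors would also have to scrub RAM pages and copy-on-write snapshot layers, which is where the hard part lies.

    # Toy "cloud cleansing" sketch: scrub a released virtual disk before the
    # next tenant gets it. A plain file stands in for the volume.

    import os

    def scrub_volume(path, block_size=1 << 20):
        """Overwrite every byte with zeros so the next tenant reads nothing."""
        size = os.path.getsize(path)
        with open(path, "r+b") as vol:
            written = 0
            while written < size:
                chunk = min(block_size, size - written)
                vol.write(b"\x00" * chunk)
                written += chunk
            vol.flush()
            os.fsync(vol.fileno())  # make sure the zeros actually hit the disk

    # Example: a discarded "volume" still holding a previous tenant's data.
    with open("released.img", "wb") as f:
        f.write(b"tenant-secret" * 100000)
    scrub_volume("released.img")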

Logs? What Logs?

If you share resources with your neighbours, surely you share logs too! How do we audit logs that have intermingled entries from your neighbour? Surely this affects data protection and privacy laws. This simple fact also affects every single SIM and SIEM tool out there. These tools already promise too much and deliver too little; this added handicap will make them totally inapplicable to the cloud, as they primarily rely on logs.

Relying on logs to provide security and control never seemed right to me. Logs should really be for operations and maintenance only. I am sure that with enough duct tape we could still rely on logs for security, but do we really want to repeat those mistakes in the cloud?
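
To make the intermingling problem concrete, here is a toy sketch of splitting a shared log by tenant. The log format and tenant tags are invented; the point is that the split only works if the provider tags every entry in the first place, which is rarely the case.

    # Toy illustration of the shared-log problem. Field names are invented.

    shared_log = [
        {"tenant": "acme", "event": "login_failed", "src": "10.0.0.7"},
        {"tenant": "rival_corp", "event": "port_scan", "src": "10.0.0.9"},
        {"tenant": None, "event": "hypervisor_restart"},  # whose entry is this?
    ]

    def entries_for(tenant, log):
        """A SIEM can only audit a tenant's entries if every entry is tagged."""
        return [e for e in log if e.get("tenant") == tenant]

    print(entries_for("acme", shared_log))
    # Untagged entries fall through the cracks, and handing acme the full log
    # would expose rival_corp's data: the privacy problem described above.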

IDS? What IDS?

A traditional NIDS cannot be applied in the cloud; hey, you share the same connection! Network virtualisation is either not considered by your cloud provider or is outright uneconomical. How can you apply an IDS to shared traffic? You can't! The way forward is host-based, real-time and distributed intrusion detection systems harnessing the power of the cloud. Why not?

Michael Clark from Verizon stressed this point: when it comes to the cloud, the need for HIDS and file integrity monitoring is far greater than the need for anti-virus.
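
File integrity monitoring itself is conceptually simple: hash everything you care about, then re-hash on a schedule and compare. A bare-bones sketch follows; the watched paths are illustrative, and real HIDS tools such as Tripwire or OSSEC also watch file metadata, protect the baseline, and run as a hardened service.

    # Bare-bones file integrity monitoring sketch. Paths are illustrative.

    import hashlib
    import os

    def snapshot(paths):
        """Record a SHA-256 digest for each file: the trusted baseline."""
        digests = {}
        for path in paths:
            with open(path, "rb") as f:
                digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    def verify(baseline):
        """Re-hash and report anything changed or missing since the baseline."""
        for path, digest in baseline.items():
            if not os.path.exists(path):
                print("MISSING:", path)
            elif snapshot([path])[path] != digest:
                print("MODIFIED:", path)

    baseline = snapshot(["/etc/passwd", "/etc/hosts"])
    # ... later, on a schedule ...
    verify(baseline)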


Real Time
I really enjoyed Verizon's forensic perspective on cloud infrastructure. The inadequacies of existing forensic techniques were demonstrated: how do you image a cloud, and is it economically viable? No, it is not; all the economic benefits of moving to the cloud would be negated if imaging had to be done in a forensically sound manner. Moreover, the cloud is dynamic: one moment a server is there, the next it's gone! The only way to be forensically ready is to track the cloud in real time, in all its dynamic glory.

The Super Super Users

One of the unsolvable corner cases of security has always been the super user: how do you control the admin accounts? There is no simple way with existing tools, really! Internal fraud has always been the most expensive kind for companies, and one of the hardest to control.
With the cloud this problem will increase manyfold, as the super user will now be external. I call these users THE SUPER SUPER USERS, employed by the cloud service providers. They are like superheroes (or villains) who can do anything (to the servers running your VM). Not only that, but they can blame your neighbour for it and get away with it.

Security By Design

A major paradigm shift is required to deal with cloud insecurity. This is our opportunity to have security by design, not by necessity. Stop relying on logs, and design real-time events that are built with security in mind, agnostic of the platform. Let's not build patchy, duct-taped-together security this time round! Can we? Please!

Friday, 30 April 2010

Too Big to Fail

In recent years, we have become accustomed to phrases such as "too big to fail". It is a notion that has been applied to once-august financial services organisations, and a philosophy that drives, to a certain degree, the bailout that Greece has requested.

During our team field trip to this year's InfoSec, I started thinking about this idea in relation to the cloud, and whether it is a phrase we can look forward to hearing with respect to our industry. It seems a fairly fanciful notion looking at the world from 2010, but I don't think it is quite as ludicrous as some of the cloud security panel members thought.

We're currently on the verge of the cloud revolution, with a great many players trying to assert themselves as the platform/service/layer of choice for everyone to run their apps or store their data. The big guys in the ring are geared up for a massive bun-fight over position. It's a fairly logical conclusion that, as with previous IT technology markets, there will be dominant providers at each layer. Therefore, it is not unreasonable to conceive of a single company that provisions for a massive amount of data failing. This failure could be financial or otherwise. Now, imagine this provider holds the data and apps for numerous local government agencies, charities and businesses, the loss of which would have a lasting impact on national, or international, economies.

I agree that there are a lot of what-ifs in this scenario. However, I got the sense, from tapping the wall of knowledge of the experts on the panel, that we're on a real frontier here, and once again making it up as we go along.

Wednesday, 14 April 2010

Don't Listen to IT Security Professionals

There, that title got your attention, in the same way the title of this interesting article got mine. It turns out that certain IT security professionals are not using anti-malware/anti-virus protection on their own machines. I think the pros in the article are talking about desktop machines, not a server environment. One quote in particular is interesting and amusing:

"I've never used AV software and I've never once been infected with a virus."

My first reaction to that is: if you don't have detection capability, how do you know? OK, flippancy aside, there are some important things about this topic, some mentioned in the article and some not. It's no secret that the current technologies for detecting malware and viruses have become increasingly bad at their job. We're no longer in the days when vendors can claim 99.999% detection rates, simply because the signature-based approach has been outmanoeuvred by the polymorphic nature of the malware out there. Among the problems with signatures: they take human intervention to create, and they accumulate into massive databases that hurt endpoint performance - it's well documented. So, in terms of the latest and greatest threats, sure, don't rely on signatures alone, and stick to best practices.
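
To see why polymorphism wins, consider what a signature scanner actually does. A toy sketch, with invented byte patterns rather than real malware signatures:

    # Toy signature scanner. The signature bytes are invented for illustration.

    SIGNATURES = {
        b"\xde\xad\xbe\xef": "Example.Trojan.A",  # hypothetical pattern
    }

    def scan(data):
        return [name for sig, name in SIGNATURES.items() if sig in data]

    original = b"...\xde\xad\xbe\xef..."
    print(scan(original))                 # ['Example.Trojan.A']

    # A polymorphic variant re-encodes its body with a fresh key each time:
    key = 0x42
    mutated = bytes(b ^ key for b in original)
    print(scan(mutated))                  # [] -- same behaviour, no match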

When it comes to the 'typical user' (an entirely different topic, and an interesting concept that should be looked into further - more on this later), the security pros in the article are right: keep your AV. However, I would say the same is true for them as for the regular user. How do you know you're not infected unless you've been checked? The need for this software is there, as a baseline, and it should be used as part of a holistic strategy.

In terms of the weaknesses in the technology, these can be addressed in a number of ways. There's a shift going on in the manner in which we provision for security, with best practice, architecture, and so on. So, that's the holistic part. In terms of detection, there are some new and interesting technologies around that look at behaviours, cutting out the need for the old signature approach.

So, don't listen to IT security pros, as they very often don't practice what they preach.

Tuesday, 30 March 2010

TJX and internal monitoring

Albert Gonzalez gets 20 years for the identity thefts he perpetrated with accomplices. They have been convicted of what is being branded the biggest cybercrime identity theft targeting credit card data thus far. The attacks occurred over a 17-month period from 2005 to 2006, and saw the team break into the networks of a number of US retailers.

There is a lot to learn from this case: the attack was a classic security process failure, with the attackers able to war-drive and break in through weaknesses in the wireless infrastructure, exploit a lack of internal monitoring and controls, and so on. These facts and issues will be picked over for some time.

Technical interest aside, it's the motivation of the tech guys who helped write some of the software that interests me. Three of them have recently been sentenced: Stephen Watt got 2 years, Christopher Scott got 7 years, and Humza Zaman got 4 years. It turns out that Watt and Zaman both had highly paid jobs and promising futures ahead of them. Zaman, in particular, is of interest, as he worked for Barclays bank as a network security manager and sent Gonzalez ATM system logs. Watt was a programmer at Morgan Stanley, yet it doesn't appear that he revealed or stole anything from that organisation. These facts are very interesting, as Zaman represents the classic insider threat. In this instance he got caught, but only because this was such a high-profile case.

There's some salacious stuff in the press at the moment about the motivations of these individuals and the reasons they got involved, and it is hard to tell whether it is the absolute truth or has been overblown to a certain degree. However, the factors that motivate well-paid, intelligent and successful people to commit such crimes are of real interest to internal risk teams. Drugs and sex form a large part of the allegations, yet from preliminary reading it seems these were merely the spoils of their success, and the motivation was linked to comradeship, a sense of belonging, and identity.

I went to a financial crime conference in December 2009, where the head of risk of a large bank gave an overview of the internal monitoring and controls the bank implements to detect the likelihood that an employee might steal. Risk indicators include changes in personality, addiction, changes in personal relationships, and so on. It would be interesting to know whether strong social links such as these (the members were part of a local chapter of 2600) are included in that analysis. Large banks such as Barclays and Morgan Stanley do conduct this type of analysis on their employees, and better internal monitoring of both technical resources (by TJX and the other businesses attacked) and human resources (by the banks) probably could have averted these crimes.

Thursday, 25 March 2010

How do you trust a thief?

We seem to be entering the bizarre realm of cross-over realities. There are legitimate stores selling rootkits for the general public to install and spy with at their own discretion, as well as an equally interesting crossover in the malware market, now dubbed crimeware. The use of the SaaS model by criminal gangs selling their warez is nothing new, but a report by the CTU at SecureWorks outlines the new ways in which these groups are trying to protect their kit. From the report:

"The author has gone to great lengths to protect this version using a Hardware-based Licensing System. The author of Zeus has created a hardware-based licensing system for the Zeus Builder kit that you can only run on one computer. Once you run it, you get a code from the specific computer, and then the author gives you a key just for that computer."

So if you change a bit of your hardware, you're stuffed. Unlike with Microsoft's hardware-change model for XP, there's no chance of calling a criminal-underground rep and getting a reactivation code. However, it wouldn't surprise me if some form of call centre or automated system were set up in the near future to support these things.
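
The scheme the report describes is the same node-locking trick legitimate software vendors have used for years. A rough sketch of the general technique follows; the fingerprint inputs and keying are invented for illustration and are not taken from the Zeus kit itself.

    # Generic node-locked licensing sketch: the general technique, not the
    # Zeus implementation. Fingerprint inputs and the secret are invented.

    import hashlib
    import hmac
    import platform
    import uuid

    AUTHOR_SECRET = b"only-the-kit-author-knows-this"  # held by the seller

    def machine_code():
        """Buyer runs this: derive a code from hardware/OS identifiers."""
        raw = f"{uuid.getnode()}|{platform.machine()}|{platform.node()}"
        return hashlib.sha256(raw.encode()).hexdigest()[:16]

    def issue_key(code):
        """Seller runs this: sign the buyer's machine code."""
        return hmac.new(AUTHOR_SECRET, code.encode(), hashlib.sha256).hexdigest()[:16]

    def licensed(code, key):
        """The kit refuses to run unless the key matches this machine."""
        return hmac.compare_digest(issue_key(code), key)

    code = machine_code()
    key = issue_key(code)       # obtained from the author, one per machine
    print(licensed(code, key))  # True here; False after a hardware change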

This raises some interesting questions about the manner in which these businesses operate. OK, so the author of the software wants to protect their warez and make a profit from them. The elegance of the software is apparent in the design of modules that can be bought, for extra fees, to augment the base install. However, as highlighted in a recent presentation by Thorsten Holz of the Technical University of Vienna, a lot of the underground channels are filled with people trying to rip each other off. Where does the trust lie among thieves? Will this inability to trust ultimately stop the industry from growing? Will we see additional support services growing up around these illegal services to provide the arbitration needed to instil trust?

There's already been a shift in some of the Russian forums after the arrest of one of the American-based perpetrators of the TJX hack. Admission now requires three factors, two of which are based on reputation and personal vouching; you must also be able to speak Russian. This is interesting, as it could limit expansion through the lack of trust between members offering the necessary technical services. The network effect of the internet will be curtailed by this lack of trust.

It could be the case that we see these gangs adopting, and innovating on, trust and identity services of the kind seen as an answer to the problems facing legitimate services offered online.

Tuesday, 9 February 2010

Inquisitive Systems Team Blog

This is the Inquisitive Systems Team Blog. The team consists of Jamie Graves, Niladri Bose and Andrew Kwecka.

We are IT professionals working in the security arena, with a passion for IT/information security, data protection, system security, and the latest thinking in security products and research.