By Adam Cecchetti
Last year I spoke at the AMW conference on "When are we going to fix this mess?". I decided to extend and revisit some of my thoughts from the original talk in this three-part post. My slides are here. Some of these points will be expanded into separate posts as they warrant additional discussion.
There has been a lot of deja vu in the media lately with the idea that "Thing X gets hacked!" This should not be a surprise anymore, as many things are only now being built with security in mind. All of the things (SCADA, Planes, Robots, Pacemakers, SoCs, Radios, Cars, Clouds, Networks, Drivers, Mobile Apps, Singing Fish) are for the most part designed and run on general purpose CPUs. If an attacker can find a certain type of bug, then with enough effort and time they can get control of nearly anything. The problem scope is big and expanding, and a large part of the press and research is focused on discovering that more things can be hacked.
Premise 0: Everything that was not designed with security in mind can be hacked.
It is all a matter of finding out all the ways, and how much time an attacker has to find and exploit an issue. That is part of the good news: security issues have to be found, they can't be created by an attacker. They are a finite resource. For systems that have been through the Secure Development Lifecycle and a security review, in theory, that finite resource is greatly reduced or the remaining bugs aren't useful to an attacker.
Now the bad news: new ones are created daily as systems are built, as the layers of abstraction pile up, and as vulnerable code gets reused. To make matters worse, complex side effects emerge as the result of complicated user stories and usage patterns. Computers are great at copying things, and security defects are no exception. Security issues that haven't been fixed can be inherited from one system to the next, like a bad gene or a re-gifted fruitcake.
Premise 1: Computers are awesome!
They don't LET you do anything. They DO anything you ask them to, even with DRM blurring the lines a bit.
General computation is a fundamental premise that enables us to connect every corner of this world, build virtual worlds inside it, and reach out to the stars. We can build anything with them! However, the out-of-the-box experience for a computer with no software isn't all that fun for a general consumer. So the industry helped a lot of people along the way and learned a lot of lessons. As with all green fields, nothing we now take for granted was there by default. Over the years we learned many of those lessons around networking, distributed computing, reliability, availability, and now security.
Even so, we've built some amazing things with them. I have a device in my pocket that weighs 173 grams, can take HD video, edit it, render effects, and post it to be shared with anyone on nearly any part of the planet, instantly. Not a bad start for an industry that is ~60 years old.
Premise 2: We're at an odd juncture, again.
User habits are changing again. As the PC once consumed all markets, now the mobile device and tablet have begun to re-consume the same markets. When user habits change, where critical user data is produced and stored subsequently shifts. This doesn't mean that the old systems vanish overnight; it means that the user data replicates about every 5 years. If you told someone in the 1980s that you deposited money into a bank account by taking pictures of a check with your phone, they'd assume you were crazy. But for some banks, when you do that today, the account total changes in the same mainframe that a human teller updated in the 1980s. In a few revolutions of user patterns this will seem both silly and archaic. However, there's a chance it will still be supported 30 years from now for those aging Millennials who just won't let go of their mobile phones.
In the meantime, how critical user data is created and stored seems to ping pong every 10 years or so, with a bit of overlap.
- 1970: Mainframe : Centralized
- 1980: Personal Computer : Distributed
- 1990: Web/Email : Centralized
- 2000: Social Networks : Distributed
- 2010: Cloud : Centralized
- 2020: Internet of Things : Distributed
- 2030: Internet of Me : Centralized
The result is the echoes and ghosts of dead, dying, and zombie usage models that leave data and code everywhere in the form of legacy detritus. As more and more time passes, critical user data ends up in indefensible places.
Premise 3: The Internet finally showed up.
The amount of air gap between our lives and the Internet is shrinking daily. In time, being able to function in modern society will require that it be entirely gone. Good or bad, a significant amount of effort will be required if an individual wants to separate their life from the Internet in the future. Where will that take us? Well, some strange places.
As a fun thought exercise: in 5 years most cars will self-drive people home from work, and those people will need something to do while stuck in traffic. Some of them will start to live stream daily while they sit in traffic. Other people sitting in traffic in other parts of the world will need something to watch while their cars drive them home. Which means we're building a world where people sitting in traffic can watch other people sit in traffic around the globe.
Premise 4: Everybody Bugs
Everyone makes mistakes, from first-time builders to the most seasoned engineers. Mistakes are proof that someone did something, and not all mistakes are the end of the world. However, not all bugs are created equal in the impact they can have.
Security issues are bugs with interesting side effects. Hacking is finding and using side effects that were not intended in the original system build or design. Exploitation, by extension, is programming with side effects.
All of these things add up to an unknown number of ways to take control of an environment. As the digital landscape expands the impact of cleverly programmed side effects does as well.
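To make "bugs with interesting side effects" concrete, here is a minimal sketch in Python. The `build_path` helper and its inputs are hypothetical, but the behavior of `os.path.join` is real: when the second argument is an absolute path, the base directory is silently discarded. That is a side effect the original design never intended, and exactly the kind an attacker programs with.

```python
import os.path

def build_path(base, filename):
    """Intended behavior: return a path safely inside `base`."""
    # Unintended side effect: os.path.join drops `base` entirely
    # when `filename` is an absolute path.
    return os.path.join(base, filename)

print(build_path("/srv/uploads", "report.txt"))   # /srv/uploads/report.txt
print(build_path("/srv/uploads", "/etc/passwd"))  # /etc/passwd
```

The functional tests for this helper all pass with ordinary filenames; the security bug only appears when someone supplies the input the designer never imagined.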
Occasionally, an idea makes its way into the security community that we'll build things perfectly every time. The absurdity of this concept is not lost on anyone who has actually shipped anything. Systems both simple and complex require a large amount of functional and state testing to get correct; the scope of that problem often expands exponentially, making it largely a matter of sorting complexity and of using testing time tactically. The best security testers understand where to look and how to think in order to reduce that scope and the time spent searching the unknown space.
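A rough feel for why the testing scope expands exponentially, using hypothetical numbers: even a small system whose inputs combine independently outgrows exhaustive testing almost immediately.

```python
# Hypothetical numbers: a form with 6 independent fields, each with
# 10 representative values, already has a million input combinations.
fields = 6
values_per_field = 10
combinations = values_per_field ** fields
print(combinations)  # 1000000

# Add two more fields and the exhaustive test space grows 100x.
print(values_per_field ** (fields + 2))  # 100000000
```

This is why exhaustive testing is off the table for real systems, and why skill at pruning the search space is what separates the best testers from the rest.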
In part 2 we'll continue to discuss the scope of the problem and some of the times the security industry has won in the past.