Another year, another Pwn2Own contest.
TL;DR results for 2016:
- Prize money: about half a million USD.
- All major browsers successfully exploited: Chrome, Safari, Edge.
- All attacks bypassed all exploitation countermeasures (e.g., sandboxing, address randomization) to escalate all the way to root/SYSTEM level privileges.
- Nobody broke out of the VM.
Kernel security sucks and will always suck.
Security mechanisms enforced by the kernel have more holes than Swiss cheese. From the point of view of an advanced attacker, code running in a "sandbox" as an unprivileged user is going to escalate to root/SYSTEM level privileges 100% of the time.
There are too many lines of code, the attack surface is too large and human programmers are too imperfect.
The kernel is an unreliable primitive from a security standpoint. You just can't trust it.
End-point security is still terrible, and the same will be true 10 years from now (hello Pwn2Own 2026!) unless there is a radical change in software architecture that acknowledges the cold hard realities of computer security over the wishy-washy desires of senior executives.
Contests like Pwn2Own are just showing us the tip of our collective vulnerability iceberg: there are hundreds if not thousands of zero day exploitable holes lurking under the surface of all sufficiently complex software. Especially the software implemented in high-performance yet error-prone low-level languages. This includes all browsers and operating systems. Quinn Norton has it exactly right: Everything is broken.
To get a hint of what lurks beneath the surface, read up on VUPEN Security, now rebranded as Zerodium, a 0day market which pays top dollar (up to 1 million USD) to hoard exploits and lists the NSA amongst its clients. By comparison, the bug bounties offered by most vendors are chump change.
It gets worse. Even if a genie granted us one wish and patched all existing vulnerabilities, that wouldn't help for long because software is a fast-moving target. Thanks to new development, vulnerabilities are likely opening up at a faster rate than they're being detected and patched.
Any conventional up-to-date computer with a browser can be compromised if you're willing to make the effort to develop zero day exploits and risk sacrificing the exploit if your attack is detected.
Speaking of detection, unless you're attacking Kaspersky or other high-value targets it usually won't happen, and even then the exploits you sacrifice are probably just a tiny part of your arsenal as an advanced attacker. Case in point: the attackers that went after Kaspersky sacrificed multiple zero days in their attempt. They had to know there was a high risk of detection, but they took the risk anyway. Why? Kaspersky thinks it was hubris, but I'll bet it's because they could afford to lose a handful of zero days. There's more where those came from.
Nearly everyone in the world is always just one wrong click away from being totally pwned.
Advanced attackers are unimpressed that your system is fully patched. For high risk applications being fully patched does as much good as running an antivirus. Which isn't saying much.
What you're really achieving when you play the security patch treadmill game is that you're undemocratizing illicit access to your systems. Keeping the script kiddies at bay while maybe forcing more advanced attackers to factor in the risk of sacrificing a zero day from their arsenal. That's it.
The probable ubiquity of hardware backdoors in Intel & AMD chipsets is in practice somewhat irrelevant, since software is by far the weakest link in the chain and will remain so for the foreseeable future.
Vulnerabilities in low-level (C/C++) code are still extremely relevant, and the cost of attack is pretty low for client-side and privilege escalation attacks: a few weeks of a single skilled researcher's time.
By now all the big companies have strong security awareness, and yet none of them are managing to prevent modestly motivated attackers from achieving full remote code execution with system privileges.
This state of affairs will not change until the fundamental security architecture of our systems changes. I expect to see more hardware enforced containment baked into the operating systems of the future.
Examples of this trend in the wild:
- Qubes OS
- Microsoft Windows 10 Enterprise using the hypervisor to secure the LSA. This is mostly security theatre at present, but if the trend continues it could be useful.
The stats for publicly released exploits don't tell the whole story
If you look at the stats for the exploits being publicly released, you'll notice low-level vulnerabilities have gone way down. I used to take that as a sign that there were fewer of these issues to exploit, and that's probably true to a degree, but there's an important cultural and economic aspect to this as well.
I suspect part of the reason we're not seeing more Pwn2Own level exploits being released in public is that a lucrative private market has risen to disincentivize free disclosure while, simultaneously, the cost of fully weaponizing vulnerabilities has risen due to exploitation countermeasures. The people willing and able to pay that toll have better uses for their skills than giving them away. Like selling exploits privately for up to a million dollars.
Containment is the only realistic defensive strategy, and hypervisors are the only semi-reliable primitive from which you can architect reasonably secured systems.
Sure, there are likely undiscovered zero day "escape from VM" vulnerabilities in all of them, but hypervisors are much smaller and simpler than operating system kernels, so they have much smaller attack surfaces.
They're also not moving as fast as other targets so stamping out all the exploitable bugs should be an achievable goal eventually.
VMs are also easy to set up as honeypots, since the host has complete, transparent access to all the guests' resources, but not vice versa. Attackers will think long and hard before risking the sacrifice of a zero day in a hypervisor.
Decentralization is a good thing because big organizations of all stripes cannot be trusted to resist attack, uphold their own policies or keep our secrets.
Their attack surface is too large and too complicated. Too many assumptions have to hold for their security not to crumble like a house of cards in the face of an advanced attack.
Since all of the big companies are eating their own vulnerable dog food (and each other's) and they're such irresistibly juicy targets, we should assume they are all deeply compromised by a plethora of intelligence agencies, organized crime and clever individuals.
The degree to which it is reasonable to let someone else safeguard your secrets is not just how much you trust them not to abuse that power themselves, but also how much you trust them not to be abused.
That should be front and center in the discussion regarding mass surveillance, government mandated backdoors and how much of our private information we feel safe handing over to companies like Google and Facebook. Well intentioned checks and balances at a legal level won't protect against hackers that have pwned your sysadmin's laptop.
I believe the problem with trusting big organizations is inherently unfixable.
Like most of us, big organizations will always prioritize getting things done over a serious attempt at closing off all avenues of attack, which is the way it should be. It's also what public opinion and public markets demand. Companies that over-prioritize security will go out of business. Governments that over-prioritize security may end up looking like North Korea.
But in a world where we don't collectively trust big organizations to maintain our security and keep our secrets, attacks, while still possible, would be much harder to pull off. They'd have to pick us off one by one.
If a true security renaissance ever takes place, the driving force will not be personal computers or mobile computing but self-driving cars and their like.
Self-driving cars will give hackers the power of life and death over anyone that uses them, and eventually over anyone that shares the road with these hackable computers on wheels.
Just think about that for a moment. Plausibly deniable death from afar. An unfortunate accident or the perfect crime?
On the other hand, so many people are killed in road accidents due to human error that society may accept/repress that risk and work to raise the bar so that assassination by hacking is something only the most rich and powerful actually have to worry about. Gulp. I hope.