‘Against an adequately skilled, adequately funded adversary, our defences don’t work’

Cryptologist Bruce Schneier tells RSA conference that focus should be on dealing with fallout of cyberattacks

Bruce Schneier, chief security officer at security company Resilient, says the Sony attack, believed to be carried out by North Korea, exposed many of the major risks related to hacking.

Last year's massive cyberattack on Sony – presumed to have been a nation-state attack orchestrated by North Korea – presents many of the most pressing issues of catastrophic risk, says the well-known cryptologist and author Bruce Schneier, chief security officer at security company Resilient. In a talk at the RSA security conference in San Francisco, Schneier walked through the timeline of the attack and the response to it. During the incident, hackers penetrated Sony's network, stole data, and then embarrassed the company by slowly releasing private emails from executives, salary details, copies of unreleased films, and other sensitive information. The hack, which unfolded over several weeks in November and December 2014, is believed to have been carried out in response to the studio's release of the Seth Rogen comedy The Interview, whose plot revolves around a plan to assassinate North Korean leader Kim Jong-un.

The intrusion began in November with a spear-phishing attack – a targeted email scam used to gain access to confidential information. Schneier says full details of the attack are still not known, but the hackers quickly obtained administrator credentials and penetrated the company’s network. Once they were in and downloading data, a skull and crossbones appeared on the screen of a compromised computer.

“This is where the hackers made a mistake, because if you’re going to penetrate a network and download data – put the picture up after you download the data,” he says. Sony employees saw the image and quickly pulled the plug on the computer, stopping the data transfer.

The cyberattack on Sony was believed to have been carried out by North Korea in response to the release of The Interview, a film which revolves around a plot to assassinate Kim Jong-un. Photograph: Reuters

Interestingly, he notes, the US intelligence community “realised right away this was a big deal”, and started examining evidence from the hack that same day. This, he says, likely indicates the government had additional information pointing to something beyond a typical hack.


Then the data releases began. In late November, the hackers, who called themselves the Guardians of Peace, uploaded four previously unreleased movies on BitTorrent. Four more major leaks occurred at the start of December.

These included “Sony executives griping about stars, and insulting the US president”. The NBC television network in the US reported that the hack pointed to North Korea – the hackers had also referred to the still-unreleased film The Interview – but the FBI replied that it had no reason to believe North Korea was involved.

In mid-December, the Guardians of Peace made threats to attack theatres showing the film.

“The FBI then makes a statement, concluding the North Korean government is responsible,” says Schneier. “Now, attribution is a big deal in the US. We rarely do it. And the president makes a statement.”

But, he says, “really nobody believes them. It’s amazing. The FBI and the US president get up and say, we believe it’s North Korea, and we say, we don’t believe it.” There are many reasons why, he says. For one, “Sony is a company hackers have loved to hate.”

But the FBI reiterated that North Korea was behind the attack, noting the hackers had at times been sloppy, and logged on from North Korean IP addresses. Yet this statement was still met with scepticism, Schneier says. All of this “encapsulates a lot of issues of catastrophic risk that I’m currently thinking about,” he says.

"First, the rhetoric of fear is real, and it's scary." Newt Gingrich called the attacks "pure cyberwarfare" while Senator John McCain said they were a manifestation of a new type of warfare.

“There was lots of hyperbole, and this really skews debates. We’re suddenly talking about war, not hacking.”

Second, this type of attack presents a threat to everybody, he notes. It was both focused and highly skilled. “Against an adequately skilled, adequately funded adversary, our defences don’t work, period.”

Third, Schneier says, “Sony had really lousy security.” The company’s chief security officer “had left a few months before this attack because he just gave up trying to do good”. One document indicated the company was primarily focused on doing the minimum needed to meet legal requirements, but, says Schneier, “you’ve got to do more than that.”

Fourth, “what’s happening is, we’re increasingly seeing warlike tactics used in broader cyberattacks. It gives capabilities to people who didn’t have capability before.” Today’s sophisticated NSA programmes become tomorrow’s PhD theses and the next day’s hacker tools, he notes.

Also, perpetrators all have access to the same tactics and the same malware. So you end up with “a legitimate discussion on whether the Sony attack was done by a couple of guys, or a nation state with a $20 billion budget. Now that is a weird debate to have.”

Finally, he says, “attack attribution is hard. Packets don’t come with return addresses and it’s really easy for an attack to seem to come from somewhere else.” Providing evidence is also tricky. “I mean, the US attributed this attack to North Korea, and we don’t believe them.”

Uncertainty of attribution means it is harder to determine who, exactly, provides defence to the organisation under attack. “The legal framework depends on who’s attacking you and why, but in this case, the two things you don’t know is who’s attacking and why.”

In other words, Schneier says, “whose job is it to defend Sony, before we know whose job it is to defend Sony?” For example, executives “very quickly shifted from ‘we are protecting our company’ to ‘we are protecting our own careers.’ It was chaos; they really didn’t know what to do.”

If it’s impossible to prevent these sorts of attacks, we then have to learn how to deal with them, he says. “Figuring out how to thrive despite them may be the best. More of a transparency mindset might be valuable. Don’t care if your salaries are published.” But to take the issue further, Schneier says he wanted to look at ways of dealing with “truly catastrophic risk” – say, a cyberattack on a nuclear facility – “and with existential risks”.

More technology means more potential for damage, because it makes for greater efficiencies. This in turn encourages authorities to argue for mass surveillance, on the basis that “we cannot even have one person who might make a nuclear bomb,” Schneier notes. “The problem with that is, it doesn’t work.”

Surveillance might work to uncover a conspiracy between a number of people to do something, but not the acts of an individual, because a single person is too unpredictable. Another temptation is to introduce greater control over technology users. Schneier expects to see attempts to restrict what people can do with computers or 3D printers, for example. But again, he says, “It doesn’t work. You can stop some of the people some of the time, but it won’t work.”

So what do we do? Secure against the actual, physical threats as much as possible, he says, and also aim for a more agile response. Respond fast enough, and people can at least mitigate, if not prevent, disaster. The difficulty, he says, is that “we don’t know what ‘good enough’ security looks like. And that raises a fundamental question: will any of this work?

“My question, very broadly asked, is have I just described a fundamental limitation on the evolution of intelligence and technology? That it’s physically impossible for a civilisation to get over that critical problem?”