Just 12 hours after the London Bridge attack, the Prime Minister was clear about where at least some of the blame lay.
“We cannot allow this ideology the safe space it needs to breed,” Theresa May said on the steps of Downing Street.
“Yet that is precisely what the internet – and the big companies that provide internet-based services – provide.”
It was extraordinary, in the aftermath of an attack where the facts remained unclear and the victims had not yet been named, for the Prime Minister to make such a political response.
She continued: “We need to work with allied, democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremism and terrorist planning.
“And we need to do everything we can at home to reduce the risks of extremism online.”
She also posted her statement to Facebook, and the social media giant was the first to respond.
You don’t have to read between the lines to see that Facebook is not happy. The company believes that it already does a lot.
Facebook’s policy director wrote: “Using a combination of technology and human review, we work aggressively to remove terrorist content from our platform as soon as we become aware of it.
“If we become aware of an emergency involving imminent harm to someone’s safety, we notify law enforcement.
“Online extremism can only be tackled with strong partnerships. We have long collaborated with policymakers, civil society, and others in the tech industry, and we are committed to continuing this important work together.”
Sources at other technology companies are also keen to stress how much, in their view, they work with government.
They were surprised to be singled out by the PM so quickly.
“Today isn’t the day for a public row,” one source told me.
In her statement, the PM was driving at two different agendas: “spreading extremism” and “planning attacks”.
On the first, it is reasonable to ask tech companies to do a little more. Over the last few years, they have become much quicker at removing flagged videos and accounts.
And here the interests of the Government and the technology companies are pretty well aligned: social networks do not want harmful content on their sites, as it drives away both users and advertisers.
The problem comes with content that is not obviously terrorist and thus illegal. When does free speech become extremism?
The Government itself has struggled to define extremism legally, and technology companies are not going to be able to do any better.
The second part – “terrorist planning” – is a reference to the problems posed by end-to-end encryption. Encryption keeps users secure but also makes it hard for governments to monitor communications.
The issue is a long-running one; the Westminster and Manchester attacks brought it back to public attention.
What is often missed is that the Government already has the legal powers to order companies to provide communications unencrypted. But it has never enforced them.
That may well change now. And it will lead to a very big showdown indeed.
(c) Sky News 2017: Tension between Theresa May and Facebook over extremism online