Facebook, Google and Twitter all rushed to defend themselves on Sunday against criticism from Theresa May, the British prime minister, in the wake of the London Bridge terror attack.
Why are the tech companies facing pressure again?
A series of terrorist attacks in the UK, France and Germany has raised concerns over the way social media platforms monitor and take down extremist content.
It is not clear at this stage whether the three terrorists who perpetrated Saturday night's attack were users of social media, but a third attack in as many months has sharpened the focus on long-running concerns.
The accusation that social media groups are failing to tackle terrorists was first made in 2014 when the intelligence and security committee of MPs criticised a US internet company — now known to be Facebook — for failing to pass on information that could have helped prevent the murder of British soldier Lee Rigby by Islamist terrorists.
What are the issues?
Each platform faces different challenges — from regulating hate content circulating online to the way in which terrorist groups use end-to-end encrypted messaging services to communicate. There was speculation that the Westminster attacker, Khalid Masood, may have used the encrypted messaging service WhatsApp to send a message before his rampage, which resulted in the deaths of four people.
The tech groups say there is no single answer to the host of problems they face in policing millions of separate accounts and posts. On YouTube, for example, 400 hours of video are uploaded every minute.
“The idea that there can be one standard approach for the whole of the internet is not anywhere near to reality,” said an executive at one of the companies.
One of the biggest criticisms levelled against technology companies is that they only intervene when illegal or inappropriate content is flagged to them by users. Many politicians want to see the companies step up the levels of human intervention.
What are the tech groups doing?
All the big groups point to significant progress in the past three years. For example, Twitter says that between July and December 2016 more than 376,000 accounts were suspended for violations related to the promotion of terrorism, while YouTube says it terminates any account where it has a reasonable belief that the account holder is an agent of a foreign terrorist organisation.
Facebook, meanwhile, insists that it uses a combination of technology and human analysis to thwart terrorists.
Mark Zuckerberg, the founder and chief executive of Facebook, said this year that artificial intelligence could “help provide a better approach.”
What about smaller ‘closed’ platforms such as Telegram?
Faced with tighter restrictions by the bigger companies, terrorists are changing their approach, said Professor Peter Neumann, an expert on terrorism at King’s College.
“Big social media platforms have cracked down on jihadist accounts, with the result that most jihadis are now using end-to-end encrypted messenger platforms such as Telegram. This has not solved the problem, just made it different,” Prof Neumann said.
This shift has led to fresh frustrations for politicians and the security services. Rob Wainwright, the head of Europol, said recently: “There are some that simply won’t co-operate with us. One in particular causing major problems for us is Telegram.”