
UK's New 'Extremist Content' Filter Will Probably Just End Up Clogged With Innocuous Content


The UK government has rolled out an auto-flag tool for terrorist video content, presumably masterminded by people who know it when they (or their machine) see it and can apply the "necessary hashtags." The London firm behind it is giving its own product a thumbs-up, vouching for its nigh invincibility.

London-based firm ASI Data Science was handed £600,000 by the government to develop the unnamed algorithm, which uses machine learning to analyse Daesh propaganda videos.

According to the Home Office, tests have shown the tool automatically detects 94 per cent of Daesh propaganda with 99.995 per cent accuracy.

The department claimed the algorithm has an "extremely high degree of accuracy", with only 50 out of a million randomly selected videos requiring additional human review.

This tool won't be headed to any big platforms. Most of those already employ algorithms of their own to block extremist content. The Home Office is hoping this will be used by smaller platforms which may not have the budget or in-house expertise to pre-moderate third party content. They're also hoping it will be used by smaller platforms that have zero interest in applying algorithmic filters to user uploads because it's more likely to anger their smaller userbase than bring an end to worldwide terrorism.

The Home Office's hopes are only hopes for the moment. But if there aren't enough takers, it will become mandated reality.

[Amber] Rudd told the Beeb the government would not rule out taking legislative action "if we need to do it".

In a statement she said: "The purpose of these videos is to incite violence in our communities, recruit people to their cause, and attempt to spread fear in our society. We know that automatic technology like this, can heavily disrupt the terrorists' actions, as well as prevent people from ever being exposed to these horrific images."

Is such an amazing tool really that amazing? It depends on who you ask. The UK government says it's so great it may not even need to mandate its use. The developers also think their baby is pretty damn cute. But what does "94% blocking with 99.995% accuracy" actually mean when scaled? Well, The Register did some math and noticed it adds up to a whole lot of false positives.

Assume there are 100 Daesh videos uploaded to a platform, among a batch of 100,000 vids that are mostly cat videos and beauty vlogs. The algorithm would accurately pick out 94 terror videos and miss six, while falsely identifying five. Some people might say that's a fair enough trade-off.

But if it is fed with 1 million videos, and there are still only 100 Daesh ones in there, it will still accurately pick out 94 and miss six – but falsely identify 50.

So if the algorithm was put to work on one of the bigger platforms like YouTube or Facebook, where uploads could hit eight-digit figures a day, the false positives could start to dwarf the correct hits.
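To put that scaling in plain numbers, here's a back-of-the-envelope sketch in Python. It simply applies the figures above -- a 94 per cent detection rate and a 0.005 per cent false-positive rate (the flip side of "99.995 per cent accuracy") -- to batches of uploads. The batch sizes and the fixed 100 Daesh videos per batch are illustrative assumptions, not real platform numbers.

    # Rough sketch of the false-positive arithmetic, using the Home Office's
    # own figures. Upload volumes below are illustrative assumptions only.
    def filter_outcomes(total_uploads, daesh_uploads,
                        detection_rate=0.94, false_positive_rate=0.00005):
        """Expected catches, misses and wrong flags for one batch of uploads."""
        caught = daesh_uploads * detection_rate
        missed = daesh_uploads - caught
        wrongly_flagged = (total_uploads - daesh_uploads) * false_positive_rate
        return caught, missed, wrongly_flagged

    for total in (100_000, 1_000_000, 10_000_000):
        caught, missed, wrong = filter_outcomes(total, daesh_uploads=100)
        print(f"{total:>10,} uploads: ~{caught:.0f} caught, "
              f"~{missed:.0f} missed, ~{wrong:.0f} wrongly flagged")

At ten million uploads a day with the same 100 terror videos among them, that works out to roughly 500 wrongly flagged videos against 94 correct hits -- about five false positives for every genuine catch.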

This explains the government's pitch (the one with the latent legislative threat) to smaller platforms. Fewer uploads mean fewer false positives. Larger platforms with their own software likely aren't in the market for something government-made that works worse than what they already have.

Then there's the other problem. Automated filters, backed by human review, may limit the number of false positives. But once the government-ordained tool declares something to be extremist content, what are the options for third parties whose uploaded content has just been killed? There doesn't appear to be a baked-in appeals process for wrongful takedowns.

"If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake? It may be too complicated for the small company," said Jim Killock, director of the Open Rights Group.

"If the government want people to use their tool, there is a strong case that the government should review mistakes and ensure that there is an independent appeals process."

For now, it's a one-way ride. Content deemed "extremist" vanishes and users have no vehicle for recourse. Even if one were made available, how often would it be used? Given that this is a government process, rather than a private one, wrongful takedowns will likely remain permanent. As Killock points out, no one wants to risk being branded as a terrorist sympathizer for fighting back against government censorship. Nor do third parties using these platforms necessarily have the funds to back a formal legal complaint against the government.

No filtering system is going to be perfect, but the UK's new toy isn't any better than anything already out there. At least in the case of the social media giants, takedowns can be contested without having to face down the government. It's users against the system -- something that rarely works well, but at least doesn't add the possibility of being added to a "let's keep an eye on this one" list.

And if it's a system, it will be gamed. Terrorists will figure out how to sneak stuff past the filters while innocent users pay the price for algorithmic proxy censorship. Savvy non-terrorist users will also game the system, flagging content they don't like as questionable, possibly resulting in even more non-extremist content being removed from platforms.

The UK government isn't wrong to try to do something about recruitment efforts and terrorist propaganda. But it's placing far too much faith in a system that will generate false positives nearly as frequently as it blocks extremist content.
