Thursday, January 04, 2018

A new SafeSearch for the epoch of hate

In the bad old days, just before yesterday, the main thing wrong with the Internet/Interweb was that it was awash with porn. Thankfully, search providers such as Microsoft and Google came up with SafeSearch, which allowed you to automatically filter out this torrent of filth.

Today the Interweb is awash with people trying to convince you of fake news. Sometimes they distort information or just plain lie to you, sometimes they express discomfiting opinions which they ought to have self-censored .. and sometimes they expose you to true information about the world which is deeply upsetting or brand-damaging.

Google/YouTube has been slow to pull down (somewhat-popular) offending content. I was shocked, shocked to realise that there was a conflict of interest here. Luckily the big advertisers are on the case. In The Times today I read: "JP Morgan’s firewall blocks ads from YouTube hate videos".

Yes, hate. So bad for the brand.
"Google has been accused of failing to do enough to remove dangerous content from YouTube after a leading bank created its own tools to prevent its online advertisements appearing alongside hate-filled videos.

Politicians and advertisers said it was an indictment of Google that a financial company was able in effect to identify and filter racist and terrorist clips where the tech giant had failed.

JP Morgan Chase devised an algorithm with 17 layers, or filters, to separate what it deems as safe YouTube channels from unsafe ones. One of the filters assesses the total video count on a channel, which automatically cuts out channels with one-off viral videos. Other filters look at channels’ subscriber counts, the topics they focus on, the language of the video captions and viewer comments on their clips.

“The model Google has built to monetise YouTube may work for it, but it doesn’t work for us,” Aaron Smolick, executive director of paid-media analytics and optimisation at JP Morgan Chase, told Business Insider. “The attention of protecting a brand has to fall on the people within the brand.”"
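
Out of curiosity, here is a back-of-envelope sketch (in Python) of what such a layered channel filter might look like. The layers mirror the ones reported above (video count, subscriber count, topics, caption language, viewer comments), but every field name, threshold and banned topic below is my own guess, not JP Morgan's actual model.

```python
# Sketch of a layered brand-safety filter, after the JP Morgan idea
# reported above. All names, thresholds and lists are invented here.

BANNED_TOPICS = {"extremism", "conspiracy"}          # assumed example list

def is_brand_safe(channel: dict) -> bool:
    """Run a channel through each filter layer; one failure rejects it."""
    layers = [
        lambda c: c["video_count"] >= 25,            # cuts out one-off viral channels
        lambda c: c["subscribers"] >= 10_000,        # some audience track record
        lambda c: not set(c["topics"]) & BANNED_TOPICS,
        lambda c: c["caption_language"] in {"en", "es", "fr"},
        lambda c: c["toxic_comment_ratio"] < 0.05,   # viewer comments on clips
        # ... the real model reportedly stacks 17 such layers
    ]
    return all(layer(channel) for layer in layers)

channel = {
    "video_count": 340,
    "subscribers": 52_000,
    "topics": ["finance", "news"],
    "caption_language": "en",
    "toxic_comment_ratio": 0.01,
}
print(is_brand_safe(channel))                        # True: safe to advertise on
```
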
Do we even want Google or Facebook to be the arbiters of what can be published on the Interweb? Times commentator David Aaronovitch is not so sure:
"We all accept that material inciting terrorism or showing child sexual abuse should be removed immediately. But what happens when companies are obliged to take down anything that constitutes hate speech or intimidation?

Leaving aside the question of how you define them, the approach would almost certainly involve creating algorithms that delete material with certain keywords or phrases as soon as they are posted.

For one thing, this would save tech giants the trouble of having to respond to potentially millions of requests to take down material that users found offensive.

Better to make the mistake of wrongful deletion of posts and accounts, for which there is no penalty, than to allow something bad to get through and get dragged through the courts."
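
Aaronovitch's worry is easy to make concrete. A naive keyword filter (the blocklist below is entirely made up) cannot tell abuse from a news report about abuse, so the cheap, penalty-free option is to delete both:

```python
# A deliberately naive keyword moderator: hypothetical blocklist,
# deletes on any match as soon as the post arrives.
import re

BLOCKLIST = ["hate crime", "terror"]                 # made-up banned phrases
pattern = re.compile("|".join(map(re.escape, BLOCKLIST)), re.IGNORECASE)

def moderate(post: str) -> str:
    # Wrongful deletion costs the platform nothing, so match == delete.
    return "DELETED" if pattern.search(post) else "PUBLISHED"

print(moderate("Police charge man over local hate crime"))   # DELETED: a news report
print(moderate("Cute cat video compilation"))                # PUBLISHED
```
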
The world according to Google. I'm sure Hillary Clinton and George Osborne would approve.

---

There is a better way.

Extend the concept of SafeSearch the JP Morgan way. Consider a firewall-like app which can download 'safe-to-view' rules. Anyone can design a rule set: the US Democratic Party, SJWs-against-hate, Fox News, The Sun newspaper .. JP Morgan ..

Any organisation which fears brand damage simply white-lists the rule sets from organisations which filter content in a manner it finds acceptable. The app then blocks that organisation's ads wherever they would appear next to text, pictures or videos which the active rule set deems hateful. Ads are served only to devices running the app with a white-listed rule set; I think JP Morgan et al. could swing that. A sketch of the scheme follows.
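
A minimal sketch, with everything (the publishers, the white-list, the one-keyword 'filters') invented for illustration:

```python
# Decentralised ad gating: rule sets anyone can publish, advertisers
# white-list the publishers they trust. All names here are invented.

# publisher name -> predicate saying whether content is safe to advertise on
RULE_SETS = {
    "SJWs-against-hate": lambda text: "hate" not in text.lower(),
    "The Sun":           lambda text: "boring" not in text.lower(),
}

# each advertiser white-lists the rule-set publishers it trusts
ADVERTISER_WHITELIST = {
    "JP Morgan": {"SJWs-against-hate"},
}

def serve_ad(advertiser: str, device_rule_set: str, page_text: str) -> bool:
    """Serve the ad only if the device runs a rule set the advertiser
    trusts, AND that rule set deems the surrounding content safe."""
    if device_rule_set not in ADVERTISER_WHITELIST.get(advertiser, set()):
        return False                              # untrusted filter: no ad served
    return RULE_SETS[device_rule_set](page_text)

print(serve_ad("JP Morgan", "SJWs-against-hate", "a video full of hate"))    # False
print(serve_ad("JP Morgan", "SJWs-against-hate", "a video about kittens"))   # True
```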

This decentralised solution removes our reliance on those compromised Silicon Valley giants. We can all live safely in our bubble of choice.

Seriously, what is there not to like?

2 comments:

  1. Meanwhile we now also have a microchip bug announced, which dates back to chips from 1995. It is a classic problem: for speed, the chips follow computation paths which turn out to be logically incorrect, since they don't yet know those paths are incorrect; when they find out, they abort. Except these chips access other data during the incorrect path, and don't then erase the traces of that access, so the data can now be read (a toy sketch below). Affects just about everything since 1995.

    I wonder whether the relevant STL departments were onto this problem, as it involves some good theorem proving to predict, and some good architecture to prevent?
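
    Roughly the mechanism, as a toy model in Python (simulated cache and simulated timings; nothing here is a real exploit): the chip discards the result of the mispredicted path, but not its cache footprint, and resident cache lines answer faster.

    ```python
    # Toy model of the bug, not a real exploit: the CPU rolls back the
    # *result* of a mispredicted path, but the cache line that path
    # touched stays resident, and resident lines are faster to probe.

    SECRET = 42        # a byte the program must never reveal directly
    cache = set()      # stands in for which cache lines are resident

    def speculative_load(index):
        """The mispredicted path: its value is discarded on abort, but
        the secret-dependent cache footprint survives the rollback."""
        cache.add(SECRET)

    def probe_time(value):
        """Simulated access latency: cached values read back fast."""
        return 1 if value in cache else 300       # 'cycles'

    speculative_load(10_000)                      # out of bounds, aborted
    leaked = min(range(256), key=probe_time)      # fastest probe wins
    print(leaked)                                 # 42, recovered indirectly
    ```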

    1. I reviewed some of the tech descriptions. Who knew speculative execution would be so problematic? You'd need an 'interesting' logic to tease out those particular problems.

      You should check out John Preskill's quantum computing review, which Scott Aaronson links to in his latest post (on the sidebar).

      https://arxiv.org/abs/1801.00862


Comments are moderated. Keep it polite and no gratuitous links to your business website - we're not a billboard here.