Remember when the internet was in its infancy? We all had to put up with little 468 × 90 banner ads everywhere we looked – and sometimes we clicked them because we didn’t know better.

As time went on we grew smarter: we learned to tell the bad adverts from the good, and the maturing online advertising industry bumped the ugly out of the marketplace entirely. And now our brains automatically blank out adverts to keep us focused on the content we went to the site for in the first place. Many of us use ad-blocking tools so our brains don’t even need to perform the mental airbrushing.

But what if those adverts were trying to tell us something really important?

What if the Emergency Broadcast System was hooked into those banner ads trying to give us forewarning of an avoidable cataclysm?

Social Engineering

Social engineering refers to the psychological manipulation of people into performing actions or divulging confidential information.

It is increasingly used by malicious actors (in bank and identity fraud, for example), but it is also becoming a core part of many companies’ business models.

It all started innocently enough with the Social Graph. The ability to link people with other people, events, photos and products via rich, meaningful relationships turned the one-size-fits-all internet into a personalised window where the chaos suddenly started to shape itself into something we recognised and could engage with on a more emotional level.
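
To make the idea concrete, here is a minimal sketch in Python of the kind of typed, directed relationships a social graph stores. The node names and relationship labels are invented for illustration; no real platform’s schema looks exactly like this.

```python
from collections import defaultdict

# A toy social graph: nodes are entities (people, events, photos, products)
# and every edge carries a relationship type, which is what makes the
# connections "rich" and queryable rather than just links.
class SocialGraph:
    def __init__(self):
        # adjacency: node -> list of (relationship, other_node)
        self.edges = defaultdict(list)

    def relate(self, subject, relationship, obj):
        self.edges[subject].append((relationship, obj))

    def related(self, subject, relationship):
        return [o for rel, o in self.edges[subject] if rel == relationship]

graph = SocialGraph()
graph.relate("alice", "friends_with", "bob")
graph.relate("alice", "attended", "glastonbury_2016")
graph.relate("bob", "likes", "acme_headphones")

# "People Alice knows" and "things Bob likes" are each one typed hop away:
print(graph.related("alice", "friends_with"))  # ['bob']
print(graph.related("bob", "likes"))           # ['acme_headphones']
```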

Instant social gratification through ‘likes’ and ‘follows’ became our norm, and information relevant to us started to travel at a speed that made some high school students, even back in 2008, say “email is too slow”. The relevancy engine that is the Social Graph began to play on our basest motivations.

FOMO (fear of missing out), or FOMSI (fear of missing something interesting), drove us all to start checking our smartphones over 150 times per day.

[Image: The social elite, yesterday.]

Getting Hooked

Is there a place for ethics in the user interface of software? Google Design Ethicist (and magician) Tristan Harris published an eye-opening analysis of the psychological tricks present in some of our most-loved apps: the slot-machine-style variable reward of the pull-down-to-refresh gesture in most email and social notification software; the extra time we spend in applications that favour disruptive notifications over courteous ones; and the way menus give the impression of choice while actually limiting what we can do or see.
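
Harris’s slot-machine comparison is, at its core, a variable-ratio reward schedule: the refresh sometimes pays out and sometimes doesn’t, and the unpredictability is what hooks us. Here is a hedged sketch of that mechanic, with the probability and item text invented for the example.

```python
import random

def pull_to_refresh(hit_probability=0.3):
    """Simulate one pull-to-refresh gesture.

    New content arrives unpredictably (a variable-ratio reward schedule),
    which is far more habit-forming than a predictable payout would be.
    """
    if random.random() < hit_probability:
        return ["new message!"]  # the 'win'
    return []                    # the 'near miss' that keeps us pulling

# Ten pulls: most yield nothing, a few pay out, and we keep pulling anyway.
for pull in range(10):
    print(f"pull {pull + 1}: {pull_to_refresh() or 'nothing new'}")
```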

Taken together, instant social feedback, a general feeling of belonging to a community and these psychological UX tricks mean we pour hours of our lives into the graphs that organisations like Amazon, Facebook and Twitter are able to monetise.

But there are a few big, real problems with this that go beyond us simply being pawns in big commercial machines.

We are a species susceptible to gamification; we are competitive by nature. Whether it is the size of our waist or the size of our stock portfolio, we are always competing against someone or something (often an ideal that doesn’t even exist). Introducing competition to something we care about can quickly change our behaviour.

Charlie Brooker’s epic series Black Mirror (now funded by Netflix, with noticeably higher production values and more American accents than ever before) covers this exact problem in the season 3 episode ‘Nosedive’.


Another serious problem that stems from the Social Graph is addressed in Adam Curtis’ recent video documentary, HyperNormalisation (BBC iPlayer).

Once a graph knows enough about a user’s product preferences, political leaning, sexual orientation or religious beliefs, it can use this to aggressively filter out any information that might conflict with that user’s desires or world view. Curtis sums it up in a phrase that will be familiar to you: “if you liked that, you’re going to love this…”
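
A crude sketch of how such a filter might behave, assuming a made-up user profile and scoring scheme: items are scored by how well their tags match the user’s inferred affinities, and anything below the threshold, including everything the user might disagree with, simply never appears.

```python
# Hypothetical profile inferred from the graph: topic -> affinity in [-1, 1].
profile = {
    "centre_left_politics": 0.8,
    "indie_music": 0.6,
    "right_wing_politics": -0.7,
}

def relevance(item_tags):
    """Average the user's affinity across the item's tags."""
    scores = [profile.get(tag, 0.0) for tag in item_tags]
    return sum(scores) / len(scores) if scores else 0.0

def filter_feed(items, threshold=0.2):
    """'If you liked that, you're going to love this': keep only items
    scoring above the threshold; conflicting views are silently dropped."""
    return [name for name, tags in items if relevance(tags) > threshold]

feed = [
    ("Huff Post op-ed", ["centre_left_politics"]),
    ("Festival line-up", ["indie_music"]),
    ("Right-leaning editorial", ["right_wing_politics"]),
]
print(filter_feed(feed))  # the right-leaning editorial never appears
```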

The result is applications that are so intensely personalised and engaging that they become almost impossible to put down. But it’s even worse than that.

If you’re scared of that, you’re going to be terrified of this

As you spend more and more time operating within an increasingly narrow view of the world, everything you look at is relevant to you, and all your friends and the people you follow share your interests and your political, sexual and religious leanings.

It sounds great, like some kind of relevance nirvana, and it is at first. But it can cause a very strange thing to happen: the filters get to a point where they are unable to filter any further. What you see is so refined, so narrow and so reinforced by others in a similar situation that it can stop you seeing anything that challenges your way of thinking.

This means that even if you hold a minority view, you may think you are firmly in the majority, because the opposing, majority views never reach you.
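
The feedback loop is easy to simulate. In this sketch (using the same invented profile idea as above), every item shown reinforces the user’s affinity for its topic while everything else decays, and within a few rounds the feed reaches a fixed point where the other side can never surface again.

```python
profile = {"left": 0.5, "right": 0.4}   # invented starting affinities
items = [("left article", "left"), ("right article", "right")]

for round_no in range(5):
    # Show only the single best-matching item, as an aggressive filter would.
    shown = max(items, key=lambda item: profile[item[1]])
    # Engagement reinforces the shown topic; everything else decays.
    for topic in profile:
        profile[topic] += 0.1 if topic == shown[1] else -0.1
    print(f"round {round_no + 1}: shown {shown[0]!r}, profile {profile}")

# After a few rounds the profile is so skewed that the other side can
# never be shown again: the filter has nothing left to filter.
```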

The ‘shock’ Brexit referendum result is one example.

I expect everyone in the Remain camp was astonished at the result. But flip it around: I expect everyone who voted Leave had been subjected to months of pro-independence, Rule Britannia, ‘Make Britain Great Again’ propaganda. Patriotism is a groove that is very easy to fall into and very easy to whip up excitement about.

Why do people only read things that back up their way of thinking?

This question was posed by a friend of mine when I posted a link on Facebook that I realised was very similar to a lot of things I had been posting lately. I was the proverbial broken record.

I suppose I’d categorise myself as a ‘liberal capitalist’: I know the value of money and want to accrue wealth, but I also want people to be able to live whatever lives they want, and for everyone to have a good basic standard of living. Looking through Facebook and Twitter, all of the trends and the suggested and promoted posts reflected this; it was all a bit centre-left.

So, accepting that I had become a victim of the filters, I decided to try to break them…

The experiment: breaking the filters

I unfollowed some of the more mainstream left-wing media (sorry, Huff Post) and reluctantly started to follow some from the right wing, hoping the software behind the scenes would get confused and melt into puddles of silicon.

Soon enough, my feeds started to fill up with views alternative to my own. With the US presidential election approaching, a lot of the posts were Americans discussing the prospect of a Democratic or Republican win. In all honesty, I expected the right wing to be completely, certifiably insane…

Yet the gun-toting Yosemite Sam caricatures portrayed by the left-wing media as having misunderstood the Second Amendment turned out to be parents who saw guns as a necessary evil to keep their households safe; parents who knew the ‘bad guys’ already had all the guns they could ever want, and that any gun control would only reduce their own ability to defend themselves. There was sensible, level-headed reasoning around abortion and choice, respect for victims of sexual crime, and even evidence of the positive effects of trickle-down economics.

All things that would make for terrible reader counts in liberal media.

True, there are plenty of certifiably insane Republicans, mostly punchy types wearing red caps, but there are also liberals who chain themselves together on runways at London Heathrow, so it works both ways.

And what I have found in social media since breaking the filters is balance.

Conclusion

Why do people only read things that back up their way of thinking? The scary answer could be that, thanks to highly tuned social relevancy filters, they are simply never shown the things that disagree with their way of thinking.

Are you seeing the big picture? The answer, undoubtedly, is no. But it is out there waiting to be seen if you’re willing to risk seeing something you disagree with.