Content Marketing

Human vs machine: the things a bot can’t spot on social media

August 15, 2016



Reading Time: 3 minutes

Automation is becoming a very attractive prospect for busy people. It can save you oodles of time, allow you to move away from mundane tasks, and it can even help you grow your online audience. But when it comes to automating your social media activity, is it wise to leave all the responsibility with just an algorithm?

We all know how easy it is to get into PR-nightmare territory on social media. A slip of the tongue (or in this case, finger) can demolish a company’s reputation, deface a brand image, or even get you into legal trouble. A social media service that relies solely on code to source content for you will be great at spotting spelling mistakes, explicit words and out-of-date information, but it won’t be able to spot the nuances that separate a good article from a bad one.

Here are just a few of the reasons why social media automation needs real people behind it.

A human can see bad website design

Have you ever clicked on a link, thinking what you’d find would be interesting or useful, only to find the website is badly formatted and difficult to read? That’s not the kind of thing you want to be sending out to your audience. A real person would immediately be able to spot a chaotic webpage, but a bot that could do the same would have to be incredibly complex. Even this robot ‘art critic’ relies on human facial expressions to decide if a piece is good or bad!

A human can weed out harmful advice

The internet is full of experienced, intelligent people with great tips for whatever you’re trying to achieve. Sadly, not all advice comes from such people. Some corners of the internet will give out bad advice, ranging from irresponsible to downright dangerous, if it’s in their own interest to do so. A human can quickly assess whether bits of advice or ‘top tips’ are trustworthy, but a 100% automated social media scheduler can only pick out keywords and data.

It may be a slower process, but your tweets and Facebook posts reflect on you – you don’t want your followers thinking you endorse harmful behaviour.

A human can question hyperbole

‘One weird trick to lose 10lbs!’

‘Learn how to make MILLIONS!’

We’ve all seen exaggerated claims online, and sometimes they creep into articles that look legitimate. Sometimes they are just hyperbolic statements advertising something genuinely useful. Sometimes they entice people towards something less desirable, like an internet scam. A completely automated system would see nothing unusual about these kinds of statements – they’re just text on a page. But human experience tells us they could be dodgy, and definitely not something you want your social media fans to click on. Even if they are harmless, they can send the wrong message.

A human can see the difference between controversial and offensive

A bit of debate and disagreement is normal, especially on social media. Content that makes people question their assumptions can even be a great talking point for your audience. It gets people engaged and sparks conversation, which can be more effective than posting the same old, same old.

But there is a line between a disruptive opinion and an offensive one, and nothing can crush a reputation on social media like the latter. Relying on machines to root out hateful or discriminatory content is precarious. Yes, they could screen for certain words and phrases, but what if a word that is only offensive in context falls through the net? A lot of what makes inflammatory speech so abrasive is ‘between the lines’: tone of voice, cultural or historical significance, current events. Only a human can connect these dots to see if an article is fit for purpose.
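To see why word-level screening is so brittle, here is a minimal sketch of the kind of blocklist filter described above. The blocklist contents and example phrases are purely illustrative assumptions, not any real product’s rules:

```python
# A naive blocklist screen: reject a post only if it contains a banned word.
# "slur1" and "slur2" are placeholder terms standing in for a real blocklist.
BLOCKLIST = {"slur1", "slur2"}

def passes_screen(text: str) -> bool:
    """Return True if no blocklisted word appears in the text."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

# A phrase that is offensive only in context contains no banned word,
# so it sails straight through the filter:
print(passes_screen("They should go back where they came from."))  # True
```

The filter catches explicit terms but is blind to tone, targets and context – exactly the gap the paragraph above describes.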

A human can tell if information is out of date

It’s fairly straightforward to get a machine to filter out some time-sensitive content. If an article has ‘2015’ or ‘Christmas’ in the title, for example, an app might be able to detect this. But not all information has such a clear-cut shelf life.
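The simple date-and-season check described above might look something like this sketch (the patterns are assumptions for illustration):

```python
import re

# Flag titles containing obvious time-sensitive markers: a year up to 2015,
# or a seasonal keyword like "Christmas".
STALE_PATTERNS = [r"\b20(0\d|1[0-5])\b", r"\bChristmas\b"]

def looks_time_sensitive(title: str) -> bool:
    """Return True if the title contains a clear-cut shelf-life marker."""
    return any(re.search(p, title, re.IGNORECASE) for p in STALE_PATTERNS)

print(looks_time_sensitive("Best Cameras of 2015"))      # True
print(looks_time_sensitive("Is your camera obsolete?"))  # False – no marker
```

A review of a discontinued camera, or a tutorial for a superseded software version, carries no such marker, so a filter like this would wave it through.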

Say you’re a freelance photographer looking to automate your social media activity. Would a bot know that the camera in that product review has since been superseded? Or that the tips in that Photoshop tutorial are now irrelevant? Any industry that is constantly changing will have the same problem – unless a human is there to screen for it.

Posting out-of-date content can be more harmful than it sounds. If your professional reputation hinges on people thinking you know your stuff, you don’t want followers seeing you share advice from the Dark Ages.

No matter how efficient ‘run-by-robots’ automation systems might seem, it’s worth having a human pair of eyes to look over your social media content before it’s published.