AI versus extreme selfies

With smartphones ubiquitous and social media usage rising ever higher, taking a photo of yourself – a “selfie” – alone or with friends has become popular not just among celebrities and world leaders but, let’s face it, with most of us.

Usually it’s just harmless fun, or at worst a minor inconvenience for those who have to dodge crowds of selfie-takers while making their way around picturesque parts of town. Wanting a keepsake of yourself and your pals has been a thing ever since humans first learned that we could make marks on cave walls.

But some people are concerned that our growing ability and inclination to take selfies anywhere and anytime is causing irreparable – and even deadly – harm.

Aldo Patriciello, an Italian MEP, has written to the EU’s administrative body, the European Commission, to ask what action it is taking to help prevent “the alarming phenomenon” of people accidentally hurting and killing themselves while taking “extreme selfies”. He defines these as “Selfies [taken] on skyscrapers, hanging off cornices, but even on railway lines, waiting until the last second to jump away.”

Citing sources including Carnegie Mellon University in the United States, he says that such selfies are “resulting in 170 deaths each year”. Researchers in Turkey have found that, among 159 “selfie victims” they identified as being injured or killed in the act, 8 were based in Russia, 4 in Spain, 3 in Croatia, 3 in Turkey and 2 in Italy.

In particular, Patriciello is concerned about teenagers taking extreme selfies to show off, and then sharing the resulting images on social media in the hope of gaining “more followers and tens of thousands — or sometimes only dozens — of ‘likes’”.

He asked the Commission “whether it intends to require social platforms to use an algorithm that can prevent the risk of publicising and disseminating such images, to prevent them from becoming viral and setting negative examples to other young users?” The European Commission has yet to respond to Patriciello, but told Danube it would do so in due course.

Dr John Chiverton, who researches automated image analysis at the University of Portsmouth in the UK, told Danube that using AI to analyse images is “currently an active research area”, and that it is not an easy task.

“Action recognition and activity recognition from video are relatively challenging problems unless a number of constraints are imposed, such as good lighting and a clear view of the person performing the action,” he said. “Selfies on the other hand are a single snapshot in time and require a computer to understand the scene as well as how the person is posed or perhaps even acting in the single image frame. This, in a way, could be seen to be even more challenging.”

AIs can be trained on large amounts of data, and there are abundant selfies available on the internet to use as training material, Dr Chiverton said. But he said it might take “a considerable amount of time to accurately identify the more dangerous selfies which will be important to include in the training”. And even with training, automated image analysis typically has an accuracy “much less than 100%”.

“There would be a trade-off between sensitivity and specificity depending on the desired levels that the internet company might want to set or perhaps one that the government may require. A high sensitivity threshold indicates that it is vitally important to catch all the ‘dangerous’ selfies whilst a high specificity threshold would indicate the number of correctly classified ‘non-dangerous’ selfies would need to be relatively high.”
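The trade-off Dr Chiverton describes can be made concrete with a toy calculation. The sketch below is purely illustrative – the scores, labels and threshold values are invented, and no real platform's system is shown – but it demonstrates how moving a single decision threshold raises sensitivity (the share of “dangerous” selfies caught) at the cost of specificity (the share of harmless selfies correctly let through), and vice versa.

```python
def sensitivity_specificity(scores, labels, threshold):
    """labels: 1 = dangerous, 0 = non-dangerous; scores: model confidence.
    An image is flagged as dangerous when its score meets the threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    sensitivity = tp / (tp + fn)  # fraction of dangerous selfies caught
    specificity = tn / (tn + fp)  # fraction of harmless selfies passed
    return sensitivity, specificity

# Made-up confidence scores for 8 images; the first four are labelled dangerous.
scores = [0.9, 0.8, 0.6, 0.4, 0.7, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   1,   0,   0,   0,   0]

for t in (0.3, 0.5, 0.7):
    sens, spec = sensitivity_specificity(scores, labels, t)
    print(f"threshold={t}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

Lowering the threshold catches every dangerous image but wrongly flags more harmless ones; raising it does the reverse – which is exactly the dial an internet company, or a regulator, would have to choose where to set.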

Professor Bernt Schiele heads the computer vision department at the Max Planck Institute for Informatics in Germany. He said that in principle it should be “as easy to train a machine learning algorithm to filter ‘extreme selfies’ as it is to train an algorithm to filter ‘spam’ or ‘terrorist propaganda’”. But he also flagged the need for “sufficient data to train from”, which he said means that someone has to label the training data accordingly.
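Professor Schiele's point about needing labelled training data can be sketched with a deliberately simplified classifier. Everything below is invented for illustration – real systems would work on image pixels with deep networks, not word lists – but the essential shape is the same: a human must first label examples as “dangerous” or “safe” before any algorithm can learn to tell them apart.

```python
from collections import Counter

def train(examples):
    """examples: list of (feature_words, label) pairs that a human has labelled."""
    counts = {"dangerous": Counter(), "safe": Counter()}
    for words, label in examples:
        counts[label].update(words)
    return counts

def classify(counts, words):
    """Pick the label whose training examples share the most features."""
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# Hand-labelled training set (the labelling step Schiele says someone must do).
labelled = [
    (["rooftop", "ledge", "skyscraper"], "dangerous"),
    (["railway", "track", "train"], "dangerous"),
    (["beach", "friends", "sunset"], "safe"),
    (["cafe", "coffee", "friends"], "safe"),
]

model = train(labelled)
print(classify(model, ["ledge", "skyscraper"]))  # → dangerous
print(classify(model, ["sunset", "beach"]))      # → safe
```

Like the spam filters Schiele compares it to, such a model is only as good as its labels: features it has never seen in training score zero for both classes, which is one reason accuracy stays below 100%.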

Professor Schiele agreed that although such an algorithm could be trained to become “quite effective”, it couldn’t be 100% accurate. “We all know this from spam-filtering,” he said. “Lots of spam is classified correctly, but quite some spam is not classified as such and once in a while an important message is classified as spam even though this should not be classified as such.”

Social media platforms are already using algorithms to find and remove extremist content and copyrighted materials. They could soon be adding extreme selfies to that list.

Words: Craig Nicholson

Photo: R4vi
