07/06/2021

Should Social Media Platforms Require Real IDs?

A recent survey of tech experts in the United Kingdom revealed that nearly two in three (64%) believe that platforms like Twitter and Facebook should require users to provide a real ID, making people fully accountable for what they post online.

Do we need to go this far? Would such a change even be acceptable to Singaporeans?

On the surface, yes. More than three in four (77%) agree with the idea of real IDs being attached to online social activities. Surprisingly, millennials are the most in favour while Gen Zers are the least, highlighting an important generational gap in attitudes towards privacy in the digital sphere.

Some of the benefits are also apparent to Singaporeans, particularly the eradication of dangerous, offensive, or problematic content. Half of Singaporeans (50%) think the end of anonymity will greatly contribute to reducing the amount of fake news circulated online, closely followed by the reduction of offensive comments (49%).

Completing the top five benefits of requiring a real ID for social media comments are eliminating rumours (47%), reducing racist comments (46%), and mitigating the spread of conspiracy theories (42%).

The downside for Singaporeans, however, is the impact that requiring IDs might have on their ability to criticise political leaders online.

This is aligned with recent research by Blackbox looking at current attitudes towards social media; our data shows that Singaporeans are more likely to view social media as a tool to hold people – public figures as well as users – accountable for their actions.

These findings suggest that Singaporeans are generally supportive of measures that make the internet a safer, more wholesome space. It remains to be seen, however, whether increasing social media users’ accountability is as straightforward as making social media platforms less anonymous.

Indeed, platforms such as Facebook already make it compulsory to register with a real name – and advanced algorithms are specifically designed to identify and delete pseudonymised accounts. While this undoubtedly reduces the number of fake accounts used to spread spam, phishing links, or malware, a number of hate groups still thrive within and around the platform.

Perhaps expanding the Airbnb model – requiring a scanned ID and proof of address to create an account – is the way forward. But then come several thorny questions: this model may well work for Airbnb, where it secures peer-to-peer transactions, but will it be relevant for platforms designed for users to exchange opinions? If so, who is in charge of determining whether a topic or view is out of bounds? And what criteria will they use to reach such conclusions?