TL;DR – These proposed rules to enhance online safety on social media are not new internationally.
In a recent announcement, the Ministry of Communications and Information (MCI) said that social media services may soon be directed to “disable access” to harmful content or disallow accounts that carry such content.
Is there a need to do so?
A poll conducted by the Sunlight Alliance for Action in January 2022 revealed that almost half of the more than 1,000 Singaporeans surveyed said they had experienced online harm. Hence, the Government is proposing two codes of practice to enhance online safety.
And here’s what the two codes of practice cover:
1. Designated social media services with high reach or high risk will be required to have system-wide processes to enhance online safety for all users, with additional safeguards for young users under 18.
What this means is that social media services will have to institute and execute community standards and actively moderate the content put up on their platforms to “mitigate users’ exposure” to harmful content.
Users of their platforms should also be given the option to reduce, or prevent altogether, exposure to unwanted content. Some outright ‘no-go’ content would include child sexual exploitation and abuse material, as well as terrorism content; social media platforms must actively detect and remove such ‘no-go’ content.
Another kind of harmful material would be the kind of video that “challenges” viewers to film themselves engaging in a dangerous act, potentially leading to injuries or even death.
Importantly, easy-to-use and permanent reporting mechanisms should be made available to users of social media platforms to report harmful content and unwanted interactions.
On top of all these, social media services are required to produce an annual accountability report to be published on IMDA’s website.
2. Social media services carrying content assessed to pose “egregious online harms”
Content areas that pertain to “egregious online harms” include safety in relation to sexual harm, self-harm and public health. Public security and racial and religious harmony are also included.
Under this content code, IMDA will be granted powers to direct any social media service accessible from Singapore to disable access to the specified type(s) of “egregious harmful content” or disallow specified online accounts to communicate such content and/or interact with users in Singapore.
MCI has noted that online harm has become more prevalent globally and is a growing concern. It began industry consultations in June 2022 and will consult the public in July 2022.
Not something new
In case you don’t already know, these new rules are not new internationally. Singapore joins a growing number of countries that have enacted or proposed laws to regulate online content.
One example is Germany’s Network Enforcement Act, which came into force in 2018 and prohibits defined types of illegal content on social media services with more than 2 million users in Germany.
Another is Australia’s Online Safety Act, which came into force in January 2022. It allows Australia’s online safety regulator to order the take-down of illegal and restricted content, with any refusal by companies to comply punishable by fines of up to A$555,000 per offence.
The European Union and Britain are also planning similar online safety laws. Do you feel there is a need for these proposed codes to safeguard online safety?