Ministry of Communications & Info survey: Sinkie cyberbully victims like to complain and never block bullies

Rogue Trader

2 in 3 encountered harm online, but half of those harmed did not block offending content: Survey


More than three-quarters of those who reported harmful content to the online platforms indicated that they faced issues with the reporting process. PHOTO: THE NEW PAPER FILE
Lynda Hong
Senior Environment Correspondent

SINGAPORE – Two-thirds of Singapore Internet users have encountered harmful content online, but nearly half of those who experienced harm did not block the offending content or users, or report it to the hosting platforms, a survey by the Ministry of Communications and Information (MCI) found.

Even among those who reported the harm, the majority faced issues with the reporting process offered by tech platforms such as Facebook, HardwareZone, Instagram, TikTok, X (formerly known as Twitter) and YouTube.

In the survey conducted online in May 2023, 2,107 Singapore users aged above 15 were asked whether they had encountered harm in the previous six months.

It found that the most common types of harmful content were related to cyber bullying, sexual content, illegal activities, racial or religious disharmony, violence and self-harm.

Nearly half of those who experienced harm online said that they did nothing about it because it did not occur to them to do so, or they were unconcerned about the content.

Most of the harmful content was hosted on social media platforms and online forums, followed by messaging apps, search engines and e-mails.

Among users who reported harmful online content to the platforms, more than three-quarters indicated that they faced issues with the reporting process. The main issues highlighted by users included the platform not taking down the harmful content or disabling the account responsible, taking too long to act, and a lack of updates on their reports.

The survey noted that 88 per cent of the respondents were aware of at least one privacy tool that could be used on social media services, with highest awareness of tools that allowed users to control access to their profile information or their content, as well as to block other users from finding or contacting them.

The survey, which included 515 parent respondents, found that half of them had used parental controls to restrict the types of content their children could access, but usage was lower for other child safety tools. These include parent-child linked accounts that allow parents to monitor their children’s online activity, kids-only accounts that come with restricted content, and filtering tools offered by Internet service providers to block access to age-restricted sites.

To help parents manage the harms their children face online, MCI launched an Online Safety Digital Toolkit in March in partnership with Google, Meta, ByteDance and X. The toolkit recommends parental controls, privacy and reporting tools, as well as self-help resources for individuals and parents to manage their own or their children’s safety online.

An inter-ministry toolkit is being developed by MCI, the Ministry of Education and the Ministry of Social and Family Development, and is expected to be launched in phases from early 2024.

Laws have also been tightened over the last few months to tackle harm online.

In July, the Online Criminal Harms Act was passed in Parliament to allow the Government to tell individuals, entities, online and Internet service providers, and app stores to remove or block access to content it suspects is being used to commit crimes.

A new code of practice for app stores will address risks associated with harmful content in online games, possibly with the use of a classification system for them. The Republic will also address how children’s personal data is collected and how data can be used in artificial intelligence (AI) systems.

The code of practice for app stores will complement the Code of Practice for Online Safety, which took effect in July. Under the Code, social media firms with significant reach, such as Instagram and Facebook, must put in place systems to limit Singapore users’ exposure to egregious content, including content that promotes terrorism or cyber bullying, or incites racial or religious tensions.
 

laksaboy
'Harm' = wrongthink or hurting snowflakes' feelings. Just build the Great Firewall if you want to clamp down on information so much. :rolleyes:
 