The Impact of Toxic and Harmful Content on Brands, Their Teams and Customers

January 22, 2023
Online toxicity can be damaging for brands, harming the well-being of their frontline staff and carrying a real commercial cost when customers are exposed to it. So, how can companies work to alleviate these negative effects?

Here, Matthieu Boutard, President and co-founder of Bodyguard.ai, outlines the benefits and challenges of content moderation and explores how companies can take a blended approach to achieve the best outcomes. 

With the Online Safety Bill set to pass into UK law in the coming months, much attention has been paid to the negative impact of social media on its users.

The goal of the bill is to deliver on the government’s manifesto commitment to make the UK the safest place in the world to be online. However, it will need to strike a careful balance to achieve this effectively.

According to the Department for Digital, Culture, Media and Sport (DCMS), it aims to keep children safe, stop racial hate and protect democracy online, while equally ensuring that people in the UK can express themselves freely and participate in pluralistic and robust debate.

The bill will place new obligations on organisations to remove illegal or harmful content. Further, firms that fail to comply with these new rules could face fines of up to £18 million or 10% of their annual global turnover – whichever is higher.

Such measures may seem drastic, but they are becoming increasingly necessary. Online toxicity is rife, spanning all communications channels, from social media to in-game chat. 

In exploring the extent of the problem, we recently published an inaugural whitepaper examining the online toxicity aimed at businesses and brands in the 12 months that ended July 2022.

During this process we analysed over 170 million pieces of content across 1,200 brand channels in six languages, finding that as much as 5.24% of all content generated by online communities is toxic. Indeed, 3.28% could be classed as hateful (insults, hatred, misogyny, threats, racism, etc), while 1.96% could be classed as junk (scams, frauds, trolling, etc). 

Three Key Challenges of Content Moderation

Unfortunately, online hate and toxic content are increasingly seeping into brand-based communication channels such as customer forums, social media pages and message boards.

For brands, this can have a significant commercial impact. Indeed, one study suggests that as many as four in 10 consumers will leave a platform after their first exposure to harmful language. Further, they may share their poor experience with others, creating a domino effect of irreparable brand damage. 

It is therefore important that brands moderate their social media content to remove toxic comments. However, doing this effectively is no easy task, and there are several potential challenges.

First, it can be a highly resource-intensive and taxing task to complete manually. A trained human moderator typically needs 10 seconds to analyse and moderate a single comment.

Therefore, if hundreds or thousands of comments are posted at the same time, managing the flow of hateful comments in real time becomes an impossible task. As a result, many content moderators are left mentally exhausted by the sheer volume of work.
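To put that in perspective with a rough back-of-the-envelope calculation based on the 10-second figure above: a burst of 1,000 comments represents around 10,000 seconds, or close to three hours, of uninterrupted work for a single moderator, by which time the conversation has long since moved on.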

Second, being repeatedly exposed to bad language, toxic videos, and harmful content can have a psychological effect on moderators. Indeed, the mental health of these individuals cannot be overlooked, and burnout driven by this toxicity can be costly to businesses, potentially accelerating employee turnover.

Third, companies need to tread a fine line when moderating to ensure they aren’t accused of censorship. Channels such as social media pages are often the primary place where customers engage with a brand, provide feedback and hold it to account. Companies that give the impression of simply deleting any critical or negative comments may also come under fire.

A Blended Approach for Balanced Outcomes

Fortunately, AI and machine learning-powered technologies are beginning to address some of the challenges facing human moderators. However, there are further issues that need to be ironed out here. 

Machine learning algorithms currently used by social platforms such as Facebook and Instagram have been shown to have error rates as high as 40%. As a result, according to the European Commission, only 62.5% of hateful content is currently removed from social networks, leaving large volumes of unmoderated content that can easily harm people and businesses.

What’s more, these algorithms also struggle with the sensitive issue of freedom of expression. Lacking the ability to detect linguistic subtleties, they are prone to overreacting and can lean too far towards censorship.

With both human moderation and AI-driven solutions having their limitations, a blended approach is required. Indeed, by combining intelligent machine learning with a human team comprising linguists, quality controllers and programmers, brands will be well-placed to remove hateful comments more quickly and effectively.
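To make the division of labour concrete, the sketch below shows one common way such a blended pipeline can be structured: a machine-learning classifier handles the clear-cut cases at either extreme, and only the ambiguous middle band is escalated to the human team. This is an illustrative outline only, not Bodyguard.ai’s actual system; the classifier, thresholds and names are hypothetical placeholders.

```python
from dataclasses import dataclass

# Hypothetical confidence thresholds; in practice these would be tuned
# per channel and per language.
AUTO_REMOVE_THRESHOLD = 0.95   # near-certainly toxic
AUTO_APPROVE_THRESHOLD = 0.05  # near-certainly benign


@dataclass
class Comment:
    comment_id: str
    text: str


def toxicity_score(comment: Comment) -> float:
    """Stand-in for a trained machine-learning toxicity classifier.

    A real system would call a model here; this placeholder just flags a
    few obviously hostile words so the example runs end to end.
    """
    hostile_words = {"idiot", "scam", "hate"}
    return 0.99 if set(comment.text.lower().split()) & hostile_words else 0.01


def moderate(comment: Comment, human_review_queue: list) -> str:
    """Blended moderation: the model decides the obvious cases,
    humans (linguists, quality controllers) review the grey area."""
    score = toxicity_score(comment)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "removed"                    # clearly toxic: hide immediately
    if score <= AUTO_APPROVE_THRESHOLD:
        return "published"                  # clearly benign: let it through
    human_review_queue.append(comment)      # ambiguous: escalate to a person
    return "pending_review"


if __name__ == "__main__":
    review_queue: list = []
    for text in ["Love this product!", "You are an idiot, this is a scam"]:
        comment = Comment(comment_id=text[:8], text=text)
        print(f"{text!r} -> {moderate(comment, review_queue)}")
```

The practical effect of the two thresholds is that only the uncertain middle band ever reaches human reviewers, which is what keeps both the volume of work and the psychological exposure described earlier at a manageable level.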

Of course, selecting the right solution here will be key. Ideally, brands should look to adopt a solution that is advanced enough to recognise the difference between friends interacting with “colourful” language and hostile comments directed towards a brand.

Striking this balance is vital. To encourage engagement and build trust in online interactions, it is crucial that brands work to ensure that toxicity doesn’t pollute communications channels while also providing consumers with a platform to criticise and debate.

Thankfully, with the right approach, moderation can be effective. Indeed, it shouldn’t be about prohibiting freedom of expression but preventing toxic content from reaching potential recipients to make the internet a safer place for everyone.






Tags: bill, content, media, moderation, online, safety, social, toxicity