# X Continues To Display Ads Alongside Harmful Content

Despite repeated assurances from X (formerly Twitter) that its ad placement tools provide maximum brand safety, ensuring that paid promotions do not appear alongside harmful or objectionable content in the app, advertisers continue to report concerns under X’s revised “freedom of speech, not reach” approach.

Today, Hyundai announced that it’s pausing its ad spend on X, after finding that its promotions were being displayed alongside pro-Nazi content.

This comes just days after NBC published a new report which showed that at least 150 blue checkmark profiles in the app, along with thousands of unpaid accounts, have posted and/or amplified pro-Nazi content on X in recent months.

X denied the NBC report earlier in the week, labeling it a “gotcha” article that lacked “comprehensive research, investigation, and transparency.” Yet now, another major X advertiser has been confronted with the exact issue highlighted in the report. X has acknowledged the problem, suspended the profile in question, and is working with Hyundai to address its concerns.

But again, this keeps happening, which suggests that X’s new approach to free speech is not sustainable, at least in terms of meeting advertiser expectations.

Under X’s “freedom of speech, not reach” approach, more content that violates X’s policies is now left active in the app, rather than being removed by X’s moderators, though its reach is restricted to limit its impact. X also claims that posts hit with these reach penalties are not eligible to have ads displayed alongside them. Yet various independent analyses have found that brand promotions are indeed being displayed alongside such material, meaning that either the material isn’t being detected as violative by X’s systems, or X’s ad placement controls aren’t functioning as expected.

The main concern for X is that, with an 80% reduction in total staff, including many moderation and safety employees, the platform is simply not equipped to handle the level of detection and enforcement required to uphold its rules. That means a lot of rule-breaking posts are simply being missed, with X instead relying on AI, and on its crowd-sourced Community Notes, to do much of the heavy lifting in this respect.

Which experts claim will not work.

Every platform utilizes AI to moderate content to varying degrees, though there’s general acknowledgment that such systems are not good enough on their own, and that human moderators remain a necessary expense.

And based on E.U. disclosures, we know that other platforms have a better moderator-to-user ratio than X.

According to the latest E.U. moderator reports, TikTok has one human moderator for every 22,000 users in the app, while Meta’s ratio is worse, at one for every 38,000 users.

X has one moderator for every 55,000 E.U. users.

So while X claims that its staff cuts have left it well equipped to deal with its moderation requirements, it’s clear that it’s now leaning more heavily on its other, non-staffed systems and processes.

Safety analysts also claim that X’s Community Notes are simply not effective in this respect, with the parameters around how notes are shown, and how long it takes for them to appear, leaving significant gaps in its overall enforcement.

And based on Elon Musk’s own repeated statements and stances, it seems like he would actually prefer to have no moderation at all in effect.

Musk’s long-held view is that all perspectives should be given a chance to be presented in the app, with users then able to debate each on its merits, and decide for themselves what’s true and what’s not. In theory, that should lead to more awareness through civic participation, but in reality, it also means that opportunistic misinformation peddlers and misguided internet sleuths are able to gain traction with their random theories, which are often incorrect, harmful, and dangerous to both groups and individuals.

Last week, for example, after a man stabbed several people at a shopping center in Australia, a verified X account misidentified the killer, and amplified the wrong person’s name and info to millions of people across the app.    

It used to be that blue checkmark accounts were the ones you could trust for accurate information in the app, which was often the reason an account got verified in the first place. But the incident underlined the erosion of trust that X’s changes have caused, with conspiracy theorists now able to rapidly boost unfounded ideas in the app by simply paying a few dollars a month.

And what’s worse, Musk himself often engages with conspiracy-related content, which he’s admitted he doesn’t fact-check in any way before sharing. As the holder of the most-followed profile in the app, he arguably poses the biggest risk of causing such harm, yet he’s also the one making policy decisions for the app.

Which seems like a dangerous mix.

It’s also one that, unsurprisingly, is still leading to ads being displayed alongside such content in the app. And yet, just this week, ad measurement platform DoubleVerify issued an apology for misreporting X’s brand safety measurement data, while reiterating that X’s actual brand safety rate sits at “99.99%”, meaning that brand exposure of this type is limited to just 0.01% of all ads displayed in the app.

So is this tiny margin of error behind these repeated incidents, or is X’s brand safety actually significantly worse than these figures suggest?

It does seem, on balance, that X still has some problems that it needs to clean up, especially when you also consider that the Hyundai placement issue was only addressed after Hyundai highlighted it to X. It was not detected by X’s systems.

And with X’s ad revenue reportedly still down 50%, a significant financial squeeze is also coming for the app, which could make hiring more moderation staff a difficult solution either way.
