Don’t Skip the Comments — They Might Just Save Journalism (My Piece in Times Now)
In the evolving AI-driven media landscape, user comment sections are emerging as surprisingly vital fact-checking tools, surpassing even Wikipedia in trust for verifying information, especially among younger users. Once dismissed as troll havens, these spaces are now arenas for collective scrutiny, where users spot inconsistencies and debunk misinformation faster than official agencies.
In the brave new world of AI and media, there’s unexpected, good news for those who worry that human intelligence might soon be tossed into the dustbin of history. And it’s not from a tech lab or a university — it’s from the humble, often-ignored comments section.
For years, we’ve lived in what scholars call the ‘review economy’, where customer ratings and reviews massively influence the sale of products and services. On platforms like Amazon, Zomato, or Airbnb, a seller's fate can swing dramatically with each new review. A single critical comment can drag down an entire business, and that pressure — while good for consumers — can be excruciating for producers and sellers.
Now, this trend is spilling over into the news ecosystem, where reviews take the shape of user comments. These aren’t just reactions anymore — they’re fast becoming a new barometer of trust, credibility, and proof.
In the recently released Reuters Institute Digital News Report 2025, a particularly revealing section titled “How the Public Checks Information It Thinks Might Be Wrong” puts “comments from other users” at #6. That’s above Wikipedia — which for years held the status of the internet’s go-to fact-checker.
The findings show a distinct generational tilt: 22% of users aged 18–24 rely on comment sections to verify news, compared to 17% of those 35 and older. Younger users, it seems, are more willing to test ideas in open forums, using public feedback to form judgments — a kind of social fact-checking.
From Trolling to Truth-Telling
The shift is subtle but powerful. Until recently, comment sections were mostly dismissed as digital sewers — spaces for trolls, bots, or users with too much time and cheap internet. But as media literacy grows and users begin to understand how misinformation spreads, many are turning to the comment section to help separate fact from fiction.

Consider this: in 2024, during a spike in fake videos about election rallies in India, many users didn’t wait for fact-checking agencies or news outlets to step in. Instead, they turned to the comment threads on X (formerly Twitter) and YouTube, where sharp-eyed users were already pointing out inconsistencies, like shadows not matching or background signage revealing the wrong city. This kind of ground-level scrutiny often beats AI detection, simply because humans notice different things — language nuances, cultural context, or even emotional tone.
What Comments Really Do
For content creators — whether on YouTube, Instagram, or traditional news — comments are now a form of social listening. They show not just how many people are watching, but how closely they’re paying attention.

The comment section has, in many ways, become a co-creation space, where knowledge isn’t just delivered but discussed, debated, and shaped by varied human experience.
The Human Edge in the AI Age
But we mustn’t ignore the downside. The internet is still overflowing with confirmation bias, a phenomenon first described by cognitive psychologist Peter Wason in the 1960s. Simply put, humans tend to seek information that confirms what they already believe. Add algorithmic feeds to this — designed to keep you scrolling — and we end up inside echo chambers of our own making.

Yet it’s here that the comment section pushes back. Precisely because it brings together people from different ideological, social, and geographical backgrounds, it has the potential to disrupt the bias loop — especially when platforms promote constructive, verified contributions.
In 2023, The New York Times launched a Reader Insight initiative that pulled key insights from the comments and fed them directly into editorial meetings. The Guardian’s moderators have been doing this for years — flagging not just offensive content but surfacing reader contributions that add value, nuance, and sometimes truths journalists may have missed.
Conclusion: Don’t Scroll Past the Comments
In the race between machine learning and misinformation, it’s tempting to place all our faith in AI to clean up the mess. But the truth is, AI can only go so far. It can detect inconsistencies, track sources, even flag deepfakes — but it lacks human context. It cannot weigh sarcasm, cultural subtext, or emotional intelligence the way people can.

And so, the most democratic space on the internet — the much-maligned comments section — may yet prove to be one of our strongest tools in the battle for truth.
It’s time we paid closer attention to it.
(Authored by Prof. Dhiraj Singh, Deputy Dean, Times School of Media & Head, Centre for Media & Technology, Bennett University)