AI is Here, Can We Still Trust the Media?

We’re living through exciting and uncertain times.

Recently, Google’s CEO unveiled updates to its AI offerings, including Veo 3, which can generate scenes so realistic they appear to have been shot with actual cameras. The line between what’s real and what’s AI-generated is now almost invisible. Early users have already begun showcasing remarkable creations. One example? A fully fabricated conference, complete with interviews, participants of every demographic, and content so rich you’d think it actually happened. Just two years ago, we were laughing at AI-generated hands with six fingers. Now we’re watching fully believable human motion, speech, and body language, across races, ages, and expressions, emerge from pure code.

From filmmaking to marketing, the implications are enormous. The cost-savings potential is huge: organizations now need fewer large production budgets and more creative minds who know how to prompt AI tools, alongside powerful graphics machines and premium GenAI subscriptions. This excites me a great deal, yet I can’t help but ask: what does this mean for truth? For verification? For trust?

This development takes Thussu’s reflections on the mediated nature of reality[1] to a whole new level. Even before this, what actually happened and what the media said happened were often not the same, shaped by editorial bias, media agendas, and power structures. Now the situation is even more complex. This article isn’t about media theory per se, but it’s important to highlight that media institutions now have an even greater role to play in combating misinformation and disinformation.

In Nigeria, we’ve only just begun gaining momentum with media and information literacy efforts, encouraging both youth and adults to think critically and verify information before sharing it. Now AI arrives, advancing at breakneck speed, and where seeing was once believing, that may no longer be enough. The traditional media’s value has always been its credibility, even when it was not the first to break a story. But if AI can replicate a news anchor’s face, voice, tone, and even the studio set, and distribute fabricated news via digital platforms... how will the average person know what’s real? What will be the new tell-tale signs?

Thinking in Solutions: We are increasingly at the mercy of ethics.

  1. Should news organizations allow their likenesses to be fed into AI models?
  2. Should their broadcast content be used to train generative systems?
  3. To what extent should media houses themselves adopt GenAI tools in their workflows?

Really, what stops a malicious actor from capturing those very elements, tweaking them with AI, and pushing false narratives?

Consider the recent case involving OpenAI’s Sora and Studio Ghibli. Ghibli’s creative works were featured in AI-generated content without the studio’s consent. Though OpenAI removed the material, the internet still holds onto those AI-generated designs. Once released, there’s no real rollback. News agencies should consider restricting the use of their studio likenesses, anchors’ images, and content archives, especially in AI training sets. Could that be a workable ethical measure? What other safeguards are possible, and which ones should Nigeria and other African nations begin adopting now?

Final Thoughts

As we innovate fast, we must also pause to think deeply. Every new AI tool makes life easier and opens up spaces once limited to a few, but with that power comes significant risk. Lately, I’ve been thinking about how we approach these risks. It wasn’t until we engaged sistemaFutura’s Stephen Wendell for a week-long learning workshop on behavioural systems thinking at PIC that I began to see how systems thinking could move from being just a buzzword to a practical approach for designing innovation with awareness of its long-term impact.

The way AI is reshaping media, truth, and trust isn’t just a tech issue; it’s a systems issue. Perhaps frameworks like systems thinking can help us hold space for complexity, unintended consequences, and ethical responsibility. Without that kind of structured reflection, the societal fallout from “unchecked” innovation may go far beyond digital confusion; it could quietly and permanently reshape how we perceive reality itself.

By Omofuoma Agharite, Abstract Comms Lead



Copyright 2025 – Policy Innovation Centre (PIC)
