This is what happens when you decouple audience from content and set that notion loose in a largely unregulated ecosystem run by tech oligopolies. To solve this, philosophically, we can't continue to sever audiences from the publishers that aggregate and cultivate them. If we do, the incentives will stay in the wrong places and you'll continue to see ad dollars supporting CSAM, terrorism, extremism and more.

I am deeply disturbed to read the list of ad tech vendors and major advertisers reportedly confirmed to have served ads on sites hosting CSAM. WTF??🧐🤨😡

This is absolutely unacceptable and raises serious questions about the integrity of the digital advertising ecosystem.

It’s no secret that these key players have always been and continue to be complicit in a race to the bottom when it comes to ad quality control—whether it’s the ads being served, the consumer experience, or the lack of transparency and accountability in areas like brand safety, ad fraud, and inventory quality. The root cause? A systemic laziness and an overreliance on inflated metrics and misleading analytics that prioritize appearances over real, meaningful outcomes.

This isn’t just negligence—it’s a failure of responsibility at every level. The industry must do better.

What worries me even more is the role AI will play in accelerating these problems. As AI-driven tools become more prevalent in ad targeting, content moderation, and performance measurement, the risks of misuse, lapses in oversight, and unintended consequences grow exponentially. Without robust ethical frameworks, transparency, and accountability, AI could amplify the very issues we're already failing to address, making it easier to exploit loopholes, manipulate data, and further erode trust in the system.

Thanks for sharing this, Lou!
