How to Address Misinformation, Manage Ad Brand Safety Concerns, and Keep Journalistic Integrity: A Q&A with Pat LaCroix of Seekr


The rise of misinformation online erodes public trust in journalism and forces advertisers to retreat from news sites. Pat LaCroix, EVP of Strategic Partnerships at Seekr, suggests that AI-driven solutions like Seekr Align offer hope for restoring credibility and supporting quality journalism.

It’s hard to ignore how pervasive misinformation and propaganda are online today. Public trust in authentic journalism deteriorates when audiences are inundated with false narratives, and that erosion affects the advertisers who want to place ads on these publishers’ sites. LaCroix notes that the oversaturation of disinformation fosters skepticism, leading audiences to disengage from credible news outlets.

Furthermore, advertisers wary of brand safety withdraw their support from news platforms, triggering a vicious cycle in which decreased revenue undermines journalistic quality. It’s already hard enough for publishers to monetize their sites without misinformation and brand safety concerns compounding the struggle.

According to LaCroix, artificial intelligence presents a potential counterweight to the spread of misinformation. Through natural language processing and machine learning, AI offers promising tools for detecting and mitigating false content.

We spoke with LaCroix about the limitations of these technologies, particularly their struggles with context and volume, and about the necessity of keeping a human in the loop to ensure accuracy and reliability. Initiatives like Seekr Align aim to restore advertisers’ confidence by providing transparent, trustworthy evaluations of news content, fostering a safer brand environment and supporting a resurgence of quality journalism.

Andrew Byrd: How do online disinformation and propaganda impact legitimate news sources?

Pat LaCroix: US audiences already believe that a lot of online content is dangerous or inappropriate. When you combine that belief with disinformation and propaganda, legitimate and authentic news sources suffer the most significant impact.

Public trust begins to erode, skepticism increases, and engagement with credible news outlets wanes. Advertisers concerned with brand safety want to avoid getting caught in the crosshairs, so they stop spending on news. Newsrooms are then forced to decrease investment in quality journalism, which impacts people’s ability to be informed, further decreasing overall trust in media. It’s a dangerous cycle.

AB: How can AI contribute to combating misinformation, and what limitations do current AI solutions face?

PL: Without transparency, there’s no trust. Misinformation and inappropriate content can quickly erode brand trust. AI can help mitigate this by automating the detection and flagging of false information through NLP and machine learning. Using large language models to score and rate the credibility of the massive volume of content being created is an enormous advantage when combating misinformation.

However, a handful of limitations, like context and quantity, curb AI’s ability to solve this issue on its own. That’s why applying principles and standards to LLMs, with humans involved in the process, is so critical. When companies forge ahead without a human-in-the-loop approach, they often sacrifice the checks and balances needed to understand the context and subtleties of naturally occurring human language, making it harder to distinguish right from wrong.

With the proliferation of generative AI, a massive amount of new content is created daily. Without suitable systems, separating fact from fiction becomes even more challenging.
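To ground the human-in-the-loop point, here is a minimal Python sketch of confidence-based triage, assuming a model that returns a credibility score. The thresholds and the toy scorer are invented for illustration and are not Seekr’s production logic; the pattern simply shows how ambiguous cases can be routed to human reviewers instead of being decided automatically.

```python
# A minimal sketch of human-in-the-loop triage. The scorer and the
# confidence band are invented for illustration only.
from typing import Callable, NamedTuple

class Verdict(NamedTuple):
    label: str          # "auto-approved", "auto-flagged", or "human review"
    confidence: float   # the model's credibility score for the content

def triage(text: str,
           score_fn: Callable[[str], float],
           band: tuple[float, float] = (0.3, 0.7)) -> Verdict:
    """Auto-decide only when the model is confident; otherwise escalate."""
    low, high = band
    score = score_fn(text)                 # model's estimate that content is credible
    if score >= high:
        return Verdict("auto-approved", score)
    if score <= low:
        return Verdict("auto-flagged", score)
    return Verdict("human review", score)  # ambiguous cases go to human editors

# Toy scorer standing in for a real NLP model:
toy_score = lambda text: 0.9 if "cited sources" in text else 0.5

print(triage("Report with cited sources on the city budget.", toy_score))
print(triage("Shocking claim circulating on social media.", toy_score))  # escalated
```

The escalation band is the design lever: widen it and more content gets human review at higher cost; narrow it and the system trades accuracy for throughput.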

AB: What are the roles of exclusion and block lists in the advertising industry? What are their effects on newsrooms?

PL: Exclusion and block lists use exhaustive lists of keywords to prevent ads from appearing alongside inappropriate or undesirable content. If a page contains a word or phrase on the list, an ad won’t appear there, which can have unintended consequences. The process is black and white and lacks crucial context, producing false positives and false negatives and demonetizing what are often legitimate news sources.
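To make that false-positive problem concrete, here is a toy Python sketch of a keyword-only block list. The keywords and the matching logic are invented for illustration and don’t reflect any vendor’s actual system.

```python
# Illustrative only: a toy keyword block list, not any vendor's real one.
BLOCK_LIST = {"shooting", "crash", "attack", "war"}

def is_blocked(article_text: str) -> bool:
    """Block an ad placement if any listed keyword appears, with no context."""
    words = {w.strip(".,!?").lower() for w in article_text.split()}
    return not BLOCK_LIST.isdisjoint(words)

# A hard-news story is blocked, as intended...
print(is_blocked("Officials investigate the shooting downtown."))    # True

# ...but so are harmless sports and business stories (false positives):
print(is_blocked("Her three-point shooting won the championship."))  # True
print(is_blocked("The startup offers a crash course in coding."))    # True
```

Even in this tiny example, legitimate sports and business coverage is demonetized alongside genuinely unsafe content, which is exactly the dynamic LaCroix describes.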

AB: How does Seekr aim to reduce barriers for media buyers and brands in reaching valuable news audiences, especially in light of recent research showing a significant decline in advertising within news environments?

PL: Tools such as Seekr Align aim to reduce barriers for media buyers and brands in reaching valuable audiences on legitimate news sources. With Seekr Align, we can take a more nuanced, specific approach to content assessment, allowing for more informed decisions that avoid reliance on exclusion and block lists.

For example, Seekr’s patented Civility Score™ tracks personal attacks and measures their severity within the context of spoken intent in a way that keyword-based systems can’t, giving you insight into the real quality of the conversation.

We can evaluate podcasts and news articles at the URL level, scoring the content for credibility and reliability. The fact is that the majority of news content we rate receives high scores, meaning advertisers should invest more in the news. Our solutions help further our mission to support quality journalism.
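To illustrate the gap between keyword matching and context-aware scoring, here is a rough stand-in built on an off-the-shelf zero-shot classifier. This is not the Civility Score’s proprietary methodology; the labels and examples are invented, and the sketch only shows how the same word can score very differently once sentence-level context is considered.

```python
# Rough stand-in only: an off-the-shelf zero-shot classifier used to
# demonstrate context-aware scoring. Not Seekr's actual methodology.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["personal attack", "civil remark"]

def score_context(utterance: str) -> dict:
    """Return a label->probability map reflecting the whole sentence."""
    result = classifier(utterance, candidate_labels=LABELS)
    return dict(zip(result["labels"], result["scores"]))

# The keyword "idiot" appears in both, but the intent differs sharply:
print(score_context("You're an idiot and so is everyone who believes you."))
print(score_context("He laughed and said he felt like an idiot for missing the bus."))
```

A keyword list would block both utterances; a contextual model can separate an attack on a person from harmless self-deprecation.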

AB: Could you explain how Seekr’s AI technology, particularly its use of LLMs, helps advertisers navigate the complexities of the information ecosystem and ensures brand safety within news environments?

PL: In April 2024, we announced a partnership with Intel under which we leverage the Intel® Developer Cloud to build, train, and deploy advanced LLMs on cost-effective clusters. As part of that announcement, we also launched SeekrFlow, an end-to-end LLM development toolset that allows developers to train and build LLMs using scalable, composable workflows.

This helps developers customize and scale models for a complex information ecosystem. Once optimized, these custom LLMs are piped into the user-friendly Seekr Align platform, which provides fast, accurate evaluations of digital content.

When leveraged together, Seekr’s AI technology enables advertisers to find safe, suitable environments for their brands and to better align with their values, avoiding risky associations and harmful content.
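For readers curious what a “scalable, composable workflow” can look like in practice, here is a minimal, hypothetical train-then-score pipeline in Python. Every function name and step is invented for illustration; none of this is the actual SeekrFlow API.

```python
# Hypothetical sketch of a composable train-then-score workflow.
# All names and steps are invented; this is not the SeekrFlow API.
from typing import Callable

Step = Callable[[dict], dict]

def prepare_data(state: dict) -> dict:
    # Stand-in for assembling labeled articles and transcripts.
    state["dataset"] = [("well-sourced report", 1.0), ("fabricated claim", 0.0)]
    return state

def train_model(state: dict) -> dict:
    # Stand-in for fine-tuning an LLM on the labeled dataset.
    credible = {text for text, label in state["dataset"] if label == 1.0}
    state["model"] = lambda text: 0.9 if text in credible else 0.2
    return state

def deploy_scorer(state: dict) -> dict:
    # Stand-in for piping the trained model into a scoring platform.
    state["score"] = state["model"]
    return state

def run_workflow(steps: list[Step], state: dict) -> dict:
    for step in steps:
        state = step(state)  # each step reads and extends shared state
    return state

state = run_workflow([prepare_data, train_model, deploy_scorer], {})
print(state["score"]("well-sourced report"))  # 0.9
```

The composability comes from each step sharing one interface, so stages can be swapped or rerun independently as the pipeline scales.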

AB: Given the challenges highlighted in the research regarding the decline in ad spend within the news category, how does Seekr envision its solutions revitalizing advertisers’ confidence in investing in news environments?

PL: Trust is what holds us together. Not to be hyperbolic, but without trust, the economy and society fall apart. More trust and transparency are needed for these solutions to revitalize waning confidence in credible news environments.

Our scoring systems and suitability assessments show advertisers the work behind each rating. As a result of this transparency, we’ve seen them become more secure in their investments and develop greater trust that brand safety is being actively managed and optimized.

AB: Seekr’s collaboration with journalists to counsel AI models sounds promising. How does this partnership contribute to reestablishing trust and credibility in content, especially considering the increasing demand for high-quality news amid proliferating misinformation?

PL: Last year, we announced the launch of an independent Journalist Advisory Board. The board comprises accomplished journalists with diverse experience and expertise who serve a critical role in helping inform our AI.

Through their collective expertise and counsel, our AI model has developed a nuanced and textured understanding of journalistic standards. This has been foundational to Seekr’s AI, strengthening its ability to analyze news stories for quality and bias using natural language processing and machine learning.

AB: Lastly, could you discuss Seekr’s vision for the future of advertising within news environments, considering the complexities of digital media and the increasing importance of supporting quality journalism for both advertisers and society as a whole?

PL: Our vision for the future involves a multi-step approach to leading a new evolution of responsible media.

We want more nuanced brand safety and suitability tools that don’t unfairly demonetize legitimate content. Our LLMs and our work with SeekrFlow will help build customized AI that learns from data and upholds brand values.

We created this platform to ensure the authenticity, credibility, and provenance of information in this new era. We are responsibly increasing reach and revenue for advertisers and publishers by maintaining a solid foundation built on trust.