Why AI Search Engines Can Be a Differentiator for Investment Marketers

Niels Footman 16 July 2024

With the emergence of large language models (LLMs) such as ChatGPT, many believed the writing was on the wall for search engines.

But as the AI landscape has evolved, the picture has become more nuanced. While LLMs such as Claude and Llama continue to nudge higher on benchmarks for maths and reasoning, others are taking a different tack.

AI-powered search engines look to capture the interactivity of chatbot conversations while retaining the credibility and accuracy of ‘traditional’ search. Achieving this means dealing with some of the key problems facing LLMs – while persisting with some methods that have long served investment marketers very well.

Chatty Engines

AI-powered search engines such as Perplexity, the new Bing, you.com and Google’s own Search Generative Experience (SGE) model provide chatbot-like answers to search enquiries. They are underpinned by LLMs, but there is a clear difference of emphasis, according to Otterly.ai:

  • LLMs are language models that are trained on vast amounts of text data to understand and generate human language. In contrast, AI-powered search experiences use a combination of machine learning algorithms and natural language processing techniques to enhance search results and provide more relevant and personalized recommendations to users.
  • LLMs are typically used to improve the overall language understanding capabilities of a system, while AI-powered search experiences focus specifically on improving the search functionality and user experience of a website, app, or platform.
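
To make that distinction concrete, here is a minimal sketch in Python of the two flows. This is not any vendor’s actual pipeline: call_llm, the Document type and the keyword-overlap retrieval are hypothetical stand-ins, included purely to show where retrieved, citable web results enter the process.

```python
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    text: str

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to whichever LLM you use."""
    raise NotImplementedError("wire up a model provider here")

def bare_llm_answer(question: str) -> str:
    # A plain LLM answers from whatever it absorbed during training:
    # fluent, but with nothing external to corroborate it.
    return call_llm(question)

def ai_search_answer(question: str, index: list[Document]) -> str:
    # An AI search engine retrieves relevant indexed pages first
    # (here, naive keyword overlap stands in for a real ranking system)...
    terms = question.lower().split()
    hits = [d for d in index if any(t in d.text.lower() for t in terms)]
    sources = "\n".join(f"[{i}] {d.url}: {d.text}" for i, d in enumerate(hits, 1))
    # ...then constrains the model to answer only from those pages,
    # citing each one, so every claim can be checked against a source.
    prompt = (
        "Answer using ONLY the numbered sources below, citing them as [n].\n"
        f"{sources}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```

The difference is less about the model than about the harness around it: the search flow grounds the answer in documents it can point to.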

In a paper from earlier this year, Google further defined the distinction as follows:

While SGE also applies LLMs, it has been purposefully trained to carry out tasks specific to Search, including identifying high-quality web results that corroborate the information presented in the output.

This final point is crucial. In the world of AI, stories abound of smart, accomplished professionals being taken in by confident-sounding bits of balderdash. If AI search is to gain the level of trust Google search and others have generated, it has to make major strides in dealing with not just hallucinations, but also contextual errors, bias and opinions asserted as fact.

Sources for Courses

Let’s try an example relevant to financial services: What are liability-driven investment funds? Perplexity had this to say:

LDI funds aim to reduce the risk of a pension plan being underfunded by matching the sensitivity of the plan’s assets to changes in interest rates and inflation with the sensitivity of its liabilities. This is achieved by investing in bonds and derivatives that move in a similar way to the plan’s projected liabilities.

This sounds convincing. But let’s not forget the “confident-sounding balderdash” I mentioned above. Just why should we trust Perplexity on this complex, rather arcane topic?

A closer look at Perplexity’s answer reveals numbered citations throughout, each pointing to a source listed in footnotes:

  • What is Liability-Driven Investing? by Russell Investments
  • Liability-Driven Investing Strategies, also by Russell Investments
  • Liability Driven Investment on the gov.uk site
  • Liability Driven Investment from an actuaries’ blog, also on the gov.uk site

A Cure Through Curation?

In AI argot, Perplexity is making use of ‘curated’ data sets. In an approach backed by extensive research, AI search engines aim to minimise hallucinations by drawing only on circumscribed, recently updated data.

So why, among the various sources on offer, did Perplexity choose the ones it did? Perplexity said (citing sources including GitHub, Search Engine Journal and Reddit):

  • Web crawling and indexing: They crawl and index websites, but with a focus on authoritative and reputable sources.
  • Automated source evaluation: AI algorithms analyse various signals like citations, author credentials, publication venues, and fact-checking to assess the credibility and quality of potential sources.
  • Knowledge base construction: For specific domains like science or finance, they may construct custom knowledge bases by extracting and curating information from highly reputable sources like academic publications or industry reports.
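
As a toy illustration of that second point, automated source evaluation might combine such signals into a single score, as in the Python sketch below. Every signal, weight and domain here is invented for illustration; real engines use far richer, learned ranking models.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    domain: str
    citation_count: int     # how often other sites cite this page
    author_verified: bool   # author has known, checkable credentials
    days_since_update: int  # freshness of the content

# Hypothetical allowlist of domains treated as authoritative.
TRUSTED_DOMAINS = {"russellinvestments.com", "gov.uk"}

def credibility_score(s: Source) -> float:
    """Combine a few signals into a rough 0-1 credibility score."""
    score = 0.0
    score += min(s.citation_count, 50) / 50               # citations, capped
    score += 1.0 if s.author_verified else 0.0            # author credentials
    score += 1.0 if s.domain in TRUSTED_DOMAINS else 0.0  # publication venue
    score += max(0.0, 1 - s.days_since_update / 365)      # recency
    return score / 4

def curate(candidates: list[Source], k: int = 4) -> list[Source]:
    """Keep only the top-k most credible sources to cite in an answer."""
    return sorted(candidates, key=credibility_score, reverse=True)[:k]
```

Notice that recency is weighted alongside reputation, which is why regularly updated content matters as much as authoritative content.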

In its paper, Google says that to mitigate the drawbacks of LLMs, it restricts “SGE to specific tasks, including corroboration” and uses “our existing search quality systems and our ability to identify and rank high-quality, reliable information”. And for critical subjects such as healthcare or finance, Google “places even more emphasis on producing informative responses that are corroborated by reliable sources” and will “include disclaimers in its output”.
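
Google does not publish how this works, but the behaviour it describes can be pictured as a simple gate: demand stronger corroboration for critical subjects, and attach a disclaimer when an answer is shown. The sketch below is purely illustrative; the topic list, threshold and wording are all assumptions.

```python
# A sketch of the behaviour Google describes, not its implementation.
CRITICAL_TOPICS = {"finance", "healthcare"}

def finalise_answer(answer: str, topic: str, corroborating_sources: int) -> str:
    """Gate an answer on corroboration, more strictly for critical topics."""
    if topic in CRITICAL_TOPICS:
        # Demand stronger corroboration before showing an answer at all.
        if corroborating_sources < 3:
            return "No sufficiently corroborated answer was found."
        # Attach a disclaimer to critical-subject answers.
        return answer + ("\n\nNote: this is general information, "
                         "not financial or medical advice.")
    if corroborating_sources < 1:
        return "No corroborated answer was found."
    return answer
```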

What Does It All Mean?

As AI-powered search progresses, its need for accuracy and credibility is only going to grow. This means the information it cites must be accurate, reputable, up to date and rigorously checked. And as Google states in its paper, this need for reliability is even more pronounced in finance, a situation that will require ongoing human input and corroboration.

So what does this mean for investment marketers?

  • Credibility counts: though things are evolving, curated sources seem certain to play a critical role in heightening the reliability of responses on AI-powered search. For its SGE, Google says it will pair conversational answers with links below them that corroborate what is being said. For others, such as Perplexity, that corroboration will come in the form of footnote-style links. Under either method, providing timely, credible, insightful content will remain essential to online prominence.
  • Don’t over-rely on AI: but might those engines not just end up turning to AI-generated content to support their claims? This is, of course, a possibility – despite the tech giants’ best efforts, no AI detector is infallible. But it is also a significant danger. As some of the best minds in AI have posited, as the pool of human-generated data available for AI training shrinks, there is a real risk that LLMs will increasingly cannibalise AI-generated content, entrenching inaccuracies and biases along the way. If you publish purely AI-generated content and it feeds that process, especially in a field as dependent on insightful thinking as asset management, it could cause serious embarrassment – or worse.
  • Consistent, good-quality content is STILL king: as I wrote this piece, AI’s succinct, conversational answers, suggestions for structure and follow-up ideas aided my thought process and saved me heaps of time. But the ultimate direction of the piece, and its specific views on what all this means for us and our clients, were mine. And for emerging or specialised areas of knowledge, original, timely content seems certain to remain a crucial part of ensuring online prominence and differentiation.

Given how quickly AI has surged into our everyday lives, it should go without saying that everything is subject to change. Perhaps the key takeaway with AI is, and will remain, that the landscape could be completely different next year, or even a few months from now.

But that means there is a lot to play for right now. And as investment marketers, if you’re drawing on your vast wells of proprietary data and in-house expertise to generate content that is original, well written, regularly updated, often cited and – critically – factually and contextually accurate, you will by definition be exactly what AI search engines are targeting as they answer people’s questions.

And isn’t that something that’s worth aiming for?