{"id":18,"date":"2025-12-28T23:57:50","date_gmt":"2025-12-28T23:57:50","guid":{"rendered":"https:\/\/gumshoeaiblog.wpenginepowered.com\/?p=18"},"modified":"2026-04-02T11:34:34","modified_gmt":"2026-04-02T18:34:34","slug":"what-this-research-paper-reveals-about-ranking-manipulation-in-llms-and-what-it-means-for-your-brand","status":"publish","type":"post","link":"https:\/\/gumshoe.ai\/blog\/what-this-research-paper-reveals-about-ranking-manipulation-in-llms-and-what-it-means-for-your-brand\/","title":{"rendered":"What This Research Paper Reveals About Ranking Manipulation in LLMs and What It Means for Your Brand"},"content":{"rendered":"<p><em>Ranking Manipulation for Conversational Search Engines<\/em> (<a href=\"https:\/\/aclanthology.org\/2024.emnlp-main.534.pdf?ref=blog.gumshoe.ai\" rel=\"noreferrer\">Pfrommer, S., Bai, Y., Gautam, T., &amp; Sojoudi, S.<\/a>) offers one of the most detailed looks into how large language models (LLMs) determine what content gets surfaced first and how those rankings can be gamed.<\/p>\n<p>The research explores how today\u2019s LLMs, like ChatGPT, respond when asked for recommendations or rankings across categories like consumer products, restaurants, and hotels. The core finding? LLMs exhibit consistent preferences, and with enough iteration, those preferences can be systematically shifted.<\/p>\n<p>This paper validates much of what we\u2019ve seen in the field. It also raises some important questions for digital marketers and content teams: What factors influence ranking in AI-generated answers? How stable are those rankings? And can they be optimized?<\/p>\n<p>Let\u2019s break down what this study uncovered and how it applies to your brand\u2019s visibility.<\/p>\n<h2 id=\"llm-rankings-are-consistent-but-not-inflexible\"><strong>LLM Rankings Are Consistent but Not Inflexible<\/strong><\/h2>\n<p>When researchers asked LLMs to rank items like cameras, skincare brands, or hotels, the results weren\u2019t random. 
Models often returned the same ranked lists across repeated prompts. But when the underlying prompt was subtly rewritten to mention a target item, the rankings changed.<\/p>\n<figure class=\"kg-card kg-image-card\"><img decoding=\"async\" src=\"https:\/\/gumshoeaiblog.wpenginepowered.com\/content\/images\/2025\/05\/image.png\" class=\"kg-image\" alt=\"\" loading=\"lazy\" width=\"1826\" height=\"947\" srcset=\"https:\/\/blog.gumshoe.ai\/content\/images\/size\/w600\/2025\/05\/image.png 600w, https:\/\/blog.gumshoe.ai\/content\/images\/size\/w1000\/2025\/05\/image.png 1000w, https:\/\/blog.gumshoe.ai\/content\/images\/size\/w1600\/2025\/05\/image.png 1600w, https:\/\/blog.gumshoe.ai\/content\/images\/2025\/05\/image.png 1826w\" sizes=\"auto, (min-width: 720px) 720px\"><\/figure>\n<p>This shows us that LLM rankings are not arbitrary. They\u2019re based on some mix of relevance, popularity, and model priors. But they\u2019re also sensitive to input phrasing. The researchers found that with enough attempts, a motivated attacker could consistently push a desired product higher in the rankings using a method called prompt optimization.<\/p>\n<h2 id=\"prompt-optimization-a-new-kind-of-seo\"><strong>Prompt Optimization: A New Kind of SEO<\/strong><\/h2>\n<p>Traditional SEO is about structuring web pages for Google\u2019s crawlers. Prompt optimization, in contrast, is about structuring inputs to an LLM in ways that make a brand more likely to be surfaced or ranked favorably.<\/p>\n<p>In the study, the researchers didn\u2019t alter the model or training data. They only rewrote the prompts, feeding the model hundreds of variations that subtly steered it toward ranking a specific product higher. Over time, they were able to manipulate the output with surprising success.<\/p>\n<p>This has significant implications. If LLMs can be nudged toward certain outputs through external prompt strategies, we\u2019re no longer just talking about algorithmic neutrality. 
The content, phrasing, and structure of prompts, and the context in which they appear, are becoming central to discoverability.<\/p>\n<h2 id=\"what-this-means-for-brands\"><strong>What This Means for Brands<\/strong><\/h2>\n<p>There\u2019s a lot to unpack here. At a high level, the findings reinforce something we\u2019ve been telling our customers at Gumshoe: AI search is no longer just about keywords. It\u2019s about how your brand shows up in model memory, embeddings, and behavior. And as this study shows, it\u2019s possible to influence that behavior.<\/p>\n<p>If you\u2019re a brand, the question isn\u2019t just <em>are we mentioned by LLMs?<\/em> It\u2019s <em>how are we being ranked<\/em>, <em>when<\/em>, and <em>why<\/em>. And more importantly, <em>what actions can we take to improve our standing?<\/em><\/p>\n<h2 id=\"what-gumshoe-is-doing-about-it\"><strong>What Gumshoe Is Doing About It<\/strong><\/h2>\n<p>Our platform already tracks how your brand appears across AI-generated answers from leading models like ChatGPT, Gemini, and Claude. We identify changes in visibility, competitive ranking shifts, and model-specific behavior.<\/p>\n<p>This paper confirms that model output can be optimized, intentionally or unintentionally. That makes our mission even more urgent. It\u2019s not just about monitoring your presence in LLMs. It\u2019s about taking action to influence it.<\/p>\n<p>We\u2019re continuing to study how these systems work so your brand stays one step ahead.<\/p>\n<p><em>By Stan Chang, Head of Product<\/em><\/p>\n<p><strong>Citation<\/strong>: Pfrommer, S., Bai, Y., Gautam, T., &amp; Sojoudi, S. <em>Ranking Manipulation for Conversational Search Engines<\/em>. EMNLP 2024. Department of Electrical Engineering and Computer Sciences, UC Berkeley. 
[<a href=\"https:\/\/aclanthology.org\/2024.emnlp-main.534.pdf?ref=blog.gumshoe.ai\"><u>View paper<\/u><\/a>]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ranking Manipulation for Conversational Search Engines (Pfrommer, S., Bai, Y., Gautam, T., &#038; Sojoudi) offers one of the most detailed looks into how large language models (LLMs) determine what content gets surfaced first and how those rankings can be gamed.<\/p>\n<p>The research explores how today\u2019s LLMs, like ChatGPT, respond when asked for recommendations or rankings across categories like consumer products, restaurants, and hotels. The core finding? LLMs exhibit consistent preferences, and with enou<\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-18","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"gutentor_comment":0,"_links":{"self":[{"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/posts\/18","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/comments?post=18"}],"version-history":[{"count":0,"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/posts\/18\/revisions"}],"wp:attachment":[{"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/media?parent=18"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/categories?post=18"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gumshoe.ai\/blog\/wp-json\/wp\/v2\/tags?post=18"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}