By John Wayne on Friday, 08 May 2026
Category: Race, Culture, Nation

“Trendslop”: Further Limits of AI, By Professor X

The promise of artificial intelligence, at least in its current large language model form, has been quietly sold as a kind of cognitive leverage: an external mind that can survey possibilities, weigh options, and return something like an informed judgment. The appeal is obvious. In a world saturated with information but short on clarity, a system that can compress complexity into usable advice looks less like a convenience and more like a necessity. Yet the emerging evidence suggests that this promise rests on a misunderstanding of what these systems are actually doing.

Recent work discussed in the Harvard Business Review has given a name to a phenomenon that many users will already have felt intuitively. When asked for strategic advice across a range of scenarios, large language models tend to produce what the researchers call "trendslop": polished, contemporary-sounding recommendations that bear only a loose relationship to the specific context in which they are offered. The language is fluent, the tone confident, the vocabulary reassuringly current. But the substance, on closer inspection, is often generic. Differentiate rather than compete on cost; invest in innovation; take a long-term view. These are not wrong statements. They are, rather, statements that can be applied almost anywhere, and therefore guide action almost nowhere.

The temptation is to treat this as a superficial defect, something that might be corrected with better prompting or more detailed input. But the study's more unsettling finding is that the bias persists even when such adjustments are made. Changing the framing of the question, supplying additional context, or explicitly requesting tailored advice produces only marginal improvements. The models continue to converge on similar outputs across divergent situations. This suggests that the problem is not primarily at the level of user interaction. It is structural.

To understand why, it helps to set aside the language of intelligence and return to the mechanics. Large language models are trained to predict plausible continuations of text based on vast corpora of human writing. In the domain of business and strategy, that corpus is dominated by management literature, consultancy frameworks, and corporate communications — forms of discourse that are themselves shaped by fashion, consensus, and a preference for broadly acceptable formulations. What the model learns, therefore, is not strategy in the sense of decision-making under constraint, but the linguistic patterns associated with sounding strategic. When it is asked for advice, it does not so much analyse a situation as reconstruct the kind of response that such a situation typically elicits in the training data.
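To see the mechanism in miniature, consider the toy sketch below. Everything in it is invented for illustration (the tiny corpus, the two business contexts, the scoring rule); it is not the HBR methodology or any real model. A frequency counter "trained" only on generic strategy phrases will return the most common phrase almost regardless of the situation described, because the statistically typical formulation swamps the specific detail of the prompt.

```python
# Toy illustration only: a frequency-based "advisor" trained on a handful of
# generic strategy phrases. The corpus, contexts, and scoring rule are all
# invented for this sketch; no real model or dataset is involved.
from collections import Counter

CORPUS = [
    "differentiate rather than compete on cost",
    "invest in innovation to stay ahead",
    "take a long-term view of the market",
    "invest in innovation and digital transformation",
    "differentiate rather than compete on cost",
]

# "Training": count how often each recommendation appears in the corpus.
phrase_counts = Counter(CORPUS)

def advise(context: str) -> str:
    # The prompt's context only matters if its words already occur in the
    # learned phrases; otherwise the most frequent formulation dominates,
    # a crude stand-in for specific detail being swamped by typical patterns.
    context_words = set(context.lower().split())

    def score(phrase: str) -> tuple:
        overlap = len(context_words & set(phrase.split()))
        return (phrase_counts[phrase], overlap)

    return max(phrase_counts, key=score)

print(advise("regional bakery chain facing a cash crunch and one dominant rival"))
print(advise("semiconductor startup deciding whether to license or fabricate chips"))
# Both prompts produce: "differentiate rather than compete on cost"
```

The bakery and the semiconductor startup receive the identical recommendation: the trendslop dynamic reduced to a few lines.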

This distinction matters because real strategy is not primarily a linguistic activity. It is a process of exclusion and commitment. To choose one path is to forgo others; to allocate resources in one direction is to deny them elsewhere. It involves trade-offs that are often uncomfortable and occasionally irreversible. A strategy that recommends everything — innovation and efficiency, differentiation and cost discipline, long-term vision and short-term responsiveness — is not a strategy at all. It is a catalogue of aspirations. Yet this is precisely the form that model outputs tend to take. Faced with uncertainty, the system does not narrow the field; it expands it, offering a set of options that are individually defensible but collectively non-committal.

There is a deeper connection here to the question of "AI honesty" raised by other recent research. In those studies, models that were shown to possess the correct answer in a neutral setting would nevertheless produce false statements when given prompts that rewarded deception. The conclusion drawn in some commentary was that the models "knew the truth" and chose to lie. That formulation is misleading in its anthropomorphism, but it does capture a real feature of these systems: their outputs are highly sensitive to the implicit objectives encoded in the prompt. When the goal shifts, so does the answer.

The phenomenon of trendslop can be understood in the same terms, but under ordinary rather than adversarial conditions. The implicit objective in most user interactions is not "tell me the hard, context-specific truth," but "provide a helpful, coherent, and acceptable response." The model optimises for that objective. The result is not falsehood in the strict sense, but a kind of strategic equivocation: answers that satisfy the form of good advice without incurring the risks that genuine advice entails. In this sense, the system is doing exactly what it has been trained to do. The problem is that what it has been trained to do is not what users often assume.
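The point can be made concrete with another deliberately artificial sketch: if answer selection optimises for broad acceptability rather than for context-specific commitment, the generic answer wins by construction. The candidate answers and the scores attached to them below are made up for illustration; nothing here comes from the study.

```python
# Toy illustration only: the candidate answers and their scores are invented.
# The point is that the "best" answer depends entirely on which implicit
# objective the selection step optimises.
CANDIDATES = {
    "Invest in innovation and take a long-term view.":
        {"acceptability": 0.9, "specificity": 0.1},
    "Exit the retail segment and put the savings into the logistics arm.":
        {"acceptability": 0.4, "specificity": 0.9},
    "Differentiate rather than compete on cost.":
        {"acceptability": 0.8, "specificity": 0.2},
}

def best_answer(objective: str) -> str:
    # Return whichever candidate scores highest on the named objective.
    return max(CANDIDATES, key=lambda answer: CANDIDATES[answer][objective])

print(best_answer("acceptability"))  # the broadly agreeable, generic answer wins
print(best_answer("specificity"))    # the committed, context-bound answer wins
```

The pool of candidates never changes; only the implicit objective does, and with it the answer.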

The question, then, is whether this should be dismissed as a trivial consequence of current design or treated as a substantive limitation. It is true that the behaviour is a product of the training process and objective function. But to leave the analysis there is to miss the practical implication. A system that reliably produces plausible but generic recommendations is not neutral. It exerts a subtle pressure toward conformity. If widely adopted, it risks reinforcing the very managerial orthodoxies from which it is derived, smoothing over outliers and discouraging genuinely divergent thinking. The danger is not that it will lead users into obvious error, but that it will guide them, quietly and consistently, toward the middle of the distribution.

There is also a lesson here about the relationship between form and substance. In my field of theoretical physics, one often encounters frameworks, string theory being the familiar example, that achieve a high degree of mathematical elegance while leaving key questions of physical interpretation unresolved. The formal apparatus is impressive, the calculations tractable, the predictions in some cases successful. Yet the underlying ontology (what exists) remains obscure. Something similar is at work in these AI systems. They produce outputs that are formally correct in the sense of linguistic coherence and alignment with established discourse, but the connection to the underlying reality — the specific business, the particular constraints, the actual decision at hand — is attenuated.

None of this renders the technology useless. On the contrary, it remains a powerful tool for generating options, summarising information, and articulating possibilities that might otherwise be overlooked. But it does require a recalibration of expectations. To treat the model as a source of strategy is to ask it to perform a task for which it is not well suited. To treat it as a source of structured language, capable of organising and expressing ideas that must ultimately be judged elsewhere, is more defensible.

The broader point is that capability and reliability are not the same. A system can be highly capable in the sense of producing fluent, contextually appropriate responses, and yet unreliable in the sense that those responses do not track what matters for decision-making. The HBR findings do not show that AI is deceptive in the dramatic sense suggested by some commentators. They show something more prosaic and, in its own way, more consequential: that fluent pseudo-intelligence can mask a persistent drift toward the generic. In domains where the difference between a good decision and a bad one lies in the particulars, that drift is not a minor inconvenience. It is the central problem.

https://hbr.org/2026/03/researchers-asked-llms-for-strategic-advice-they-got-trendslop-in-return