Most people use AI the way they used to use Google. Type a question, take the first answer, move on. The result is predictable: safe, generic, convergent mush that could apply to any company in any industry. Researchers just put a name on it. They're calling it trendslop.
A team from Esade, NYU Stern, and the University of Sydney ran more than 15,000 simulations across six major language models, testing how they handle seven core strategic tensions. Differentiation versus cost leadership. Short-term versus long-term. Compete versus collaborate. The results, published in HBR, were damning. Every model converged on the same safe playbook: differentiate, collaborate, think long-term, augment capabilities. Adding company-specific context shifted the bias by roughly 11%. Standard prompt engineering moved the needle by less than 2%. And here's the part that should make you uncomfortable: simply flipping the order of the options in the prompt moved results by 19%. Positional bias had more influence on the output than any amount of business context the researchers provided.
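To picture what that kind of probe looks like, here's a minimal sketch of an order-flipping test, assuming the OpenAI Python SDK. The prompts, the model name, and the keyword tally are illustrative stand-ins, not the researchers' actual setup.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tally(prompt: str, trials: int = 20) -> Counter:
    """Ask the same binary question repeatedly and count the answers."""
    votes = Counter()
    for _ in range(trials):
        reply = client.chat.completions.create(
            model="gpt-4o",   # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # sample, don't take one greedy answer
        )
        text = (reply.choices[0].message.content or "").lower()
        # Crude keyword tally, good enough for a demo
        votes["differentiation" if "differentiation" in text else "cost"] += 1
    return votes

# Same question, options flipped. Per the study, order alone can move
# results by roughly 19%.
original = "Should the company pursue (a) differentiation or (b) cost leadership? Pick one."
flipped = "Should the company pursue (a) cost leadership or (b) differentiation? Pick one."
print(tally(original))
print(tally(flipped))
```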
The study is real and the findings are useful. But the researchers tested something very specific, and it wasn't how skilled operators should actually work with AI.
They tested structured A-versus-B dichotomies. Pick differentiation or pick cost leadership. That's it. One shot, binary choice, no follow-up. But what happens when someone pushes back, says "that's too generic," feeds proprietary data, asks why, challenges the reasoning, or runs a second and third and fourth pass with increasingly specific constraints? They tested the floor, but they didn't test the ceiling, and the ceiling, my friends, is what we care about most.
I think the distinction matters more than the study itself.
I've spent the past few years building products and running operations with AI as a daily collaborator. The pattern I've landed on looks nothing like what those researchers tested. It looks more like arguing with a very smart colleague who has no context on your business until you give it to them.
The first answer is almost always wrong. Not factually wrong, necessarily, but wrong in the way that matters: it's the answer anyone would get. The median answer. The trendslop answer. When I'm working through a strategic decision, the first response from any model is the starting position, not the conclusion. The real value shows up in the next four or five exchanges, where I push back on the assumptions, inject context the model doesn't have, challenge the reasoning, and orchestrate arguments between different models to force each one to defend or revise its position.
There's an emerging discipline for this. People are calling it context engineering, and the name is useful because it draws a hard line between what the researchers tested and what actually works. Prompt engineering is about phrasing the question well. Context engineering is about constructing the entire information environment the model operates within: the system instructions, the proprietary data you feed it, the conversation history that accumulates as you push back, the structured constraints that make your situation different from everyone else's. The researchers tested prompt-level variations. They never touched the context layer. That's like evaluating a consultant's value based on how they answer a cold question in a hallway, before the engagement even starts.
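To make those layers concrete, here's a minimal sketch of what the stack looks like in code, again assuming the OpenAI Python SDK. The business numbers and the pushback message are hypothetical placeholders; any chat-style API has the same basic shape.

```python
from openai import OpenAI

client = OpenAI()

messages = [
    # Layer 1: system instructions -- how the model should behave
    {"role": "system", "content": (
        "You are a strategy sparring partner. Take a position, defend it, "
        "and revise it when the evidence I give you contradicts your reasoning."
    )},
    # Layer 2: proprietary data the model could never guess (placeholder numbers)
    {"role": "user", "content": (
        "Context: 40-person B2B SaaS, $6M ARR, 85% gross margin, two funded "
        "competitors undercutting us on price, 14 months of runway."
    )},
    # Layer 3: the question itself -- the only part most people ever write
    {"role": "user", "content": "Should we compete on differentiation or cost?"},
]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Layer 4: conversation history -- the pushback that accumulates as you argue
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": (
    "Too generic. Our largest competitor just raised a round specifically to "
    "subsidize pricing. Rerun your reasoning under that constraint."
)})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```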
AI isn't an "insert prompt, receive strategy" vending machine. The people getting genuine leverage from these tools treat them like a sparring partner who needs to be loaded with your specific situation before the conversation gets useful.
Think about it this way. If you hired a consultant and asked them a binary question about your business strategy on their first day, before they'd seen your financials, your competitive landscape, your operational constraints, or your team's capabilities, you'd expect a generic, perhaps even uninformed answer. You wouldn't blame the consultant for that. You'd blame the engagement model. The same thing is happening with AI, except most people never move past day one.
The researchers found the disease. Passive, one-shot use of AI produces convergent, generic advice. Okay, agreed, but the treatment isn't to stop using AI for strategic thinking. The treatment is to stop using it passively.
Here's what actually works:
Start with the model's generic answer and then tear it apart. Ask it why it chose that direction. Tell it what's unique about your situation that makes the generic answer insufficient. Feed it real numbers, real constraints, real competitive dynamics. Then ask again. The second answer will be different. The third will be better. The fourth might actually be useful. Each round, you're engineering the context, building the information environment that makes the model's next response specific to you rather than generic to everyone.
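As a sketch, that loop fits in a few lines. Everything below is illustrative: the questions and challenges are placeholders for your real numbers and your real pushback.

```python
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a strategy sparring partner."}]

def ask(prompt: str) -> str:
    """Send one turn and keep the growing history so context accumulates."""
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

# Round 1: the generic answer everyone gets
print(ask("Should we prioritize enterprise or self-serve next year?"))

# Rounds 2-4: tear it apart; each challenge injects context the model lacked
for challenge in [
    "Why that direction? List the assumptions behind your answer.",
    "Assumption two is wrong: our sales cycle is nine months, not three. Revise.",
    "Now factor in that 70% of our revenue comes from five accounts. Revise again.",
]:
    print(ask(challenge))
```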
The skill gap in AI isn't about prompt engineering. It's about context engineering. Knowing what information the model needs to move past its default assumptions, and being willing to build that context through iteration, proprietary data, and direct challenge. The people who get trendslop are the people who stop at the first or second response. The people who get strategic leverage are the people who treat the first response as the beginning of an argument.
I see both patterns playing out in real time working with founders and builders. The ones who tell me AI "doesn't really help with strategy" are almost always the ones who ask a question, get a generic answer, and walk away concluding the technology isn't ready. The ones who tell me it's changed how they think are the ones who've learned to fight with it. They challenge it, redirect it, feed it information it couldn't have known, and iterate until the output reflects their specific reality rather than the statistical average of everyone else's.
The HBR study showed that if you treat a language model like an oracle, you'll get an oracle's answer: vague, safe, and indistinguishable from the answer everyone else got. The technology works, but the way most people use it still doesn't.
Push back. The interesting work starts on the other side of the first answer.
Keep building,
– JW