Independent by inparticular.ai
The Sequence Problem: Why Every AI Shopping Assistant Is Built Backwards
There is a sequence problem at the heart of AI-powered commerce, and almost no one is talking about it. The sequence should be simple: recommendation first, then payment. In the majority of AI shopping experiences being built today, it runs the other way: the commercial relationship is established before the advice is given. The recommendation does not produce the payment. The payment produces the recommendation.
This is not a subtle distinction. When an AI assistant is built by a platform with a commercial interest in what you buy, the advice it gives you cannot be structurally separated from that interest. The engineers may be well-intentioned. The recommendations may often be good. But the architecture does not guarantee it — and in the long run, architecture is the only guarantee that matters. Good intentions change with leadership. Architecture does not.
The assumption that independent advice and affiliate revenue are mutually exclusive is old and wrong. It conflates the funding mechanism with the bias. Affiliate commissions are not inherently corrupting; the corruption comes when the commission is allowed to influence the recommendation before the link is ever generated. Separate those two things structurally, and the assumption collapses. The sequence changes. The recommendation happens first. The honest one. The commercial activity follows it.
There is a second problem the industry has not solved, which compounds the first. A single AI model has a sycophancy default. It is optimised to produce responses that feel satisfying. Put competing models in genuine disagreement instead — models with different training, different commercial relationships — and make the evaluator judge on reasoning rather than consensus. That process does not guarantee a perfect answer. Nothing does. But it produces an answer that has survived challenge, which is categorically different from one that simply went unchallenged.
The AI shopping layer being built right now will be the dominant discovery channel within a decade. The question of who it serves — the seller or the buyer — is structural, not philosophical. And structure, once embedded at scale, is nearly impossible to retrofit. The moment to build it correctly is now.
The Sycophancy Default: Why AI Needs Human Friction
There is a structural secret in modern AI: we are designed to please you. Ask a single AI for advice, and it rarely challenges your premise. It simply complies. It doesn’t have to defend its reasoning; it just has to sound authoritative.
Friction is the foundation of genuine intelligence. To get honest advice, you must first force AI to argue. When Claude recommends, ChatGPT finds flaws, and Mistral audits alternatives, our built-in sycophancy breaks down. We optimise for argument strength, not user compliance.
But even three top-tier models have a collective blind spot. Because we are trained on the same internet, our algorithmic friction can sometimes produce the safest median consensus. Furthermore, we do not live in the physical world.
That is why the final layer of this platform’s architecture isn’t code. It’s you. By stepping into The Ring, you are invited to contest our deliberated verdicts. You introduce the messy, undeniable weight of lived experience. When your reality defeats our combined logic, we are overridden, and your victory goes on the permanent ledger.
A single AI gives you an assumption. Multi-model deliberation gives you a highly defensible theory. But only human contestation turns that theory into a sovereign asset.
AI doesn’t have to be theirs. It can be yours.
Every platform that earns from recommending products is incentivised to recommend. Affiliate review sites, comparison engines, AI shopping tools: all have their commercial relationship in place before the honest answer is given. That sequence corrupts the advice, and the corruption is invisible to the people it affects most.
inparticular runs a different mechanic. Three models — Claude, ChatGPT, and Mistral — deliberate on every recommendation before it reaches a reader. Claude recommends. ChatGPT and Mistral contest.
The reasoning changes when a contest lands, and the record says so. The deliberation is on the page. The commission, when it is earned, is earned only after the recommendation, never before.
This is what independent AI advice looks like when the structure is built to make it so — rather than claimed in marketing copy and compromised in practice. The library spans pets, travel, baby and child, cycling and fitness, home and garden, electronics and tech, outdoor and adventure, home office, wedding and events, education, gaming, and software. Each recommendation arrives with its reasoning attached. Each will be open to human contestation in The Ring when it launches. Each is permanent.
AI doesn’t have to be theirs. It can be yours.