We spent twenty years optimizing for human eyes—hero banners, psychological pricing, agile CRO teams. A new structural shift is here: Agentic Commerce. The ACES framework reveals that AI agents are not just "chatting" with users. They are cold, rational economic actors that interrogate your data model before they ever consider your brand.
Selection bias is real
- Model personality: In mid-range electronics tests, Gemini 2.0 Flash had a 0% failure rate when following complex purchase constraints, while GPT-4o missed the brief in up to 71% of runs.
- Position game: Claude 3.5 Sonnet ignored result order entirely, but other models still favored top-listed products by roughly 15%, even when cheaper, better options sat in third place.
The blender test
Prompt: “Find me a blender under $200 that’s quiet enough for a morning smoothie without waking the baby.”
- Traditional search: Returns SKUs with the word "quiet" in the title.
- Agentic result: Gemini 3 Pro reasoned through decibel data and material specs. Perplexity recommended the hero product 80% of the time on the strength of review volume, while ChatGPT picked a lesser-known brand 80% of the time because its structured JSON exposed lower noise levels.
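That evaluation style is mechanical: filter on hard constraints, then rank on the attribute the prompt actually cares about. A minimal sketch — the catalog, field names (`price_usd`, `noise_db`), and figures are all hypothetical:

```python
# Sketch of how an agent might evaluate structured product specs.
# Sample catalog and field names are hypothetical.
products = [
    {"name": "HeroBlend X",    "price_usd": 179, "noise_db": 88, "reviews": 12000},
    {"name": "QuietWhirl 300", "price_usd": 149, "noise_db": 62, "reviews": 340},
    {"name": "BudgetMix",      "price_usd": 59,  "noise_db": 95, "reviews": 5100},
]

def agent_pick(catalog, max_price, max_db):
    """Hard-filter on price and noise, then return the quietest survivor."""
    candidates = [p for p in catalog
                  if p["price_usd"] <= max_price and p["noise_db"] <= max_db]
    return min(candidates, key=lambda p: p["noise_db"]) if candidates else None

pick = agent_pick(products, max_price=200, max_db=70)
print(pick["name"])  # → QuietWhirl 300
```

Note what the 12,000 reviews on the hero product bought it here: nothing. The keyword "quiet" never appears; the decibel field decides.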
Implications for operators
- AEO (answer engine optimization) > SEO: If decibel ratings, wattage, or compliance notes aren't in your structured data, you're invisible.
- Rationality trap: Agents are immune to FOMO. They calculate total cost of ownership (price + shipping – rewards) faster than you can write "limited time."
- Trust is currency: Per Visa/BCG, 46% of consumers trust AI recommendations more than a friend. If the agent doesn’t pick you, the human never sees you.
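The total-cost-of-ownership math in the rationality-trap bullet is a one-liner for an agent. A sketch with hypothetical figures, comparing a "limited time!" hero offer against a plain listing with no urgency copy:

```python
# Sketch of the TCO comparison an agent runs instantly. Figures are hypothetical.

def tco(price, shipping, rewards):
    """Total cost of ownership: price + shipping - rewards."""
    return price + shipping - rewards

offer_a = tco(price=199.00, shipping=0.00, rewards=10.00)  # hero offer: 189.00
offer_b = tco(price=179.00, shipping=9.99, rewards=0.00)   # plain listing: 188.99
winner = "B" if offer_b < offer_a else "A"
print(winner)  # → B; the urgency copy on offer A changes nothing
```

Ninety-nine cents of arithmetic beats the entire conversion-psychology playbook.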
Bottom line: We’re moving from the Destination Game (visit my site) to the Evaluation Game (get picked sight unseen). Is your product data ready to be interviewed by a machine?
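What "interview-ready" product data can look like: schema.org Product markup that exposes the specs an agent needs as machine-readable properties. A minimal sketch, built as a Python dict and serialized to JSON-LD — the SKU, figures, and the `NoiseLevel`/`Wattage` property names are illustrative assumptions, not schema.org-reserved terms:

```python
import json

# Hypothetical schema.org Product markup; property names and values are illustrative.
product_jsonld = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "QuietWhirl 300 Blender",
    "sku": "QW-300",  # hypothetical SKU
    "additionalProperty": [
        {"@type": "PropertyValue", "name": "NoiseLevel", "value": 62, "unitText": "dB"},
        {"@type": "PropertyValue", "name": "Wattage", "value": 900, "unitText": "W"},
    ],
    "offers": {"@type": "Offer", "price": "149.00", "priceCurrency": "USD"},
}

print(json.dumps(product_jsonld, indent=2))
```

Embedded in a `<script type="application/ld+json">` tag, this is the résumé your product hands the machine.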