There’s been this recent wave of VC hot-takes on “AI vs. taste” that all read like the same circular, self-satisfied arguments dressed up as profundity. The classics: “Taste is random,” “AI just follows rules so it can’t surprise you,” “Taste needs soul,” “It’s all shaped by music and travel and life experience, therefore uncomputable.” No.
Here’s the thing: taste isn’t random. If it were, you wouldn’t recognize yourself from one month to the next. You wouldn’t have a type, a pattern, or any thematic through-lines across what you buy, wear, listen to, follow, or aspire toward. The only reason people think taste is chaotic is that they’ve never actually tried to model it with the right substrate. Once you anchor behavior to psychographics, the signal is unmistakable. Taste evolves, but it evolves from a stable baseline, not entropy.
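To make “stable baseline, not entropy” concrete, here’s a toy sketch. The embeddings, the drift term, and the numbers are all invented for illustration (nothing from our stack): if behavior comes from a fixed trait vector plus a small drift, month-over-month preferences stay strongly correlated instead of dissolving into noise.

```python
# A toy sketch (hypothetical embeddings and traits, not an actual taste model):
# if behavior is driven by a stable psychographic baseline plus slow drift,
# month-over-month preferences stay highly correlated, which is the opposite
# of what "taste is random" would predict.
import numpy as np

rng = np.random.default_rng(0)

n_items, dim = 500, 16
items = rng.normal(size=(n_items, dim))      # hypothetical item embeddings
baseline = rng.normal(size=dim)              # stable psychographic traits
drift = 0.1 * rng.normal(size=dim)           # small shift between months

scores_month_1 = items @ baseline            # how strongly each item resonates now
scores_month_2 = items @ (baseline + drift)  # taste evolves *from* the baseline

# Correlation between the two months' preference scores: close to 1.0, not 0.
corr = np.corrcoef(scores_month_1, scores_month_2)[0, 1]
print(f"month-over-month preference correlation: {corr:.2f}")
```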
The “AI can’t surprise you” line is also based on a very 2022 definition of AI as a token-predictor. If you think intelligence is just “If A, then B,” then yes, good luck generating anything interesting. But novelty isn’t randomness; novelty is controlled deviation along latent axes. When you have a multi-modal world model with persistent user identity, surprise is just the system taking you somewhere coherent-but-not-obvious. A relevant adjacency.
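Here’s what “controlled deviation along latent axes” looks like in miniature. This is purely illustrative, with made-up item latents and a made-up user vector rather than a real retrieval system: take a deliberate step along one axis, far enough to leave the obvious neighborhood, close enough that the results still sit near the user’s taste.

```python
# An illustrative toy (made-up item latents and user vector, not a real
# recommender): "surprise" as a controlled step along a latent axis. The step
# is big enough to leave the obvious neighborhood, small enough to stay in
# the user's general region of the space.
import numpy as np

rng = np.random.default_rng(1)

n_items, dim = 1000, 32
item_vecs = rng.normal(size=(n_items, dim))   # hypothetical item latents
user_vec = rng.normal(size=dim)               # persistent user identity vector

def nearest(query, k):
    """Indices of the k items most cosine-similar to `query`."""
    sims = item_vecs @ query / (np.linalg.norm(item_vecs, axis=1) * np.linalg.norm(query))
    return set(np.argsort(-sims)[:k])

obvious = nearest(user_vec, k=20)             # plain similarity: the safe picks

axis = rng.normal(size=dim)
axis /= np.linalg.norm(axis)                  # a latent direction to explore along
step = 0.8 * np.linalg.norm(user_vec)         # controlled deviation, not random noise

adjacent = nearest(user_vec + step * axis, k=20)   # coherent-but-not-obvious
surprises = adjacent - obvious
print(f"{len(surprises)} of 20 picks fall outside the obvious set")
```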
And the “soul” argument - romantic, but technically it points to the mammalian convergence layer where affect, memory, meaning, desire, and multi-modal context fuse. That’s not magic. It’s representation and weighting. It’s the way the brain compresses experience into priors. Saying “taste has soul” doesn’t counter computation; it just admits you don’t know how it works.
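If “compresses experience into priors” sounds hand-wavy, here’s the loosest possible gloss: a toy Bayesian update with invented numbers, not a claim about how the brain actually implements anything. Exposure nudges a belief, and the compressed belief is what gets carried forward.

```python
# The loosest possible gloss on "compressing experience into priors": a toy
# Bayesian update with made-up counts, not a model of neural implementation.
liked, seen = 7, 10                          # hypothetical exposure to a style
alpha, beta = 1 + liked, 1 + (seen - liked)  # Beta(1, 1) prior updated by experience
expected_affinity = alpha / (alpha + beta)   # the compressed belief carried forward
print(f"learned affinity after {seen} exposures: {expected_affinity:.2f}")
```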
Then there’s the idea that taste is shaped by experiences - music, cities, trips, people you idolize - and is therefore impossible to model. But the common denominator in all those experiences is you. Your underlying traits determine what you gravitate toward, what resonates, what imprints, and what you ignore. Experiences don’t overwrite your taste; they articulate it. They refine your latent structure; they don’t wipe it clean.
At PSYKHE AI, we built a world model precisely because taste isn’t a single event or a reactive clickstream. It has dynamics. It has direction. It has “Because of X psychographic driver + Y context, you move from A → B → C → Z.” Not as a rulebook. As a lived pattern. The model knows why something hits and what comes next, across products, categories, moods, and scenarios. Not a vibe check - a psychographic operating system.
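A stripped-down illustration of what “dynamics, not a rulebook” means: the categories, contexts, and trajectories below are invented for the example, and the real thing is obviously richer than a frequency table, but the shape is the same. The A → B → C path is estimated from observed behavior, conditioned on context, rather than written by hand.

```python
# A stripped-down sketch of taste dynamics (categories, contexts, and
# trajectories are invented for the example): estimate context-conditioned
# next-step tendencies from observed paths, then roll the likely trajectory
# forward. The pattern is learned from behavior, not written as rules.
from collections import Counter, defaultdict

# Hypothetical observed trajectories: (context, sequence of categories)
histories = [
    ("city_break", ["tailoring", "loafers", "leather_bag"]),
    ("city_break", ["tailoring", "loafers", "trench"]),
    ("festival",   ["denim", "boots", "vintage_tees"]),
    ("city_break", ["tailoring", "trench", "leather_bag"]),
]

# Count next-step frequencies per (context, current category).
transitions = defaultdict(Counter)
for context, path in histories:
    for current, nxt in zip(path, path[1:]):
        transitions[(context, current)][nxt] += 1

def predict_path(context, start, steps=3):
    """Follow the most likely next category, as estimated from the data."""
    path = [start]
    for _ in range(steps):
        options = transitions.get((context, path[-1]))
        if not options:
            break
        path.append(options.most_common(1)[0][0])
    return path

print(predict_path("city_break", "tailoring"))  # e.g. ['tailoring', 'loafers', 'leather_bag']
```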
And now that LLMs have been thoroughly commoditized, the need for personal grounding - systems that actually understand the individual - is glaringly obvious. The industry is waking up to the fact that generic intelligence without taste is flat. But let’s not throw our hands up and declare taste uncomputable. The issue isn’t that it’s too mystical. It’s that our analog understanding of it has always been too shallow. We’ve reduced taste to purchases, to surveys, to income brackets - none of which capture the real machinery underneath.
Taste isn’t ineffable. It’s just been under-modeled. And once you stop treating it like chaos and start treating it like latent structure, the whole thing becomes legible. Predictable. Computable, with the right data layers.