
My Prompt, My Reality

“Now with LLMs, a bunch of the perceived quality depends on your prompt. So you have users that are prompting with different skills or different level of skills. And the outcome of that prompt may be perceived as low quality, but that’s something that is really hard to control.”

Loïc Houssier, VP Product at Superhuman, shared this perspective on a recent podcast. AI products differ from classic software in that the experience is in large part determined by the user.

Software has always had a learning curve; master Photoshop, for example, and you can apply Bezier curves consistently, just like any other skilled user.

AI products that sell outcomes operate differently. The ideal result isn't a fixed output that every skilled user can reproduce.

Instead, it’s a collaboration where expert prompts can lead to a spectrum of valid results based on nuanced intent and context.

How can product teams manage this? They can rewrite the user prompt, as many already do, to expand on the user's intent and steer a basic query toward a more nuanced and ultimately successful answer.
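Here's a minimal sketch of what that rewriting step can look like, assuming an OpenAI-style chat completions client; the system prompt and model name are placeholders, not any particular product's implementation.

```python
# Prompt rewriting sketch: expand a terse user query before answering it.
# Assumes the OpenAI Python SDK and an API key in the environment.
from openai import OpenAI

client = OpenAI()

REWRITE_SYSTEM_PROMPT = (
    "Rewrite the user's query so a downstream model can answer it well. "
    "Make the intent explicit, add likely context, and specify the desired "
    "output format, but do not change what the user is asking for."
)

def rewrite_prompt(raw_query: str) -> str:
    """Expand a short user query into a more detailed, steerable prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": REWRITE_SYSTEM_PROMPT},
            {"role": "user", "content": raw_query},
        ],
    )
    return response.choices[0].message.content

# A query like "summarize my inbox" might come back specifying a time range,
# which senders matter, and that the answer should be a short bulleted list.
```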

Even then, anticipating how a user might want to steer the AI is hard.

[Image: ChatGPT responding to a broad query with clarifying follow-up questions]

One product technique I've found very useful is a series of follow-up questions. ChatGPT does this well, as in the example above, asking for refinement on a broad query.

Just like a colleague asking for clarity, the AI seeks guidance. More than just asking for greater insight, the questions help me understand my request better.
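The pattern is easy to sketch, again assuming an OpenAI-style client; the three-question limit and the wording of the instruction are my own assumptions, not a documented recipe.

```python
# Follow-up-question sketch: ask for clarification on broad requests,
# answer directly when the request is already specific.
from openai import OpenAI

client = OpenAI()

CLARIFY_SYSTEM_PROMPT = (
    "If the user's request is broad or ambiguous, reply with up to three "
    "short clarifying questions before attempting an answer. If the request "
    "is already specific, answer it directly."
)

def clarify_or_answer(user_query: str) -> str:
    """Return either clarifying questions or an answer, depending on the query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": CLARIFY_SYSTEM_PROMPT},
            {"role": "user", "content": user_query},
        ],
    )
    return response.choices[0].message.content

print(clarify_or_answer("Help me write a blog post about AI products"))
# Expected behavior: the model asks about audience, angle, and length
# before drafting anything.
```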