I’m working on a new project whose workflow involves a lengthy series of LLM prompts. This got me thinking about the meta and layered aspects of working with these AI APIs, including a hot take on why the chat interface is so useful for LLMs.

Prompt Roundtrips = Brittle

My first thought was that prompt-roundtrip-based workflow interactions are going to create a whole new class of brittle integrations with LLM/AI APIs, along with a whole new class of error types and debugging challenges. Is the prompt wrong? Did the LLM start returning odd results?
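To make that concrete, here’s a rough sketch of the kind of chained roundtrip I mean. The `complete()` function is a hypothetical stand-in for whatever LLM API client you actually use; the interesting part is how many places the chain can silently break.

```python
import json

def complete(prompt: str) -> str:
    """Hypothetical stand-in for your LLM API client of choice."""
    raise NotImplementedError

def extract_tasks(notes: str) -> list[dict]:
    # Roundtrip 1: ask for structured output.
    raw = complete(
        "Extract the action items from these notes as a JSON array of "
        f"objects with 'title' and 'owner' keys:\n\n{notes}"
    )
    try:
        tasks = json.loads(raw)  # Brittle point 1: was the reply actually valid JSON?
    except json.JSONDecodeError as err:
        raise RuntimeError(f"Unparseable model output: {raw!r}") from err

    # Roundtrip 2: feed each parsed item back in as the next prompt's input.
    for task in tasks:  # Brittle point 2: are the expected keys even there?
        task["summary"] = complete(f"Write a one-line summary of: {task['title']}")
    return tasks
```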

LLMs are so good at translating schema and content between domains that this feels like a near solution to the never-ending “data interchange format standard” problem that software has wrestled with. The reality is that doing those translations at scale, and in enough detail to feed into other workflows, introduces new layers of prompt, data, format, and schema, each with its own complexity.
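Here’s a sketch of what those new layers look like in practice, again assuming a hypothetical `complete()` call: the prompt has to carry the format instructions, and the application still needs validation glue to check the translation.

```python
import json

def translate_record(record: dict, target_schema: dict, complete) -> dict:
    """Translate `record` into `target_schema` via the LLM; `complete` sends a prompt."""
    prompt = (
        "Translate this record into the target schema. "
        "Reply with JSON only, no commentary.\n\n"
        f"Record: {json.dumps(record)}\n"
        f"Target schema: {json.dumps(target_schema)}"
    )
    translated = json.loads(complete(prompt))
    # The new layers: the format instructions above, plus this schema-checking glue.
    missing = set(target_schema) - set(translated)
    if missing:
        raise ValueError(f"Model dropped fields: {missing}")
    return translated
```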

Meta Prompting is a Half Solution

One potential solution is “meta prompting,” or “prompts for prompts,” where the workflow focuses on the data and asks the LLM to build prompts for a task based on that data, then feeds the LLM its own prompts. This seemingly makes your API cleaner: you provide one prompt and then just handle the subsequent returns as you go. I’m not convinced meta prompting entirely solves the problem, though. It pushes the onus onto the LLM instead of onto application glue code, but it makes the entire interaction dependent on LLM returns and makes it harder to inject your own variations.
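A rough sketch of what I mean, still assuming a hypothetical `complete()` call: the application hands over the data, lets the model author the task prompt, then runs that prompt back through the model.

```python
def meta_prompt_workflow(data: str, complete) -> str:
    """`complete` is a stand-in for your LLM API call."""
    # Step 1: ask the model to write the prompt it should be given.
    generated_prompt = complete(
        "You will be given some raw data. Write the best prompt for an LLM "
        "to turn that data into a weekly status report. "
        "Reply with the prompt only.\n\n"
        f"Data:\n{data}"
    )
    # Step 2: feed the model its own prompt. The application never hand-crafts
    # the task prompt, but it also can't easily tweak it anymore.
    return complete(f"{generated_prompt}\n\n{data}")
```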

Prompt Library as SDK?

Is a prompt library the new SDK? The new schema? The new XML? The new JSON? Does that fill you with awe or dread?
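If it is, the shape might be something like this sketch: named, versioned prompt templates that callers treat the way they’d treat SDK functions. The names and templates here are made up for illustration.

```python
from string import Template

# A tiny "prompt library": named, versioned templates treated like SDK entry points.
PROMPTS = {
    "summarize_v1": Template("Summarize the following in $length sentences:\n\n$text"),
    "extract_entities_v2": Template("List the people and places named in:\n\n$text"),
}

def render(name: str, **params) -> str:
    return PROMPTS[name].substitute(**params)

# Usage: render("summarize_v1", length=3, text=document)
# The prompt strings become the interface contract, much like a schema would.
```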

“Experts” in the Loop

A hot take from these musings: the chat interface for an LLM is valuable and useful because it’s the human in the loop who actually completes the facade of intelligence and insight.