The key to high-quality communication is structure. Unstructured chatter works fine at the bar, but work-related information exchange is different: losing important details causes expensive errors, and limited attention spans and message ambiguity only amplify the harm.
The same holds for communication with LLMs¹.
I like the CRIS model (Context, Role, Instructions, Structure) for structuring LLM requests.
🗺️
The Context describes the situation. It narrows the task and sets constraints.
The more context, the better the result. This is extremely important because we are biased: before typing the prompt, we have already been in the situation and have pondered the problem for some time.
But the LLM hasn’t!
Bias factor
Some tasks are contextful by design. E.g., asking about sudo -A implies a UNIX-like system and a terminal environment (exporting variables, chmod, TTY, etc.).
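As a small illustration of that implied context (the helper path and the command below are made up for the example; SUDO_ASKPASS itself is a standard sudo mechanism):

    # sudo -A ("askpass" mode) asks for the password via an external
    # helper program instead of prompting on the TTY; sudo finds the
    # helper through the SUDO_ASKPASS environment variable.
    export SUDO_ASKPASS=/usr/bin/ssh-askpass   # illustrative helper path
    sudo -A systemctl restart nginx            # illustrative privileged command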
Other requests are less clear; we have to provide the missing information explicitly. E.g., asking the LLM to help with a presentation is too ambiguous: the LLM was not present during that conversation with your manager and has no idea (unless explicitly told beforehand) about your team’s quarterly goals.
If the context is unclear, the LLM produces overly general output, which in many cases is unsatisfactory for an advanced user.
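For illustration, compare a bare request with a context-rich one (the wording here is hypothetical):

“Help me with a presentation.”

versus

“Help me prepare a 10-minute presentation for my manager summarizing our team’s quarterly goals: which ones we hit, which ones slipped, and why.”

The second prompt hands the LLM the background it was never part of.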
🧑‍🔬
The Role adjusts the LLM’s behavior.
It helps “unbullshit” the answer by turning the LLM into the appropriate type of partner. E.g., “act as a Senior Developer” produces concrete steps without dwelling on obvious details, while “explain this to a newbie” helps with unfamiliar topics.
Example
When designing a product feature, I describe the context and then ask the LLM to act as a User to understand their point of view.
Then I ask it to act as a Product Owner to help me structure the info into a Wiki page, break it into epics, etc.
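In prompt form, such an exchange might look like this (the wording is illustrative):

“[feature description] Act as a typical User of this feature. What would confuse or frustrate you?”

and then, in the same conversation:

“Now act as a Product Owner. Structure the findings above into a Wiki page and break the work into epics.”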
📝
The Instructions tell the LLM what to do.
The letter “G” in ChatGPT stands for “generative”. The LLM is able to generate potentially anything; it’s up to us to describe precisely what we want. Again, the more concrete and detailed the instructions, the better the output matches our needs.
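Compare (hypothetical wording, reusing the server theme from the prompt below): “Tell me about server monitoring” versus “List the five most important health metrics for a headless Ubuntu server and explain how to collect each one from the command line.” The first invites an essay; the second yields actionable output.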
⚙️
The Structure defines the style of the output. When it is omitted, the LLM presents the answer in its own manner, and there’s no guarantee you’ll like it. Obviously, you can convert the answer yourself or ask the LLM to convert the previous answer into the desired format, but it saves time to specify the format in advance.
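E.g., appending a single sentence (illustrative wording) such as “Present the answer as a table with columns Metric, Command, Healthy range” turns a free-form essay into something you can paste straight into your docs.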
Exemplary prompt
There is a server powered by Ubuntu (no GUI).
I want to monitor the server’s health: CPU and memory usage, disk space, network status, the status of several processes (will be provided later).

Act as a Senior Developer and design a Dashboard to monitor that info. Consider the Dashboard to be available over HTTP.

Suggest other metrics of the server’s health to monitor. Design the configuration for adding/removing metrics at runtime.

Provide the high-level overview and architecture of the Dashboard (several paragraphs to be put into a corporate Wiki for future development).
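Note how this prompt covers all four elements: the Ubuntu server description is the Context, “Senior Developer” is the Role, the design and suggestion requests are the Instructions, and the requirement of a several-paragraph Wiki-ready overview is the Structure.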
Afterthoughts
The CRIS model helps with human↔human communications as well.
Sure, we are less verbose and more in-context, and the roles are pre-defined. However, when communicating outside your group, providing some context helps eliminate ambiguity. When asking a colleague to do something, you describe the desired result (if it’s not obvious).
The user story formula “As [ACTOR], I want to […] in order to [REASON]” also implies context, role and instructions.
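For instance (an illustrative story in the spirit of the prompt above): “As a sysadmin, I want a health Dashboard in order to spot failing processes early” names the role (sysadmin), the instruction (build the Dashboard) and the context (why it matters) in one sentence.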
Hope this helps!
No LLM was harmed (merely used) during the writing of this post 🙂
References
1. A Large Language Model (LLM) is the engine under the hood of ChatGPT, Gemini, Claude, etc. I deliberately avoid using the term “AI” as we, as a civilization, are still not on the same page about what “intelligence” is (and what it is not).