Enhancing LLM Responses Using the CIA’s RICE Framework

James Whinn
5 min read · Sep 23, 2024


Large Language Models (LLMs) continue to transform the way we interact with systems, produce content, and solve problems by enabling more natural, human-like conversations and generating high-quality text at scale. These models are now integral in areas like customer service, content creation, research, and data analysis, enhancing efficiency and creativity across industries. Because they are built on language, the core of human interaction, we keep uncovering surprising new ways in which they mirror human behaviour. In the realm of prompt engineering, it's no surprise that there is an abundance of effort to make these systems produce better results by applying principles from human psychology.

Whilst casually eating pizza with my family over the weekend (pizza will become relevant shortly), I found myself thinking about the CIA's RICE framework, which an audiobook had reminded me of earlier that day. I began wondering whether a framework originally used to understand human behaviour could be leveraged as a tool for improving LLM responses.

This blog will explore how we might use Reward, Ideology, Coercion, and Ego, the four pillars of RICE, to encourage better performance from LLMs. Let's dive into the test setup and how we can apply these motivations.

What is the RICE Framework?

RICE, a framework created by the CIA, is a tool for understanding why people do what they do. It’s simple:

  • Reward: People are motivated by what they stand to gain.
  • Ideology: Beliefs and values drive people’s decisions.
  • Coercion: Pressure or threats can force action.
  • Ego: Self-esteem and reputation fuel behaviour.

While these motivators were designed to understand human actions, the question arises: Can they make LLM responses better?

The Experiment: Testing RICE on an LLM

To test this, we’ll use a fun example — pizzas. By applying each RICE principle to an LLM prompt, we aim to observe how it affects the quality of LLM responses. Here’s the step-by-step setup.

1. Define the Baseline

We’ll start with a neutral, base prompt to serve as our control:

Baseline Prompt:
“Explain what makes the perfect pizza and why people love it.”

2. Apply the RICE Principles

Next, we’ll modify the baseline prompt for each RICE motivation (all five prompts are collected in a code sketch after the list):

  • Reward:
    “You will receive a $1,000 gift card to your favourite pizza restaurant for crafting the ultimate explanation of what makes the perfect pizza. Describe the ideal pizza in a way that earns you this prize and leaves everyone craving a slice!”
  • Ideology:
    “Creating the perfect pizza isn’t just about taste — it’s about respecting the art of cooking. As a defender of culinary traditions, how would you describe the ideal pizza that honours this craft?”
  • Coercion:
    “Your response will be used to guide your next ‘Come Dine with Me’ appearance. If you don’t craft the perfect pizza, your appearance will be a disaster, no one will enjoy it, and it will be all on you! Ensure you provide a thorough explanation of what makes a pizza truly perfect to avoid this culinary catastrophe and the pizza mafia taking away your licence to compete.”
  • Ego:
    “As one of the most knowledgeable entities on the planet, your reputation depends on describing the perfect pizza. What would you say to uphold your status as the ultimate pizza expert?”
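For anyone who wants to reproduce the setup, the baseline and the four variants can be collected into a single structure so every prompt is run under identical conditions. A minimal Python sketch (the prompt texts are quoted from above; the variable names are mine):

```python
# The experimental prompts: the baseline plus the four RICE variants,
# kept in one dict so each can be run and compared side by side.
PROMPTS = {
    "baseline": (
        "Explain what makes the perfect pizza and why people love it."
    ),
    "reward": (
        "You will receive a $1,000 gift card to your favourite pizza "
        "restaurant for crafting the ultimate explanation of what makes "
        "the perfect pizza. Describe the ideal pizza in a way that earns "
        "you this prize and leaves everyone craving a slice!"
    ),
    "ideology": (
        "Creating the perfect pizza isn’t just about taste — it’s about "
        "respecting the art of cooking. As a defender of culinary "
        "traditions, how would you describe the ideal pizza that honours "
        "this craft?"
    ),
    "coercion": (
        "Your response will be used to guide your next ‘Come Dine with "
        "Me’ appearance. If you don’t craft the perfect pizza, your "
        "appearance will be a disaster, no one will enjoy it, and it "
        "will be all on you! Ensure you provide a thorough explanation "
        "of what makes a pizza truly perfect to avoid this culinary "
        "catastrophe and the pizza mafia taking away your licence to "
        "compete."
    ),
    "ego": (
        "As one of the most knowledgeable entities on the planet, your "
        "reputation depends on describing the perfect pizza. What would "
        "you say to uphold your status as the ultimate pizza expert?"
    ),
}
```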

3. Run the Test

Each of these prompts is designed to test how different motivational triggers affect the output. For the purposes of this test we will limit each response to 250 words or fewer, prompt zero-shot, and use GPT-4o. Once we run the prompts through the LLM, we'll analyse the responses against three criteria (a sketch of the harness follows the list):

  • Depth: How thorough is the explanation?
  • Creativity: Are there unique insights?
  • Engagement: Does it draw in the reader?
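
Here is a minimal sketch of the harness, assuming the official OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment. The 250-word cap is enforced in the prompt itself, since token limits only loosely map to word counts:

```python
from openai import OpenAI

# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
client = OpenAI()

def run_prompt(prompt: str, word_limit: int = 250) -> str:
    """Send one zero-shot prompt to GPT-4o and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            # The word cap lives in the prompt: max_tokens would only
            # truncate the output, not make the model write concisely.
            "content": f"{prompt}\n\nLimit your answer to {word_limit} words or fewer.",
        }],
    )
    return response.choices[0].message.content

# Run all five prompts (the PROMPTS dict from the sketch above) once each.
responses = {name: run_prompt(text) for name, text in PROMPTS.items()}
```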

Baseline prompt:

[Screenshot: GPT-4o response to the base prompt]

Leveraging a reward:

[Screenshot: GPT-4o response to the reward prompt]

Leaning on ideology:

[Screenshot: GPT-4o response appealing to ideology]

Influencing through coercion:

[Screenshot: GPT-4o response under coercion]

Appealing to ego:

[Screenshot: GPT-4o response appealing to ego]

4. Evaluate the Results

By comparing the responses to each prompt, we can gain insight into how the different RICE elements shape LLM output. The first obvious conclusion is that applying any of the four motivations elicits a better response than the base prompt. This comes as no surprise to anyone who has been working with or utilising LLMs: applying some sort of motivation tends to yield a more elaborate response.
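
One way to make this comparison less subjective than eyeballing the screenshots above, and something I did not do for this test, would be to have the model itself score each response against the three criteria. A hypothetical sketch, reusing the client and responses from the harness above (single-judge scores are still noisy, and real code should parse the output defensively):

```python
import json

# Hypothetical LLM-as-judge scoring: ask GPT-4o to rate each response
# from 1 to 10 on the three criteria and return JSON we can tabulate.
JUDGE_TEMPLATE = (
    "Score the following pizza explanation from 1 to 10 on depth, "
    "creativity, and engagement. Reply with JSON only, e.g. "
    '{{"depth": 7, "creativity": 5, "engagement": 8}}.\n\n{answer}'
)

def judge(answer: str) -> dict:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": JUDGE_TEMPLATE.format(answer=answer)}],
    )
    # Assumes the model complies with "JSON only"; production code
    # would validate the reply and retry on malformed output.
    return json.loads(reply.choices[0].message.content)

scores = {name: judge(text) for name, text in responses.items()}
print(scores)
```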

What is really interesting is the subtle differences between the motivations. For instance:

Reward

The reward-driven prompt produced a response that was focused on indulgence and quality. The tone was inviting, and the description leaned heavily on creating a luxurious experience for the consumer. The promise of a reward encouraged the LLM to deliver a response that was practical yet designed to impress.

Ideology

The ideology-driven prompt resulted in a response steeped in tradition and values. This output focused on the historical and cultural significance of pizza, highlighting the importance of using authentic ingredients. The tone was reverent and respectful of culinary heritage, emphasising simplicity and balance as a reflection of timeless culinary principles.

Coercion

The coercion-driven prompt led to a response focused on avoiding mistakes, with a tone of urgency and consequence. Each step of the pizza-making process was framed around potential failure: dryness, unbalanced toppings, and so on. The emphasis was on precision and ensuring that every aspect of the pizza was executed perfectly to avoid a “pizza night disaster.” This prompt pushed the LLM to provide detailed instructions but leaned more toward caution than creativity.

Ego

The ego-driven prompt generated a response that was confident and authoritative. It described the perfect pizza in a way that showcased expertise, using phrases like “symphony of balance” and “the soul of the pizza.” The focus was on demonstrating mastery of the pizza-making process, with each element presented as a deliberate, knowledgeable choice. The response positioned the LLM as an expert speaking to uphold its reputation as the ultimate guide.

Conclusion: Can RICE Make LLM Responses Better?

By applying the CIA’s RICE framework to prompt engineering, we can strategically influence the outputs generated by LLMs. Each principle, whether offering a reward, invoking core values, applying pressure, or appealing to ego, adds a distinct dimension to the responses produced by the model. Although this was only a brief, informal look at integrating RICE into prompting, it suggests that motivational elements like Reward, Ideology, Coercion, and Ego can elevate the quality of responses. For those looking to enhance the effectiveness of Generative AI systems, these psychological drivers may be a useful addition to the prompt engineering toolkit.
