Advanced Prompting Frameworks

Author: Sam Naji, Joseph Tekriti
LLM · November 24, 2023 · 5 minute read

Prompt engineering is a rapidly growing discipline in the world of Artificial Intelligence. Gartner projects that 10% of all global data will originate from generative AI by 2025. The growing interest in generative AI is also clear from recent investments: according to CB Insights, as of 2023 the field drew in about $14.1 billion across 86 funding deals, a sign that investors see its potential.

However, getting a prompt right can be hard, especially if you want it to deliver consistent results at scale. But what if you could control the prompt to get specific output structures from a single prompt? A few advanced prompt engineering frameworks make this possible but are not yet widely used, and they are the focus of today's discussion.

Let’s have a look at three advanced prompting frameworks you can use to generate awesome outputs. 


Microsoft Guidance

Guidance is an open-source framework initially introduced by Microsoft, boasting more than 14,000 stars on GitHub. It allows you to program the prompt to get specific outputs, offering very specific control of how the final output structure should look. It also enables you to: 

  • Construct prompts with for-each loops: iterate over a set of elements in the prompt, enabling complex, structured responses.
  • Define lists of candidate answers: specify a range of potential answers, guiding the AI to choose from these predefined options.
  • Insert if conditions: add advanced logic into prompts, allowing for conditional responses based on the input received.

A Guidance program executes as a single LLM call and is faster than traditional chaining methods that generate intermediate text. The efficiency comes from batching the non-generated text, streamlining the process of working with LLMs.

Setting Up Guidance

Image: a variety of AI-generated outputs, such as structured text

To get started, open Visual Studio Code and create a Jupyter notebook. Upon creating a new notebook, you may be prompted to select a programming language, such as Python; this is referred to as the "kernel". You can then install Guidance along with a client for OpenAI or another LLM and start building advanced prompts.
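Assuming a standard Python environment, the two packages can be installed from PyPI (run these in a notebook cell with a leading `!`, or in a terminal):

```shell
# Install the Guidance framework and the OpenAI client library
pip install guidance openai
```
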

This setup allows integration with large language models such as OpenAI's text-davinci, GPT-3.5, and GPT-4, as well as alternatives like LLaMA, Hugging Face Transformers models, and Google Vertex AI, making it a versatile tool across different AI platforms.

Controlling Prompt Output Structure

Screenshot from Guidance-ai on GitHub

Structured Outputs

Guidance provides specific control over the final output structure. For example, you can create a prompt template, defining variables within curly brackets, to guide the output. This allows for the generation of structured outputs.
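The idea can be sketched without the library itself: a toy renderer walks a template and, wherever a `{{gen 'var'}}` slot appears, calls the model with the text produced so far. Here `fake_llm` is a stand-in for a real API call, and the template is invented for illustration:

```python
import re

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call (e.g. the OpenAI API); returns canned text.
    return "Ada"

def run_template(template: str, llm) -> str:
    # Walk the template; at every {{gen 'var'}} slot, call the model with
    # the text produced so far -- the core idea behind a Guidance program.
    out, pos = "", 0
    for slot in re.finditer(r"\{\{gen '(\w+)'\}\}", template):
        out += template[pos:slot.start()]
        out += llm(out)  # the model fills the slot in context
        pos = slot.end()
    return out + template[pos:]

profile = run_template("""{"name": "{{gen 'name'}}"}""", fake_llm)
print(profile)  # {"name": "Ada"}
```

Because the fixed text around each slot is part of the template, the output is guaranteed to follow the desired structure no matter what the model generates.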

Multi-choice Option for Large Language Models

Using Guidance, you can ask a large language model to choose from predefined answers instead of generating its own. For example, you can define a set of options and then write a Guidance program that determines whether a sentence is offensive, guiding the model to select an answer from the given options.
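The selection mechanism can be sketched in plain Python. The scores below are made-up stand-ins for the log-probabilities a real LLM would assign to each candidate continuation:

```python
def select(options, score):
    # Constrain the model to predefined answers: pick whichever candidate
    # the model scores highest instead of letting it free-generate.
    return max(options, key=score)

# Invented scores standing in for LLM token log-probabilities.
fake_scores = {"offensive": -4.2, "not offensive": -0.3}
verdict = select(["offensive", "not offensive"], fake_scores.get)
print(verdict)  # not offensive
```

The model can never answer anything outside the option list, which is what makes this useful for classification-style prompts.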

If Condition in Prompts

Advanced logic like if conditions can also be set up. For example, you can create a workflow where if the answer is rude, it triggers a specific type of response, and if not, it generates a normal response. This allows for creating conditional logic in your final output.
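A minimal sketch of the branching logic in plain Python. The `tone` value would come from an earlier classification step, and the reply wording is invented for illustration:

```python
def respond(tone: str) -> str:
    # Branch on an earlier generation step, as a conditional block would:
    # a rude answer triggers a de-escalating reply, anything else a normal one.
    if tone == "rude":
        return "Let's keep things civil -- how can I help?"
    return "Thanks for your question! Here is the answer."

print(respond("rude"))    # de-escalating reply
print(respond("polite"))  # normal reply
```
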

You can use MS Guidance to generate: 

  • structured emails
  • images
  • charts
  • lists
  • chats, and more

FlowGPT

Screenshot from FlowGPT Homepage

FlowGPT is a comprehensive prompt library and a prominent prompt engineering community. It features a wide range of collections, from marketing to programming, offering valuable resources for prompt development. Users can explore community-voted prompts, providing a robust starting point for their prompt creation process.

GPT Prompt Engineer

Screenshot from Mshumer on GitHub

GPT Prompt Engineer is an advanced tool designed to enhance how prompts are created and evaluated in AI. It uses GPT's capabilities to generate various prompts and then uses the same technology to refine them for better performance.

The process involves two main steps:

Generating Prompts: GPT Prompt Engineer first uses GPT-4 and GPT-3.5-Turbo to autonomously create a set of prompts based on specific goals or themes.

Evaluating and Refining Prompts: After creating the prompts, it then uses GPT again to assess these prompts, testing and comparing them to find the most effective ones. While the generated prompts may not always be ideal, the evaluation framework is useful for comparing and selecting the most effective prompts at scale. 
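The generate-then-evaluate loop can be sketched in miniature. Here `fake_model` and both candidate prompts are invented for illustration; the real tool uses GPT to both generate the candidates and judge their outputs:

```python
def evaluate_prompts(prompts, test_cases, run_model):
    # Score each candidate prompt by how many test cases the model answers
    # correctly when driven by that prompt, then rank candidates best-first.
    scores = []
    for prompt in prompts:
        correct = sum(run_model(prompt, x) == y for x, y in test_cases)
        scores.append((correct, prompt))
    return [prompt for _, prompt in sorted(scores, reverse=True)]

# Stand-in model: only obeys the prompt that mentions uppercasing.
def fake_model(prompt, x):
    return x.upper() if "uppercase" in prompt else x

ranked = evaluate_prompts(
    ["Repeat the input.", "Return the input in uppercase."],
    [("hi", "HI"), ("ok", "OK")],
    fake_model,
)
print(ranked[0])  # Return the input in uppercase.
```

Swapping the stub scorer for a GPT-based judge gives the same pipeline at scale: many candidate prompts in, a ranked list out.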

The tool also has a version for classification tasks, which evaluates how well prompts match expected outputs, and offers optional logging features for detailed analysis of prompt performance. In this way, GPT Prompt Engineer makes prompt engineering more systematic and data-driven.

To Wrap it Up 

This article took an in-depth look at advanced prompt engineering with Microsoft Guidance, FlowGPT, and GPT Prompt Engineer, covering practical ways each tool can make AI-generated content more controllable and consistent.
