Tech Corner August 10, 2023

The art of prompting: how to win [conversations] and influence [LLMs]

by Keerti Hariharan


In the dynamic world of wealth and asset management, harnessing the power of AI is becoming crucial to remain competitive. One of the key elements in leveraging AI and large language models (LLMs) effectively is LLM prompting.

With the rise of LLMs, you might also have heard the term “prompt engineering.” Its technical definition refers to a technique for refining, targeting, and training LLMs using frameworks like Langchain. Here, we’re going to use the more general term “LLM prompting” to refer to an end-user asking questions directly of an LLM, for example, through a chat interface, which may work in tandem with additional prompt engineering already built in behind the scenes.
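As a loose illustration of that behind-the-scenes layer (not any specific vendor’s implementation), a chat interface might silently wrap each question you type in a hidden system prompt before it ever reaches the model. The `SYSTEM_PROMPT` text and `build_messages` function below are hypothetical names invented for this sketch:

```python
# A minimal sketch of behind-the-scenes prompt engineering: the chat
# interface prepends a hidden system prompt to every user question.
# SYSTEM_PROMPT and build_messages are illustrative, not a real API.

SYSTEM_PROMPT = (
    "You are an assistant for a wealth management firm. "
    "Answer concisely and note which document you drew from when possible."
)

def build_messages(user_question: str) -> list[dict]:
    """Wrap the end-user's question in the hidden system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

messages = build_messages("Summarize last quarter's fund performance.")
```

The end-user only ever sees their own question; the system prompt quietly shapes every response they get.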

If you are using an LLM, prompting is at the heart of making it effective. Your prompt provides a roadmap for the model to generate meaningful responses.

Better prompts will give you better answers. We’ve compiled a few tips below to help you get exactly what you need when prompting an LLM.

#1: Ask for what you want

LLMs can produce responses in a variety of forms. You can ask an LLM to answer your prompt in the form of a 50-word paragraph, a term paper, or even a haiku. When thinking about how to structure your prompt, think through the kind of response you’re looking for first, and be specific when asking for it. Types of responses you could ask for include:

  • A summary of a piece of text or even a whole document
  • An answer to a straightforward question
  • A list of reference material
  • An interpretation or translation of text you provide or that the model may have in its dataset
  • Creative copy—generative text based on a specific subject
  • Calculations
  • Comparisons between topics or documents

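To make the list concrete, here are a few sample prompts matching several of the response types above. The wording is invented for illustration; the point is that each one names the form it wants the answer to take:

```python
# Illustrative prompts, one per response type. The exact wording is
# invented for this example, not drawn from any real system.
example_prompts = {
    "summary": "Summarize the attached earnings report in 100 words.",
    "list": "List five key risks mentioned in this prospectus.",
    "translation": "Translate this client letter into Spanish.",
    "creative": "Write a haiku about portfolio diversification.",
    "comparison": "Compare the fee structures of Fund A and Fund B in a table.",
}
```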
This flexibility in format opens up endless possibilities for customization and tailored human-like interactions.

#2: Set the stage

Context plays a pivotal role when prompting LLMs. Transformer-based models have been trained on vast amounts of text using deep learning methods, aiming to generate output that resembles human-generated text. Just like humans, language models thrive on context, allowing them to generate more relevant responses. Set the conversational stage with relevant background information, and you can help the model understand exactly what you mean.

Consider this example of a poorly constructed prompt to an LLM:

Example of poorly constructed LLM prompt

What makes it ineffective? It lacks essential context and fails to specify the desired outcome. The response given by the model thus lacks specificity and doesn't provide the analysis we asked for. When the model is given minimal context and instruction, it isn’t able to provide a useful response.

Providing effective context enables you to influence how the model responds. Giving more information about the specific areas you’re looking for information on, combined with instructions on the type and format of response you want, will lead to more helpful responses.
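One simple way to picture this layering is as three parts composed in order: background context, the question itself, and explicit format instructions. The `build_prompt` helper below is a sketch of that idea, with all names and example text invented for illustration:

```python
def build_prompt(context: str, question: str, response_format: str) -> str:
    """Compose a context-rich prompt: background first, then the
    question, then explicit instructions on the desired response format."""
    return (
        f"Background: {context}\n\n"
        f"Question: {question}\n\n"
        f"Respond as: {response_format}"
    )

prompt = build_prompt(
    context="We manage a portfolio of US municipal bonds.",
    question="What risks should we monitor if interest rates rise?",
    response_format="a bulleted list of no more than five items",
)
```

Compared with asking the bare question, the composed prompt tells the model who is asking, what they care about, and what shape the answer should take.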

#3: Know who (or what) you’re talking to

Different LLMs and interfaces have access to different data sets, which can affect the accuracy of the answers you’re given. It’s crucial to be aware of the parameters of the LLM you’re interacting with. Does it search the open Internet, or is it drawing answers only from its training data? What time periods does its data set cover? If your question involves current events or market trends but the model’s data set only covers up to 2021, the response may not contain all the information you need. If the model has been engineered to look only at a specific document or data set, e.g., a research report or set of content you’ve uploaded, you may be able to get a more precise answer, though you won’t get additional context from outside the report.

#4: If at first you don’t succeed—try and try again

When prompting LLMs, it can be helpful to simply rephrase your question. Like humans, LLMs sometimes need to be asked in a slightly different way or have the question clarified.

If your first question didn’t lead to a useful answer, keep iterating to see if you can get better results. Add context, details, and specific instructions to guide the model in crafting its response. Think of your interactions with the LLM as an ongoing conversation—it’s following along, building a cumulative understanding of what you want as you use it. You can refer back to what you found helpful or unhelpful about previous responses and work together to find the answer you’re looking for.
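The running-conversation idea can be sketched as a list of turns that grows with each exchange, so later prompts can build on earlier answers. The message format below mirrors common chat interfaces but the `add_turn` helper and example text are invented for illustration:

```python
# A sketch of iterative prompting: each turn is appended to the running
# history so the model can build a cumulative understanding of the
# request. Roles and message shape mirror typical chat interfaces.

history: list[dict] = []

def add_turn(history: list[dict], role: str, content: str) -> None:
    """Record one turn of the conversation."""
    history.append({"role": role, "content": content})

add_turn(history, "user", "Summarize this research report.")
add_turn(history, "assistant", "The report covers Q2 equity trends in detail.")
# The follow-up refines the request, referring back to the first answer.
add_turn(history, "user", "Good, but shorten it to three bullet points.")
```

Because the full history travels with each new question, the follow-up (“shorten it”) makes sense without restating the original request.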

Below is a refined version of the initial prompt above, demonstrating the value of iterating and clarifying expectations. By providing more detail and specifying the desired format and what the response should include, we’ve given the model more valuable input, and as a result, we get a more specific and useful response.

Example of effective LLM prompt

As AI continues to evolve, LLM prompting (and the prompt engineering that often occurs behind the scenes) remains vital for harnessing the power of language models. Getting the response you need from an LLM might take practice, but by mastering the art of crafting precise instructions and providing relevant context, you can unlock the untapped potential of AI in your day-to-day interactions and for your business.

