Coming up with a good prompt for a generative AI tool has quickly become a specialised skill since the runaway success of OpenAI’s ChatGPT in 2022. It has given rise to an entirely new discipline known as prompt engineering.
As the technology becomes more advanced and widely adopted, some experts believe that the quality of AI-generated outputs will increasingly depend on how clearly and effectively users can frame their instructions to large language models (LLMs).
“LLMs are tuned to follow instructions and are trained on large amounts of data so they can understand a prompt and generate an answer. But LLMs aren’t perfect; the clearer your prompt text, the better it is for the LLM to predict the next likely text,” Google said in its recently published whitepaper on prompt engineering.
The 68-page document, authored by Lee Boonstra, a software engineer and technical lead at Google, focuses on helping users write better prompts for the company’s flagship Gemini models, whether within the Vertex AI sandbox or through Gemini’s developer API.
In simple terms, Google defines a text prompt as an input that the AI model uses to predict an output. “Many aspects of your prompt affect its efficacy: the model you use, the model’s training data, the model configurations, your word-choice, style and tone, structure, and context all matter,” it added.
When a user submits a text prompt to an LLM, the model analyses the text as sequential input and then predicts what the following token should be, based on the data it was trained on.
For those looking to become pros at prompt engineering, Google offers the following pointers:
Google recommends providing one or more examples within a text prompt so that the AI model can imitate them or pick up on the pattern required to complete the task. “It’s like giving the model a reference point or target to aim for, improving the accuracy, style, and tone of its response to better match your expectations,” the whitepaper read.
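For illustration, a few-shot prompt of this kind might look like the following Python sketch. It assumes Google’s google-generativeai SDK, a placeholder API key, a model name (gemini-1.5-flash), and a sentiment task, none of which are specified in the article:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# Two worked examples give the model a pattern to imitate before the real task.
prompt = """Convert each movie review into a one-word sentiment label.

Review: "A breathtaking, beautifully shot film."
Sentiment: positive

Review: "Two hours of my life I will never get back."
Sentiment: negative

Review: "The plot dragged, but the soundtrack was stunning."
Sentiment:"""

response = model.generate_content(prompt)
print(response.text)
```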
Google has cautioned against using complex language or packing unnecessary information into the text prompt, recommending instead concise instructions built around verbs that describe the desired action.
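A hypothetical before-and-after in the spirit of that advice (the travel example is ours, not the whitepaper’s):

```python
# Wordy prompt with filler the model has to wade through:
verbose_prompt = (
    "Hello, I am planning a trip and I was wondering if maybe you could "
    "possibly tell me some things about interesting places to see?"
)

# Concise rewrite built around a verb that describes the action:
concise_prompt = "List three must-see attractions in Tokyo for a first-time visitor."
```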
“Providing specific details in the prompt (through system or context prompting) can help the model to focus on what’s relevant, improving the overall accuracy,” Google said. While system prompting offers the LLM ‘the big picture’, contextual prompting provides specific details or background information relevant to the current conversation or task.
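A minimal sketch of the two styles working together, again assuming the google-generativeai SDK and an illustrative travel task:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder

# System prompt: the "big picture" instruction that frames every response.
model = genai.GenerativeModel(
    "gemini-1.5-flash",  # assumed model name
    system_instruction="You are a travel assistant. Answer in three bullet points.",
)

# Contextual prompt: background details relevant to this specific request.
context = "The user is spending one rainy November weekend in Amsterdam."
response = model.generate_content(f"Context: {context}\n\nSuggest indoor activities.")
print(response.text)
```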
The whitepaper also favours positive instructions over constraints: “Instead of telling the model what not to do, tell it what to do instead. This can avoid confusion and improve the accuracy of the output.”
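For instance (our example, not Google’s):

```python
# Framed as prohibitions; the whitepaper warns this can confuse the model:
negative_prompt = "Write a product description. Do not be boring. Do not use jargon."

# Reframed to tell the model what to do instead:
positive_prompt = (
    "Write a lively, plain-language product description that a "
    "non-technical shopper can understand."
)
```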
Users can also control the length of the AI-generated output by requesting a specific length or setting a maximum token limit in the model configuration. For example: “Explain quantum physics in a tweet length message”.
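In code, the cap can be set via configuration in addition to the prompt itself; a sketch assuming the google-generativeai SDK and an arbitrary 60-token limit:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

# Cap the output length through the model configuration.
response = model.generate_content(
    "Explain quantum physics in a tweet length message.",
    generation_config={"max_output_tokens": 60},  # illustrative limit
)
print(response.text)
```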
“If you need to use the same piece of information in multiple prompts, you can store it in a variable and then reference that variable in each prompt,” Google said. This is likely to save you time and effort by allowing you to avoid repeating yourself.
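In Python, an ordinary f-string is enough for this kind of templating (the city example is illustrative):

```python
# Store the shared value once and reference it in every prompt that needs it.
city = "Amsterdam"

prompts = [
    f"You are a travel guide. Tell me a fact about the city: {city}",
    f"List three museums in {city}.",
    f"Write a two-line poem about {city}.",
]

for prompt in prompts:
    print(prompt)  # each prompt reuses the same variable, typed only once
```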
AI-generated outputs rely on several factors such as model configurations, prompt formats, and word choices. Experimenting with prompt attributes like the style, the word choice, and the type of prompt can yield different results.
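One hypothetical way to run such an experiment is to phrase the same request as a question, a statement, and an instruction, then compare what the model returns for each:

```python
# Three formats for the same request; feed each to the model and compare.
variants = [
    "What was the Sega Dreamcast and why was it so revolutionary?",       # question
    "The Sega Dreamcast was a revolutionary games console.",              # statement
    "Describe the Sega Dreamcast and explain why it was revolutionary.",  # instruction
]
```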
If you need an AI model to classify your data, Google recommends mixing up the possible response classes across the examples provided within the prompt, so the model learns the defining features of each class rather than memorising the order of the examples. “A good rule of thumb is to start with 6 few shot examples and start testing the accuracy from there,” the company said.
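A sketch of what such a few-shot classification prompt might look like, with the classes deliberately interleaved rather than grouped (the support-ticket examples are ours):

```python
# Few-shot classification prompt; the example classes are mixed up so the
# model learns the task itself rather than the order of the examples.
prompt = """Classify each support ticket as BILLING, BUG, or FEATURE_REQUEST.

Ticket: "I was charged twice this month."
Label: BILLING

Ticket: "Please add a dark mode."
Label: FEATURE_REQUEST

Ticket: "The app crashes when I upload a photo."
Label: BUG

Ticket: "Can I get an invoice for my last payment?"
Label:"""
```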
The document advises users to stay on top of model architecture changes as well as newly announced features and capabilities. “Try out newer model versions and adjust your prompts to better leverage new model features,” it states.
Google suggests engineering your prompts to have the LLM return its output in JSON format. JavaScript Object Notation (JSON) is a structured data format that is useful in prompt engineering, particularly for tasks like extracting, selecting, parsing, ordering, ranking, or categorising data.
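A minimal sketch of a JSON-returning extraction prompt, assuming the google-generativeai SDK’s support for a JSON response MIME type (the extraction task itself is illustrative):

```python
import json

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # hypothetical placeholder
model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

response = model.generate_content(
    "Extract the name, city, and age from this sentence and return a JSON "
    "object with keys name, city and age: "
    "'Maya, 34, has lived in Lisbon for a decade.'",
    # Ask the API for structured JSON instead of free-form text.
    generation_config={"response_mime_type": "application/json"},
)
print(json.loads(response.text))
```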
Source: IndianExpress.com