Prompt Engineering: Passing Fad or the Future of Programming?

Frank Palermo, EVP of Global Digital Solutions

October 5, 2023

5 Min Read

The current wave of rapid generative AI advancements is spawning new engineering disciplines, the most publicized being “prompt engineering.” This has created a polarizing debate on whether this is just a passing fad or will become the future of software engineering.

Some, like Robin Li, CEO of Baidu, China’s top search engine, have proclaimed that prompt engineering will account for half of the world’s jobs in the next 10 years. Others, like Sam Altman, CEO of OpenAI, believe the discipline is a result of temporary limitations in today’s large language models (LLMs) and that, in the long term, prompt engineering will not be as critical.

The reality is probably somewhere in between, so it is important that everyone develops a base understanding and capability of these generative AI (GenAI) platforms, tools, and techniques such as prompt engineering.

The Rise of Prompt Engineering

While the power of the numerous GenAI platforms will continue to increase, unlocking the full value of these GenAI systems requires sophistication in how the questions or prompts are phrased. Because these large language models learn from vast amounts of text data, they are very sensitive to how a question or prompt is framed.

This new discipline of “prompt engineering” requires strong linguistic skills to construct a proper query for the AI system using nouns, verbs, adjectives, and other language primitives. This also requires understanding the nuances of the language, such as idioms, slang, and other informal jargon. Knowing when to use these in different situations can vastly improve the relevancy of the response.

What Makes a Good Prompt Engineer?

Prompt engineering is much more than just getting the words right. It’s a multi-dimensional field that involves being able to reframe a question into an effective problem statement. In order to create an effective problem statement, you need to identify the exact problem you are looking for AI to solve. If the problem statement proves to be too large, you will likely need to decompose this into smaller topics that can be more readily framed for the GenAI platform. It is also important that the problem has constraints or boundaries so the responses are not too broad.

A prompt engineer must also understand how the GenAI platform works, recognize how it will respond to specific prompts, and be creative enough to ensure it delivers meaningful output. This involves having a grasp of how the underlying LLMs are organized and how their capabilities are quickly evolving. It is critical to be able to quickly adapt prompts to influence the AI model’s performance.

Taking a Shot at Creating a Good Prompt

Creating a good prompt is a combination of both art and science. There are several elements that should be considered when designing a prompt.

First are the specifics on the task to be completed, which should focus on what you want the model to do: answer a question, summarize results, provide a comparison, etc. Describing context is critical in a prompt as it provides the model information on how to tailor a response. Simple things like providing a word count can force the model to be more succinct in its response.  
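The elements above can be sketched as a simple template. This is an illustrative sketch only: the function name and field labels are hypothetical, not part of any specific GenAI API.

```python
# Assemble a prompt from the three elements discussed above:
# a task, supporting context, and a constraint on the response.
def build_prompt(task: str, context: str, constraint: str) -> str:
    """Combine task, context, and constraint into a single prompt string."""
    return f"{task}\n\nContext: {context}\n\nConstraint: {constraint}"

prompt = build_prompt(
    task="Summarize the quarterly sales report below.",
    context="The report covers Q3 revenue across three regions.",
    constraint="Respond in 50 words or fewer.",
)
print(prompt)
```

Even a lightweight structure like this makes prompts easier to review and reuse than ad hoc free text.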

There are also several techniques that can be employed for instructing large language models. Some of the most notable are zero-shot, one-shot, and few-shot prompting.

Zero-shot prompting is where a model performs a task without being given any worked examples of it. The model relies entirely on what it learned during training to understand and execute the instruction. In this case, you provide the model with a prompt and supporting text that describes what you want it to consider. This is useful for sentiment analysis of text, sentence translation, and text summarization.
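A zero-shot prompt for one of those tasks, sentiment analysis, can be as simple as an instruction followed by the input. The wording below is illustrative, not a canonical format.

```python
# Build a zero-shot sentiment-analysis prompt: an instruction plus the
# input text, with no worked examples for the model to imitate.
def zero_shot_sentiment(text: str) -> str:
    return (
        "Classify the sentiment of the following text as "
        "Positive, Negative, or Neutral.\n\n"
        f"Text: {text}\nSentiment:"
    )

prompt = zero_shot_sentiment("The new release fixed every bug I reported.")
print(prompt)
```

Ending the prompt with "Sentiment:" nudges the model to complete that one field rather than produce open-ended prose.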

In one-shot or few-shot prompting, you include a small number of worked examples (one or a few) directly in the prompt to guide the model, enabling in-context learning to produce the desired response. This can be useful for guiding the model’s response to be more context-aware and relevant, especially in situations where there is limited training data available. For instance, if you wanted to generate some SQL code, you would have to understand SQL syntax as well as the format the model is expecting. Here’s an example:

Input Prompt:

Table departments, columns = [DepartmentId, DepartmentName]
Table students, columns = [DepartmentId, StudentId, StudentName]
Create a MySQL query for all students in the Finance Department

Output:

SELECT StudentId, StudentName FROM students WHERE DepartmentId IN (SELECT DepartmentId FROM departments WHERE DepartmentName = 'Finance');
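The few-shot pattern behind that example can be sketched as a small helper that stitches example input/output pairs together with a new query. This is a hypothetical sketch; real prompt formats vary by model.

```python
# Assemble a few-shot prompt: each worked example is an (input, output)
# pair, and the new query is appended last for the model to complete.
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    parts = [f"{question}\n{answer}" for question, answer in examples]
    parts.append(query)  # the new request the model should complete
    return "\n\n".join(parts)

examples = [
    (
        "Create a MySQL query for all students in the Finance Department",
        "SELECT StudentId, StudentName FROM students "
        "WHERE DepartmentId IN (SELECT DepartmentId FROM departments "
        "WHERE DepartmentName = 'Finance');",
    ),
]
result = few_shot_prompt(
    examples,
    "Create a MySQL query for all students in the Marketing Department",
)
print(result)
```

Because the example pairs live in ordinary data structures, the same helper can swap in different examples for different tasks.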

There is also a technique called chain-of-thought (CoT) prompting that enables complex reasoning capabilities through intermediate reasoning steps and examples. The idea is that by showing the model some examples (i.e., a few shots), the LLM is able to follow the reasoning process and provide an accurate response. CoT is very effective in improving results on tasks like arithmetic, reasoning, and other common-sense tasks.
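A one-shot CoT prompt for an arithmetic task might look like the sketch below: the worked example spells out its intermediate steps so the model can imitate the reasoning. The question wording is illustrative, and the arithmetic in the example can be checked by hand.

```python
# A one-shot chain-of-thought prompt: the first Q/A pair shows the
# intermediate reasoning steps; the second question is left for the
# model to answer in the same step-by-step style.
COT_PROMPT = """\
Q: A cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have?
A: The cafeteria started with 23 apples. They used 20, leaving 23 - 20 = 3.
They bought 6 more, so 3 + 6 = 9. The answer is 9.

Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each.
How many tennis balls does he have now?
A:"""

print(COT_PROMPT)
```

Without the worked example, models are more likely to jump straight to a (possibly wrong) final number; with it, they tend to reproduce the step-by-step pattern.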

Which approach to use depends on the type and complexity of the task. You need to assess if the task is simple, well-defined, or requires specialized skills.

Is Prompt Engineering Just a Bridge?

In the short term, it’s clear that anyone who wants to effectively interact with GenAI platforms will need to understand the basics of prompt engineering. These skills will soon become prerequisites for getting your daily job done, very similar to how we interact with internet search engines today.

However, it’s likely that over time, this is just a bridge until these AI systems can understand human language more naturally. In the long term, large language models will become more proficient at understanding a prompt’s intent and require less “engineering” to get to the desired result.

In the future, it’s unlikely that all engineering will be prompt engineering. But it’s highly likely that great engineers will be deeply versed in prompt engineering.

For now, it looks like writing has indeed become an important programming language that we should all polish up on.

About the Author(s)

Frank Palermo

EVP of Global Digital Solutions, Virtusa

Frank Palermo has over 20 years of global experience as a successful technology executive including leading Virtusa’s Digital Business where he was responsible for building highly specialized technology practices focused on capabilities in UX, Mobility, Social, Cloud, Gamification, Analytics, Big Data and IoT. Virtusa is a global provider of digital engineering and technology services. In its 25+ year history, Virtusa has grown to 35,000 employees serving more than 200 companies across diverse industries, including banking and financial services, insurance, and telecommunications among others.
