Navigating the ML landscape after the LLM tsunami

The art of prompt engineering

Machine learning (ML) has shifted significantly with the rise of Large Language Models (LLMs), such as PaLM by Google and ChatGPT by OpenAI. These models can only be trained by companies with substantial budgets, and they are altering the approach ML engineers take to problem-solving and coding.

This massive paradigm shift makes it essential for ML engineers to understand how these models work and how to utilise them effectively - and to recognise how the role of the ML engineer is changing and what LLMs mean for our future.

Traditional ML vs LLMs

Before the influx of LLMs, ML engineers had to consider various models, from simple linear regression to more complex ensembles or deep learning models, and adjust them for their particular use case. This was time-consuming, requiring many iterations during development. Jumping from model to model not only disrupts an engineer's train of thought, it can also result in poorly written code.

Focus has now shifted from manually building and tweaking models to understanding how to leverage the capabilities of LLMs productively. This opens up a new set of possibilities for ML applications, but also introduces new challenges.

Because LLMs have already learned useful representations from their vast data pools, they require far less manual intervention. Engineers can focus on prompt engineering for their specific task and draw on the huge range of tasks LLMs can address - often with a single API call. Rather than hunting through multiple models for an answer, they can simply describe the task to the LLM.
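
To make this concrete, here is a minimal sketch of that workflow using the OpenAI Python client. The model name, prompt, and example text are placeholder assumptions for illustration, not recommendations: a task that once required its own trained classifier becomes a single call.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Describe the task in the prompt instead of training a bespoke model.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text as "
                        "positive, negative or neutral. Reply with one word."},
            {"role": "user", "content": "The onboarding flow was painless."},
        ],
    )

    print(response.choices[0].message.content)  # e.g. "positive"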

However, this type of engineering can come with its risks.

The implications of LLMs for engineers

In its simplest form, prompt engineering involves crafting effective prompts that instruct an LLM to perform a desired task efficiently. It requires precise wording, formatting, and structuring to optimise the model's output, making it relevant and accurate. Done properly, this reduces the time an engineer has to spend amending prompts to fit a particular task.

But prompt engineering is not an exact science. It's more of an art that follows certain patterns. A slight variation in a prompt can produce a dramatically different response, which affects the overall utility of the model. A well-crafted prompt can significantly boost the model's performance and efficiency by providing clear instructions, minimising ambiguity, and steering the model's understanding in the desired direction.

Different types of prompting can (obviously) produce different responses, and getting these down to a T can be difficult. As an ML engineer, discovering the best practices for prompting is essential. Variations include zero-shot prompting (describing the task alone), few-shot prompting (supplying a handful of worked examples), and chain-of-thought prompting (asking the model to reason step by step) - a few-shot example is sketched below.
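
As a rough illustration of the few-shot variation (the reviews and labels below are invented for demonstration), a couple of worked examples in the prompt anchor both the task and the output format:

    # A minimal sketch of few-shot prompting: the worked examples anchor the
    # output format, so the model is less likely to drift. Examples invented.
    few_shot_prompt = """Classify the sentiment of each review.

    Review: "Setup took five minutes and everything just worked."
    Sentiment: positive

    Review: "Support never replied to my ticket."
    Sentiment: negative

    Review: "The dashboard loads, but the export button is broken."
    Sentiment:"""

    # Send few_shot_prompt to the model exactly as in the earlier sketch;
    # the model completes the pattern with a single label.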

Maximising control over your prompts is important to ensure that the model's behaviour reflects the intended outcome.
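
One way to maximise that control, sketched here under the same placeholder assumptions as the earlier snippet, is to pin the sampling temperature to zero and demand a machine-readable output format:

    import json

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        temperature=0,        # minimise sampling randomness
        messages=[
            {"role": "system",
             "content": 'Reply only with JSON of the form '
                        '{"sentiment": "positive" | "negative" | "neutral"}.'},
            {"role": "user", "content": "The export button is broken again."},
        ],
    )

    result = json.loads(response.choices[0].message.content)
    print(result["sentiment"])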

The importance of prompt engineering cannot be overstated. It's a skill that can dramatically increase the quality of an LLM's output. As the applications of LLMs continue to expand, prompt engineering will only grow in significance, becoming an integral part of the future of artificial intelligence (if it isn't replaced by yet another model).

Where does traditional ML fit into this?

But what does this mean for engineers in the short term? Is traditional machine learning dead?

Well, not quite. The boom in LLMs and prompt engineering doesn't mean that traditional ML techniques have become obsolete. They still excel when the dataset is small, the use case is simple, interpretability is crucial, or the problem is too esoteric for an LLM to solve. Large language models are usually best suited to large, complex datasets, whereas traditional ML techniques can provide solutions for bespoke situations.
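
As a minimal sketch of that kind of bespoke, interpretable case (the dataset below is invented), a classical scikit-learn model can be trained and inspected in a few lines:

    from sklearn.linear_model import LogisticRegression

    # Tiny, invented dataset: two features, binary label.
    X = [[0.2, 1.1], [0.4, 0.9], [3.1, 0.2], [2.8, 0.4]]
    y = [0, 0, 1, 1]

    model = LogisticRegression()
    model.fit(X, y)

    # Interpretability: the learned coefficients show each feature's
    # influence on the prediction - something a black-box LLM can't offer.
    print(model.coef_, model.intercept_)
    print(model.predict([[0.3, 1.0]]))  # -> [0]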

The bottom line is that the combination of traditional ML and LLMs is a powerful new addition to our toolset.

An ongoing shift

The job of an engineer has evolved massively with the development of LLMs. We've gone from manually sifting through the model stack, which can eat away at a developer's day, to using prompt engineering as a primary method for sourcing coding shortcuts. Perfecting the art of prompting benefits not only the engineer but the wider business, preserving resources and saving development time.

At DoiT, we are seeing the results of this shift, as our team of 150+ customer reliability engineers (CREs) invests significant time and energy in mastering the skills needed to navigate this new landscape. Applications for our clients span industries such as healthcare and life sciences (HCLS), fintech, cybersecurity and more - we're excited to see what comes next.

Sascha Heyer is senior machine learning engineer at DoiT International