LARGE LANGUAGE MODELS AS OPTIMIZERS

Optimization is ubiquitous. While derivative-based algorithms have been powerful
tools for a wide range of problems, the absence of gradients poses challenges for many
real-world applications. In this work, we propose Optimization by PROmpting
(OPRO), a simple and effective approach to leveraging large language models (LLMs)
as optimizers, where the optimization task is described in natural language. In
each optimization step, the LLM generates new solutions from a prompt that
contains previously generated solutions with their values; the new solutions
are then evaluated and added to the prompt for the next optimization step. We first
showcase OPRO on linear regression and traveling salesman problems, then move
on to prompt optimization where the goal is to find instructions that maximize
the task accuracy. With a variety of LLMs, we demonstrate that the best prompts
optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K,
and by up to 50% on Big-Bench Hard tasks.
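The loop described above can be sketched in a few lines. This is a minimal toy sketch, not the paper's implementation: `llm_propose` is a hypothetical stand-in for the actual LLM call (here it just perturbs the best known solution), and the objective is a toy quadratic rather than a task from the paper. The meta-prompt construction mirrors the idea of showing the model an ordered trajectory of solution–score pairs.

```python
import random

def build_meta_prompt(history):
    # Present past solutions sorted by score (best last), mimicking the
    # solution-score trajectory OPRO places in the meta-prompt.
    lines = ["Here are previous solutions and their scores:"]
    for sol, score in sorted(history, key=lambda p: p[1]):
        lines.append(f"solution: {sol:.3f}, score: {score:.3f}")
    lines.append("Propose a new solution with a higher score.")
    return "\n".join(lines)

def evaluate(sol):
    # Toy objective (an assumption for illustration): maximize -(x - 3)^2,
    # whose optimum is at x = 3.
    return -(sol - 3.0) ** 2

def llm_propose(meta_prompt, history):
    # Hypothetical stand-in for the LLM call: perturb the current best
    # solution. A real OPRO step would send meta_prompt to an LLM.
    best = max(history, key=lambda p: p[1])[0]
    return best + random.gauss(0.0, 0.5)

def opro(steps=200, seed=0):
    random.seed(seed)
    history = [(0.0, evaluate(0.0))]  # initial solution and its value
    for _ in range(steps):
        prompt = build_meta_prompt(history)
        candidate = llm_propose(prompt, history)
        # Evaluate the new solution and add it to the trajectory.
        history.append((candidate, evaluate(candidate)))
    return max(history, key=lambda p: p[1])

best_sol, best_score = opro()
print(best_sol, best_score)
```

With an LLM in place of the stub, the same structure applies to prompt optimization: each "solution" is a candidate instruction, and its "score" is the task accuracy it achieves.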
