Researchers have used the GPT-4 large language model (LLM) to seek out new combinations of existing drugs that could serve as novel treatments for cancer.
The team said that using an ‘AI scientist’ in this way, in combination with laboratory automation, could be a promising new route to drug discovery - and could help usher in a new era of personalised medicine.
The research team, led by the University of Cambridge, used GPT-4 to seek out potential new cancer drugs based on existing scientific literature. They asked the LLM to avoid standard cancer drugs, to identify drugs that would attack cancer cells but not harm healthy cells, and to focus on drugs that were affordable and already approved by regulators.
The drug combinations suggested by GPT-4 were then tested by human scientists, both in combination and individually, to measure their effectiveness against breast cancer cells. In the lab-based test, three of the 12 drug combinations suggested by GPT-4 worked better than current breast cancer drugs, the team said. The LLM then learned from these tests and suggested a further four combinations, three of which also showed promising results, they added.
The team said the work was the first instance of a closed-loop system where experimental results guided an LLM, and LLM outputs were then used to guide further experiments.
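The closed-loop cycle described above can be sketched in outline: the LLM proposes drug combinations, the lab tests them, and the experimental results are fed back to steer the next round of suggestions. This is purely an illustrative sketch, not the team's actual code; the function names are hypothetical, and `llm` and `assay` stand in for the model and the automated lab assay.

```python
def propose_combinations(llm, feedback):
    """Ask the LLM for candidate drug pairs, steering it with the study's
    constraints and with feedback from earlier experiments."""
    prompt = ("Suggest affordable, regulator-approved, non-standard drug "
              "pairs that attack cancer cells but spare healthy ones. "
              + feedback)
    return llm(prompt)

def closed_loop(llm, assay, rounds=2):
    """Alternate between LLM suggestion and lab testing for `rounds` cycles."""
    feedback = ""
    results = []
    for _ in range(rounds):
        for combo in propose_combinations(llm, feedback):
            # Each suggested combination is tested experimentally
            results.append((combo, assay(combo)))
        # Experimental outcomes guide the next round of LLM suggestions
        feedback = "Previous results: " + repr(results)
    return results
```

With stub callables standing in for the model and the lab, each round appends a scored combination to `results`, mirroring the two rounds of suggestions reported by the team.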
The research, published in the Journal of the Royal Society Interface, noted that the real cost of scientific research comes down to two things: the intellectual effort of human scientists and the financial burden of laboratory experiments. “With the rapid advancements in AI, the cost of machine-driven scientific intelligence is decreasing. It is inevitable that LLMs will play an increasingly significant role in scientific discovery. We are already witnessing the emergence of AI scientists and AI-assisted researchers, signalling a shift in the way science is conducted,” the paper said. “By leveraging the vast knowledge encoded in LLMs, scientists can explore regions of the hypothesis space that human researchers may miss or find more difficult to explore due to biases, exhaustion, or other factors,” it added.
Professor Ross King from Cambridge’s Department of Chemical Engineering and Biotechnology, who led the research, said he had been surprised by the drug combinations that the LLM had predicted. “I don’t think anyone would have found them apart from randomly trying things - I don’t think it would be obvious to any human scientist to try these,” he said.
AI models are often criticised for hallucinations - generating faulty information not grounded in their sources - but that was not an issue in this case, as the LLM’s predictions were then tested in the lab. “There’s a lot of talk about hallucinations in large language models, that they just make things up,” he told C&I. “That doesn’t matter too much if they are forming scientific hypotheses, because the real test of a scientific hypothesis is whether it works in experiment or not. The actual mechanism of how the large language model comes to the hypothesis is a bit opaque but it doesn’t matter - we took it and tested it in the lab and it works,” he said.
In all, GPT-4 helped identify six promising drug pairs, all tested through lab experiments. Among them, simvastatin (commonly used to lower cholesterol) combined with disulfiram (used to treat alcohol dependence) stood out against breast cancer cells. Some of these combinations showed potential for further research into therapeutic repurposing, the researchers said. While not traditionally associated with cancer care, these drugs could become cancer treatments, although they would first have to go through extensive clinical trials, they noted.
King said that the use of AI, laboratory automation and robots is pushing down the cost of science, potentially opening the way to much more personalised approaches to disease in future.
“Every cancer in every patient is different, the genetics of the patient are different, the particular mutations of that particular cancer are going to be different. That means that to decide on the best treatment for any particular patient is essentially a research project,” he said.
“Science is getting cheaper to do, so we can tackle bigger problems - ones where you need to have more research done. Particularly in biological systems where they are very complicated, you can’t understand them just by a few experiments, you need to do orders of magnitude more experiments. Thanks to automation we can actually test a lot more things, so we aren’t limited to what you can pipette in the lab anymore,” he said.
This could mean that in future, instead of doing everything manually, scientific researchers will manage a series of AI agents and robots that do the work for them.
“This is not automation replacing scientists, but a new kind of collaboration,” said co-author Dr Hector Zenil from King’s College London.
“Guided by expert prompts and experimental feedback, the AI functioned like a tireless research partner - rapidly navigating an immense hypothesis space and proposing ideas that would take humans alone far longer to reach.”
More on robots, AI and science:
- Letting robots into the lab could speed up science. Here's how
- Life sciences and pharma priorities in 2025: M&A, AI and China
- Chemistry lessons for AI [Premium]
- AI designed drugs in trials this year, says Google DeepMind chief
- Chemistry needs to be more sustainable. Here’s what needs to change, say top scientists
- AI plan to improve drug production [Premium]