All posts by Adam Brinded

Adolescents who sleep longer perform better at cognitive tasks

Teenager asleep and wrapped in a blanket
Credit: harpazo_hope (Getty Images)

Adolescents who sleep for longer – and from an earlier bedtime – than their peers tend to have improved brain function and perform better at cognitive tests, researchers from the UK and China have shown.

Even though the differences in the amount of sleep that each group got was relatively small, we could still see differences in brain structure and activity and in how well they did at tasks
Barbara Sahakian

But the study of adolescents in the US also showed that even those with better sleeping habits were not reaching the amount of sleep recommended for their age group.

Sleep plays an important role in helping our bodies function. It is thought that while we are asleep, toxins that have built up in our brains are cleared out, and brain connections are consolidated and pruned, enhancing memory, learning, and problem-solving skills. Sleep has also been shown to boost our immune systems and improve our mental health.

During adolescence, our sleep patterns change. We tend to start going to bed later and sleeping less, which affects our body clocks. All of this coincides with a period of important development in our brain function and cognitive development. The American Academy of Sleep Medicine says that the ideal amount of sleep during this period is between eight and 10 hours.

Professor Barbara Sahakian from the Department of Psychiatry at the University of Cambridge said: “Regularly getting a good night’s sleep is important in helping us function properly, but while we know a lot about sleep in adulthood and later life, we know surprisingly little about sleep in adolescence, even though this is a crucial time in our development. How long do young people sleep for, for example, and what impact does this have on their brain function and cognitive performance?”

Studies looking at how much sleep adolescents get usually rely on self-reporting, which can be inaccurate. To get around this, a team led by researchers at Fudan University, Shanghai, and the University of Cambridge turned to data from the Adolescent Brain Cognitive Development (ABCD) Study, the largest long-term study of brain development and child health in the United States.

As part of the ABCD Study, more than 3,200 adolescents aged 11-12 years old had been given Fitbits, allowing the researchers to look at objective data on their sleep patterns and to compare it against brain scans and results from cognitive tests. The team double-checked their results against two additional groups of 13- to 14-year-olds, totalling around 1,190 participants. The results are published today in Cell Reports.

The team found that the adolescents could be divided broadly into one of three groups:

Group One, accounting for around 39% of participants, slept an average (mean) of 7 hours 10 mins. They tended to go to bed and fall asleep the latest and wake up the earliest.

Group Two, accounting for 24% of participants, slept an average of 7 hours 21 mins. They had average levels across all sleep characteristics.

Group Three, accounting for 37% of participants, slept an average of 7 hours 25 mins. They tended to go to bed and fall asleep the earliest and had lower heart rates during sleep.

Although the researchers found no significant differences in school achievement between the groups, when it came to cognitive tests looking at aspects such as vocabulary, reading, problem solving and focus, Group Three performed better than Group Two, which in turn performed better than Group One.

Group Three also had the largest brain volume and best brain function, with Group One having the smallest volume and poorest function.

Professor Sahakian said: “Even though the differences in the amount of sleep that each group got was relatively small, at just over a quarter-of-an-hour between the best and worst sleepers, we could still see differences in brain structure and activity and in how well they did at tasks. This drives home to us just how important it is to have a good night’s sleep at this important time in life.”

First author Dr Qing Ma from Fudan University said: “Although our study can’t answer conclusively whether young people have better brain function and perform better at tests because they sleep better, there are a number of studies that would support this idea. For example, research has shown the benefits of sleep on memory, especially on memory consolidation, which is important for learning.”

The researchers also assessed the participants’ heart rates, finding that Group Three had the lowest heart rates across the sleep states and Group One the highest. Lower heart rates are usually a sign of better health, whereas higher rates often accompany poor sleep quality, such as restless sleep, frequent awakenings and excessive daytime sleepiness.

Because the ABCD Study is a longitudinal study – that is, one that follows its participants over time – the team was able to show that the differences in sleep patterns, brain structure and function, and cognitive performance tended to be present two years before and two years after the snapshot that they looked at.

Senior author Dr Wei Cheng from Fudan University added: “Given the importance of sleep, we now need to look at why some children go to bed later and sleep less than others. Is it because of playing videogames or smartphones, for example, or is it just that their body clocks do not tell them it’s time to sleep until later?”

The research was supported by the National Key R&D Program of China, National Natural Science Foundation of China, National Postdoctoral Foundation of China and Shanghai Postdoctoral Excellence Program. The ABCD Study is supported by the National Institutes of Health.

Reference

Ma, Q et al. Neural correlates of device-based sleep characteristics in adolescents. Cell Reports; 22 Apr 2025; DOI: 10.1016/j.celrep.2025.115565



The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

source: cam.ac.uk

Charles Darwin Archive recognised by UNESCO

Two of Charles Darwin’s pocket notebooks in Cambridge University Library
Credit: Cambridge University Library

Documentary heritage relating to the life and work of Charles Darwin has been recognised on the prestigious UNESCO International Memory of the World Register, highlighting its critical importance to global science and the necessity of its long-term preservation and accessibility.

We could not be prouder of UNESCO’s recognition of this remarkable documentary heritage
Jessica Gardner

The UNESCO Memory of the World Programme serves as the documentary heritage equivalent of UNESCO World Heritage Sites, protecting invaluable records that tell the story of human civilisation.

A collaboration between Cambridge University Library, the Natural History Museum, the Linnean Society of London, English Heritage’s Down House, the Royal Botanic Gardens, Kew and the National Library of Scotland, the Charles Darwin documentary heritage archive provides a unique window into the life and work of one of the world’s most influential natural scientists.

The complete archive, comprising over 20,000 items across the six major institutions, includes Darwin’s records illustrating the development of his ground-breaking theory of evolution and extensive global travels.

At Cambridge University Library, the Darwin Archive is a significant collection of Darwin’s books, experimental notes, correspondence, and photographs, representing his scientific and personal activities throughout his life.

The collection in Cambridge includes Darwin’s pocket notebooks recording early statements of key ideas contributing to his theory of evolution, notably that species are not stable. These provide important insights into the development of his thought and feature the iconic ‘Tree of Life’ diagram which he drew on his return from the voyage of the HMS Beagle.

The Linnean Society of London holds several of Darwin’s letters, manuscripts and books. The Society is also home to John Collier’s original iconic portrait of Charles Darwin, commissioned by the Society and painted in 1883 to commemorate the first reading of the theory of evolution by natural selection at a Linnean Society meeting in 1858.

At the Natural History Museum, a letter written to his wife Emma in 1844 provides insight into the significance Darwin attached to his species theory research and contains instructions on what she should do in the case of his sudden death. It sits alongside other letters to Museum staff and family members which demonstrate the broad scope of his scientific thinking, research and communication, ranging from caterpillars to volcanoes and dahlias to ants, and the taking of photographs for his third publication, The Expression of the Emotions in Man and Animals.

Correspondence with Darwin’s publisher John Murray, held at the National Library of Scotland, documents the transformation of his research into print, including the ground-breaking On the Origin of Species.

At the Royal Botanic Gardens, Kew, documents include a highly significant collection of 44 letters from Darwin to Professor John Stevens Henslow, sent around the time of the HMS Beagle expedition, detailing his travels and the genesis of his theory of evolution as he came into contact with new plants, wildlife and fossils, as well as a rare sketch of the orchid Gavilea patagonica made by Darwin. Other items include a letter from Darwin to his dear friend Joseph Hooker, Director of Kew, in which he requests cotton seeds from Kew’s collections for his research.

Down House (English Heritage) in Kent was both a family home and a place of work where Darwin pursued his scientific interests, carried out experiments, and researched and wrote his many ground-breaking publications until his death in 1882.

The extensive collection amassed by Darwin during his 40 years at Down paints a picture of Darwin’s professional and personal life and the intersection of the two. The archive here includes over 200 books from Darwin’s personal collection, account books, diaries, the Journal of the Voyage of the Beagle MSS, and Beagle notebooks and letters. More personal items include scrapbooks, Emma Darwin’s photograph album and Charles Darwin’s will. The collection at Down House has been assembled mainly through the generous donations of Darwin’s descendants.

This inscription marks a significant milestone in recognising Darwin’s legacy, as it brings together materials held by multiple institutions across the UK for the first time, ensuring that his work’s scientific, cultural, and historical value is preserved for future generations.

In line with the ideals of the UNESCO Memory of the World Programme, much of the Darwin archive can be viewed by the public at the partner institutions and locations.

The UNESCO International Memory of the World Register includes some of the UK’s most treasured documentary heritage, such as the Domesday Book and the Shakespeare Documents, alongside more contemporary materials, including the personal archive of Sir Winston Churchill. The Charles Darwin archive now joins this esteemed list, underscoring its historical, scientific, and cultural significance.

The inscription of the Charles Darwin archive comes as part of UNESCO’s latest recognition of 75 archives worldwide onto the International Memory of the World Register.

These newly inscribed collections include a diverse range of documents, such as the Draft of the International Bill of Human Rights, the papers of Friedrich Nietzsche, and the Steles of Shaolin Temple (566-1990) in China.

Baroness Chapman of Darlington, Minister of State for International Development, Latin America and the Caribbean at the Foreign, Commonwealth & Development Office (FCDO), said: “The recognition of the Charles Darwin archive on UNESCO’s International Memory of the World Register is a proud moment for British science and heritage.

“Darwin’s work fundamentally changed our understanding of the natural world and continues to inspire scientific exploration to this day. By bringing together extraordinary material from our world class British institutions, this archive ensures that Darwin’s groundbreaking work remains accessible to researchers, students, and curious minds across the globe.”

Ruth Padel, FRSL, FZS, poet, conservationist, great-great-grand-daughter of Charles Darwin and King’s College London Professor of Poetry Emerita, said: “How wonderful to see Darwin’s connections to so many outstanding scientific and cultural institutions in the UK reflected in the recognition of his archive on the UNESCO Memory of the World International Register. All these institutions are open to the public so everyone will have access to his documentary heritage.”

Dr Jessica Gardner, University Librarian and Director of Library Services at Cambridge University Libraries (CUL) said: “For all Charles Darwin gave the world, we are delighted by the UNESCO recognition in the Memory of the World of the exceptional scientific and heritage significance of his remarkable archive held within eminent UK institutions.

“Cambridge University Library is home to over 9,000 letters to and from Darwin, as well as his handwritten experimental notebooks, publications, and photographs which have together fostered decades of scholarship and public enjoyment through exhibition, education for schools, and online access.

“We could not be prouder of UNESCO’s recognition of this remarkable documentary heritage at the University of Cambridge, where Darwin was a student at Christ’s College and where his family connections run deep across the city, and are reflected in his namesake, Darwin College.”

Read the full, illustrated version of this story on the University Library’s site.




source: cam.ac.uk

Throwing a ‘spanner in the works’ of our cells’ machinery could help fight cancer, fatty liver disease… and hair loss

Bald young man, front view
Credit: bob_bosewell (Getty Images)

Fifty years since its discovery, scientists have finally worked out how a molecular machine found in mitochondria, the ‘powerhouses’ of our cells, allows us to make the fuel we need from sugars, a process vital to all life on Earth.

Drugs inhibiting the function of the carrier can remodel how mitochondria work, which can be beneficial in certain conditions
Edmund Kunji

Scientists at the Medical Research Council (MRC) Mitochondrial Biology Unit, University of Cambridge, have worked out the structure of this machine and shown how it operates like the lock on a canal to transport pyruvate – a molecule generated in the body from the breakdown of sugars – into our mitochondria.

Known as the mitochondrial pyruvate carrier, this molecular machine was first proposed to exist in 1971, but it has taken until now for scientists to visualise its structure at the atomic scale using cryo-electron microscopy, a technique used to magnify an image of an object to around 165,000 times its real size. Details are published today in Science Advances.

Dr Sotiria Tavoulari, a Senior Research Associate from the University of Cambridge, who first determined the composition of this molecular machine, said: “Sugars in our diet provide energy for our bodies to function. When they are broken down inside our cells they produce pyruvate, but to get the most out of this molecule it needs to be transferred inside the cell’s powerhouses, the mitochondria. There, it helps increase 15-fold the energy produced in the form of the cellular fuel ATP.”

Maximilian Sichrovsky, a PhD student at Hughes Hall and joint first author of the study, said: “Getting pyruvate into our mitochondria sounds straightforward, but until now we haven’t been able to understand the mechanism of how this process occurs. Using state-of-the-art cryo-electron microscopy, we’ve been able to show not only what this transporter looks like, but exactly how it works. It’s an extremely important process, and understanding it could lead to new treatments for a range of different conditions.”

Mitochondria are surrounded by two membranes. The outer one is porous, and pyruvate can easily pass through, but the inner membrane is impermeable to pyruvate. To transport pyruvate into the mitochondrion, first an outer ‘gate’ of the carrier opens, allowing pyruvate to enter the carrier. This gate then closes, and the inner gate opens, allowing the molecule to pass through into the mitochondrion.

“It works like the locks on a canal but on the molecular scale,” said Professor Edmund Kunji from the MRC Mitochondrial Biology Unit, and a Fellow at Trinity Hall, Cambridge. “There, a gate opens at one end, allowing the boat to enter. It then closes and the gate at the opposite end opens to allow the boat smooth transit through.”
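To make the two-gate mechanism concrete, here is a minimal toy sketch in Python of the alternating-access (‘canal lock’) idea described above. It is purely illustrative: the class, method names and two-step sequence are invented for this sketch, and it is not a model of the real protein’s structural dynamics.

```python
class PyruvateCarrier:
    """Toy model of alternating-access ('canal lock') transport.

    Only one gate may be open at a time, so pyruvate is passed from the
    intermembrane space into the mitochondrial matrix in two steps,
    never through a continuously open channel.
    """

    def __init__(self):
        self.outer_gate_open = False
        self.inner_gate_open = False
        self.cargo = None  # pyruvate currently held inside the carrier

    def load_from_intermembrane_space(self, molecule):
        assert not self.inner_gate_open, "inner gate must be closed first"
        self.outer_gate_open = True    # outer 'lock gate' opens
        self.cargo = molecule          # pyruvate enters the carrier
        self.outer_gate_open = False   # outer gate closes behind it

    def release_into_matrix(self):
        assert not self.outer_gate_open, "outer gate must be closed first"
        self.inner_gate_open = True    # inner 'lock gate' opens
        delivered, self.cargo = self.cargo, None
        self.inner_gate_open = False
        return delivered               # pyruvate delivered to the matrix


carrier = PyruvateCarrier()
carrier.load_from_intermembrane_space("pyruvate")
print(carrier.release_into_matrix())  # -> 'pyruvate'
```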

Because of its central role in controlling the way mitochondria operate to produce energy, this carrier is now recognised as a promising drug target for a range of conditions, including diabetes, fatty liver disease, Parkinson’s disease, specific cancers, and even hair loss.

Pyruvate is not the only energy source available to us. Our cells can also take their energy from fats stored in the body or from amino acids in proteins. Blocking the pyruvate carrier would force the body to look elsewhere for its fuel – creating opportunities to treat a number of diseases. In fatty liver disease, for example, blocking pyruvate entry into mitochondria could encourage the body to use the potentially dangerous fat that has been stored in liver cells.

Likewise, there are certain tumour cells that rely on pyruvate metabolism, such as in some types of prostate cancer. These cancers tend to be very ‘hungry’, producing excess pyruvate transport carriers to ensure they can feed more. Blocking the carrier could then starve these cancer cells of the energy they need to survive, killing them.

Previous studies have also suggested that inhibiting the mitochondrial pyruvate carrier may reverse hair loss. Activation of human follicle cells, which are responsible for hair growth, relies on metabolism and, in particular, the generation of lactate. When the carrier is blocked, pyruvate cannot enter the mitochondria in these cells and is instead converted to lactate.

Professor Kunji said: “Drugs inhibiting the function of the carrier can remodel how mitochondria work, which can be beneficial in certain conditions. Electron microscopy allows us to visualise exactly how these drugs bind inside the carrier to jam it – a spanner in the works, you could say. This creates new opportunities for structure-based drug design in order to develop better, more targeted drugs. This will be a real game changer.”

The research was supported by the Medical Research Council and was a collaboration with the groups of Professors Vanessa Leone at the Medical College of Wisconsin, Lucy Forrest at the National Institutes of Health, and Jan Steyaert at the Free University of Brussels.

Reference

Sichrovsky, M, Lacabanne, D, Ruprecht, JJ & Rana, JJ et al. Molecular basis of pyruvate transport and inhibition of the human mitochondrial pyruvate carrier. Sci Adv; 18 Apr 2025; DOI: 10.1126/sciadv.adw1489




source: cam.ac.uk

Extreme drought contributed to barbarian invasion of late Roman Britain, tree-ring study reveals

Milecastle 39 on Hadrian’s Wall
Credit: Adam Cuerden

Three consecutive years of drought contributed to the ‘Barbarian Conspiracy’, a pivotal moment in the history of Roman Britain, a new Cambridge-led study reveals. Researchers argue that Picts, Scotti and Saxons took advantage of famine and societal breakdown caused by an extreme period of drought to inflict crushing blows on weakened Roman defences in 367 CE. While Rome eventually restored order, some historians argue that the province never fully recovered.

Our findings provide an explanation for the catalyst of this major event.
Charles Norman

The ‘Barbarian Conspiracy’ of 367 CE was one of the most severe threats to Rome’s hold on Britain since the Boudiccan revolt three centuries earlier. Contemporary sources indicate that components of the garrison on Hadrian’s Wall rebelled and allowed the Picts to attack the Roman province by land and sea. Simultaneously, the Scotti from modern-day Ireland invaded broadly in the west, and Saxons from the continent landed in the south.

Senior Roman commanders were captured or killed, and some soldiers reportedly deserted and joined the invaders. Throughout the spring and summer, small groups roamed and plundered the countryside. Britain’s descent into anarchy was disastrous for Rome and it took two years for generals dispatched by Valentinian I, Emperor of the Western Roman Empire, to restore order. The final remnants of official Roman administration left Britain some 40 years later, around 410 CE.

The University of Cambridge-led study, published today in Climatic Change, used oak tree-ring records to reconstruct temperature and precipitation levels in southern Britain during and after the ‘Barbarian Conspiracy’ in 367 CE. Combining this data with surviving Roman accounts, the researchers argue that severe summer droughts in 364, 365 and 366 CE were a driving force in these pivotal events.

First author Charles Norman, from Cambridge’s Department of Geography, said: “We don’t have much archaeological evidence for the ‘Barbarian Conspiracy’. Written accounts from the period give some background, but our findings provide an explanation for the catalyst of this major event.”

The researchers found that southern Britain experienced an exceptional sequence of remarkably dry summers from 364 to 366 CE. In the period 350 to 500 CE, average monthly reconstructed rainfall in the main growing season (April–July) was 51 mm. But in 364 CE it fell to just 29 mm; 365 CE was even worse at 28 mm; and the 37 mm recorded the following year kept the area in crisis.
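For readers who want the arithmetic spelled out, the short Python snippet below computes how far each drought year fell below the 350 to 500 CE growing-season average of 51 mm, using only the figures quoted above.

```python
# Reconstructed mean April-July monthly rainfall (mm), from the figures above.
baseline = 51          # 350-500 CE average
drought_years = {364: 29, 365: 28, 366: 37}

for year, rainfall in drought_years.items():
    deficit_pct = 100 * (baseline - rainfall) / baseline
    print(f"{year} CE: {rainfall} mm, {deficit_pct:.0f}% below the long-term average")
# 364 CE: 29 mm, 43% below the long-term average
# 365 CE: 28 mm, 45% below the long-term average
# 366 CE: 37 mm, 27% below the long-term average
```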

Professor Ulf Büntgen, from Cambridge’s Department of Geography, said: “Three consecutive droughts would have had a devastating impact on the productivity of Roman Britain’s most important agricultural region. As Roman writers tell us, this resulted in food shortages with all of the destabilising societal effects this brings.”

Between 1836 and 2024 CE, southern Britain experienced droughts of a similar magnitude only seven times – mostly in recent decades – and none of these occurred in consecutive years, emphasising how exceptional these droughts were in Roman times. The researchers identified no other major droughts in southern Britain in the period 350–500 CE and found that other parts of northwestern Europe escaped these conditions.

Roman Britain’s main crops were spelt wheat and six-row barley. Because the province had a wet climate, sowing these crops in spring was more viable than in winter, but this left them vulnerable to late spring and early summer moisture deficits, and early summer droughts could lead to total crop failure.

The researchers point to surviving accounts written by Roman chroniclers to corroborate these drought-driven grain deficits. By 367 CE, Ammianus Marcellinus described the population of Britain as in the ‘utmost conditions of famine’.

“Drought from 364 to 366 CE would have impacted spring-sown crop growth substantially, triggering poor harvests,” Charles Norman said. “This would have reduced the grain supply to Hadrian’s Wall, providing a plausible motive for the rebellion there which allowed the Picts into northern Britain.”

The study suggests that given the crucial role of grain in the contract between soldiers and the army, grain deficits may have contributed to other desertions in this period, and therefore a general weakening of the Roman army in Britain. In addition, the geographic isolation of Roman Britain likely combined with the severity of the prolonged drought to reduce the ability of Rome to alleviate the deficits.

Ultimately the researchers argue that military and societal breakdown in Roman Britain provided an ideal opportunity for peripheral tribes, including the Picts, Scotti and Saxons, to invade the province en masse with the intention of raiding rather than conquest. Their finding that the most severe conditions were restricted to southern Britain undermines the idea that famines in other provinces might have forced these tribes to invade.

Andreas Rzepecki, from the Generaldirektion Kulturelles Erbe Rheinland-Pfalz, said: “Our findings align with the accounts of Roman chroniclers, and the seemingly coordinated nature of the ‘Conspiracy’ suggests an organised movement of the strong against the weak, rather than the more chaotic assault one would expect had the invaders been acting out of desperation.”

“The prolonged and extreme drought seems to have occurred during a particularly poor period for Roman Britain, in which food and military resources were being stripped for the Rhine frontier, while immigratory pressures increased.”

“These factors limited resilience, and meant a drought-induced, partial military rebellion and subsequent external invasion were able to overwhelm the weakened defences.”

The researchers expanded their climate-conflict analysis to the entire Roman Empire for the period 350–476 CE. They reconstructed the climate conditions immediately before and after 106 battles and found that a statistically significant number of battles were fought following dry years.
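The paper’s statistical method is not described in this article. As a rough, hypothetical illustration of how such an association could be tested, the sketch below compares the share of battles preceded by a dry year with the overall frequency of dry years using a one-sided binomial test; the battle years and drought indicator are placeholder data, not the study’s reconstruction.

```python
from scipy.stats import binomtest

# Placeholder inputs: a drought indicator per year and a list of battle years.
# In the real analysis these would come from the tree-ring reconstruction
# and the historical battle catalogue for 350-476 CE.
dry_year = {350 + i: (i % 4 == 0) for i in range(127)}   # toy: every 4th year is 'dry'
battle_years = [363, 367, 371, 378, 394, 406, 410, 415, 423, 451]

n_battles = len(battle_years)
k_after_dry = sum(dry_year[y - 1] for y in battle_years)  # battles preceded by a dry year
p_dry = sum(dry_year.values()) / len(dry_year)            # base rate of dry years

result = binomtest(k_after_dry, n_battles, p_dry, alternative="greater")
print(f"{k_after_dry}/{n_battles} battles followed a dry year "
      f"(base rate {p_dry:.2f}); p = {result.pvalue:.3f}")
```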

Tatiana Bebchuk, from Cambridge’s Department of Geography, said: “The relationship between climate and conflict is becoming increasingly clear in our own time so these findings aren’t just important for historians. Extreme climate conditions lead to hunger, which can lead to societal challenges, which eventually lead to outright conflict.”

Charles Norman, Ulf Büntgen, Paul Krusic and Tatiana Bebchuk are based at the Department of Geography, University of Cambridge; Lothar Schwinden and Andreas Rzepecki are from the Generaldirektion Kulturelles Erbe Rheinland-Pfalz in Trier. Ulf Büntgen is also affiliated with the Global Change Research Institute, Czech Academy of Sciences and the Department of Geography, Masaryk University in Brno.

Reference

C Norman, L Schwinden, P Krusic, A Rzepecki, T Bebchuk, U Büntgen, ‘Droughts and conflicts during the late Roman period’, Climatic Change (2025). DOI: 10.1007/s10584-025-03925-4

Funding

Charles Norman was supported by Wolfson College, University of Cambridge (John Hughes PhD Studentship). Ulf Büntgen received funding from the Czech Science Foundation (# 23-08049S; Hydro8), the ERC Advanced Grant (# 882727; Monostar), and the ERC Synergy Grant (# 101118880; Synergy-Plague).




source: cam.ac.uk

Mouse study suggests a common diabetes drug may prevent leukaemia

Brown lab mouse on blue gloved hand
Credit: University of Cambridge

Metformin, a widely used and affordable diabetes drug, could prevent a form of acute myeloid leukaemia in people at high risk of the disease, a study in mice has suggested. Further research in clinical trials will be needed to confirm this works for patients.

We’ve done the extensive research all the way from cell-based studies to human data, so we’re now at the point where we have made a strong case for moving ahead with clinical trials
Brian Huntly

Around 3,100 people are diagnosed with acute myeloid leukaemia (AML) each year in the UK. It is an aggressive form of blood cancer that is very difficult to treat. Thanks to recent advances, individuals at high risk of AML can be identified years in advance using blood tests and blood DNA analysis, but there’s no suitable treatment that can prevent them from developing the disease.

In this study, Professor George Vassiliou and colleagues at the University of Cambridge investigated how to prevent abnormal blood stem cells with genetic changes from progressing to become AML. The work focused on the most common genetic change, which affects a gene called DNMT3A and is responsible for starting 10-15% of AML cases.

Professor Vassiliou, from the Cambridge Stem Cell Institute at the University of Cambridge and Honorary Consultant Haematologist at Cambridge University Hospitals NHS Foundation Trust (CUH), co-led the study. He said: “Blood cancer poses unique challenges compared to solid cancers like breast or prostate, which can be surgically removed if identified early. With blood cancers, we need to identify people at risk and then use medical treatments to stop cancer progression throughout the body.”

The research team examined blood stem cells from mice with the same changes in DNMT3A as seen in the pre-cancerous cells in humans. Using a genome-wide screening technique, they showed that these cells depend more on mitochondrial metabolism than healthy cells, making this a potential weak spot. The researchers went on to confirm that metformin, and other mitochondria-targeting drugs, substantially slowed the growth of mutation-bearing blood cells in mice. Further experiments also showed that metformin could have the same effect on human blood cells with the DNMT3A mutation.

Dr Malgorzata Gozdecka, Senior Research Associate at the Cambridge Stem Cell Institute and first author of the research said: “Metformin is a drug that impacts mitochondrial metabolism, and these pre-cancerous cells need this energy to keep growing. By blocking this process, we stop the cells from expanding and progressing towards AML, whilst also reversing other effects of the mutated DNMT3A gene.”

In addition, the study looked at data from over 412,000 UK Biobank volunteers and found that people taking metformin were less likely to have changes in the DNMT3A gene. This link remained even after accounting for factors that could have confounded the results, such as diabetes status and BMI.
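The article does not specify the statistical model behind this adjustment. One standard way to account for such confounders is logistic regression with the confounders included as covariates; the sketch below shows that approach on simulated data, and the column names (metformin_use, diabetes, bmi, age, dnmt3a_change) are hypothetical rather than actual UK Biobank field names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Made-up data standing in for UK Biobank fields; column names are hypothetical.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "metformin_use": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "bmi": rng.normal(27, 4, n),
    "age": rng.integers(40, 70, n),
})

# Simulate the outcome (carrying a DNMT3A change), made less likely by metformin use.
linpred = -3 + 0.4 * df["diabetes"] + 0.02 * df["bmi"] - 0.6 * df["metformin_use"]
df["dnmt3a_change"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

# Logistic regression: the metformin coefficient is the association after
# adjusting for diabetes status, BMI and age.
model = smf.logit("dnmt3a_change ~ metformin_use + diabetes + bmi + age", data=df).fit(disp=False)
print(np.exp(model.params["metformin_use"]))  # adjusted odds ratio for metformin use
```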

Professor Brian Huntly, Head of the Department of Haematology at the University of Cambridge, Honorary Consultant Haematologist at CUH, and joint lead author of the research, added: “Metformin appears highly specific to this mutation rather than being a generic treatment. That specificity makes it especially compelling as a targeted prevention strategy.

“We’ve done the extensive research all the way from cell-based studies to human data, so we’re now at the point where we have made a strong case for moving ahead with clinical trials. Importantly, metformin’s lack of toxicity will be a major advantage as it is already used by millions of people worldwide with a well-established safety profile.”

The results of the study, funded by Blood Cancer UK with additional support from Cancer Research UK, the Leukemia & Lymphoma Society (USA) and the Wellcome Trust, are published in Nature.

Dr Rubina Ahmed, Director of Research at Blood Cancer UK, said: “Blood cancer is the third biggest cancer killer in the UK, with over 280,000 people currently living with the disease. Our Blood Cancer Action plan shed light on the shockingly low survival for acute myeloid leukaemia, with only around 2 in 10 surviving for 5 years, and we urgently need better strategies to save lives. Repurposing safe, widely available drugs like metformin means we could potentially get new treatments to people faster, without the need for lengthy drug development pipelines.”

The next phase of this research will focus on clinical trials to test metformin’s effectiveness in people with changes in DNMT3A who are at increased risk of developing AML. With metformin already approved and widely used for diabetes, this repurposing strategy could dramatically reduce the time it takes to bring a new preventive therapy to patients.

Tanya Hollands, Research Information Manager at Cancer Research UK, who contributed funding for the lab-based screening in mice, said: “It’s important that we work to find new ways to slow down or prevent AML in people at high risk. Therefore, it’s positive that the findings of this study suggest a possible link between a commonly-used diabetes drug and prevention of AML progression in some people. While this early-stage research is promising, clinical trials are now needed to find out if this drug could benefit people. We look forward to seeing how this work progresses.”

Reference
Gozdecka, M et al. Mitochondrial metabolism sustains DNMT3A-R882-mutant clonal haematopoiesis. Nature; 16 Apr 2025; DOI: 10.1038/s41586-025-08980-6

Adapted from a press release from Blood Cancer UK




source: cam.ac.uk

Growing wildflowers on disused urban land can damage bee health

Chicory growing on unused land in Cleveland, USA
Credit: Sarah Scott

Wildflowers growing on land previously used for buildings and factories can accumulate lead, arsenic and other metal contaminants from the soil, which are consumed by pollinators as they feed, a new study has found.

Our results should not discourage people from planting wildflowers in towns and cities. But... it’s important to consider the history of the land and what might be in the soil.
Sarah Scott

The metals have previously been shown to damage the health of pollinators, which ingest them in nectar as they feed, leading to reduced population sizes and death. Even low nectar metal levels can have long-term effects, by affecting bees’ learning and memory – which impacts their foraging ability.

Researchers have found that common plants including white clover and bindweed, which are vital forage for pollinators in cities, can accumulate arsenic, cadmium, chromium and lead from contaminated soils.

Metal contamination is an issue in the soils of cities worldwide, with the level of contamination usually increasing with the age of a city. The metals come from a huge range of sources including cement dust and mining.

The researchers say soils in cities should be tested for metals before sowing wildflowers and if necessary, polluted areas should be cleaned up before new wildflower habitats are established.

The study highlights the importance of growing the right species of wildflowers to suit the soil conditions.

Reducing the risk of metal exposure is critical for the success of urban pollinator conservation schemes. The researchers say it is important to manage wildflower species that self-seed on contaminated urban land, for example by frequent mowing to limit flowering – which reduces the transfer of metals from the soil to the bees.

The results are published today in the journal Ecology and Evolution.

Dr Sarah Scott in the University of Cambridge’s Department of Zoology and first author of the report, said: “It’s really important to have wildflowers as a food source for the bees, and our results should not discourage people from planting wildflowers in towns and cities.

“We hope this study will raise awareness that soil health is also important for bee health. Before planting wildflowers in urban areas to attract bees and other pollinators, it’s important to consider the history of the land and what might be in the soil – and if necessary find out whether there’s a local soil testing and cleanup service available first.”

The study was carried out in the post-industrial US city of Cleveland, Ohio, which has over 33,700 vacant lots left as people have moved away from the area. In the past, iron and steel production, oil refining and car manufacturing went on there. But any land that was previously the site of human activity may be contaminated with traces of metals.

To get their results, the researchers extracted nectar from a range of self-seeded flowering plants that commonly attract pollinating insects, found growing on disused land across the city. They tested this for the presence of arsenic, cadmium, chromium and lead. Lead was consistently found at the highest concentrations, reflecting the state of the soils in the city.

The researchers found that different species of plant accumulate different amounts, and types, of the metals. Overall, the bright blue-flowered chicory plant (Cichorium intybus) accumulated the largest total metal concentration, followed by white clover (Trifolium repens), wild carrot (Daucus carota) and bindweed (Convolvulus arvensis). These plants are all vital forage for pollinators in cities – including cities in the UK – providing a consistent supply of nectar across locations and seasons.

There is growing evidence that wild pollinator populations have dropped by over 50% in the last 50 years, driven primarily by changes in land use and management across the globe. Climate change and pesticide use also play a role, but the primary cause of decline is the loss of flower-rich habitat.

Pollinators play a vital role in food production: many plants, including apple and tomato, require pollination in order to develop fruit. Natural ‘pollination services’ are estimated to add billions of dollars to global crop productivity.

Scott said: “Climate change feels so overwhelming, but simply planting flowers in certain areas can help towards conserving pollinators, which is a realistic way for people to make a positive impact on the environment.”

The research was funded primarily by the USDA National Institute of Food and Agriculture.

Reference
Scott, SB and Gardiner, MM: ‘Trace metals in nectar of important urban pollinator forage plants: A direct exposure risk to pollinators and nectar-feeding animals in cities.’ Ecology and Evolution, April 2025. DOI: 10.1002/ece3.71238




source: cam.ac.uk

Complete clean sweep for Cambridge at The Boat Race 2025

Credit: Row360

Cambridge is celebrating a complete clean sweep at The Boat Race 2025, with victories in all four openweight races and both lightweight races.

Thousands of spectators lined the banks of the River Thames on 13 April to witness a dramatic afternoon of action, with millions more following live on the BBC.

Cambridge Women secured their eighth consecutive win in the 79th Women’s Boat Race, extending their overall record to 49 victories to Oxford’s 30. The Men’s crew, too, were victorious in defending their title in the 170th edition of the event, notching up their 88th win, with Oxford sitting on 81.

Goldie, the Cambridge Men’s Reserve Crew, won the Men’s Reserve Race, while Blondie, the Cambridge Women’s Reserve Crew, won the Women’s Reserve Race. And the day before, the 2025 Lightweight Boat Race also saw two wins for Cambridge.

Cambridge’s Claire Collins said it was an incredible feeling to win the race. 

“This is so cool, it’s really an incredible honour to share this with the whole club,” she said.

The Women’s Race was initially stopped after an oar clash, but Umpire Sir Matthew Pinsent ordered a restart and the race resumed. Claire said that the crew had prepared for eventualities such as a restart and so were able to lean on their training when it happened.

“I had total confidence in the crew to regroup. Our focus was to get back on pace and get going as soon as possible and that’s what we did.”

For Cambridge Men’s President Luca Ferraro, it was his final Boat Race campaign, having raced in the Blue Boat for the last three years, winning the last two.

He said: “It was a great race. The guys really stepped up. That’s something that our Coach Rob Baker said to us before we went out there, that each of us had to step up individually and come together and play our part in what we were about to do. I couldn’t be prouder of the guys, they really delivered today.”

Professor Deborah Prentice, Vice-Chancellor of the University of Cambridge, congratulated all the crews following the wins.

“I am in awe of these students and what they have achieved, and what Cambridge University Boat Club has been able to create,” she said.

“These students are out in the early hours of the morning training and then trying to make it to 9am lectures. It’s so inspiring. And a complete clean sweep – this was an incredibly impressive showing by Cambridge, I am so proud of them.”

The Cambridge Blue Boats featured student athletes drawn from Christ’s College, Downing College, Emmanuel College, Gonville & Caius, Hughes Hall, Jesus College, Pembroke College, Peterhouse, St Edmund’s, and St John’s.




source: cam.ac.uk

Harmful effects of digital tech – the science ‘needs fixing’, experts argue

Illustration representing potential online harms
Credit: Nuthawut Somsuk via Getty

From social media to AI, online technologies are changing too fast for the scientific infrastructure used to gauge their public health harms, say two leaders in the field.

The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development
Dr Amy Orben

Scientific research on the harms of digital technology is stuck in a “failing cycle” that moves too slowly to allow governments and society to hold tech companies to account, according to two leading researchers in a new report published in the journal Science.

Dr Amy Orben from the University of Cambridge and Dr J. Nathan Matias from Cornell University say the pace at which new technology is deployed to billions of people has put unbearable strain on the scientific systems trying to evaluate its effects.

They argue that big tech companies effectively outsource research on the safety of their products to independent scientists at universities and charities who work with a fraction of the resources – while firms also obstruct access to essential data and information. This is in contrast to other industries where safety testing is largely done “in house”.

Orben and Matias call for an overhaul of “evidence production” assessing the impact of technology on everything from mental health to discrimination.  

Their recommendations include accelerating the research process, so that policy interventions and safer designs are tested in parallel with initial evidence gathering, and creating registries of tech-related harms informed by the public.    

“Big technology companies increasingly act with perceived impunity, while trust in their regard for public safety is fading,” said Orben, of Cambridge’s MRC Cognition and Brain Sciences Unit. “Policymakers and the public are turning to independent scientists as arbiters of technology safety.”

“Scientists like ourselves are committed to the public good, but we are asked to hold to account a billion-dollar industry without appropriate support for our research or the basic tools to produce good quality evidence quickly.”

“We must urgently fix this science and policy ecosystem so we can better understand and manage the potential risks posed by our evolving digital society,” said Orben.

‘Negative feedback cycle’

In the latest Science paper, the researchers point out that technology companies often follow policies of rapidly deploying products first and then looking to “debug” potential harms afterwards. This includes distributing generative AI products to millions before completing basic safety tests, for example.

When tasked with understanding potential harms of new technologies, researchers rely on “routine science” which – having driven societal progress for decades – now lags the rate of technological change to the extent that it is becoming at times “unusable”.  

With many citizens pressuring politicians to act on digital safety, Orben and Matias argue that technology companies use the slow pace of science and lack of hard evidence to resist policy interventions and “minimize their own responsibility”.

Even if research gets appropriately resourced, they note that researchers will be faced with understanding products that evolve at an unprecedented rate.

“Technology products change on a daily or weekly basis, and adapt to individuals. Even company staff may not fully understand the product at any one time, and scientific research can be out of date by the time it is completed, let alone published,” said Matias, who leads Cornell’s Citizens and Technology (CAT) Lab.

“At the same time, claims about the inadequacy of science can become a source of delay in technology safety when science plays the role of gatekeeper to policy interventions,” Matias said.

“Just as oil and chemical industries have leveraged the slow pace of science to deflect the evidence that informs responsibility, executives in technology companies have followed a similar pattern. Some have even allegedly refused to commit substantial resources to safety research without certain kinds of causal evidence, which they also decline to fund.” 

The researchers lay out the current “negative feedback cycle”:

Tech companies do not adequately resource safety research, shifting the burden to independent scientists who lack data and funding. This means high-quality causal evidence is not produced in required timeframes, which weakens government’s ability to regulate – further disincentivising safety research, as companies are let off the hook.

Orben and Matias argue that this cycle must be redesigned, and offer ways to do it.

Reporting digital harms

To speed up the identification of harms caused by online technologies, policymakers or civil society could construct registries for incident reporting, and encourage the public to contribute evidence when they experience harms.

Similar methods are already used in fields such as environmental toxicology where the public reports on polluted waterways, or vehicle crash reporting programs that inform automotive safety, for example.

“We gain nothing when people are told to mistrust their lived experience due to an absence of evidence when that evidence is not being compiled,” said Matias.

Existing registries, from mortality records to domestic violence databases, could also be augmented to include information on the involvement of digital technologies such as AI.

The paper’s authors also outline a “minimum viable evidence” system, in which policymakers and researchers adjust the “evidence threshold” required to show potential technological harms before starting to test interventions.

These evidence thresholds could be set by panels made up of affected communities, the public, or “science courts”: expert groups assembled to make rapid assessments.   

“Causal evidence of technological harms is often required before designers and scientists are allowed to test interventions to build a safer digital society,” said Orben. 

“Yet intervention testing can be used to scope ways to help individuals and society, and pinpoint potential harms in the process. We need to move from a sequential system to an agile, parallelised one.”

Under a minimum viable evidence system, if a company obstructs or fails to support independent research, and is not transparent about their own internal safety testing, the amount of evidence needed to start testing potential interventions would be decreased.

Orben and Matias also suggest learning from the success of “Green Chemistry”, which sees an independent body hold lists of chemical products ranked by potential for harm, to help incentivise markets to develop safer alternatives.

“The scientific methods and resources we have for evidence creation at the moment simply cannot deal with the pace of digital technology development,” Orben said.  

“Scientists and policymakers must acknowledge the failures of this system and help craft a better one before the age of AI further exposes society to the risks of unchecked technological change.”

Added Matias: “When science about the impacts of new technologies is too slow, everyone loses.”




source: cam.ac.uk

Cambridge research: First global bond index to address fossil fuel expansion

University of Cambridge researchers based at the Department for Land Economy have selected index provider Bloomberg Index Services Limited to launch the first global corporate bond index to cover fossil fuel producers, utilities, insurance, and financing, with the aim of driving investment to reduce real-economy emissions.

This is an enormously impactful project which showcases the high-quality research undertaken at Cambridge
Anthony Odgers, University of Cambridge Chief Financial Officer

This is a critical – and hugely challenging – moment for climate action. Legal and political pressures have paralysed asset managers and other financial service providers, leading to a recent wave of actors leaving investor climate coalitions. However, asset owners are increasingly seeing the need to take a leadership role in addressing climate change, which threatens the long-term future of their portfolios and the wider economy.

That’s why we are delighted to announce that Cambridge researchers based at the Department for Land Economy have selected index provider Bloomberg Index Services Limited to launch the first global corporate bond index to cover fossil fuel producers, utilities, insurance, and financing, with the aim of driving investment to reduce real-economy emissions.

You can read the University press release here.

“We are delighted that this project has reached such a key milestone,” said Professor Martin Dixon, Head of the Department of Land Economy. “As a multidisciplinary department with a focus on outstanding academic publication and teaching, this project has the potential to serve as a ‘systems demonstrator’ for ongoing research in this important area.”

Why a bond index?

The launch of the bond index by an 816-year-old institution is an unusual process and a tale worth telling. It began with a peer-reviewed paper by Dr Ellen Quigley, Principal Research Associate at Land Economy, exploring the case for evidence-based climate impact by institutional investors. This was followed by an internal feasibility study based at Jesus College, Cambridge (which continues to co-host the project), and supported by several other parts of the University.

With feasibility assessed, the team went out to global index providers to explore their interest. All of the leading players were interested in building this index, yet all grappled with a lack of access to data and the complexity of assessing companies based on their activities (e.g., whether they were building new fossil fuel infrastructure), not their business classification. An extensive Request for Proposals process resulted in naming Bloomberg Index Services Limited as our provider. The project aims to provide a genuine solution for asset owners looking to align their corporate debt instruments with their climate targets and to avoid both ineffective blanket interventions and greenwashing.

The central problem, on which the industry has faltered for decades, is how to manage the risk presented by a fossil fuel industry that continues to grow. Leading climate scenarios such as the International Energy Agency’s Net Zero by 2050 scenario are clear that fossil fuel expansion is inconsistent with the transition to a decarbonised economy.  With approximately 90% of new financing for fossil fuel expansion coming from bonds and bank loans, debt markets must be the focus of investor efforts to transition away from fossil fuel expansionism. Bonds offer a larger pool of capital than equities, and a greater proportion are purchased in the primary market, where companies gain access to new capital.

The past decade has seen a significant rise in passive investment strategies and therefore an increase in financial flows into index funds, which have as a consequence become significant ‘auto-allocators’ of capital. This research project aims to study the extent to which the new bond index influences cost, volume, and access to capital among companies who are seeking to build new fossil fuel infrastructure and delaying the phase-down of their operations. Bond markets are not just a key part of investor action on climate change: they are the very coalface of fossil fuel expansion, i.e. new gas, oil, and coal extraction and infrastructure.

“This is an enormously impactful project which showcases the high-quality research undertaken at Cambridge,” University of Cambridge Chief Financial Officer Anthony Odgers said.  “The index is a game-changer for the growing number of asset owners who invest in corporate debt and understand its impact on fossil fuel expansion, particularly the construction of new fossil fuel infrastructure such as coal- and gas-fired power plants which risk locking in fossil fuel usage for decades.”

“Once the index launches, Cambridge expects to invest some of its own money against financial products referencing it. This will enable us to align our fixed income holdings with our institution-wide objectives,” Odgers said.

There are currently no off-the-shelf products that allow for passive investments in global corporate bond markets without financing fossil fuel expansion, through fossil fuel production, utilities building new coal- and gas-fired power plants, and through the banks and insurers that continue to finance and underwrite these activities. By supporting the development of this ‘systems demonstrator’, we will be able to conduct essential research on the efficacy of such a lever.

“Instead of linear year-on-year reductions or blanket bans by business classification, the index methodology identifies companies that present the greatest systemic risks to investors, while ensuring that those companies that meet the criteria can rejoin the bond index,” said project leader Lily Tomson, a Senior Research Associate at Jesus College, Cambridge. 

Several years of close collaboration with leading global asset owners such as the California State Teachers’ Retirement System (CalSTRS), the Universities Superannuation Scheme (USS), the Swiss Federal Pension Fund PUBLICA and the United Nations Joint Staff Pension Fund (UNJSPF) provided the input and technical market expertise that underpin the index. Alongside the University of Cambridge, the United Nations Joint Staff Pension Fund will invest against the index at launch.

“Finally, large asset owners around the world have an index for this market that aims to discourage the expansion of fossil fuels,” said Pedro Guazo, Representative of the Secretary-General (RSG) for the investment of the UNJSPF assets.

Rules-based engagement: a lever for behaviour change

Debt benchmarks have a key role to play in any real effort to tackle the expansion of fossil fuels. This project is innovative because it focuses on exclusions and weightings of companies based on their current corporate activity, instead of relying on blanket exclusions by business classification (which generate no incentive to change behaviour). For example, a company might be classed as a fossil fuel company, but if it stops expanding new fossil fuel operations and aligns with an appropriate phase-down pathway, it has the opportunity to be included in the index and, as a result, to gain access to capital via funds that use the index.
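To make the contrast with blanket exclusions concrete, here is a minimal sketch (in Python) of what an activity-based eligibility rule of this kind could look like. The field names, flags and example issuers are invented for illustration; this is not the Bloomberg index methodology.

    from dataclasses import dataclass

    @dataclass
    class Issuer:
        name: str
        expanding_fossil_infrastructure: bool  # hypothetical flag, e.g. building new coal- or gas-fired plants
        on_phase_down_pathway: bool            # hypothetical flag: aligned with an appropriate phase-down pathway

    def eligible_for_index(issuer: Issuer) -> bool:
        # Activity-based rule: exclude issuers currently expanding fossil fuel
        # infrastructure; re-admit them once they stop expanding and align with
        # a phase-down pathway, rather than excluding whole business classifications.
        if issuer.expanding_fossil_infrastructure:
            return False
        return issuer.on_phase_down_pathway

    universe = [
        Issuer("UtilityA", expanding_fossil_infrastructure=True, on_phase_down_pathway=False),
        Issuer("UtilityB", expanding_fossil_infrastructure=False, on_phase_down_pathway=True),
    ]
    print([i.name for i in universe if eligible_for_index(i)])  # ['UtilityB']; UtilityA could rejoin by changing behaviour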

Across the project, we are using data sources that have never previously been used to build an index – for example, the Global Coal Exit List (GCEL) and Global Oil and Gas Exit List (GOGEL) from Urgewald. We are taking a novel approach that focuses investor attention on those actors that our framework considers ‘edge cases’: companies close to reaching, or moving away from, alignment with the index. Companies have the option of being (re-)included in the index if they change their behaviour to align with the rules of the index. Academic literature suggests this is a lever for behaviour change in equities, but as an approach it is new to debt market indices. This is one of many key hypotheses that this project tests. We are convening a community of leading global academics who will support the creation of this new form of rules-based bondholder engagement.

This bond index project is one of a suite of actions rooted in academic research and collaboration that have been developed by the collegiate University. Alongside 74 other higher education institutions, Cambridge is delivering a parallel project focused on cash deposits and money market funds. We will continue to conduct research as the associated new products begin to operate through 2025.

At a time when climate damage is growing rapidly and is visible in news stories around the world, many actors across investment markets are looking for a clear path to take necessary action. As an academic institution and a long-term investor, the University of Cambridge is committed to supporting evidence-based research and action on climate change.

The bond index will be launched later this year. If you are interested in finding out more about the project or the team’s research, contact us here: bondindex@landecon.cam.ac.uk.



The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways – on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

source: cam.ac.uk

Scientists create ‘metal detector’ to hunt down tumours

Serena Nik-Zainal at the Early Cancer Institute
Serena Nik-Zainal at the Early Cancer Institute
Credit: University of Cambridge

Cambridge researchers have created a ‘metal detector’ algorithm that can hunt down vulnerable tumours, in a development that could one day revolutionise the treatment of cancer.

Genomic sequencing is now far faster and cheaper than ever before. We are getting closer to the point where getting your tumour sequenced will be as routine as a scan or blood test
Serena Nik-Zainal

In a paper published today in Nature Genetics, scientists at the University of Cambridge and NIHR Cambridge Biomedical Research Centre analysed the full DNA sequence of 4,775 tumours from seven types of cancer. They used that data from Genomics England’s 100,000 Genomes Project to create an algorithm capable of identifying tumours with faults in their DNA that make them easier to treat.

The algorithm, called PRRDetect, could one day help doctors work out which patients are more likely to have successful treatment. That could pave the way for more personalised treatment plans that increase people’s chances of survival.

The research was funded by Cancer Research UK and the National Institute for Health and Care Research (NIHR).

Professor Serena Nik-Zainal  from the Early Cancer Institute at the University of Cambridge, lead author of the study, said: “Genomic sequencing is now far faster and cheaper than ever before. We are getting closer to the point where getting your tumour sequenced will be as routine as a scan or blood test.

“To use genomics most effectively in the clinic, we need tools which give us meaningful information about how a person’s tumour might respond to treatment. This is especially important in cancers where survival is poorer, like lung cancer and brain tumours.

“Cancers with faulty DNA repair are more likely to be treated successfully. PRRDetect helps us better identify those cancers and, as we sequence more and more cancers routinely in the clinic, it could ultimately help doctors better tailor treatments to individual patients.”

The research team looked for patterns in DNA created by so-called ‘indel’ mutations, in which letters are inserted or deleted from the normal DNA sequence.  

They found unusual patterns of indel mutations in cancers that had faulty DNA repair mechanisms – known as ‘post-replicative repair dysfunction’ or PRRd. Using this information, the scientists developed PRRDetect to allow them to identify tumours with this fault from a full DNA sequence.
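The PRRDetect classifier itself is described in the Nature Genetics paper; the snippet below is only a rough sketch of the general idea of scoring a tumour from the relative abundance of its indel mutation patterns. The signature names, weights and threshold are hypothetical placeholders.

    import math

    # Hypothetical indel signature exposures for one tumour (fractions of its indels
    # attributed to each pattern); in practice these come from fitting signatures
    # to the whole-genome sequence.
    tumour_exposures = {"indel_slippage": 0.45, "indel_repair_dysfunction_A": 0.30,
                        "indel_repair_dysfunction_B": 0.15, "indel_other": 0.10}

    # Hypothetical weights: PRRd-associated patterns push the score up.
    weights = {"indel_repair_dysfunction_A": 4.0, "indel_repair_dysfunction_B": 3.0,
               "indel_slippage": -1.0, "indel_other": 0.0}
    bias = -1.5

    def prrd_score(exposures):
        # Logistic score between 0 and 1: higher values suggest
        # post-replicative repair dysfunction (PRRd).
        z = bias + sum(weights.get(sig, 0.0) * frac for sig, frac in exposures.items())
        return 1.0 / (1.0 + math.exp(-z))

    print(f"PRRd score: {prrd_score(tumour_exposures):.2f}")  # tumours above a chosen threshold would be flagged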

PRRd tumours are more sensitive to immunotherapy, a type of cancer treatment that uses the body’s own immune system to attack cancer cells. The scientists hope that the PRRd algorithm could act like a ‘metal detector’ to allow them to identify patients who are more likely to have successful treatment with immunotherapy.

The study follows from a previous ‘archaeological dig’ of cancer genomes carried out by Professor Nik-Zainal, which examined the genomes of tens of thousands of people and revealed previously unseen patterns of mutations which are linked to cancer.

This time, Professor Nik-Zainal and her team looked at cancers which have a higher proportion of tumours with PRRd. These include bowel, brain, endometrial, skin, lung, bladder and stomach cancers. Whole genome sequences of these cancers were provided by the 100,000 Genomes Project – a pioneering study led by Genomics England and NHS England which sequenced 100,000 genomes from around 85,000 NHS patients affected by rare diseases or cancer.

The study identified 37 different patterns of indel mutations across the seven cancer types included in this study. Ten of these patterns were already linked to known causes of cancer, such as smoking and exposure to UV light. Eight of these patterns were linked to PRRd. The remaining 19 patterns were new and could be linked to causes of cancer that are not fully understood yet or mechanisms within cells that can go wrong when a cell becomes cancerous.

Executive Director of Research and Innovation at Cancer Research UK, Dr Iain Foulkes, said: “Genomic medicine will revolutionise how we approach cancer treatment. We can now get full readouts of tumour DNA much more easily, and with that comes a wealth of information about how an individual’s cancer can start, grow and spread.

“Tools like PRRDetect are going to make personalised treatment for cancer a reality for many more patients in the future. Personalising treatment is much more likely to be successful, ensuring more people can live longer, better lives free from the fear of cancer.”

NIHR Scientific Director, Mike Lewis, said: “Cancer is a leading cause of death in the UK so it’s impressive to see our research lead to the creation of a tool to determine which therapy will lead to a higher likelihood of successful cancer treatment.”

Chief Scientific Officer at Genomics England, Professor Matt Brown, said: “Genomics is playing an increasingly important role in healthcare and these findings show how genomic data can be used to drive more predictive, preventative care leading to better outcomes for patients with cancer.

“The creation of this algorithm showcases the immense value of whole genome sequencing not only in research but also in the clinic across multiple diverse cancer types in advancing cancer care.”

The University of Cambridge is fundraising for a new hospital that will transform how we diagnose and treat cancer. Cambridge Cancer Research Hospital, a partnership with Cambridge University Hospitals NHS Foundation Trust, will treat patients across the East of England, but the research that takes place there promises to change the lives of cancer patients across the UK and beyond. Find out more here.

Reference

Koh, GCC et al. Redefined indel taxonomy reveals insights into mutational signatures. Nat Genet; 10 Apr 2025; DOI:

Adapted from a press release from Cancer Research UK




source: cam.ac.uk

Handheld device could transform heart disease screening

Person wearing a grey t-shirt holding a palm-sized device to their chest
Person demonstrating use of a handheld device for heart disease screening
Credit: Acoustics Lab

Researchers have developed a handheld device that could potentially replace stethoscopes as a tool for detecting certain types of heart disease.

This device could become an affordable and scalable solution for heart health screening, especially in areas with limited medical resources
Anurag Agarwal

The researchers, from the University of Cambridge, developed a device that makes it easy for people with or without medical training to record heart sounds accurately. Unlike a stethoscope, the device works well even if it’s not placed precisely on the chest: its larger, flexible sensing area helps capture clearer heart sounds than traditional stethoscopes.

The device can also be used over clothing, making it more comfortable for patients – especially women – during routine check-ups or community heart health screening programmes.

The heart sound recordings can be saved on the device, which can then be used to detect signs of heart valve disease. The researchers are also developing a machine learning algorithm which can detect signs of valve disease automatically. The results are reported in the IEEE Journal of Biomedical and Health Informatics.

Heart valve disease (valvular heart disease or VHD) has been called the ‘next cardiac epidemic,’ with a prognosis worse than many forms of cancer. Up to 50% of patients with significant VHD remain undiagnosed, and many patients only see their doctor when the disease has advanced and they are experiencing significant complications.

In the UK, the NHS and NICE have identified early detection of heart valve disease as a key goal, both to improve quality of life for patients, and to decrease costs.

An examination with a stethoscope, or auscultation, is the way that most diagnoses of heart valve disease are made. However, just 38% of patients who present to their GP with symptoms of valve disease receive an examination with a stethoscope.

“The symptoms of VHD can be easily confused with certain respiratory conditions, which is why so many patients don’t receive a stethoscope examination,” said Professor Anurag Agarwal from Cambridge’s Department of Engineering, who led the research. “However, the accuracy of stethoscope examination for diagnosing heart valve disease is fairly poor, and it requires a GP to conduct the examination.”

In addition, a stethoscope examination requires patients to partially undress, which is both time consuming in short GP appointments, and can be uncomfortable for patients, particularly for female patients in routine screening programmes.

The ‘gold standard’ for diagnosing heart valve disease is an echocardiogram, but this can only be done in a hospital and NHS waiting lists are extremely long – between six and nine months at many hospitals.

“To help get waiting lists down, and to make sure we’re diagnosing heart valve disease early enough that simple interventions can improve quality of life, we wanted to develop an alternative to a stethoscope that is easy to use as a screening tool,” said Agarwal.

Agarwal and his colleagues have developed a handheld device, about the diameter of a drinks coaster, that could be a solution. Their device can be used by any health professional to accurately record heart sounds, and can be used over clothes.

While a regular or electronic stethoscope has a single sensor, the Cambridge-developed device has six, meaning it is easier for the doctor or nurse – or even someone without any medical training – to get an accurate reading, simply because the surface area is so much bigger.

The device contains materials that can transmit vibration so that it can be used over clothes, which is particularly important when conducting community screening programmes to protect patient privacy. Between each of the six sensors is a gel that absorbs vibration, so the sensors don’t interfere with each other.
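The published signal-processing pipeline is in the paper; as a simple illustration of why several sensors help, the sketch below picks the clearest of six simultaneously recorded channels using a crude signal-to-noise estimate. The quality metric and simulated signals are assumptions, not the device’s algorithm.

    import numpy as np

    def channel_quality(signal):
        # Crude quality metric: overall signal energy relative to the energy of
        # sample-to-sample fluctuations (a rough proxy for noise).
        signal = signal - signal.mean()
        noise = np.diff(signal)
        return float(np.var(signal) / (np.var(noise) + 1e-12))

    def best_channel(recording):
        # recording has shape (n_channels, n_samples); return the clearest channel.
        return int(np.argmax([channel_quality(ch) for ch in recording]))

    # Six simulated channels: white noise everywhere, plus a 1.2 Hz tone (~72 bpm)
    # on channel 2 standing in for well-coupled heart sounds.
    rng = np.random.default_rng(0)
    t = np.linspace(0, 5, 5000)
    channels = rng.normal(0, 1.0, size=(6, t.size))
    channels[2] += 3 * np.sin(2 * np.pi * 1.2 * t)
    print(best_channel(channels))  # expected: 2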

The researchers tested the device on healthy participants with different body shapes and sizes and recorded their heart sounds. Their next steps will be to test the device in a clinical setting on a variety of patients, against results from an echocardiogram.

In parallel with the development of the device, the researchers have developed a machine learning algorithm that can use the recorded heart sounds to detect signs of valve disease automatically. Early tests of the algorithm suggest that it outperforms GPs in detecting heart valve disease.  

“If successful, this device could become an affordable and scalable solution for heart health screening, especially in areas with limited medical resources,” said Agarwal.

The researchers say that the device could be a useful tool to triage patients who are waiting for an echocardiogram, so that those with signs of valve disease can be seen in a hospital sooner.

A patent has been filed on the device by Cambridge Enterprise, the University’s commercialisation arm. Anurag Agarwal is a Fellow of Emmanuel College, Cambridge.

Reference:
Andrew McDonald et al. ‘A flexible multi-sensor device enabling handheld sensing of heart sounds by untrained users.’ IEEE Journal of Biomedical and Health Informatics (2025). DOI: 10.1109/JBHI.2025.3551882




source: cam.ac.uk

One in 3,000 people at risk of punctured lung from faulty gene – almost 100 times higher than previous estimate

Person clutching their chest in pain
Chest pain
Credit: wildpixel (Getty Images)

As many as one in 3,000 people could be carrying a faulty gene that significantly increases their risk of a punctured lung, according to new estimates from Cambridge researchers. Previous estimates had put this risk closer to one in 200,000 people.

If an individual has Birt-Hogg-Dubé syndrome, then it’s very important that we’re able to diagnose it, because they and their family members may also be at risk of kidney cancer
Stefan Marciniak

The gene in question, FLCN, is linked to a condition known as Birt-Hogg-Dubé syndrome, symptoms of which include benign skin tumours, lung cysts, and an increased risk of kidney cancer.

In a study published today in the journal Thorax, a team from the University of Cambridge examined data from UK Biobank, the 100,000 Genomes Project, and East London Genes & Health – three large genomic datasets encompassing more than 550,000 people.

They discovered that between one in 2,710 and one in 4,190 individuals carries the particular variant of FLCN that underlies Birt-Hogg-Dubé syndrome. But curiously, whereas patients with a diagnosis of Birt-Hogg-Dubé syndrome have a lifetime risk of punctured lung of 37%, in the wider cohort of carriers of the genetic mutation this was lower at 28%. Even more striking, while patients with Birt-Hogg-Dubé syndrome have a 32% risk of developing kidney cancer, in the wider cohort this was only 1%.

Punctured lung – known as pneumothorax – is caused by an air leak in the lung, resulting in painful lung deflation and shortness of breath. Not every case of punctured lung is caused by a fault in the FLCN gene, however. Around one in 200 tall, thin young men in their teens or early twenties will experience a punctured lung, and for many of them the condition will resolve itself, or doctors will remove air or fluid from their lungs while treating the individual as an outpatient; many will not even know they have the condition.

If an individual experiences a punctured lung and doesn’t fit the common characteristics – for example, if they are in their forties – doctors will look for tell-tale cysts in the lower lungs, visible on an MRI scan. If these are present, then the individual is likely to have Birt-Hogg-Dubé syndrome.

Professor Stefan Marciniak is a researcher at the University of Cambridge and an honorary consultant at Cambridge University Hospitals NHS Foundation Trust and Royal Papworth Hospital NHS Foundation Trust. He co-leads the UK’s first Familial Pneumothorax Rare Disease Collaborative Network, together with Professor Kevin Blyth at the Queen Elizabeth University Hospital and the University of Glasgow. The aim of the Network is to optimise the care and treatment of patients with rare, inherited forms of familial pneumothorax, and to support research into this condition.

Professor Marciniak said: “If an individual has Birt-Hogg-Dubé syndrome, then it’s very important that we’re able to diagnose it, because they and their family members may also be at risk of kidney cancer.

“The good news is that the punctured lung usually happens 10 to 20 years before the individual shows symptoms of kidney cancer, so we can keep an eye on them, screen them every year, and if we see the tumour it should still be early enough to cure it.”

Professor Marciniak says he was surprised to discover that the risk of kidney cancer was so much lower in carriers of the faulty FLCN gene who have not been diagnosed with Birt-Hogg-Dubé syndrome.

“Even though we’ve always thought of Birt-Hogg-Dubé syndrome as being caused by a single faulty gene, there’s clearly something else going on,” Professor Marciniak said. “The Birt-Hogg-Dubé patients that we’ve been caring for and studying for the past couple of decades are not representative of when this gene is broken in the wider population. There must be something else about their genetic background that’s interacting with the gene to cause the additional symptoms.”

The finding raises the question of whether, if an individual is found to have a faulty FLCN gene, they should be offered screening for kidney cancer. However, Professor Marciniak does not believe this will be necessary.

“With increasing use of genetic testing, we will undoubtedly find more people with these mutations,” he said, “but unless we see the other tell-tale signs of Birt-Hogg-Dubé syndrome, our study shows there’s no reason to believe they’ll have the same elevated cancer risk.”

The research was funded by the Myrovlytis Trust, with additional support from the National Institute for Health and Care Research Cambridge Biomedical Research Centre.

Katie Honeywood, CEO of the Myrovlytis Trust, said: “The Myrovlytis Trust are delighted to have funded such an important project. We have long believed that the prevalence of Birt-Hogg-Dubé syndrome is far higher than previously reported. It highlights the importance of genetic testing for anyone who has any of the main symptoms associated with BHD, including a collapsed lung. And even more so the importance of the medical world being aware of this condition for anyone who presents at an emergency department or clinic with these symptoms. We look forward to seeing the impact this project’s outcome has on the Birt-Hogg-Dubé and wider community.”

Reference
Yngvadottir, B et al. Inherited predisposition to pneumothorax: Estimating the frequency of Birt-Hogg-Dubé syndrome from genomics and population cohorts. Thorax; 8 April 2025; DOI: 10.1136/thorax-2024-221738




source: cam.ac.uk

Researchers demonstrate the UK’s first long-distance ultra-secure communication over a quantum network

Digital abstract background
Abstract background
Credit: MR.Cole_Photographer via Getty Images

Researchers have successfully demonstrated the UK’s first long-distance ultra-secure transfer of data over a quantum communications network, including the UK’s first long-distance quantum-secured video call.

The team, from the Universities of Bristol and Cambridge, created the network, which uses standard fibreoptic infrastructure, but relies on a variety of quantum phenomena to enable ultra-secure data transfer.

The network uses two types of quantum key distribution (QKD) schemes: ‘unhackable’ encryption keys hidden inside particles of light, and distributed entanglement, a phenomenon that causes quantum particles to be intrinsically linked.
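As a toy illustration of the first kind of scheme, in the spirit of the well-known BB84 protocol rather than the specific systems deployed on this network, the ‘sifting’ step that turns random single-photon measurements into a shared secret key can be simulated classically:

    import secrets

    def random_bits(n):
        return [secrets.randbelow(2) for _ in range(n)]

    n = 32
    alice_bits, alice_bases = random_bits(n), random_bits(n)  # 0 = rectilinear, 1 = diagonal basis
    bob_bases = random_bits(n)

    # Idealised, noise-free channel with no eavesdropper: Bob reads the correct bit
    # when his basis matches Alice's, and a random bit otherwise.
    bob_bits = [a if ab == bb else secrets.randbelow(2)
                for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

    # Sifting: the bases (not the bits) are compared publicly, and only positions
    # where they match are kept as key material.
    alice_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
    bob_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]

    assert alice_key == bob_key  # holds in this idealised run
    print(len(alice_key), "shared key bits from", n, "photons")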

The researchers demonstrated the capabilities of the network via a live, quantum-secure video conference link, the transfer of encrypted medical data, and secure remote access to a distributed data centre. The data was successfully transmitted between Bristol and Cambridge – a fibre distance of over 410 kilometres.

This is the first time that a long-distance network, encompassing different quantum-secure technologies such as entanglement distribution, has been successfully demonstrated. The researchers presented their results at the 2025 Optical Fiber Communications Conference (OFC) in San Francisco.

Quantum communications offer unparalleled security advantages compared to classical telecommunications solutions. These technologies are immune to future cyber-attacks, even those mounted with quantum computers, which – once fully developed – will have the potential to break even the strongest cryptographic methods currently in use.

In the past few years, researchers have been working to build and use quantum communication networks. China recently set up a massive network that covers 4,600 kilometres by connecting five cities using both fibreoptics and satellites. In Madrid, researchers created a smaller network with nine connection points that use different types of QKD to securely share information.

In 2019, researchers at Cambridge and Toshiba demonstrated a metro-scale quantum network operating at record key rates of millions of key bits per second. And in 2020, researchers in Bristol built a network that could share entanglement between multiple users. Similar quantum network trials have been demonstrated in Singapore, Italy and the USA.

Despite this progress, no one has built a large, long-distance network that can handle both types of QKD, entanglement distribution, and regular data transmission all at once, until now.

The experiment demonstrates the potential of quantum networks to accommodate different quantum-secure approaches simultaneously with classical communications infrastructure. It was carried out using the UK’s Quantum Network (UKQN), established over the last decade by the same team, supported by funding from the Engineering and Physical Sciences Research Council (EPSRC), and as part of the Quantum Communications Hub project.

“This is a crucial step toward building a quantum-secured future for our communities and society,” said co-author Dr Rui Wang, Lecturer for Future Optical Networks in the Smart Internet Lab’s High Performance Network Research Group at the University of Bristol. “More importantly, it lays the foundation for a large-scale quantum internet—connecting quantum nodes and devices through entanglement and teleportation on a global scale.”

“This marks the culmination of more than ten years of work to design and build the UK Quantum Network,” said co-author Adrian Wonfor from Cambridge’s Department of Engineering. “Not only does it demonstrate the use of multiple quantum communications technologies, but also the secure key management systems required to allow seamless end-to-end encryption between us.”

“This is a significant step in delivering quantum security for the communications we all rely upon in our daily lives at a national scale,” said co-author Professor Richard Penty, also from Cambridge and who headed the Quantum Networks work package in the Quantum Communications Hub. “It would not have been possible without the close collaboration of the two teams at Cambridge and Bristol, the support of our industrial partners Toshiba, BT, Adtran and Cisco, and our funders at UKRI.”

“This is an extraordinary achievement which highlights the UK’s world-class strengths in quantum networking technology,” said Gerald Buller, Director of the IQN Hub, based at Heriot-Watt University. “This exciting demonstration is precisely the kind of work the Integrated Quantum Networks Hub will support over the coming years, developing the technologies, protocols and standards which will establish a resilient, future-proof, national quantum communications infrastructure.”

The current UKQN covers two metropolitan quantum networks around Bristol and Cambridge, which are connected via a ‘backbone’ of four long-distance optical fibre links spanning 410 kilometres with three intermediate nodes.

The network uses single-mode fibre over the EPSRC National Dark Fibre Facility (which provides dedicated fibre for research purposes), and low-loss optical switches allowing network reconfiguration of both classical and quantum signal traffic.

The team will pursue this work further through a newly funded EPSRC project, the Integrated Quantum Networks Hub, whose vision is to establish quantum networks at all distance scales, from local networking of quantum processors to national-scale entanglement networks for quantum-safe communication, distributed computing and sensing, all the way to intercontinental networking via low-earth orbit satellites.

Reference:
R. Yang et al. ‘A UK Nationwide Heterogeneous Quantum Network.’ Paper presented at the 2025 Optical Fiber Communications Conference and Exhibition (OFC): https://www.ofcconference.org/en-us/home/schedule/




source: cam.ac.uk

Turbocharging the race to protect NATURE and CLIMATE with AI

By Jacqueline Garget

“We’re in a time of unprecedented change. We must accelerate progress towards equitably rebalancing how humans and nature coexist across the world. AI is our chance to do it!”

Anil Madhavapeddy, Professor of Planetary Computing, Department of Computer Science and Technology

Across Cambridge, researchers are using AI to transform climate and biodiversity research.

Improving land use decisions to benefit nature

“Around one third of the world’s land surface has been transformed for human use in the last 60 years. It’s mad just how terrible local decision-making can be for preserving global biodiversity. By protecting one place we’re often just outsourcing the impact to somewhere else,” says Anil Madhavapeddy, Professor of Planetary Computing in the Department of Computer Science and Technology.

Madhavapeddy is part of the team building a new AI tool called Terra – a predictive model of all terrestrial life on Earth. It’s a hugely complex task. The aim is to understand, then help decision-makers try to reverse, ecosystem deterioration and biodiversity loss while also producing enough food, energy and water for our needs.

Terra will combine extensive terrestrial data with earth observation data from satellites and drones, predictively filling in the blanks to build accurate global maps of biodiversity and human activity, and to reveal the world’s biodiversity hotspots.
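As an illustration of the ‘filling in the blanks’ step only, and not Terra’s actual model, a regression model can be trained on grid cells where ground data exist and then used to predict a biodiversity indicator everywhere else from satellite-derived covariates. The variables and data below are synthetic.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(42)

    # Synthetic satellite covariates per grid cell: vegetation index, surface temperature, human footprint.
    covariates = rng.uniform(0, 1, size=(1000, 3))
    # Synthetic "true" biodiversity indicator, known only where field surveys exist.
    richness = 10 * covariates[:, 0] - 6 * covariates[:, 2] + rng.normal(0, 0.5, 1000)

    surveyed = rng.random(1000) < 0.2  # only ~20% of cells have ground observations
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(covariates[surveyed], richness[surveyed])

    # Fill in the blanks: predict the indicator for every cell from satellite covariates alone.
    predicted = model.predict(covariates)
    print("predicted richness for first unsurveyed cell:", round(predicted[~surveyed][0], 2))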

The tool will help governments and businesses predict the global biodiversity impact of land-use decisions about vital human activities, like farming. It will vastly improve analysis of the effects of climate change, and the accuracy of biodiversity monitoring.

“Every bit of land on the planet is used for something – either by humans, or by nature. With Terra we’re trying to map it all.”

“We’ll use this to ask which land is most valuable for nature, and which for humanity, on a global scale, and show the potential impact of any land-use decision – aiming to protect the highly biodiverse areas,” says Madhavapeddy.

“We’re also working with Bill Sutherland’s Conservation Evidence project, which is gathering all human knowledge of biodiversity conservation to see which interventions are most effective for each species. The Holy Grail is to combine the observational data with this knowledge-based data. Then we can build really accurate maps of the world.”

“We can’t just rewild everything or humans will starve, and we can’t expand agricultural land so everyone can eat low-intensity organic food because there won’t be enough wildlife left. Terra could help us find the solutions.”

Anil Madhavapeddy holds a map representing the extinction impact as a result of converting natural land to arable use.

Speeding up effective biodiversity conservation

Professor Bill Sutherland has an ambitious goal: “to change the culture so it becomes unthinkable not to use evidence in conservation projects.” He wants to stop time and money being wasted on nature conservation projects that don’t work – something he’s seen surprisingly often.

Twenty years, and the equivalent of 75 years of researchers’ time, after he began the project, his freely available Conservation Evidence database is being used by a wide range of conservation organisations and policy makers, and resulting in better outcomes for nature.

But with up to one million species still facing extinction, things need to happen faster.

The team is using Large Language Models (LLMs) – a type of AI that can understand and process text, learning as it goes – to help comb through the vast conservation literature. They’re training it using their existing database, aiming for an automatic review system where AI can keep adding evidence from the ever-growing number of published papers: 300 million at last count.
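The Conservation Evidence pipeline is not reproduced here; the sketch below only shows the shape of an automated screen-classify-summarise loop, with a hypothetical stand-in function where a real system would call a large language model.

    from dataclasses import dataclass

    @dataclass
    class Paper:
        title: str
        abstract: str

    def llm_classify(abstract):
        # Hypothetical stand-in for an LLM call that decides whether a paper tests
        # a conservation intervention and extracts a one-line summary if it does.
        relevant = "intervention" in abstract.lower() and "species" in abstract.lower()
        return {"relevant": relevant, "summary": abstract.split(".")[0] if relevant else None}

    def screen(papers):
        # Keep structured records of relevant papers for addition to the evidence database.
        records = []
        for p in papers:
            result = llm_classify(p.abstract)
            if result["relevant"]:
                records.append({"title": p.title, "summary": result["summary"]})
        return records

    new_papers = [Paper("Hedgerow trial", "We tested a hedgerow intervention. Species richness rose."),
                  Paper("Unrelated physics paper", "We measured neutrino oscillations.")]
    print(screen(new_papers))  # only the first paper is kept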

“The problem is that it takes a long time to summarise evidence of what works in conservation. The team has read 1.6 million papers in 17 languages, and new science is being generated all the time,” says Sutherland, Professor of Conservation Biology in the Department of Zoology, adding:

“AI can make us much more efficient. It can find papers it thinks are suitable for our database, summarise them, classify them, and explain its decisions. It’s just incredible!”

Planetary Computing Fellow Dr Sadiq Jaffer, in the Department of Computer Science and Technology, is part of the interdisciplinary team building a ‘Conservation Co-Pilot’ based on the system. This will guide decision-makers towards the best way to manage specific land types to conserve particular species – and should be available within a year.

“The Conservation Co-Pilot will enable people to get answers to specific questions using the Conservation Evidence database almost instantaneously – quite a contrast to a traditional systematic review, which might take a year and cost £100,000,” says Jaffer. “Humans will still make the decisions, but the Co-Pilot will suggest the best course of action for nature, and can massively increase productivity.”

More: A man with a big idea. AI-Driven Conservation Co-Pilot is supported by ai@cam.

Professor Bill Sutherland

Understanding climate complexity for better forecasting

If hurricane warnings were taken more seriously because they were highly accurate, could more lives be saved? If we’d foreseen the current state of our climate fifty years ago, could more have been done to curb global temperature rise?

As the climate warms, the Earth’s natural systems are starting to behave in increasingly unpredictable ways. The models behind both short- and long-term climate forecasts are getting more complex, and huge amounts of data are being gathered, as scientists scramble to work out what’s going on. Machine learning, and software engineers, are becoming vital.

“Reliable forecasts of future climate trends – like temperature rise and sea-level change – are crucial for policy-makers trying to plan for the impacts of climate change,” says Dr Joe Wallwork, a Research Software Engineer at Cambridge’s Institute of Computing for Climate Science (ICCS), adding:

“We need better early warning systems to accurately forecast when and where extreme events will occur.”

“A lot of climate models are limited in resolution,” adds Dr Jack Atkinson, also a Research Software Engineer at ICCS. “Climate processes can happen at very small scales, and machine learning can help us improve the representation of these processes in models.”

Atkinson is lead developer of FTorch, a software library that bridges the gap between traditional climate models and the latest machine learning tools. This breakthrough is improving climate predictions by better representing small-scale processes that are challenging to capture in models. FTorch is now used by major institutions, including the National Center for Atmospheric Research, to enhance climate simulations and inform global policy.

Models in development by the team, which use FTorch, use fewer assumptions and more real climate data. They’re powering the science of many climate studies – from sea-ice change, to cloud behaviour, to greenhouse gases in the atmosphere.

They’re helping to make climate predictions faster, which has important climate implications too. “It’s an unfortunate irony that climate models require quite a bit of energy to run,” says Wallwork. “With machine learning we can make the models more efficient, running faster so we can potentially lower our emissions.”

Towards more energy-efficient homes

Dr Ronita Bardhan, Associate Professor of Sustainable Built Environment in the Department of Architecture, and her team have developed an algorithm, built on open-source data, that uses satellite images to reveal the heat loss from almost every home in the UK.

This is helping identify the homes at risk of losing the most heat, which could be prioritised to improve their energy performance. Bardhan has identified almost 700 homes in Cambridge alone that are particularly vulnerable to heat loss and hard to decarbonise.

The UK government is currently consulting on whether private rental homes should meet a minimum energy efficiency standard (an EPC rating of C) – with the outcome due in May 2025. A limited number of homes have an EPC rating, but according to Bardhan’s research, over half of the houses in England and Wales would fail this requirement.

“Our aim is to inform policy discussions in support of decarbonisation strategies. We’re using AI to unearth the hidden layers affecting population health, including the energy efficiency of our homes.”

Bardhan adds: “Prioritising the risks faced by individual households must become a central focus in climate change discussions. By capturing thermal images, we can distinctly visualise rooftops, walls, and other structural elements to precisely identify how each building loses heat.”

“This allows us to ask critical questions: How much of this heat could be retained through retrofitting? How can we empower households to become more resilient in the face of a changing climate? It allows us to identify poorly performing homes, and those whose owners most need government support to increase the property’s energy efficiency. To reach net zero by 2050 this must be done as a priority.”
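Bardhan’s algorithm is not reproduced here; as a rough sketch of this kind of screening, buildings can be ranked by how much warmer their roofs appear than the surrounding air in a winter-night thermal image, scaled by roof area. All values below are invented.

    # Crude heat-loss proxy from a winter-night thermal image: roof surface
    # temperature above ambient, scaled by roof area. All values are invented.
    ambient_c = 2.0
    buildings = [
        {"id": "house_001", "roof_temp_c": 6.5, "roof_area_m2": 60},
        {"id": "house_002", "roof_temp_c": 3.0, "roof_area_m2": 90},
        {"id": "house_003", "roof_temp_c": 8.2, "roof_area_m2": 55},
    ]

    def heat_loss_proxy(b):
        # A warmer roof relative to ambient suggests more heat escaping through it.
        return max(b["roof_temp_c"] - ambient_c, 0.0) * b["roof_area_m2"]

    for b in sorted(buildings, key=heat_loss_proxy, reverse=True):
        print(b["id"], round(heat_loss_proxy(b), 1))
    # house_003 and house_001 would be flagged first for retrofit assessment.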

Household heat loss isn’t the only issue that can affect health – indoor overheating can too. In the summer of 2022, thousands of people died in the UK due to extreme heat.

“We want to take maximum advantage of our algorithm – scaling it up to look not only at heat loss from homes for the winter months, but also what happens during the summer months when outside temperatures rise,” she says, adding:

“As the climate warms, buildings need to become more energy-efficient to help combat climate change and avoid excessive energy consumption. We shouldn’t need to use air conditioning in the UK!”

More: https://www.sustainabledesign.arct.cam.ac.uk/projects/decarbonisation. Bardhan’s research is supported by the UK and European Space Agencies. She contributes to the Health-Driven Design for Cities (HD4) project based at CARES in Singapore.

Enhancing forest monitoring and carbon tracking

The immense value of forests for capturing atmospheric carbon and supporting biodiversity is now clear. As market initiatives like carbon and biodiversity credits gain momentum to help offset our environmental impact, the science behind them is evolving too. Two new Cambridge projects are harnessing the power of AI to improve forest monitoring and carbon tracking, by diving into remarkable levels of detail.

“Our AI model will provide stronger evidence to support programmes for carbon accounting and biodiversity credits, by validating large-scale satellite images with extensive, high-quality data collected on the ground,” says Dr Emily Lines in the Department of Geography.

She’s working with Dr Harry Owen, also in the Department of Geography, to develop an AI model trained on data collected across European forests. This includes billions of terrestrial laser scans, drone images, and even manual measurements with a tape. While the sheer volume of data has previously been difficult to manage, AI is now speeding up the process and making the data easier to interpret.

“AI allows us to create high-resolution, high-quality and large-scale estimates of ecosystem properties – including their species composition and ecological function. This opens up new opportunities to rigorously test and validate credit-based valuations of ecosystems,” says Lines.

In a complementary project, researcher Frank Feng in the Department of Computer Science and Technology has developed an app called GreenLens, to measure trees on the ground much more quickly and accurately than a person with a tape measure. Estimating tree trunk diameter is key in understanding a tree’s health and its ability to store carbon. The freely available app is a user-friendly tool that provides reliable data to support reforestation projects and advance sustainability efforts.

“GreenLens uses AI-powered computer vision and depth detection to measure tree trunk diameters on affordable Android devices. This makes it easier and faster for researchers, landowners, and communities to collect data and monitor forests – without the need for expensive equipment,” says Feng.
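GreenLens’s own implementation is not shown here, but the underlying geometry can be sketched with a pinhole-camera model: the trunk’s width in pixels, combined with its distance from the camera reported by the depth sensor, gives a physical diameter. The example numbers are illustrative only.

    def trunk_diameter_m(pixel_width, depth_m, focal_length_px):
        # Pinhole-camera model: an object of physical width w at distance d projects
        # to w * f / d pixels, so w = pixel_width * d / f.
        return pixel_width * depth_m / focal_length_px

    # Example: trunk spans 180 px, the depth sensor reports 2.5 m, and the camera's
    # focal length is 1,500 px (a plausible order of magnitude for a phone camera).
    print(round(trunk_diameter_m(180, 2.5, 1500), 3), "m")  # 0.3 m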

Together these projects demonstrate the growing role of AI and mobile technology in improving forest monitoring, accelerating effective climate action, and supporting biodiversity conservation worldwide.

More: Harnessing AI for Forest Monitoring is supported by ai@cam.

source: cam.ac.uk

AI can be good for our HEALTH and WELLBEING

By Craig Brierley

“If we get things right, the possibilities for AI to transform health and medicine are endless. It can be of massive public benefit. But more than that, it has to be.”

Professors Andres Floto, Mihaela van der Schaar and Eoin McKinney, Cambridge Centre for AI in Medicine

Cambridge researchers are looking at ways that AI can transform everything from drug discovery to Alzheimer’s diagnoses to GP consultations.

Tackling dementia

In 2024, Professor Zoe Kourtzi in the Department of Psychology showed that an AI tool developed by her team could outperform clinical tests at predicting whether people with early signs of dementia will remain stable or develop Alzheimer’s disease. 

At a time of intense pressure on the NHS, tools such as this could help doctors prioritise care for those patients who need it most, while removing the need for invasive and costly diagnostic tests for those whose condition will remain stable. They can also give patients peace of mind that their condition is unlikely to worsen, or, for those less fortunate, it can help them and their families prepare. 

These tools could also be transformational in the search for new drugs, making clinical trials more effective, faster and cheaper, says Kourtzi. 

Recently, two dementia drugs – lecanemab and donanemab – have shown promise in slowing the disease, but the benefits compared to the costs were judged insufficient to warrant approval for use within the NHS. Beyond these, there’s been limited progress in the field. 

Part of the problem is that clinical trials often focus on the wrong people, which is where AI may help to better decide who to include in trials. 

“If you have people that the AI models say will not develop pathology, you won’t want to put them in your trial. They’ll only mess up the statistics, and then [the trials] will never show an effect, no matter if you have the best drug in the world. And if you include people who will progress really fast, it might be already too late for the drug to show benefit.” 

Kourtzi is leading one of ai@cam’s AI-deas projects to create a ‘BrainHealth hub’ to tackle the global brain and mental health crisis. It will bridge the gap between engineers, mathematicians and computer scientists who have the tools but lack the data, and clinicians and neuroscientists who have the data but lack advanced tools to mine them. 

“Our idea is to create a ‘hothouse’ of ideas where people can come together to ask and answer challenging questions.”

University researchers, industry partners, the charity sector and policymakers will explore questions such as: how can we use AI for drug discovery, to accelerate clinical trials and develop new treatments, and how can we build interpretable AI models that can be translated to clinical tools?

The need for such AI to be reliable and responsible is a theme that comes up frequently when Kourtzi speaks to patient groups. 

“When doctors are using a complex diagnostic tool like an MRI machine, patients don’t query whether they understand what’s in this machine, why it works this way. What they want to know is that it’s gone through regulatory standards, it’s safe to use and can be trusted. It’s exactly the same with AI.”

Elderly patient speaking to a healthcare worker

Making GP practices more efficient

Professor Niels Peek from The Healthcare Improvement Studies (THIS) Institute believes that AI could have a major impact on primary care services, such as GP practices, by tackling some of their most mundane tasks.

One such application involves the use of ‘digital scribes’ to record, transcribe, and summarise conversations between GPs and patients.

“If you look at the amount of time that clinicians spend on that type of work, it’s just incredible,” he says.

“Considering that clinician time is probably the most precious commodity within the NHS, this is technology that could be transformational.”

It is likely that the NHS will increasingly adopt digital scribe technology in the future, so it will be important to ensure the summaries are accurate and do not omit key points or add things that were not mentioned (a ‘hallucination’). With support from The Health Foundation, Peek is asking whether the technology actually saves time. “If you have to spend a lot of time correcting its outputs, then it’s no longer a given that it actually does save you time.”

Peek believes that in the future, every clinical consultation will be recorded digitally, stored as part of a patient’s record, and summarised with AI. But the existing technology environment, particularly in primary care, presents a challenge.

“GPs use electronic health records that have evolved over time and often look outdated. Any new technology must fit within these systems. Asking people to log into a different system is not feasible.”

Peek is also involved in evaluating Patchs, a tool that applies AI to the process of booking GP appointments and conducting online consultations. It was designed by GP staff and patients, in collaboration with The University of Manchester (where Peek was formerly based) and commercialised by the company Patchs Health. It is now used by around one in 10 GP practices across England.

Working with end users – patients, GPs, and particularly the administrative staff who use these systems on a day-to-day basis – is crucial.  “You have to make sure they fit both with the systems people are already using, and also with how they do things, with their workflows. Only then will you see differences that translate into real benefits to people.”

GP speaking to a patient

Addressing mental health among young people

Over recent years, there has been a significant increase in the prevalence of mental health disorders among young people. But with stretched NHS resources, it can be difficult to access Child and Adolescent Mental Health Services (CAMHS).

Not every child recommended for a referral will need to see a mental health specialist, says Dr Anna Moore from the Department of Psychiatry, but the bottleneck means they can be on the waiting list for up to two years only to be told they don’t meet the criteria for treatment. The quality of advice they get about alternative options that do meet their needs varies a lot.

Moore is interested in whether AI can help manage this bottleneck by identifying those children in greatest need of support, and helping those who don’t need specialist CAMHS to find suitable support elsewhere. One way to do so is by using data collected routinely on children.

“The kinds of data that help us do this can be some of the really sensitive data about people,” she says. “It might be health information, how they’re doing at school, but it could also be information such as they got drunk last weekend and ended up in A&E.”

For this reason, she says, it’s essential that they work closely with members of the public when designing such a system to make sure people understand what they are doing, the kinds of data they are considering using and how it might be used, but also how it might improve the care of young people with mental health problems.

One of the questions that often comes up from ethicists is whether, given the difficulties in accessing CAMHS, it is necessarily a good thing to identify children if they cannot then access services.

“Yes, we can identify those kids who need help, but we need to ask, ‘but so what?’,” she says. The tool will need to suggest a referral to CAMHS for the children who need it; but for those who have a problem and could be better supported in other, more flexible ways, can it signpost them to helpful, evidence-based, age-appropriate information?

Moore is designing the tool to help find those children who might otherwise get missed. In the most extreme cases, these might be children such as Victoria Climbié and Baby P, who were tortured and murdered by their guardians. The serious case reviews highlighted multiple missed opportunities for action, often because systems were not joined up, meaning no one was able to see the full picture.

“If we’re able to look at all of the data across the system relating to a child, then it might well be possible to bring that together and say, actually there’s enough information here that we can do something about it.”

From womb to world

Across the world, fertility rates are falling, while families are choosing to have children later on in life. To help them conceive, many couples turn to assisted reproductive technologies such as IVF; however, success rates remain low and the process can be expensive. In the UK, treatment at a private clinic can cost more than £5,000 per cycle – in the US, it can be around $20,000 – and with no guarantee of success.

Mo Vali and Dr Staci Weiss hope that AI can change this. They are leading From Womb to World, one of ai@cam’s flagship AI-deas projects, which aims to improve prospective parents’ chances of having a baby by diagnosing fertility conditions early on and personalising fertility treatments.

“We’re trying to democratise access to IVF outcomes and tackle a growing societal problem of declining fertility rates.”
Mo Vali

They are working with Professor Yau Thum at The Lister Fertility Clinic, one of the largest standalone private IVF clinics in the UK, to develop cheaper, less invasive and more accurate AI-assisted tests that can be used throughout the patient’s IVF journey. To do this, they are making use of the myriad different samples and datasets collected during the fertility process, from blood tests and ultrasound images to follicular fluid, as well as data encompassing demographic and cultural factors.

Building the AI tools was the easy bit, says Vali. The bigger challenge has been generating the datasets, clearing ethical and regulatory hurdles, and importantly, ensuring that sensitive data is properly anonymised and de-identified – vital for patient privacy and building public trust.

The team also hopes to use AI to improve, and make more accessible, 4D ultrasound scans that let the parents see their baby moving in the womb, capturing movements like thumb-sucking and yawning. This is important for strengthening the maternal bond during a potentially stressful time, says Weiss.

“Seeing their baby’s face and watching it move creates a very different kind of physical, embodied reality and a bond between the mother and her child,” she says.

Consulting with women who have experienced first-hand the challenges of fertility treatments is providing valuable insights, while The Lister Fertility Clinic – a private clinic – is an ideal platform in which to test their ideas before providing tools for the wider public. It offers a smaller, more controlled environment where they can engage directly with senior clinicians.

“We want to ensure that the research that we are doing and the AI models that we’re building work seamlessly before we go at scale,” says Vali.

Pregnant women looking at a fertility app

Preventing cancer

Antonis Antoniou, Professor of Cancer Risk Prediction at Cambridge, has spent most of his career developing models that predict our risk of developing cancers. Now, AI promises to take his work to an entirely new level.

Antoniou has recently been announced as Director of the Cancer Data-Driven Detection Programme, a £10 million initiative that promises to transform how we detect, diagnose – and even prevent – cancer in the future. It’s a multi-institutional project, with partners across the UK, that will build infrastructure and create a multidisciplinary community of researchers, including training the next generation of researchers, with funding for 30 PhD places and early career research positions in cancer data sciences.

The programme will enable scientists to access and link a vast array of diverse health data sets, from GP clinics, cancer screening programmes and large cohort studies, through to data generated through interactions with public services (such as occupation and educational attainment), and geospatial data on air pollution, housing quality and access to services. These will be used in combination with AI and state-of-the-art analytics.

“The funding will allow us to use these data sets to develop models that help us predict individual cancer risk and greatly improve our understanding of who is most at risk of developing cancer,” he says. “It will hopefully help us transform how we detect, prevent and diagnose cancer in the future.”

One of the key considerations of their work will be to ensure that the AI tools they develop do not inadvertently exacerbate inequalities.

“We have to be careful not to develop models that only work for people who are willing to participate in research studies or those who frequently interact with the healthcare sector, for example, and ensure we’re not ignoring those who can’t easily access healthcare services, perhaps because they live in areas of deprivation.”

Key to their programme has been the involvement of patients and members of the public, who, alongside clinical practitioners, have helped them from the outset to shape their programme.

“They were involved in co-developing our proposals from the planning phase, and going forward, they’ll continue to play a key role, helping guide how we work and to make sure that the data are used responsibly and safely,” he says.

The Cancer Data-Driven Detection programme is jointly supported by Cancer Research UK, the National Institute for Health & Care Research, the Engineering & Physical Sciences Research Council, Health Data Research UK, and Administrative Data Research UK.

Read more about AI and cancer here

Female patient undergoing a mammogram

Innovations in drug discovery

It’s just over 20 years since the first human genome was sequenced, opening up a new scientific field – genomics – and helping us understand how our bodies function. Since then, the number of so-called ‘omics’ – complete readouts of particular types of molecules in our bodies, such as proteins (proteomics) and metabolites (metabolomics) – has blossomed.

Dr Namshik Han from the Milner Therapeutics Institute is interested in how AI can mine this treasure trove to help discover new drugs.

“We’re applying AI approaches to dissect those really big data sets and try to identify meaningful, actionable drug targets,” he says. 

His team works with partners who can take these targets to the next stage, such as by developing chemical compounds to act on these targets, testing them in cells and animals, and then taking them through clinical trials.

The Milner Institute acts as a bridge between academia and industry to accelerate this process, partnering with dozens of academic institutes, industry partners, biotech, pharma and venture capitalists. But at the ‘bleeding edge’ of Han’s work is his collaborations with tech companies. 

Han is interested in how quantum computers, which use principles of quantum mechanics to enable much faster and more powerful calculations, can address problems such as the complex chemistry underpinning drug development.

“We’ve shown that quantum algorithms see things that conventional AI algorithms don’t,” Han says.

His lab has used quantum algorithms to explore massive networks comprising tens of thousands of human proteins. When conventional AI explores these networks, it looks only at certain areas, whereas Han showed that quantum algorithms cover the entire network.
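To make the idea of ‘exploring’ a protein network concrete, here is a minimal, purely illustrative sketch using a classical graph library. It is not Han’s quantum approach – the protein names, edges and the centrality-based scoring below are all hypothetical – but it shows the kind of whole-network analysis described above, where every node and path is taken into account rather than just a local neighbourhood.

```python
# Toy illustration (not Han's quantum method): ranking candidate targets in a
# small, made-up protein-protein interaction network with networkx.
import networkx as nx

# Hypothetical proteins and interactions.
edges = [
    ("P1", "P2"), ("P1", "P3"), ("P2", "P3"),
    ("P3", "P4"), ("P4", "P5"), ("P5", "P6"),
    ("P2", "P6"), ("P6", "P7"),
]
network = nx.Graph(edges)

# Betweenness centrality considers every shortest path in the graph,
# so no region of the network is ignored.
centrality = nx.betweenness_centrality(network)

# Rank proteins as rough candidate "targets" by how central they are.
for protein, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{protein}: {score:.3f}")
```

In a real pipeline the network would contain tens of thousands of proteins and far richer scoring, which is where quantum and large-scale AI methods come in.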

AI has the potential to improve every aspect of drug discovery – from identifying targets, as Han is doing, to optimising clinical trials, potentially reducing the cost of new medications and ensuring patients benefit faster. But that’s not what really excites Han.

“Take cancer, for example,” he says. “There are many different types, and for some of them we don’t have specific drugs to treat them. Instead, we have to use a drug for a related cancer and give that to the patient, which is not ideal. 

“Quantum-based AI will open up a completely new door to find truly innovative drugs which we’ve never thought of before. That’s where the real impact has to be.”

Biomedical researcher pipetting in a lab

source: cam.ac.uk

Opinion: AI can democratise weather forecasting

Richard Turner
Professor Richard Turner

AI will give us the next leap forward in forecasting the weather, says Richard Turner, and make it available to all countries, not just those with access to high-quality data and computing resources.

From farmers planting crops to emergency responders preparing for natural disasters, the ability to predict the weather is fundamental to societies all across the globe.

The modern approach to forecasting was invented a century ago. Lewis Fry Richardson, a former student of King’s College Cambridge, who was working as an ambulance driver during the First World War, realised that being able to predict the weather could help save lives. This led him to develop the first mathematical approach to forecasting the weather.

Richardson’s method was a breakthrough, but to say that it was time-consuming is an understatement: he calculated it would require 64,000 people working with slide rules to produce a timely forecast for the following day. It was the development of supercomputers in the 1950s that made Richardson’s approach practical.

Since then, weather forecasting methods have become more sophisticated and more accurate, driven by advances in computing and by the increased amount of information we have about the weather from satellites and other instruments. But now, we are poised to make another big leap forward, thanks to AI.

The last few years have seen an AI revolution in weather forecasting and my group has recently taken this to the next level. Working with colleagues at The Alan Turing Institute, Microsoft Research and the European Centre for Medium-Range Weather Forecasts, we’ve developed Aardvark Weather, a fully AI-driven weather prediction system that can deliver accurate forecasts tens of times faster and using thousands of times less computing power than both physics-based forecasting systems and previous AI-based approaches.

We believe that Aardvark could democratise access to accurate forecasts, since it can be run and trained on a regular desktop computer, not the powerful supercomputers that power most of today’s weather forecasting technology. In developing countries where access to high-quality data and computing resources is limited, platforms like Aardvark could be transformational.


AI is a game changer

The need for improved forecasting systems is more crucial than ever. Extreme weather events – from the recent wildfires in Los Angeles to last year’s flash floods in Spain – are becoming more frequent. Predicting other parts of the Earth system is equally important. For example, 1.5 million people die each year in India due to poor air quality, and changes in ice on the sea and land at the poles have huge implications.

AI could help mitigate these risks by delivering timely, hyper-local forecasts, even in regions with limited observational data. These AI systems have the potential to dramatically improve public safety, food security, supply chain management, and energy planning in an increasingly volatile climate.

AI-driven forecasting is also well-placed to play a crucial role in our transition to a net-zero future. If we can better predict fluctuations in supply from wind and solar energy sources, we can optimise energy grids, reducing reliance on fossil fuels and making clean energy more viable on a global scale.

Richardson’s weather forecasting approach relied on numerical models – mathematical representations of the Earth’s atmosphere, land, and oceans that require massive computing power. These models, though incredibly advanced, have limitations: they are expensive, slow to run, time consuming to improve, and often struggle to deliver accurate predictions in areas like the tropics or the poles. The arrival of AI is changing the game entirely.


Achieving its potential

Results from Aardvark and other AI-driven systems have demonstrated that they can perform weather forecasting tasks with excellent speed and accuracy. These models, trained on vast amounts of historical data, can learn patterns and generate forecasts in a fraction of the time that traditional methods require. Through the Aurora project with Microsoft Research, I’ve also shown that the same approaches can transform forecasts of air quality, ocean waves, and hurricanes.
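As a purely illustrative aside – an editorial sketch, not the Aardvark or Aurora architecture – the core idea of data-driven forecasting can be reduced to a few lines: fit a statistical model to historical state transitions, then reuse it to step the atmosphere forward. The data below are synthetic and the linear model is a stand-in for the far richer models used in practice.

```python
# Minimal sketch of the general idea behind data-driven forecasting:
# learn the mapping from today's atmospheric state to tomorrow's directly
# from a historical record, instead of integrating physics equations.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic, autocorrelated record: 1,000 days of a 50-variable state vector
# (think temperature and pressure at a handful of grid points).
state = rng.normal(size=50)
record = []
for _ in range(1000):
    state = 0.9 * state + 0.1 * rng.normal(size=50)
    record.append(state.copy())
record = np.array(record)

inputs, targets = record[:-1], record[1:]        # today -> tomorrow
train, test = slice(0, 900), slice(900, None)

# "Training" replaces running physics models with fitting past transitions;
# prediction is then a single, cheap linear operation.
model = Ridge(alpha=1.0).fit(inputs[train], targets[train])
forecast = model.predict(inputs[test])

rmse = np.sqrt(np.mean((forecast - targets[test]) ** 2))
print(f"One-day-ahead RMSE on held-out days: {rmse:.3f}")
```

Real systems replace the linear model with deep networks trained on decades of global observations, but the train-once, forecast-cheaply pattern is the same.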

Companies like Google DeepMind, Microsoft, and various research institutions – including my team at Cambridge – are achieving results that rival or even surpass conventional numerical models at a fraction of the computational cost.

Of course, this transformation comes with challenges.

Ensuring trust and transparency in AI weather forecasting technologies is paramount. Weather forecasting has long been a domain where public institutions – like the UK Met Office and the European Centre for Medium-Range Weather Forecasts – play a critical role in ensuring reliability and accountability. AI models, though promising, must be rigorously tested and validated to build public confidence. These systems should be implemented alongside existing methods, rather than replacing them outright, and continuous retraining and re-evaluation will likely be needed due to the changing climate.

National weather services and universities like Cambridge must step up to ensure that AI-driven forecasting remains a public good, not a commercial commodity. The rise of AI weather forecasting has opened the door to more commercial involvement in an area previously dominated by public institutions and international centres. While start-ups and big tech companies are making significant strides in AI weather prediction and are a valuable part of the forecasting ecosystem, business interests are not necessarily aligned with societal need. The risk is that critical forecasting capabilities could become privatised, limiting access for those who need them most.

Universities are uniquely positioned to act as a balancing force, driving research that prioritises long-term societal benefit. However, traditional academic structures are often ill-equipped to handle the scale and speed required for AI research. If we are to compete with industry, we must rethink how AI research is conducted – embracing interdisciplinary collaboration, investing in large-scale computational infrastructure, rethinking funding models so that they are faster and more agile, and fostering partnerships that ensure AI development aligns with the public good.

The future of weather forecasting will not be decided solely in the labs of tech giants or the halls of government. It will be shaped by the choices we make now – how we invest in research, how we regulate AI deployment, and how we ensure that life-saving technology remains accessible to all.

Richard Turner is Professor of Machine Learning in the Machine Learning Group of the Department of Engineering, a Research Lead at the Alan Turing Institute, and previously a Visiting Researcher at Microsoft Research. He is also a Bye Fellow of Christ’s College.

source: cam.ac.uk

Opinion: Humans should be at the heart of AI

Anna Korhonen
Professor Anna Korhonen

With the right development and application, AI could become a transformative force for good. What’s missing in current technologies is human insight, says Anna Korhonen.

AI has immense potential to transform human life for the better. But to deliver on this promise it must be equipped with a better understanding of human intelligence, values and needs.

AI could help tackle some of the world’s most pressing challenges – advancing climate science, improving healthcare, making education more accessible, and reducing inequalities.

In the public sector, AI could enhance decision-making, optimise service delivery, and ensure that resources reach the people and places where they are most needed. With the right development and application, it could become a transformative force for good.

But today’s AI technologies struggle to grasp the nuances of human behaviour, social dynamics and the complex realities of our world.

They lack the flexibility and contextual understanding of human intelligence. Their limitations in communication, reasoning and judgment mean that they fall short of supporting us in many critical tasks. Meanwhile, concerns around bias, misinformation, safety and job displacement continue to grow.


Achieving its potential

To unlock AI’s potential for good, we need a fundamental shift in how it is developed.

That starts with designing technologies to work in harmony with people – to be more human-centric. Rather than replacing us, AI should enhance our capabilities, support our intelligence and creativity, and reflect our values and priorities. To truly benefit everyone, it should be designed to be trustworthy, inclusive, and accessible, serving diverse communities worldwide – not just a privileged few.

To enable this, we need to move beyond viewing AI as a purely technical field. Building technologies that genuinely understand and support people requires insights from the diverse range of disciplines that explore the human condition – social, behavioural, cognitive, clinical and environmental sciences, the arts and more. Universities are uniquely positioned to lead this shift by promoting interdisciplinary research and connecting technical fields with human-centred perspectives.

We must also take AI research beyond the lab and into the real world by collaborating across sectors – bringing together academia, industry, policymakers, NGOs, and civil society to understand the needs, ensure technologies are fit for purpose, and test them in real-world settings. These partnerships are crucial to building systems that are robust, scalable, and socially beneficial.

Finally, AI education must evolve. The next generation of AI practitioners needs more than technical expertise – they must also understand the wider social, ethical, environmental, and industrial contexts of their work. At Cambridge, we are launching new MPhil and PhD programmes in Human-Inspired Artificial Intelligence to help meet this need. These programmes, starting in October 2025, will equip students with the interdisciplinary and cross-sector knowledge needed to innovate AI that is not only powerful, but also aligned with human values and needs.

The opportunity is vast – but progress depends on the choices we make today. By rethinking how AI is developed, embedding human insight at every stage, working collaboratively across sectors, and reimagining how we educate the next generation, we can ensure that AI becomes a force for public good – one that helps shape a more just, inclusive and equitable future.

Anna Korhonen is Professor of Natural Language Processing, Director of the Centre for Human-Inspired Artificial Intelligence (CHIA) and Co-Director of the Institute for Technology and Humanity at the University of Cambridge.

source: cam.ac.uk

Opinion: AI belongs in classrooms

Jill Duffy
Jill Duffy

AI in education has transformative potential for students, teachers and schools but only if we harness it in the right way – by keeping people at the heart of the technology, says Jill Duffy.

When you think about AI and education, the first thing that comes to mind is probably students using ChatGPT to write their essays and coursework. But, important as this issue is, the debate about AI in education should go way beyond it.

As head of an exam board (OCR), I am well aware of how serious this issue is. Deciphering whether a piece of work was AI-generated was not part of the job description for educators a decade ago, and I’m sure not many appreciate this new addition to their workload.

ChatGPT writing essays may be the most noticeable phenomenon right now, but it is far from the only way that this technology will transform how we teach and assess young people. Crucially, AI offers opportunities as well as threats. But only if we harness it in the right way – by keeping people at the heart of education.

What does that mean in practice? Let’s look again at the concerns over AI and coursework. As I’ve previously argued, we cannot put generative AI back in its box. Demanding that students never use it in any capacity is obviously not enforceable, and I would also argue is not desirable: the proper use of this technology will be a vital skill in their working lives.

In future, instead of asking students “did you use AI?” teachers will be asking them “how did you use AI?” It’s about accepting where this technology can help students – finessing arguments, helping with research – while protecting the human skills they will still need – fact checking, rewriting, thinking analytically.

The same human-centric approach is needed when it comes to teaching and AI. We can’t afford to ignore the obvious benefits of this technology, but we cannot embrace it blindly at the cost of real, human teaching. At OCR we are looking into various tools that could help teachers who are struggling with ever-increasing workloads. This could be about helping them with lesson planning, or searching through subject specifications or guidance materials.

So, we don’t expect AI to replace the very human skills of intelligently questioning a student to guide their learning, or safeguarding their wellbeing, or passing on a passion for their subject. Instead, AI can take care of some of the time-consuming admin, giving teachers more time to actually teach.

This human-centred approach guides everything we are doing at Cambridge and OCR. We have been developing digital exams for the past few years, for Cambridge’s international exams and for OCR’s popular Computer Science GCSE. What we are not doing here is simply transferring the paper exam onto a screen. We have been testing and monitoring how students perform in these on-screen exams, using mocks and trials, to make sure there is no advantage or disadvantage to a particular method.


Achieving its potential

But keeping humans at the heart of education while getting the most out of new technology will take more than the efforts of one exam board.

As OCR recently warned in its report Striking the Balance, there is a risk that the move towards digital exacerbates existing inequalities in the system. If digital learning can be more effective, what happens to schools that can’t afford the required technology?

A national strategy is required – involving the government, regulators, and other stakeholders – to ensure every school can benefit from the transformative potential of this technology.

Jill Duffy leads OCR and is managing director for UK Education at Cambridge University Press and Assessment.

source: cam.ac.uk

Opinion: Universities play a vital role in the future of AI

Neil Lawrence and Jessica Montgomery
Neil Lawrence and Jessica Montgomery

Universities can bridge the gap between those who develop AI systems and those who will use and be affected by them. We must step up to deliver this role, say Neil Lawrence and Jess Montgomery.

As the government considers its ambitious agenda to drive wider rollout of AI across the public sector in areas that directly affect people’s lives, we need to find different approaches to innovation that avoid failures like the Horizon Post Office scandal.

For almost a decade, public dialogues have been highlighting what people want from AI: technologies that tackle challenges affecting our shared health and wellbeing; tools that strengthen our communities and personal interactions; and systems that support democratic governance. As these conversations continue, they reveal a growing public scepticism about AI’s ability to deliver on these promises.

This scepticism is warranted. Despite impressive technical advances and intense policy activity over the last ten years, a significant gap has emerged between AI’s capabilities in the lab and its ability to deliver meaningful benefits in the real world. This disconnect stems in part from a lack of understanding of real-world challenges.

We’ve seen the impact of this lack of understanding in previous attempts to drive technology adoption. In the UK, both the Horizon Post Office and Lorenzo NHS IT scandals demonstrated how IT projects can fail catastrophically.

These failures share common patterns that we must avoid repeating. Insufficient understanding of local needs led to systems being designed without considering how they would integrate into existing workflows. Lack of effective feedback mechanisms prevented early identification of problems and blocked adaptation to user experiences. Rigid implementation approaches imposed technology without allowing for local variation or iteration based on real-world testing. Together, these factors created systems that burdened rather than benefited their intended users.

As the government considers its ambitious agenda to drive wider rollout of AI across the public sector – in areas that directly affect people’s lives – we need to find different approaches to innovation that avoid these failures.


Achieving its potential

There is an alternative. The UK has strategic advantages in research and human capital that it can leverage to bridge this gap by building AI from the ground up.

Work across Cambridgeshire demonstrates this alternative approach in action. In local government, Greater Cambridge Shared Planning is collaborating with universities to develop AI tools that analyse public consultation responses. By combining planners’ expertise with research capabilities, they’re creating systems that could reduce staff time for analysis from over a year to just two months.

Similar collaborations are emerging in healthcare, where clinicians and researchers are leading the development of AI tools for cancer diagnosis. Their work shows how frontline staff can ensure AI enhances rather than replaces clinical judgment, while improving outcomes for patients.

We’ve already seen the value of this approach during COVID-19, when NHS England East collaborated with researchers to develop AI models that helped hospital leaders make critical decisions about resource allocation. This partnership demonstrated how AI can support operational decisions when developed with those who understand local needs.

This points toward what we call an ‘attention reinvestment cycle’. The key to scaling innovation comes when some of the time that professionals save by using AI is reinvested in sharing knowledge and mentoring colleagues, allowing solutions to spread organically through professional networks. Unlike top-down implementation, this approach builds momentum through peer-to-peer learning, with frontline workers becoming both beneficiaries and champions of technology.

Too often in the past, universities have been distant from the difficulties that society faces. However, universities have access to the research and human capital that are vital for the next wave of AI innovation. Their position as neutral conveners allows the creation of spaces where people working to deploy AI in public services and industry can collaborate with diverse communities of expertise, from engineering to ethics.

This bottom-up, human-centred approach offers more effective and ethical AI implementation. It is a vital component of how government can successfully implement its national AI strategy and deliver on the promise of AI for all citizens.

We must step up to deliver this role. By fostering collaboration between those who develop AI systems and those who will use and be affected by them, universities can ensure that technological progress truly serves the public good.

Jessica Montgomery is Director of ai@cam and Neil Lawrence is DeepMind Professor of Machine Learning and Chair of ai@cam, the University of Cambridge’s flagship mission on artificial intelligence. Leveraging the world-leading research pursued across the University, ai@cam creates connections between disciplines, sectors and communities, with a mission to drive a new wave of AI innovation that serves science, citizens and society.

source: cam.ac.uk

Opinion: AI can transform health and medicine

L to R: Professors Eoin McKinney, Mihaela van der Schaar and Andres Floto

If you walk around Addenbrooke’s Hospital on the Cambridge Biomedical Campus, sooner or later you will come across a poster showing a clinician in his scrubs standing by a CT scanner, smiling out at you.

This is Raj Jena, one of our colleagues and Cambridge’s first – in fact, the UK’s first – Professor of AI in Radiology. One of the reasons Raj has been chosen as a face of Addenbrooke’s is his pioneering use of AI in preparing radiotherapy scans. OSAIRIS, the tool he developed, can automate a complex, but routine task, saving hours of doctors’ time and ensuring patients receive crucial treatment sooner.

It’s just one example of the ways that AI will transform medicine and healthcare – and of how Cambridge is leading the charge.


The impact of AI in medicine will likely be in four main areas:

First, it will ‘turbocharge’ biomedical discovery, helping us to understand how cells work, how diseases arise, and how to identify new drug targets and design new medicines.

Second, it will unlock huge datasets – so-called ‘omics’, such as genomics and proteomics – to help us predict an individual’s disease risk, detect diseases early and develop more targeted treatments.

Third, it will optimise the next generation of clinical trials, allowing us to recruit the most suitable participants and to analyse and interpret outcomes in real time so that we can adapt the trials as they go along.

All of these will lead to the fourth way – transforming the treatments we receive and the healthcare systems that deliver them. It will allow us to personalise therapies, offering the right drug at the right time at the right dose for the right person.


Achieving its potential

None of this, of course, will be straightforward.

While the technical knowhow to develop AI tools has progressed at almost breakneck speed, accessing the data to train these models can present a challenge. We risk being overwhelmed by the ‘three Vs’ of data – its volume, variety and velocity. At present, we’re not using this data at anywhere near its full potential.

To become a world leader in driving AI innovation in healthcare, we will need massive investment from the UK government to enable researchers to access well-curated data sets. A good example of this is UK Biobank, which took a huge amount of foresight, effort and money to set up, but is now used widely to drive innovation by the medical research community and by industry.

Clinical data is, by its very nature, highly sensitive, so it needs to be held securely, and if researchers want to access it, they must go through a strict approvals process. Cambridge University Hospitals NHS Foundation Trust has established the Electronic Patient Record Research and Innovation (ERIN) database, a secure environment created for just this reason, with an audit trail so that it is clear how and where data is being used, and with data anonymised so that patients cannot be identified. It is working with other partners in the East of England to create a regional version of this database.

We need this to happen at a UK-wide level. The UK is fortunate in that it has a single healthcare system, the NHS, accessible to all and free of charge at the point of use. What it lacks is a single computer infrastructure. Ideally, all hospitals in the UK would be on the same system, linked up so that researchers can extract data across the network without having to seek permission from every NHS trust.

Of course, AI tools are only ever going to be as good as the data they are trained on, and we have to be careful not to inadvertently exacerbate the very health inequalities we are trying to solve. Most data collected in medical research is from Western – predominantly Caucasian – populations. An AI tool trained on these data sets may not work as effectively at diagnosing disease in, say, a South Asian population, which is at a comparatively higher risk of diseases such as type 2 diabetes, heart disease and stroke.

There is also a risk that AI tools that work brilliantly in the lab fail when transferred to the NHS. That’s why it’s essential that the people developing these tools work from the outset with the end users – clinicians, healthcare workers and patients, for example – to ensure the devices have the desired benefit. Otherwise, they risk ending up in the ‘boneyard of algorithms’.

Public trust and confidence that AI tools are safe is a fundamental requirement for what we do. Without it, AI’s potential will be lost. However, regulators are struggling to keep up with the pace of change. Clinicians can – and must – play a role in this. This will involve training them to read and appraise algorithms, in much the same way they do with clinical evidence. Giving them a better understanding of how the algorithms are developed, how accuracy and performance are reported and tested, will help them judge whether they work as intended.

Jena’s OSAIRIS tool was developed in tandem with Microsoft Research, but he is an NHS radiologist who understood firsthand what was needed. It was, in a sense, a device developed by the NHS, in the NHS, for the NHS. While this is not always essential, the healthcare provider needs to be involved at an early stage, because otherwise the person developing it risks building it in such a way that it is essentially unusable.


Speaking each other’s language

In 2020, Cambridge established a Centre for AI in Medicine with the ambition of developing ‘novel AI and machine learning technologies to revolutionise biomedical science, medicine and healthcare’.

The Centre was initially set up with funding from AstraZeneca and GSK to support PhD studentships, with each student having as supervisors someone from industry, a data scientist and a ‘domain expert’ (for example, a clinician, biologist or chemist). Another industry partner – Boehringer Ingelheim – has since joined.

We are very fortunate in Cambridge because we have a mixture of world-leading experts in AI and machine learning, discovery biology, and chemistry, as well as scientifically minded clinicians who are keen to engage, and high-performance computing infrastructure, such as the Dawn supercomputer. It puts us in the perfect position to be leaders in the field of AI and medicine.

But these disciplines have different goals and requirements, different ways of working and thinking. It’s our role at the Centre to bring them together and help them learn to speak each other’s language. We are forging the road ahead, and it is hugely exciting.

If we get things right, the possibilities for AI to transform health and medicine are endless. It can be of massive public benefit. But more than that, it has to be.

Professor Andres Floto and Professor Mihaela van der Schaar are Co-Directors of the Cambridge Centre for AI in Medicine. Professor Eoin McKinney is a Faculty Member.

source: cam.ac.uk

Opinion: AI can help us heal the planet

Professor Anil Madhavapeddy

We need to act fast to mitigate the impacts of climate change, and to protect and restore biodiversity. There’s incredible potential in using AI to augment our work. It enables us to do things much more quickly – it’s like giving humans global knowledge superpowers!

It can turbocharge human capabilities, simulate complicated systems and search through vast amounts of data. It could help us make rapid progress in reversing the damage we’ve done to the planet with well-targeted interventions, while continuing to supply human needs.

Of course, this comes with risks of runaway systems causing harm, so everything we do must include a ‘human in the loop’ to guide what’s going on. AI systems have no capability for nuanced judgement.

Humans are generating vast amounts of information about our natural world. The imaging data alone spans every scale, from satellite and drone images of countries and landscapes, to land-based photographs of species and habitats, to microscopic observations of life. Alongside the visuals, conservation and climate scientists and practitioners are publishing an ever-increasing amount of written information on their ideas, experiments and real-world trials.

Imagine having access to all of this, plus razor-sharp climate models, available at your fingertips – with answers about any real or imagined situation available in seconds.

The Holy Grail is to combine all this observational data with all knowledge-based data from the whole of humanity and generate evidence-driven insights to accelerate our collective healing of the planet.

AI algorithms, searching and analysing the data, could help empower decision-makers to be confident that they’re making the best choices.

Ultimately, we should be able to create AI ‘Co-Pilots’ for policy-makers, to help them make decisions about all sorts of things in the best interests of our planet – whether a new development in Cambridge is a good idea, for example. AI could quickly create a referenced, in-depth report on anything a policy-maker wants to know, and forecast what will happen as a result of any specific decision.


Achieving its potential

There are currently three barriers to achieving this promising vision: access to enough hardware, energy and data.

Data is fuel for AI – but it’s been an enormous challenge getting hold of enough of it – particularly accessing published journal papers. The government wants to create a National Data Library, which is a great idea, because it would allow us to access huge amounts of existing knowledge and run it through AI algorithms while preserving privacy. Right now, the data is scattered and difficult for researchers and policymakers to access securely.

We also don’t have enough hardware. We need more GPUs – graphics processing units – which cost around £40,000 each, and hundreds of thousands of hours of GPU time to unlock the scale required for modern learning algorithms.

And on energy, the fact that AI uses huge amounts of it is a big concern, but there have been recent research advances in the core AI approaches to make our energy expenditure much more efficient. Simulating the planet is also what’s called a ‘root node problem’ in that it is the beginning of a continuously ‘branching tree’ of other computational possibilities that will unlock ways to improve human lives.

If the barriers can be overcome, then the potential for AI to help us address the climate and biodiversity crises is huge. Through collaborative efforts across departments, Cambridge is harnessing the power of AI to work alongside some of the world’s brightest minds. There has never been a greater opportunity to develop solutions for our planet’s future – and to help rebalance the relationship between humans and nature across the world.

Anil Madhavapeddy is Professor of Planetary Computing in the Department of Computer Science and Technology.

source: cam.ac.uk

Opinion: We must balance the risks and benefits of AI

Professor Michael Barrett

The potential of AI to transform people’s lives in areas ranging from healthcare to better customer service is enormous. But as the technology advances, we must adopt policies to make sure the risks don’t overwhelm and stifle those benefits.

Importantly, we need to be on alert for algorithmic bias that could perpetuate inequality and marginalisation of communities around the world.

Algorithmic bias occurs when systems – often based on machine learning or AI – deliver biased outcomes or decisions because the data they have been given is incomplete, imbalanced or not fully representative.
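To make the mechanism concrete, here is a minimal, hypothetical sketch using entirely synthetic data (no real demographic or patient information). A model trained on a sample dominated by one group can look accurate overall while performing little better than chance for an under-represented group whose signal looks different.

```python
# Minimal sketch of how an imbalanced training set can produce biased outcomes.
# Groups, features and labels here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, informative_feature):
    """Synthetic records with two features; only one feature carries the
    signal, and which one it is differs between groups."""
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

# Group A (well represented) is predictable from feature 0;
# group B (under-represented) is predictable from feature 1.
Xa, ya = make_group(2000, informative_feature=0)
Xb, yb = make_group(100, informative_feature=1)

# A single model is trained on the pooled, imbalanced data.
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# On fresh samples, accuracy for the under-represented group is typically
# close to chance, because the model has mostly learned group A's signal.
for name, group in {"A": make_group(1000, 0), "B": make_group(1000, 1)}.items():
    X, y = group
    print(f"Accuracy for group {name}: {model.score(X, y):.2f}")
```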

I and colleagues here in Cambridge and at Warwick Business School have proposed a new way of thinking about the issue – we call this a ‘relational risk perspective’. This approach looks at not just how AI is being used now, but how it may be used in the future and across different geographies, avoiding what we call ‘the dark side of AI’. The goal is to safeguard the benefits of AI for everyone, while minimising the harm.

We look at the workplace as one example. AI is already having a huge impact on jobs, affecting both routine and creative tasks, and affecting activities that we’ve thought of as uniquely human – like creating art or writing film scripts.

As businesses use the technology more, and perhaps become over-dependent on it, we are at risk of undermining professional expertise and critical thinking, leaving workers de-motivated and expected to defer to machine-generated decisions.

This will impact not just tasks but also the social fabric of the workplace, by influencing how workers relate to each other and to organisations. If AI is used in recruitment, a lack of representation in datasets can reinforce inequalities when used to make decisions about hiring or promotions.

We also explore how this billion-dollar industry is often underpinned by largely ‘invisible’ workers in the Global South who clean data and refine algorithms for a user-group predominantly in the Global North. This ‘data colonialism’ not only reflects global inequalities but also reinforces marginalisation: the people whose labour enables AI to thrive are the same people who are largely excluded from the benefits of that technology.

Healthcare data is in particular danger from such data-driven bias, so we need to ensure that the health-related information used to train AI tools, including large language models, reflects a diverse population. Basing health policy on data from selected and perhaps more privileged communities can lead to a vicious cycle in which disparity is more deeply entrenched.


Achieving its potential

I believe that we can counter these threats, but time is of the essence as AI quickly becomes embedded into society. We should remember that generative AI is still an emerging technology, and note that it is progressing faster than the ethical and regulatory landscape can adapt.

Our relational risk perspective does not present AI as inherently good or bad. Rather, AI is seen as having potential for benefit and harm depending on how it is developed and experienced across different social contexts. We also recognise that the risks are not static, as they evolve with the changing relationships between technology, its users and broader societal structures.

Policymakers and technologists should anticipate, rather than react to, the ways in which AI can entrench or challenge existing inequities​. They should also consider that some countries may develop AI maturity more quickly than others.

Finally, let’s draw on stakeholders far and wide in setting AI risk policy. A multidisciplinary approach will help avoid bias, while at the same time demonstrating to the public that AI policy really does reflect varied and diverse interests and communities.

Michael Barrett is Professor of Information Systems and Innovation Studies, Vice-Dean for Strategy and University Engagement at Cambridge Judge Business School, and a Fellow of Hughes Hall.

source: cam.ac.uk

Play ‘humanises’ paediatric care and should be key feature of a child-friendly NHS – report

Children’s hospital ward
Children’s hospital ward
Credit: Sturti, via Getty Images

The Cambridge report argues that play should be a recognised component of children’s healthcare in the Government’s forthcoming 10-year plan for the NHS.

Hospital-based play opens up a far more complete understanding of what it means for a child to be healthy or well
Dr Kelsey Graber

Play should be a core feature of children’s healthcare in forthcoming plans for the future of the NHS, according to a new report which argues that play “humanises” the experiences of child patients.

The report, by University of Cambridge academics for the charity Starlight, calls for play, games and playful approaches to be integrated into a ‘holistic’ model of children’s healthcare – one that acknowledges the emotional and psychological dimensions of good health, alongside its physical aspects.

Both internationally and in the UK, health systems have, in recent decades, increasingly promoted play in paediatric healthcare. There is a growing understanding that making healthcare more child-friendly can reduce stress and improve younger patients’ experiences.

Despite this recognition, play often remains undervalued and inconsistently integrated across healthcare contexts. For the first time, the report compiles evidence from over 120 studies to make the case for its more systematic incorporation.

In the case of the UK, the authors argue that the Government’s forthcoming 10-year health plan for the NHS offers an important opportunity to embed play within a more holistic vision for childhood health.

The report was produced by academics at the Centre for Play in Education, Development and Learning (PEDAL) at the Faculty of Education, University of Cambridge. Starlight, which commissioned the review, is a national charity advocating to reduce trauma through play in children’s healthcare.

Dr Kelsey Graber, the report’s lead author, said: “Play and child-centred activities have a unique capacity to support the emotional and mental aspects of children’s healthcare experiences, whether in hospital or during a routine treatment at the GP. It won’t directly change the course of an illness, but it can humanise the experience by reducing stress and anxiety and enhancing understanding and comfort. Hospital-based play opens up a far more complete understanding of what it means for a child to be healthy or well.”

Adrian Voce, Head of Policy and Public Affairs at Starlight, said: “With the government promising to create the healthiest generation of children ever as part of its new long term health plan, this compelling evidence of the benefits of play to children’s healthcare is very timely. We encourage ministers and NHS leaders to make health play teams an integral part of paediatric care.”

The report synthesised evidence from 127 studies in 29 countries. Most were published after 2020, reflecting intensified interest in children’s healthcare interventions following the COVID-19 outbreak.

Some studies focused on medically-relevant play. For example, hospital staff sometimes use role-play, or games and toys like Playmobil Hospital to familiarise children with medical procedures and ease anxiety. Other studies focused on non-medical play: the use of activities like social games, video games, arts and crafts, music therapy and storytelling to help make patients more comfortable. Some hospitals and surgeries even provide “distraction kits” to help children relax.

In its survey of all these studies, the report finds strong evidence that play benefits children’s psychological health and wellbeing. Play is also sometimes associated with positive physical health; one study, for example, found that children who played an online game about dentistry had lower heart rates during a subsequent dental procedure, probably because they felt more prepared.

The authors identify five main ways in which play enhances children’s healthcare based on the available body of evidence:

Reducing stress and discomfort during medical procedures. Play is sometimes associated with physiological markers of reduced distress, such as lower heart rates and blood pressure. Therapeutic play can also ease pain and anxiety.

Helping children express and manage emotions. Play can help to alleviate fear, anxiety, boredom and loneliness in healthcare settings. It also provides an outlet for emotional expression among all age groups.

Fostering dignity and agency. In an environment where children often feel powerless and lack personal choice, play provides a sense of control which supports mental and emotional wellbeing.

Building connection and belonging. Play can strengthen children’s relationships with other patients, family members and healthcare staff, easing their experiences in a potentially overwhelming environment. This may be particularly important for children in longer term or palliative care.

Preserving a sense of childhood. Play helps children feel like children, and not just patients, the report suggests, by providing “essential moments of happiness, respite and emotional release”.

While play is widely beneficial, the report stresses that its impact will vary from child to child. This variability highlights a need, the authors note, for informed, child-centred approaches to play in healthcare settings. Unfortunately, play expertise in these settings may often be lacking: only 13% of the studies reviewed covered the work of health play specialists, and most of the reported activities were directed and defined by adults, rather than by children themselves.

The report also highlights a major gap in research on the use of play in mental healthcare. Just three of the 127 studies focused on this area, even though 86% emphasised play’s psychological benefits. The report calls for greater professional and academic attention to the use of play in mental health support, particularly in light of escalating rates of mental health challenges among children and young people. More work is also needed, it adds, to understand the benefits of play-based activities in healthcare for infants and adolescents, both of which were under-represented in the research literature.

Embedding play more fully in healthcare as part of wider Government reforms, the authors suggest, could reduce healthcare-related trauma and improve long-term outcomes for children. “It is not just healthcare professionals, but also policy leaders who need to recognise the value of play,” Graber said. “That recognition is foundational to ensuring that children’s developmental, psychological, and emotional health needs are met, alongside their physical health.”

The report, Playing with children’s health?, will be published on the Starlight website on 31 March: https://www.starlight.org.uk/




source: cam.ac.uk

Students from across the country get a taste of studying at Cambridge at the Cambridge Festival

Students make antibody keychains during a workshop with the MRC Toxicology Unit

Over 500 KS2 and KS3 students from as far away as Warrington got the chance to experience studying at the University of Cambridge with a selection of lectures and workshops held as part of the Cambridge Festival. 

We were delighted to welcome pupils from Warrington’s Lymm High School, Ipswich High School, The Charter School in North Dulwich, Rickmansworth School and Sutton Valence School in Maidstone, as well as schools closer to home such as St Peter’s Huntingdon, Fenstanton Primary School, Barton Primary School, Impington Village College and St Andrew’s School in Soham.

Held over two days (25 and 26 March 2025) at the Cambridge Sports Centre, the programme saw students go on a great alien hunt with Dr Matt Bothwell from the Institute of Astronomy, step back in time to explore Must Farm with the Department of Archaeology and the Cambridge Archaeological Unit, and learn to disagree well with Dr Elizabeth Phillips from The Woolf Institute.

Schools had a choice of workshops from a range of departments, including how to think like an engineer and making sustainable food with biotechnology with researchers from the Department of Chemical Engineering and Biotechnology, as well as the chance to get hands-on experience in the world of materials science and explore how the properties of materials can be influenced by temperature at the Department of Materials Science and Metallurgy.

The Department of Veterinary Medicine offered students the opportunity to find out what a career in veterinary medicine might look like, with workshops on animal X-rays and on how different professionals work together to treat animals in a veterinary hospital, as well as the chance to meet the department’s horses and cows and learn how veterinarians diagnose and treat these large animals.

Students also had the opportunity to learn about antibodies and our immune system with the MRC Toxicology Unit. The students learnt about the incredible job antibodies do defending our bodies against harmful invaders like bacteria and viruses. 

Alongside this, a maths trail, developed by Cambridgeshire County Council, guided students around the West Cambridge site whilst testing their maths skills with a number of problems to solve. 

Now in their third year, the Cambridge Festival schools days offer students the opportunity to experience studying at Cambridge through a series of curriculum-linked talks and hands-on workshops.

The Cambridge Festival runs from 19 March to 4 April and is a mixture of online, on-demand and in-person events covering all aspects of the world-leading research happening at Cambridge. The public have the chance to meet researchers and thought-leaders working in the pioneering fields that will impact us all.




source: cam.ac.uk

Farewell, Gaia: spacecraft operations come to an end

Artist’s impression of our galaxy, the Milky Way, based on data from ESA’s Gaia space telescope.
Artist’s impression of the Milky Way
Credit: ESA/Gaia/DPAC, Stefan Payne-Wardenaar

The European Space Agency’s Gaia spacecraft has been powered down, after more than a decade spent gathering data that are now being used to unravel the secrets of our home galaxy.

On 27 March 2025, Gaia’s control team at ESA’s European Space Operations Centre switched off the spacecraft’s subsystems and sent it into a ‘retirement orbit’ around the Sun.

Though the spacecraft’s operations are now over, the scientific exploitation of Gaia’s data has just begun.

Launched in 2013, Gaia has transformed our understanding of the cosmos by mapping the positions, distances, motions, and properties of nearly two billion stars and other celestial objects. It has provided the largest, most precise multi-dimensional map of our galaxy ever created, revealing its structure and evolution in unprecedented detail.

The mission uncovered evidence of past galactic mergers, identified new star clusters, contributed to the discovery of exoplanets and black holes, mapped millions of quasars and galaxies, and tracked hundreds of thousands of asteroids and comets. The mission has also enabled the creation of the best visualisation of how our galaxy might look to an outside observer.

“The data from the Gaia satellite has transformed, and is transforming, our understanding of the Milky Way: how it formed, how it has evolved and how it will evolve,” said Dr Nicholas Walton from Cambridge’s Institute of Astronomy, lead of the Gaia UK project team. “Gaia has been in continuous operation for over 10 years, faultless, without interruption, reflecting the quality of the engineering, with significant elements of Gaia designed and built in the UK. But now it is time for its retirement. Gaia has finished its observations of the night sky, but the analysis of the mission data continues. In 2026, the next Gaia Data Release 4 will further underpin new discovery, unravelling the beauty and mystery of the cosmos.”

Gaia far exceeded its planned lifetime of five years, and its fuel reserves are dwindling. The Gaia team considered how best to dispose of the spacecraft in line with ESA’s efforts to responsibly dispose of its missions.

They wanted to find a way to prevent Gaia from drifting back towards its former home near the scientifically valuable second Lagrange point (L2) of the Sun-Earth system and minimise any potential interference with other missions in the region.

“Switching off a spacecraft at the end of its mission sounds like a simple enough job,” said Gaia Spacecraft Operator Tiago Nogueira. “But spacecraft really don’t want to be switched off.

“We had to design a decommissioning strategy that involved systematically picking apart and disabling the layers of redundancy that have safeguarded Gaia for so long, because we don’t want it to reactivate in the future and begin transmitting again if its solar panels find sunlight.”

On 27 March, the Gaia control team ran through this series of passivation activities. One final use of Gaia’s thrusters moved the spacecraft away from L2 and into a stable retirement orbit around the Sun that will minimise the chance that it comes within 10 million kilometres of Earth for at least the next century.

The team then deactivated and switched off the spacecraft’s instruments and subsystems one by one, before deliberately corrupting its onboard software. The communication subsystem and the central computer were the last to be deactivated.

Gaia’s final transmission to ESOC mission control marked the conclusion of an intentional and carefully orchestrated farewell to a spacecraft that has tirelessly mapped the sky for over a decade.

Though Gaia itself has now gone silent, its contributions to astronomy will continue to shape research for decades. Its vast and expanding data archive remains a treasure trove for scientists, refining knowledge of galactic archaeology, stellar evolution, exoplanets and much more.

“No other mission has had such an impact over such a broad range of astrophysics. It continues to be the source of over 2,000 peer-reviewed papers per year, more than any other space mission,” said Gaia UK team member Dr Dafydd Wyn Evans, also from the Institute of Astronomy. “It is sad that its observing days are over, but work is continuing in Cambridge, and across Europe, to process and calibrate the final data so that Gaia will still be making its impact felt for many years in the future.”

A workhorse of galactic exploration, Gaia has charted the maps that future explorers will rely on to make new discoveries. The star trackers on ESA’s Euclid spacecraft use Gaia data to precisely orient the spacecraft. ESA’s upcoming Plato mission will explore exoplanets around stars characterised by Gaia and may follow up on new exoplanetary systems discovered by Gaia.

The Gaia control team also used the spacecraft’s final weeks to run through a series of technology tests. The team tested Gaia’s micro propulsion system under different challenging conditions to examine how it had aged over more than ten years in the harsh environment of space. The results may benefit the development of future ESA missions relying on similar propulsion systems, such as the LISA mission.

The Gaia spacecraft holds a deep emotional significance for those who worked on it. As part of its decommissioning, the names of around 1,500 team members who contributed to its mission were used to overwrite some of the back-up software stored in Gaia’s onboard memory.

Personal farewell messages were also written into the spacecraft’s memory, ensuring that Gaia will forever carry a piece of its team with it as it drifts through space.

As Gaia Mission Manager Uwe Lammers put it: “We will never forget Gaia, and Gaia will never forget us.”

The Cambridge Gaia DPAC team is responsible for the analysis and generation of the Gaia photometric and spectro-photometric data products, and it also generated the Gaia photometric science alert stream for the duration of the satellite’s in-flight operations.

Adapted from a media release by the European Space Agency. 




source: cam.ac.uk