All posts by Admin

University academics ranked among best in the world

Professor Kay-Tee Khaw, who has been named the top female scientist in Europe by Research.com

source: www.cam.ac.uk

Twelve academics from the University of Cambridge have been ranked among the top female scientists in the world – with one claiming the top spot for Europe.

The Research.com Best Female Scientists in the World 2023 rankings are based on an analysis of more than 166,000 profiles of scientists across the globe. Position in the ranking is determined by a scientist’s total H-index – a measure reflecting both the number of publications within a given area of research and how often they are cited – as well as awards and recognitions. Only the top 1,000 scholars with the highest H-index are featured in the ranking.
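
As context for the metric: a scholar has an H-index of h if h of their papers have each received at least h citations. A minimal sketch of the citation component in Python (illustrative only – Research.com’s exact treatment of awards and recognitions is not published):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
```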

Kay-Tee Khaw, an Emeritus Professor in Gerontology and a Gonville & Caius Fellow, is placed fifth worldwide and tops the list for Europe. Professor Khaw, who was named a CBE in 2003 for Services to Medicine, published a study on how modest differences in lifestyle are associated with better life expectancy which informed the UK Government’s ‘Small changes, big difference’ campaign in 2006.

Also high in the rankings is Barbara Sahakian, Professor of Clinical Neuropsychology in the Department of Psychiatry and a Fellow of Clare Hall, who is placed sixth in the UK. Professor Sahakian’s recent research includes a study showing the benefits to mental health and cognitive performance of reading for pleasure at an early age, and seven healthy lifestyle factors that reduce the risk of depression.

Joining Professor Khaw and Professor Sahakian in the top 10 in the UK is Carol Brayne, Professor of Public Health Medicine in the Department of Psychiatry and Fellow of Darwin. Awarded a CBE in 2017 for Services to Public Health Medicine, Professor Brayne has pioneered the study of dementia in populations.

Nine other University of Cambridge scientists also make the rankings:

Professor Gillian Murphy (Department of Oncology), an international leader in the field of metalloproteinases, who has defined their roles in arthritis and cancer.

Professor Claudia Langenberg (MRC Epidemiology Unit), a public health specialist combining her expertise with research focused on molecular epidemiology.

Professor Nita Forouhi (MRC Epidemiology Unit), a physician scientist, MRC Investigator and Programme Leader in Nutritional Epidemiology, whose research on the link between diet, nutrition and chronic diseases like type 2 diabetes has informed health policy.

Professor Alison Dunning (Centre for Cancer Genetic Epidemiology, Department of Oncology), a genetic epidemiologist working on the risk of breast and other hormonal cancers.

Professor Karalyn Patterson (MRC Cognition and Brain Sciences Unit, Cambridge Centre for Frontotemporal Dementia and Related Disorders), a Fellow of Darwin College, who specialises in what we can learn about the organisation and neural representation of language and memory from the study of neurological patients suffering from the onset of brain disease or damage in adulthood.

Professor Dame Clare Grey (Department of Chemistry), a materials chemist whose work has paved the way for less expensive, longer-lasting batteries and helped improve storage systems for renewable energy. She is Chief Scientist and co-founder of Nyobolt, a company that is developing ultrafast-charging batteries for electric vehicles.

Professor Sharon Peacock (Department of Medicine), who has built her scientific expertise around pathogen genomics, antimicrobial resistance, and a range of tropical diseases, was the founding director of the COVID-19 Genomics UK Consortium which informed the COVID-19 pandemic response.

Professor Maria Grazia Spillantini (Department of Clinical Neurosciences and Fellow of Clare Hall) has been researching the cause of dementia for many years and was the first to identify the specific protein deposit found in Parkinson’s disease.

Professor Dame Theresa Marteau (Department of Public Health and Primary Care and Honorary Fellow of Christ’s College), a behavioural scientist, focuses on the development and evaluation of interventions to change behaviour (principally food, tobacco and alcohol consumption) to improve population health and reduce health inequalities, with a particular focus on targeting non-conscious processes.

Speaking on publication of this year’s rankings, Imed Bouchrika, Co-Founder of Research.com and Chief Data Scientist, said: “The purpose of this online ranking of the world’s leading female scientists is to recognize the efforts of every female scientist who has made the courageous decision to pursue opportunities despite barriers.

“Their unwavering determination in the face of difficulties serves as a source of motivation for all young women and girls who pursue careers in science.”



The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

Academics to explore legacy of Genghis Khan

Large statue of Genghis Khan that sits in the central square of Ulaanbaatar in Mongolia

source: www.cam.ac.uk

Researchers at the University of Cambridge have signed an agreement with the Mongolian government which will see them explore the legacy of the legendary figure Genghis Khan – or Chinggis Khaan as he is known in Mongolia.

Under the recently signed Memorandum of Understanding, Cambridge’s Mongolia & Inner Asia Studies Unit (MIASU) will work together with the Mongolian government to promote and further academic links, including the possibility of a programme for visiting research fellowships and travel grants to promote the study of Chinggis Khaan.

The agreement was signed during a visit to the UK by Mongolian Culture Minister Nomin Chinbat, a former media CEO who brought the TV show Mongolia’s Got Talent to the Asian country. The visit adds to a growing awareness of Mongolian culture in the UK, with historic art and precious artefacts from the early years of the nomadic Mongol Empire set to be displayed at the Royal Academy of Arts in London, and the opening of The Mongol Khan theatre production at the London Coliseum.

Professor David Sneath, Director of the Mongolia & Inner Asia Studies Unit at the University of Cambridge, said:

 “This is all about exploring the historical reality behind the myth… We are interested not just in the man himself, Chinggis Khaan – although of course he is of great historical interest – but in his legacy. We are trying to encourage a deeper study of Chinggis Khan and his impact.”

Minister Chinbat said: “Of course Chinggis Khaan is primarily known for his warriorship, but he was also a great diplomat, innovator and ruler.  How many people know he invented the postal service, the first passports? That he showed great religious tolerance, and he himself was a peacemaker?

“That’s why we look forward to working with the University of Cambridge to foster the next generation of Mongolian academics and strengthen understanding of the Mongol Empire’s impact across the world.”

MIASU’s Professor Uradyn E Bulag added: “Because in Mongolia we didn’t have a written tradition as strong as our neighbours, to some extent our history – and the history of Chinggis Khan – was written by others… This will be a chance to hopefully reset the balance.”

Using machine learning to monitor driver ‘workload’ could help improve road safety

Head-up display of traffic information and weather as seen by the driver

source: www.cam.ac.uk

Researchers have developed an adaptable algorithm that could improve road safety by predicting when drivers are able to safely interact with in-vehicle systems or receive messages, such as traffic alerts, incoming calls or driving directions.

“There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver” – Bashar Ahmad

The researchers, from the University of Cambridge working in partnership with Jaguar Land Rover (JLR), used a combination of on-road experiments, machine learning and Bayesian filtering techniques to reliably and continuously measure driver ‘workload’. Driving in an unfamiliar area may translate to a high workload, while a daily commute may mean a lower workload.

The resulting algorithm is highly adaptable and can respond in near real-time to changes in the driver’s behaviour and status, road conditions, road type, or driver characteristics.

This information could then be incorporated into in-vehicle systems such as infotainment and navigation, displays, advanced driver assistance systems (ADAS) and others. Any driver-vehicle interaction can then be customised to prioritise safety and enhance the user experience, delivering adaptive human-machine interactions. For example, drivers would only be alerted at times of low workload, so that they can keep their full concentration on the road in more stressful driving scenarios. The results are reported in the journal IEEE Transactions on Intelligent Vehicles.

“More and more data is made available to drivers all the time. However, with increasing levels of driver demand, this can be a major risk factor for road safety,” said co-first author Dr Bashar Ahmad from Cambridge’s Department of Engineering. “There is a lot of information that a vehicle can make available to the driver, but it’s not safe or practical to do so unless you know the status of the driver.”

A driver’s status – or workload – can change frequently. Driving in a new area, in heavy traffic or poor road conditions, for example, is usually more demanding than a daily commute.

“If you’re in a demanding driving situation, that would be a bad time for a message to pop up on a screen or a heads-up display,” said Ahmad. “The issue for car manufacturers is how to measure how occupied the driver is, and instigate interactions or issue messages or prompts only when the driver is happy to receive them.”

There are algorithms for measuring the levels of driver demand using eye gaze trackers and biometric data from heart rate monitors, but the Cambridge researchers wanted to develop an approach that could do the same thing using information that’s available in any car, specifically driving performance signals such as steering, acceleration and braking data. It should also be able to consume and fuse different unsynchronised data streams that have different update rates, including from biometric sensors if available.

To measure driver workload, the researchers first developed a modified version of the Peripheral Detection Task to collect, in an automated way, subjective workload information during driving. For the experiment, a phone showing a route on a navigation app was mounted to the car’s central air vent, next to a small LED ring light that would blink at regular intervals. Participants all followed the same route through a mix of rural, urban and main roads. They were asked to push a finger-worn button whenever the LED light lit up in red and the driver perceived they were in a low workload scenario.

Video analysis of the experiment, paired with the data from the buttons, allowed the researchers to identify high workload situations, such as busy junctions or a vehicle in front or behind the driver behaving unusually.

The on-road data was then used to develop and validate a supervised machine learning framework to profile drivers based on the average workload they experience, and an adaptable Bayesian filtering approach for sequentially estimating, in real-time, the driver’s instantaneous workload, using several driving performance signals including steering and braking. The framework combines macro and micro measures of workload where the former is the driver’s average workload profile and the latter is the instantaneous one.
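
The published framework is more elaborate than can be shown here, but the core idea – sequentially updating a workload estimate as noisy driving signals arrive – can be sketched as a simple one-dimensional Bayesian filter. Everything below (the feature weighting, noise levels and the mapping from signals to a workload ‘observation’) is an illustrative assumption rather than the paper’s actual parameters:

```python
import numpy as np

def estimate_workload(features, prior_mean, prior_var=1.0,
                      process_var=0.05, obs_var=0.5, weights=None):
    """Sequentially estimate instantaneous ('micro') workload.

    features:   (T, D) array of per-timestep driving signals, e.g.
                steering variability and braking intensity.
    prior_mean: the driver's average ('macro') workload profile,
                used to initialise the filter.
    """
    if weights is None:
        weights = np.ones(features.shape[1]) / features.shape[1]
    mean, var = float(prior_mean), prior_var
    estimates = []
    for x in features:
        var += process_var            # predict: workload drifts slowly over time
        z = float(weights @ x)        # pseudo-observation built from driving signals
        gain = var / (var + obs_var)  # Kalman gain blends prediction and observation
        mean += gain * (z - mean)
        var *= 1 - gain
        estimates.append(mean)
    return np.array(estimates)

# Illustrative use: 100 timesteps of two normalised driving signals
rng = np.random.default_rng(0)
signals = rng.normal(0.5, 0.2, size=(100, 2))
workload = estimate_workload(signals, prior_mean=0.4)
```

Because the filter carries only a running mean and variance, it can adapt on the fly to a new driver, road type or condition – the adaptability the researchers describe.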

“For most machine learning applications like this, you would have to train it on a particular driver, but we’ve been able to adapt the models on the go using simple Bayesian filtering techniques,” said Ahmad. “It can easily adapt to different road types and conditions, or different drivers using the same car.”

The research was conducted in collaboration with JLR, which carried out the experimental design and data collection. It was part of a project sponsored by JLR under the CAPE agreement with the University of Cambridge.

“This research is vital in understanding the impact of our design from a user perspective, so that we can continually improve safety and curate exceptional driving experiences for our clients,” said JLR’s Senior Technical Specialist of Human Machine Interface Dr Lee Skrypchuk. “These findings will help define how we use intelligent scheduling within our vehicles to ensure drivers receive the right notifications at the most appropriate time, allowing for seamless and effortless journeys.”

The research at Cambridge was carried out by a team of researchers from the Signal Processing and Communications Laboratory (SigProC), Department of Engineering, under the supervision of Professor Simon Godsill. It was led by Dr Bashar Ahmad and included Nermin Caber (PhD student at the time) and Dr Jiaming Liang, who all worked on the project while based at Cambridge’s Department of Engineering.

Reference:
Nermin Caber et al. ‘Driver Profiling and Bayesian Workload Estimation Using Naturalistic Peripheral Detection Study Data.’ IEEE Transactions on Intelligent Vehicles (2023). DOI: 10.1109/TIV.2023.3313419

Researchers redesign future mRNA therapeutics to prevent potentially harmful immune responses

Illustration of mRNA strand

source: www.cam.ac.uk

Researchers have discovered that misreading of therapeutic mRNAs by the cell’s decoding machinery can cause an unintended immune response in the body. They have identified the sequence within the mRNA that causes this to occur and found a way to prevent ‘off-target’ immune responses to enable the safer design of future mRNA therapeutics.

“As billions of pounds flow into the next set of mRNA treatments, it is essential that these therapeutics are designed to be free from unintended side-effects” – Anne Willis

mRNA – or ‘messenger ribonucleic acid’ – is the genetic material that tells cells in the body how to make a specific protein. Researchers from the Medical Research Council (MRC) Toxicology Unit have discovered that the cellular machinery that ‘reads’ mRNAs ‘slips’ when confronted with repeats of a chemical modification commonly found in mRNA therapeutics. In addition to the target protein, these slips lead to the production of ‘off-target’ proteins triggering an unintended immune response.

mRNA vaccines are considered game changing. They have been used to control the COVID-19 pandemic and are already proposed to treat various cancers, cardiovascular, respiratory, and immunological diseases in the future.

This revolutionary class of therapeutics was made possible in part through the work of biochemist Katalin Karikó and immunologist Drew Weissman. They demonstrated that by adding chemical modifications to the bases – the building blocks of mRNA – the synthetic mRNAs could bypass some of our body’s immune defences, allowing a therapeutic to enter the cell and exert its effects. This discovery led to their award of the Nobel Prize in Physiology or Medicine in 2023.

The latest developments, led by biochemist Professor Anne Willis and immunologist Dr James Thaventhiran from the MRC Toxicology Unit at the University of Cambridge, build upon previous advances to ensure the prevention of any safety issues linked with future mRNA-based therapeutics. Their report is published today in the journal Nature.

The researchers identified that bases with a chemical modification called N1-methylpseudouridine – which are currently contained in mRNA therapies – are responsible for the ‘slips’ along the mRNA sequence.

In collaboration with researchers at the Universities of Kent, Oxford and Liverpool, the MRC Toxicology Unit team tested for evidence of the production of ‘off-target’ proteins in people who received the mRNA Pfizer vaccine against COVID-19. They found an unintended immune response occurred in one third of the 21 patients in the study who were vaccinated – but with no ill-effects, in keeping with the extensive safety data available on these COVID-19 vaccines.

The team then redesigned mRNA sequences to avoid these ‘off-target’ effects, by correcting the error-prone genetic sequences in the synthetic mRNA. This produced the intended protein. Such design modifications can easily be applied to future mRNA vaccines to produce their desired effects while preventing hazardous and unintended immune responses.

“Research has shown beyond doubt that mRNA vaccination against COVID-19 is safe. Billions of doses of the Moderna and Pfizer mRNA vaccines have been safely delivered, saving lives worldwide,” said Dr James Thaventhiran from the MRC Toxicology Unit, joint senior author of the report.

He added: “We need to ensure that mRNA vaccines of the future are as reliable. Our demonstration of ‘slip-resistant’ mRNAs is a vital contribution to future safety of this medicine platform.”

“These new therapeutics hold much promise for the treatment of a wide range of diseases. As billions of pounds flow into the next set of mRNA treatments, it is essential that these therapeutics are designed to be free from unintended side-effects,” said Professor Anne Willis, Director of the MRC Toxicology Unit and joint senior author of the report.

Thaventhiran, who is also a practising clinician at Addenbrooke’s hospital, said: “We can remove the error-prone code from the mRNA in vaccines so the body will make the proteins we want for an immune response without inadvertently making other proteins as well. The safety concern for future mRNA medicines is that mis-directed immunity has huge potential to be harmful, so off-target immune responses should always be avoided.”

Willis added: “Our work presents both a concern and a solution for this new type of medicine, and result from crucial collaborations between researchers from different disciplines and backgrounds. These findings can be implemented rapidly to prevent any future safety problems arising and ensure that new mRNA therapies are as safe and effective as the COVID-19 vaccines.”

Using synthetic mRNA for therapeutic purposes is attractive because it is cheap to produce, so can address substantial health inequalities across the globe by making these medicines more accessible. Moreover, synthetic mRNAs can be changed rapidly – for example to create a new COVID-19 variant vaccine.

In the Moderna and Pfizer COVID-19 vaccines, synthetic mRNA is used to enable the body to make the spike protein from SARS-CoV-2. The body recognises the viral proteins generated by mRNA vaccines as foreign and generates protective immunity. This persists, and if the body is later exposed to the virus its immune cells can neutralise it before it can cause serious illness.

The cell’s decoding machinery is called a ribosome. It ‘reads’ the genetic code of both natural and synthetic mRNAs to produce proteins. The precise positioning of the ribosome on the mRNA is essential to make the right proteins because the ribosome ‘reads’ the mRNA sequence three bases at a time. Those three bases determine what amino acid is added next into the protein chain. Therefore, even a tiny shift in the ribosome along the mRNA will massively distort the code and the resulting protein.

When the ribosome is confronted with a string of these modified bases, called N1-methylpseudouridine, in the mRNA, it slips around 10% of the time, causing the mRNA to be misread and unintended proteins to be produced – enough to trigger an immune response. Removing these runs of N1-methylpseudouridine from the mRNAs prevents ‘off-target’ protein production.
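
To see why even a single-base slip matters, note that a +1 shift changes every downstream codon. A toy illustration in Python (the mRNA sequence is invented, but the codon assignments follow the standard genetic code):

```python
# Toy illustration of +1 ribosomal frameshifting.
CODON_TABLE = {
    'AUG': 'Met', 'AAA': 'Lys', 'GCU': 'Ala', 'UUU': 'Phe', 'GAA': 'Glu',
    'UGA': 'Stop', 'AAG': 'Lys', 'CUU': 'Leu', 'UUG': 'Leu',
}

def translate(mrna, offset=0):
    """Read codons three bases at a time, starting at `offset`."""
    peptide = []
    for i in range(offset, len(mrna) - 2, 3):
        peptide.append(CODON_TABLE.get(mrna[i:i + 3], '?'))
    return '-'.join(peptide)

mrna = 'AUGAAAGCUUUUGAA'
print(translate(mrna, offset=0))  # Met-Lys-Ala-Phe-Glu: the intended protein
print(translate(mrna, offset=1))  # Stop-Lys-Leu-Leu: every downstream codon differs
```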

This research was funded by the Medical Research Council and the Wellcome LEAP R3 programme, and supported by the NIHR Cambridge BRC.

Reference: Mulroney, T E et al. ‘N1-methylpseudouridylation of mRNA causes +1 ribosomal frameshifting.’ Nature; Dec 2023; DOI: 10.1038/s41586-023-06800-3

Diamonds and rust help unveil ‘impossible’ quasi-particles

Magnetic monopoles in hematite

source: www.cam.ac.uk

Researchers have discovered magnetic monopoles – isolated magnetic charges – in a material closely related to rust, a result that could be used to power greener and faster computing technologies.

“If monopoles did exist, and we were able to isolate them, it would be like finding a missing puzzle piece that was assumed to be lost” – Mete Atatüre

Researchers led by the University of Cambridge used a technique known as diamond quantum sensing to observe swirling textures and faint magnetic signals on the surface of hematite, a type of iron oxide.

The researchers observed that magnetic monopoles in hematite emerge through the collective behaviour of many spins (the angular momentum of a particle). These monopoles glide across the swirling textures on the surface of the hematite, like tiny hockey pucks of magnetic charge. This is the first time that naturally occurring emergent monopoles have been observed experimentally.

The research has also shown the direct connection between the previously hidden swirling textures and the magnetic charges of materials like hematite, as if there is a secret code linking them together. The results, which could be useful in enabling next-generation logic and memory applications, are reported in the journal Nature Materials.

According to the equations of James Clerk Maxwell, a giant of Cambridge physics, magnetic objects, whether a fridge magnet or the Earth itself, must always exist as a pair of magnetic poles that cannot be isolated.

“The magnets we use every day have two poles: north and south,” said Professor Mete Atatüre, who led the research. “In the 19th century, it was hypothesised that monopoles could exist. But in one of his foundational equations for the study of electromagnetism, James Clerk Maxwell disagreed.”
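
The equation in question is Gauss’s law for magnetism, which states that the magnetic field has zero divergence – field lines never begin or end, so every north pole comes paired with a south pole:

```latex
\nabla \cdot \mathbf{B} = 0
```

An isolated magnetic charge would act as a source of divergence, which is exactly what this equation forbids for fundamental particles – and why the monopoles reported here are emergent, collective objects rather than elementary ones.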

Atatüre is Head of Cambridge’s Cavendish Laboratory, a position once held by Maxwell himself. “If monopoles did exist, and we were able to isolate them, it would be like finding a missing puzzle piece that was assumed to be lost,” he said.

About 15 years ago, scientists suggested how monopoles could exist in a magnetic material. This theoretical result relied on the extreme separation of north and south poles so that locally each pole appeared isolated in an exotic material called spin ice.

However, there is an alternative strategy for finding monopoles, involving the concept of emergence. The idea of emergence is that the combination of many physical entities can give rise to properties that are either more than or different from the sum of their parts.

Working with colleagues from the University of Oxford and the National University of Singapore, the Cambridge researchers used emergence to uncover monopoles spread over two-dimensional space, gliding across the swirling textures on the surface of a magnetic material.

The swirling topological textures are found in two main types of materials: ferromagnets and antiferromagnets. Of the two, antiferromagnets are more stable than ferromagnets, but they are more difficult to study, as they don’t have a strong magnetic signature.

To study the behaviour of antiferromagnets, Atatüre and his colleagues use an imaging technique known as diamond quantum magnetometry. This technique uses a single spin – the inherent angular momentum of an electron – in a diamond needle to precisely measure the magnetic field on the surface of a material, without affecting its behaviour.

For the current study, the researchers used the technique to look at hematite, an antiferromagnetic iron oxide material. To their surprise, they found hidden patterns of magnetic charges within hematite, including monopoles, dipoles and quadrupoles.

“Monopoles had been predicted theoretically, but this is the first time we’ve actually seen a two-dimensional monopole in a naturally occurring magnet,” said co-author Professor Paolo Radaelli, from the University of Oxford.

“These monopoles are a collective state of many spins that twirl around a singularity rather than a single fixed particle, so they emerge through many-body interactions. The result is a tiny, localised stable particle with diverging magnetic field coming out of it,” said co-first author Dr Hariom Jani, from the University of Oxford.

“We’ve shown how diamond quantum magnetometry could be used to unravel the mysterious behaviour of magnetism in two-dimensional quantum materials, which could open up new fields of study in this area,” said co-first author Dr Anthony Tan, from the Cavendish Laboratory. “The challenge has always been direct imaging of these textures in antiferromagnets due to their weaker magnetic pull, but now we’re able to do so, with a nice combination of diamonds and rust.”

The study not only highlights the potential of diamond quantum magnetometry but also underscores its capacity to uncover and investigate hidden magnetic phenomena in quantum materials. If controlled, these swirling textures dressed in magnetic charges could power super-fast and energy-efficient computer memory logic.

The research was supported in part by the Royal Society, the Sir Henry Royce Institute, the European Union, and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

Reference:
K C Tan, Hariom Jani, Michael Högen et al. ‘Revealing Emergent Magnetic Charge in an Antiferromagnet with Diamond Quantum Magnetometry.’ Nature Materials (2023). DOI: 10.1038/s41563-023-01737-4.

Cambridge researchers recognised as Future Leaders by UKRI

Alecia-Jane Twigger, one of the Future Leaders

source: www.cam.ac.uk

Four researchers are among the UK’s “most promising research leaders” who will benefit from £101 million from UKRI to tackle major global issues and commercialise their innovations.

“The fellows announced today illustrate how this scheme empowers talented researchers and innovators to build the diverse and connected research and innovation system we need to shorten the distance between discovery and prosperity across the UK” – Ottoline Leyser, UKRI Chief Executive

Future Leaders Fellowships are awarded by UK Research and Innovation (UKRI) to support universities and businesses in developing their most talented early career researchers and innovators, and to attract new people to their organisations, including from overseas.

The 75 “most promising research leaders” recognised today by UKRI will benefit from £101 million to tackle major global issues and to commercialise their innovations in the UK.

UKRI Chief Executive, Professor Dame Ottoline Leyser, said: “UKRI’s Future Leaders Fellowships provide researchers and innovators with long-term support and training, giving them the freedom to explore adventurous new ideas, and to build dynamic careers that break down the boundaries between sectors and disciplines.

“The fellows announced today illustrate how this scheme empowers talented researchers and innovators to build the diverse and connected research and innovation system we need to shorten the distance between discovery and prosperity across the UK.”

The four Cambridge researchers are:

Dr Alecia-Jane Twigger (Department of Pharmacology) (pictured)

Breastfeeding has been highlighted by the World Health Organization (WHO) as “one of the most effective ways to ensure child health and survival”. A major priority of the WHO is to increase the global rate of exclusive breastfeeding for the first six months of life to at least 50% by 2025. However, many mothers worry about low milk production – a major driver for mothers switching to formula feeding. With funding provided by the Future Leaders Fellowship, Dr Twigger will establish state-of-the-art models of lactation, with the aim of developing and trialling treatments to support mothers with low milk production, in partnership with breastfeeding advocates and clinical stakeholders.

Dr Amy Orben (MRC Cognition and Brain Sciences Unit and Fellow of St John’s College)

Dr Amy Orben will pinpoint how social media use might be linked to mental health risk in teenagers, a time when we are especially susceptible to developing mental health conditions. She will use a range of innovative techniques to study technological designs, such as the quantification of social feedback through ‘like’ counts, that could be problematic and therefore a target for future regulation. As a UKRI Future Leader Fellow, Dr Orben will also collaborate flexibly with youth, policymakers and charities to swiftly address pressing questions about social media and technology, helping to safeguard young people.

Dr Anna Moore (Department of Psychiatry)

Seventy percent of children suffering mental health problems are unable to access services, and those who can are waiting longer than ever for help. Working with children, families and the Cambridge Children’s Hospital project, Dr Anna Moore is developing easy-to-use digital tools to revolutionise mental health treatment for the young, by helping clinicians diagnose conditions much earlier. The system, called Timely, will use AI to analyse patient data, joining the dots to spot the early signs of mental health conditions. The tool will be designed to reduce health inequality, improve service efficiency and ensure data use is ethical and publicly acceptable.

Dr Niamh Gallagher (Faculty of History and Fellow of St Catharine’s College)

Dr Gallagher will lead ground-breaking historical research into one of the greatest geopolitical transformations of the 20th century, the disappearance of the British Empire, by investigating how Ireland, the Irish and a series of so-called ‘Irish Questions’ influenced the multifarious ‘ends’ of the Empire, from 1886 to today. With partners spanning education, public policy and the media, this research will produce a series of innovative outputs and shareable recommendations that facilitate pathways to cohesion in post-conflict Northern Ireland and enhance British–Irish relations in the aftermath of Brexit.

Why reading nursery rhymes and singing to babies may help them to learn language

Babies wearing 'head cap' to measure electrical brain activity

source: www.cam.ac.uk

Researchers find that babies don’t begin to process phonetic information reliably until seven months old, which they say is too late to form the foundation of language.

“This is the first evidence we have of how brain activity relates to phonetic information changes over time in response to continuous speech” – Professor Giovanni Di Liberto

Parents should speak to their babies using sing-song speech, like nursery rhymes, as soon as possible, say researchers. That’s because babies learn languages from rhythmic information, not phonetic information, in their first months.

Phonetic information – the smallest sound elements of speech, typically represented by the alphabet – is considered by many linguists to be the foundation of language. Infants are thought to learn these small sound elements and add them together to make words. But a new study suggests that phonetic information is learnt too late and slowly for this to be the case.

Instead, rhythmic speech helps babies learn language by emphasising the boundaries of individual words and is effective even in the first months of life.

Researchers from the University of Cambridge and Trinity College Dublin investigated babies’ ability to process phonetic information during their first year.

Their study, published today in the journal Nature Communications, found that phonetic information wasn’t successfully encoded until seven months old, and was still sparse at 11 months old when babies began to say their first words.

“Our research shows that the individual sounds of speech are not processed reliably until around seven months, even though most infants can recognise familiar words like ‘bottle’ by this point,” said Cambridge neuroscientist, Professor Usha Goswami. “From then individual speech sounds are still added in very slowly – too slowly to form the basis of language.”

The researchers recorded patterns of electrical brain activity in 50 infants at four, seven and eleven months old as they watched a video of a primary school teacher singing 18 nursery rhymes to an infant. Low frequency bands of brainwaves were fed through a special algorithm, which produced a ‘read out’ of the phonological information that was being encoded.  
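
The release does not name the algorithm, but decoding studies of continuous speech commonly fit linear models that map brain activity onto stimulus features, so the ‘read out’ step might look roughly like the following sketch (all shapes and data here are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
eeg = rng.normal(size=(5000, 32))        # invented low-frequency EEG: time x channels
phonetic = rng.normal(size=(5000, 19))   # invented phonetic features of the speech

# Fit a linear 'read out': predict each phonetic feature from brain activity.
model = Ridge(alpha=1.0).fit(eeg, phonetic)
decoded = model.predict(eeg)

# Per-feature correlation between decoded and true values indexes how
# reliably each phonetic feature is encoded in the recorded activity.
scores = [np.corrcoef(decoded[:, i], phonetic[:, i])[0, 1]
          for i in range(phonetic.shape[1])]
```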

The researchers found that phonetic encoding in babies emerged gradually over the first year of life, beginning with labial sounds (e.g. b for “baby”) and nasal sounds (e.g. m for “mummy”), with the ‘read out’ progressively looking more like that of adults.

First author, Professor Giovanni Di Liberto, a cognitive and computer scientist at Trinity College Dublin and a researcher at the ADAPT Centre, said: “This is the first evidence we have of how brain activity relates to phonetic information changes over time in response to continuous speech.”

Previous studies had relied instead on comparing responses to nonsense syllables, like “bif” and “bof”.

The current study forms part of the BabyRhythm project led by Goswami, which is investigating how language is learnt and how this is related to dyslexia and developmental language disorder. 

Goswami believes that it is rhythmic information – the stress or emphasis on different syllables of words and the rise and fall of tone – that is the key to language learning. A sister study, also part of the BabyRhythm project, has shown that rhythmic speech information was processed by babies at two months old – and individual differences predicted later language outcomes. The experiment was also conducted with adults who showed an identical ‘read out’ of rhythm and syllables to babies.

“We believe that speech rhythm information is the hidden glue underpinning the development of a well-functioning language system,” said Goswami. “Infants can use rhythmic information like a scaffold or skeleton to add phonetic information on to. For example, they might learn that the rhythm pattern of English words is typically strong-weak, as in ‘daddy’ or ‘mummy’, with the stress on the first syllable. They can use this rhythm pattern to guess where one word ends and another begins when listening to natural speech.”

“Parents should talk and sing to their babies as much as possible or use infant directed speech like nursery rhymes because it will make a difference to language outcome,” she added.

Goswami explained that rhythm is a universal aspect of every language all over the world. “In all language that babies are exposed to there is a strong beat structure with a strong syllable twice a second. We’re biologically programmed to emphasise this when speaking to babies.”

Goswami says there is a long history of trying to explain dyslexia and developmental language disorder in terms of phonetic problems, but that the evidence doesn’t add up. She believes that individual differences in children’s language originate with rhythm.

The research was funded by the European Research Council under the European Union’s Horizon 2020 research and innovation programme and by Science Foundation Ireland. 

Reference:
Di Liberto et al. ‘Emergence of the cortical encoding of phonetic features in the first year of life.’ Nature Communications (2023). DOI: 10.1038/s41467-023-43490-x

Newborn babies at risk from bacteria commonly carried by mothers

Pregnant woman holding her stomach

source: www.cam.ac.uk

One in 200 newborns is admitted to a neonatal unit with sepsis caused by a bacterium commonly carried by their mothers – much greater than the previous estimate, say Cambridge researchers. The team has developed an ultra-sensitive test capable of better detecting the bacterium, which is missed in the vast majority of cases.

“In the UK, we’ve traditionally not screened mothers for GBS, but our findings profoundly change the risk/benefit balance of universal screening” – Francesca Gaccioli

Streptococcus agalactiae (known as Group B Streptococcus, or GBS) is present in the genital tract in around one in five women. Previous research by the team at the University of Cambridge and Rosie Hospital, Cambridge University Hospitals NHS Foundation Trust, identified GBS in the placenta of around 5% of women prior to the onset of labour. Although it can be treated with antibiotics, unless screened, women will not know they are carriers.

GBS can cause sepsis, a life-threatening reaction to an infection, in the newborn. Worldwide, GBS accounts for around 50,000 stillbirths and as many as 100,000 infant deaths per year.

In a study published today in Nature Microbiology, the team looked at the link between the presence of GBS in the placenta and the risk of admission of the baby to a neonatal unit. The researchers re-analysed data available from their previous study of 436 infants born at term, confirming their findings in a second cohort of 925 pregnancies.

From their analysis, the researchers estimate that placental GBS was associated with a two- to three-fold increased risk of neonatal unit admission, with one in 200 babies admitted with sepsis associated with GBS – almost 10 times the previous estimate. The clinical assessment of these babies using the current diagnostic testing identified GBS in less than one in five of these cases.

In the USA, all pregnant women are routinely screened for GBS and treated with antibiotics if found to be positive. In the UK, women who test positive for GBS are also treated with antibiotics – however, only a minority of pregnant women are tested for GBS, as the approach in the UK is to obtain samples only from women experiencing complications, or with other risk factors.

There are a number of reasons why women in the UK are not screened, including the fact that detecting GBS in the mother is not always straightforward and only a small minority of babies exposed to the bacteria were thought to become ill. A randomised controlled trial of screening for GBS for treatment with antibiotics is currently underway in the UK.

Dr Francesca Gaccioli from the Department of Obstetrics & Gynaecology at the University of Cambridge said: “In the UK, we’ve traditionally not screened mothers for GBS, but our findings – that significantly more newborns are admitted to the neonatal unit as a result of GBS-related sepsis than was previously thought – profoundly changes the risk/benefit balance of universal screening.”

To improve detection, the researchers have developed an ultrasensitive PCR test, which amplifies tiny amounts of DNA or RNA from a suspected sample to check for the presence of GBS. They have filed a patent with Cambridge Enterprise, the University of Cambridge’s technology transfer arm, for this test.

Professor Gordon Smith, Head of Obstetrics & Gynaecology at the University of Cambridge, said: “Using this new test, we now realise that the clinically detected cases of GBS may represent the tip of the iceberg of complications arising from this infection. We hope that the ultra-sensitive test developed by our team might lead to viable point-of-care testing to inform immediate neonatal care.”

When the researchers analysed serum from the babies’ umbilical cords, they found that over a third showed greatly increased levels of several cytokines – protein messengers released by the immune system. This suggests that a so-called ‘cytokine storm’ – an extreme immune response that causes collateral damage to the host – was behind the increased risk of disease.

The research was funded by the Medical Research Council and supported by the National Institute for Health and Care Research (NIHR) Cambridge Biomedical Research Centre.

Reference
Gaccioli, F, Stephens, K & Sovio, U et al. Placental Streptococcus agalactiae DNA is associated with neonatal unit admission and fetal pro-inflammatory cytokines in term infants. Nature Microbiology; 29 Nov 2023; DOI: 10.1038/s41564-023-01528-2

Early-stage stem cell therapy trial shows promise for treating progressive MS

source: www.cam.ac.uk

An international team has shown that the injection of a type of stem cell into the brains of patients living with progressive multiple sclerosis (MS) is safe, well tolerated and has a long-lasting effect that appears to protect the brain from further damage.

“I am cautiously very excited about our findings, which are a step towards developing a cell therapy for treating MS” – Stefano Pluchino

The study, led by scientists at the University of Cambridge, University of Milan Bicocca and Hospital Casa Sollievo della Sofferenza (Italy), is a step towards developing an advanced cell therapy treatment for progressive MS.

Over 2 million people live with MS worldwide, and while treatments exist that can reduce the severity and frequency of relapses, two-thirds of MS patients still transition into a debilitating secondary progressive phase of disease within 25-30 years of diagnosis, where disability grows steadily worse.

In MS, the body’s own immune system attacks and damages myelin, the protective sheath around nerve fibres, causing disruption to messages sent around the brain and spinal cord.

Key immune cells involved in this process are macrophages (literally ‘big eaters’), which ordinarily attack and rid the body of unwanted intruders. A particular type of macrophage known as a microglial cell is found throughout the brain and spinal cord. In progressive forms of MS, they attack the central nervous system (CNS), causing chronic inflammation and damage to nerve cells.

Recent advances have raised expectations that stem cell therapies might help ameliorate this damage. These involve the transplantation of stem cells, the body’s ‘master cells’, which can be programmed to develop into almost any type of cell within the body.

Previous work from the Cambridge team has shown in mice that skin cells re-programmed into brain stem cells, transplanted into the central nervous system, can help reduce inflammation and may be able to help repair damage caused by MS.

Now, in research published in the journal Cell Stem Cell, scientists have completed a first-in-human, early-stage clinical trial that involved injecting neural stem cells directly into the brains of 15 patients with secondary MS recruited from two hospitals in Italy. The trial was conducted by teams at the University of Cambridge, the University of Milan Bicocca, the Hospitals Casa Sollievo della Sofferenza and S. Maria Terni (Italy), Ente Ospedaliero Cantonale (Lugano, Switzerland) and the University of Colorado (USA).

The stem cells were derived from cells taken from brain tissue from a single, miscarried foetal donor. The Italian team had previously shown that it would be possible to produce a virtually limitless supply of these stem cells from a single donor – and in future it may be possible to derive these cells directly from the patient – helping to overcome practical problems associated with the use of allogeneic foetal tissue.

The team followed the patients over 12 months, during which time they observed no treatment-related deaths or serious adverse events. While some side-effects were observed, all were either temporary or reversible.

All the patients showed high levels of disability at the start of the trial – most required a wheelchair, for example – but during the 12-month follow-up period none showed any increase in disability or a worsening of symptoms. None of the patients reported symptoms that suggested a relapse, nor did their cognitive function worsen significantly during the study. Overall, say the researchers, this points to a substantial stability of the disease, without signs of progression, though the high levels of disability at the start of the trial make this difficult to confirm.

The researchers assessed a subgroup of patients for changes in the volume of brain tissue associated with disease progression. They found that the larger the dose of injected stem cells, the smaller the reduction in this brain volume over time. They speculate that this may be because the stem cell transplant dampened inflammation.

The team also looked for signs that the stem cells were having a neuroprotective effect – that is, protecting nerve cells from further damage. Their previous work showed how tweaking metabolism – how the body produces energy – can in turn reprogram microglia from ‘bad’ to ‘good’. In this new study, they looked at how the brain’s metabolism changes after the treatment. They measured changes in the fluid around the brain and in the blood over time and found certain signs that are linked to how the brain processes fatty acids. These signs were connected to how well the treatment works and how the disease develops. The higher the dose of stem cells, the greater the levels of fatty acids, which also persisted over the 12-month period.

Professor Stefano Pluchino from the University of Cambridge, who co-led the study, said: “We desperately need to develop new treatments for secondary progressive MS, and I am cautiously very excited about our findings, which are a step towards developing a cell therapy for treating MS.

“We recognise that our study has limitations – it was only a small study and there may have been confounding effects from the immunosuppressant drugs, for example – but the fact that our treatment was safe and that its effects lasted over the 12 months of the trial means that we can proceed to the next stage of clinical trials.”

Co-leader Professor Angelo Vescovi from the University of Milano-Bicocca said: “It has taken nearly three decades to translate the discovery of brain stem cells into this experimental therapeutic treatment. This study will add to the increasing excitement in this field and pave the way to broader efficacy studies, soon to come.”

Caitlin Astbury, Research Communications Manager at the MS Society, says: “This is a really exciting study which builds on previous research funded by us. These results show that special stem cells injected into the brain were safe and well-tolerated by people with secondary progressive MS. They also suggest this treatment approach might even stabilise disability progression. We’ve known for some time that this method has the potential to help protect the brain from progression in MS.

“This was a very small, early-stage study and we need further clinical trials to find out if this treatment has a beneficial effect on the condition. But this is an encouraging step towards a new way of treating some people with MS.” 

Reference
Leone, MA, Gelati, M & Profico, DC et al. Intracerebroventricular Transplantation of Foetal Allogeneic Neural Stem Cells in Patients with Secondary Progressive Multiple Sclerosis (hNSC-SPMS): a phase I dose escalation clinical trial. Cell Stem Cell; 27 Nov 2023; DOI: 10.1016/j.stem.2023.11.001

AI system self-organises to develop features of brains of complex organisms

Graphic representing an artificially intelligent brain

source: www.cam.ac.uk

Cambridge scientists have shown that placing physical constraints on an artificially-intelligent system – in much the same way that the human brain has to develop and operate within physical and biological constraints – allows it to develop features of the brains of complex organisms in order to solve tasks.

“Not only is the brain great at solving complex problems, it does so while using very little energy” – Jascha Achterberg

As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time optimising the network for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions.

Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge, said: “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”

Co-lead author Dr Danyal Akarca, also from the MRC CBSU, added: “This stems from a broad principle, which is that biological systems commonly evolve to make the most of what energetic resources they have available to them. The solutions they come to are often very elegant and reflect the trade-offs between various forces imposed on them.”

In a study published today in Nature Machine Intelligence, Achterberg, Akarca and colleagues created an artificial system intended to model a very simplified version of the brain and applied physical constraints. They found that their system went on to develop certain key characteristics and tactics similar to those found in human brains.

Instead of real neurons, the system used computational nodes. Neurons and nodes are similar in function, in that each takes an input, transforms it, and produces an output, and a single node or neuron might connect to multiple others, all inputting information to be computed.

In their system, however, the researchers applied a ‘physical’ constraint on the system. Each node was given a specific location in a virtual space, and the further away two nodes were, the more difficult it was for them to communicate. This is similar to how neurons in the human brain are organised.

The researchers gave the system a simple task to complete – in this case a simplified version of a maze navigation task typically given to animals such as rats and macaques when studying the brain, where it has to combine multiple pieces of information to decide on the shortest route to get to the end point.

One of the reasons the team chose this particular task is because to complete it, the system needs to maintain a number of elements – start location, end location and intermediate steps – and once it has learned to do the task reliably, it is possible to observe, at different moments in a trial, which nodes are important. For example, one particular cluster of nodes may encode the finish locations, while others encode the available routes, and it is possible to track which nodes are active at different stages of the task.

Initially, the system does not know how to complete the task and makes mistakes. But when it is given feedback it gradually learns to get better at the task. It learns by changing the strength of the connections between its nodes, similar to how the strength of connections between brain cells changes as we learn. The system then repeats the task over and over again, until eventually it learns to perform it correctly.

With their system, however, the physical constraint meant that the further away two nodes were, the more difficult it was to build a connection between the two nodes in response to the feedback. In the human brain, connections that span a large physical distance are expensive to form and maintain.
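
One way to realise such a constraint in code – in the spirit of the study’s spatially embedded networks, though the published model is more sophisticated – is to give every node a coordinate and add a distance-weighted wiring cost to the training loss. The network size, coordinates and penalty weight below are illustrative assumptions:

```python
import torch

n_nodes = 64
coords = torch.rand(n_nodes, 3)     # each node gets a position in 3D space
dist = torch.cdist(coords, coords)  # pairwise Euclidean distances between nodes

# Recurrent weights: the strength of the connection between every pair of nodes
W = torch.randn(n_nodes, n_nodes, requires_grad=True)

def wiring_cost(W, dist, strength=1e-3):
    # Penalise each connection in proportion to both its strength and its
    # length, so the optimiser keeps wiring short unless a long-range link
    # genuinely pays off for the task.
    return strength * (W.abs() * dist).sum()

# Inside a training loop the penalty is simply added to the task objective:
# loss = task_loss(outputs, targets) + wiring_cost(W, dist)
```

Under this pressure, gradient descent tends to prune costly long-range connections, pushing the network towards the hub-based, compact wiring the researchers observed.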

When the system was asked to perform the task under these constraints, it used some of the same tricks used by real human brains to solve the task. For example, to get around the constraints, the artificial systems started to develop hubs – highly connected nodes that act as conduits for passing information across the network.

More surprising, however, was that the response profiles of individual nodes themselves began to change: in other words, rather than having a system where each node codes for one particular property of the maze task, like the goal location or the next choice, nodes developed a flexible coding scheme. This means that at different moments in time nodes might be firing for a mix of the properties of the maze. For instance, the same node might be able to encode multiple locations of a maze, rather than needing specialised nodes for encoding specific locations. This is another feature seen in the brains of complex organisms.

Co-author Professor Duncan Astle, from Cambridge’s Department of Psychiatry, said: “This simple constraint – it’s harder to wire nodes that are far apart – forces artificial systems to produce some quite complicated characteristics. Interestingly, they are characteristics shared by biological systems like the human brain. I think that tells us something fundamental about why our brains are organised the way they are.”

Understanding the human brain

The team are hopeful that their AI system could begin to shed light on how these constraints shape differences between people’s brains, and contribute to the difficulties seen in those who experience cognitive or mental health problems.

Co-author Professor John Duncan from the MRC CBSU said: “These artificial brains give us a way to understand the rich and bewildering data we see when the activity of real neurons is recorded in real brains.”

Achterberg added: “Artificial ‘brains’ allow us to ask questions that it would be impossible to look at in an actual biological system. We can train the system to perform tasks and then play around experimentally with the constraints we impose, to see if it begins to look more like the brains of particular individuals.”

Implications for designing future AI systems

The findings are likely to be of interest to the AI community, too, where they could allow for the development of more efficient systems, particularly in situations where there are likely to be physical constraints.

Dr Akarca said: “AI researchers are constantly trying to work out how to make complex, neural systems that can encode and perform in a flexible way that is efficient. To achieve this, we think that neurobiology will give us a lot of inspiration. For example, the overall wiring cost of the system we’ve created is much lower than you would find in a typical AI system.”

Many modern AI solutions involve using architectures that only superficially resemble a brain. The researchers say their work shows that the type of problem the AI is solving will influence which architecture is the most powerful to use.

Achterberg said: “If you want to build an artificially-intelligent system that solves similar problems to humans, then ultimately the system will end up looking much closer to an actual brain than systems running on large compute clusters that specialise in very different tasks to those carried out by humans. The architecture and structure we see in our artificial ‘brain’ is there because it is beneficial for handling the specific brain-like challenges it faces.”

This means that robots that have to process a large amount of constantly changing information with finite energetic resources could benefit from having brain structures not dissimilar to ours.

Achterberg added: “Brains of robots that are deployed in the real physical world are probably going to look more like our brains because they might face the same challenges as us. They need to constantly process new information coming in through their sensors while controlling their bodies to move through space towards a goal. Many systems will need to run all their computations with a limited supply of electric energy and so, to balance these energetic constraints with the amount of information they need to process, they will probably need a brain structure similar to ours.”

The research was funded by the Medical Research Council, Gates Cambridge, the James S McDonnell Foundation, Templeton World Charity Foundation and Google DeepMind.

Reference
Achterberg, J & Akarca, D et al. Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings. Nature Machine Intelligence; 20 Nov 2023; DOI: 10.1038/s42256-023-00748-9




Innovative aquaculture system turns waste wood into nutritious seafood

Naked Clams in wooden growth panel

source: www.cam.ac.uk

Researchers hoping to rebrand a marine pest as a nutritious food have developed the world’s first system of farming shipworms, which they have renamed ‘Naked Clams’.


These long, white saltwater clams are the world’s fastest-growing bivalve and can reach 30cm long in just six months. They do this by burrowing into waste wood and converting it into highly-nutritious protein.

The researchers found that the levels of Vitamin B12 in the Naked Clams were higher than in most other bivalves – and almost twice the amount found in blue mussels.

And with the addition of an algae-based feed to the system, the Naked Clams can be fortified with omega-3 polyunsaturated fatty acids – nutrients essential for human health.

Shipworms have traditionally been viewed as a pest because they bore through any wood immersed in seawater, including ships, piers and docks.

The researchers developed a fully-enclosed aquaculture system that can be completely controlled, eliminating the water quality and food safety concerns often associated with mussel and oyster farming.

And the modular design means it can be used in urban settings, far from the sea.

“Naked Clams taste like oysters, they’re highly nutritious and they can be produced with a really low impact on the environment,” said Dr David Willer, Henslow Research Fellow at the University of Cambridge’s Department of Zoology and first author of the report.

He added: “Naked Clam aquaculture has never been attempted before. We’re growing them using wood that would otherwise go to landfill or be recycled, to produce food that’s high in protein and essential nutrients like Vitamin B12.”

Scientifically named Teredinids, these creatures have no shell, but are classed as bivalve shellfish and related to oysters and mussels.

Because the Naked Clams don’t put energy into growing shells, they grow much faster than mussels and oysters, which can take two years to reach a harvestable size.

The report is published today in the journal Sustainable Agriculture.

Wild shipworms are eaten in the Philippines – either raw, or battered and fried like calamari. But for British consumers, the researchers think Naked Clams will be more popular as a ‘white meat’ substitute in processed foods like fish fingers and fishcakes.

“We urgently need alternative food sources that provide the micronutrient-rich profile of meat and fish but without the environmental cost, and our system offers a sustainable solution,” said Dr Reuben Shipway at the University of Plymouth’s School of Biological & Marine Sciences, senior author of the report.

He added: “Switching from eating beef burgers to Naked Clam nuggets may well become a fantastic way to reduce your carbon footprint.”

The research is a collaboration between the Universities of Cambridge and Plymouth, and has attracted funding from sources including The Fishmongers’ Company, British Ecological Society, Cambridge Philosophical Society, Seale-Hayne Trust, and BBSRC.

The team is now trialling different types of waste wood and algal feed in their system to optimise the growth, taste and nutritional profile of the Naked Clams – and is working with Cambridge Enterprise to scale-up and commercialise the system.

Reference

Willer, D.F. et al. ‘Naked Clams to open a new sector in sustainable nutritious food production.’ Sustainable Agriculture; Nov 2023; DOI: 10.1038/s44264-023-00004-y





Read this next

Lab-grown ‘small blood vessels’ point to potential treatment for major cause of stroke and vascular dementia

Disease mural cells

source: www.cam.ac.uk

Cambridge scientists have grown small blood vessel-like models in the lab and used them to show how damage to the scaffolding that supports these vessels can cause them to leak, leading to conditions such as vascular dementia and stroke.


The study, published today in Stem Cell Reports, also identifies a drug target to ‘plug’ these leaks and prevent so-called small vessel disease in the brain.

Cerebral small vessel disease (SVD) is a leading cause of age-related cognitive decline and contributes to almost half (45%) of dementia cases worldwide. It is also responsible for one in five (20%) ischemic strokes, the most common type of stroke, where a blood clot prevents the flow of blood and oxygen to the brain.

The majority of cases of SVD are associated with conditions such as hypertension and type 2 diabetes, and tend to affect people in middle age. However, there are some rare, inherited forms of the disease that can strike people at a younger age, often in their mid-thirties. Both the inherited and ‘spontaneous’ forms of the disease share similar characteristics.

Scientists at the Victor Phillip Dahdaleh Heart and Lung Research Institute, University of Cambridge, used cells taken from skin biopsies of patients with one of these rare forms of SVD, which is caused by a mutation in a gene called COL4.

By reprogramming the skin cells, they were able to create induced pluripotent stem cells – cells that have the capacity to develop into almost any type of cell within the body. The team then used these stem cells to generate cells of the brain blood vessels and create a model of the disease that mimics the defects seen in patients’ brain vessels.

Dr Alessandra Granata from the Department of Clinical Neurosciences at Cambridge, who led the study, said: “Despite the number of people affected worldwide by small vessel disease, we have little in the way of treatments because we don’t fully understand what damages the blood vessels and causes the disease. Most of what we know about the underlying causes tends to come from animal studies, but they are limited in what they can tell us.

“That’s why we turned to stem cells to generate cells of the brain blood vessels and create a disease model ‘in a dish’ that mimics what we see in patients.”

Our blood vessels are built around a type of scaffolding known as an extracellular matrix, a net-like structure that lines and supports the small blood vessels in the brain. The COL4 gene is important for the health of this matrix.

In their disease model, the team found that the extracellular matrix is disrupted, particularly at its so-called ‘tight junctions’, which ‘zip’ cells together. This leads to the small blood vessels becoming leaky – a key characteristic seen in SVD, where blood leaks out of the vessels and into the brain.

The researchers identified a class of molecules called metalloproteinases (MMPs) that play a key role in this damage. Ordinarily, MMPs are important for maintaining the extracellular matrix, but if too many of them are produced, they can damage the structure – similar to how in The Sorcerer’s Apprentice, a single broom can help mop the floor, but too many wreak havoc.

When the team treated the blood vessels with drugs that inhibit MMPs – an antibiotic and anti-cancer drug – they found that these reversed the damage and stopped the leakage.

Dr Granata added: “These particular drugs come with potentially significant side effects so wouldn’t in themselves be viable to treat small vessel disease. But they show that in theory, targeting MMPs could stop the disease. Our model could be scaled up relatively easily to test the viability of future potential drugs.”

The study was funded by the Stroke Association, British Heart Foundation and Alzheimer’s Society, with support from the NIHR Cambridge Biomedical Research Centre and the European Union’s Horizon 2020 Programme.

Reference
Al-Thani, M & Goodwin-Trotman, M et al. A novel human iPSC model of COL4A1/A2 small vessel disease unveils a key pathogenic role of matrix metalloproteinases. Stem Cell Reports; 16 Nov 2023; DOI: 10.1016/j.stemcr.2023.10.014




Solar-powered device produces clean water and clean fuel at the same time

Device for making solar fuels on the River Cam near the Bridge of Sighs

source: www.cam.ac.uk

A floating, solar-powered device that can turn contaminated water or seawater into clean hydrogen fuel and purified water, anywhere in the world, has been developed by researchers.


The device, developed by researchers at the University of Cambridge, could be useful in resource-limited or off-grid environments, since it works with any open water source and does not require any outside power.

It takes its inspiration from photosynthesis, the process by which plants convert sunlight into food. However, unlike earlier versions of the ‘artificial leaf’, which could produce green hydrogen fuel from clean water sources, this new device operates from polluted or seawater sources and can produce clean drinking water at the same time.

Tests of the device showed it was able to produce clean water from highly polluted water, seawater, and even from the River Cam in central Cambridge. The results are reported in the journal Nature Water.

“Bringing together solar fuels production and water purification in a single device is tricky,” said Dr Chanon Pornrungroj from Cambridge’s Yusuf Hamied Department of Chemistry, the paper’s co-lead author. “Solar-driven water splitting, where water molecules are broken down into hydrogen and oxygen, needs to start with totally pure water because any contaminants can poison the catalyst or cause unwanted chemical side-reactions.”

“In remote or developing regions, where clean water is relatively scarce and the infrastructure necessary for water purification is not readily available, water splitting is extremely difficult,” said co-lead author Ariffin Mohamad Annuar. “A device that could work using contaminated water could solve two problems at once: it could split water to make clean fuel, and it could make clean drinking water.”

Pornrungroj and Mohamad Annuar, who are both members of Professor Erwin Reisner’s research group, came up with a design that did just that. They deposited a photocatalyst on a nanostructured carbon mesh that is a good absorber of both light and heat, generating the water vapour used by the photocatalyst to create hydrogen. The porous carbon mesh, treated to repel water, served both to help the photocatalyst float and to keep it away from the water below, so that contaminants do not interfere with its functionality.

In addition, the new device uses more of the Sun’s energy. “The light-driven process for making solar fuels only uses a small portion of the solar spectrum – there’s a whole lot of the spectrum that goes unused,” said Mohamad Annuar.

The team used a white, UV-absorbing layer on top of the floating device for hydrogen production via water splitting. The rest of the light in the solar spectrum is transmitted to the bottom of the device, which vaporises the water.

“This way, we’re making better use of the light – we get the vapour for hydrogen production, and the rest is water vapour,” said Pornrungroj. “This way, we’re truly mimicking a real leaf, since we’ve now been able to incorporate the process of transpiration.”

A device that can make clean fuel and clean water at once using solar power alone could help address the energy and the water crises facing so many parts of the world. For example, the indoor air pollution caused by cooking with ‘dirty’ fuels, such as kerosene, is responsible for more than three million deaths annually, according to the World Health Organization. Cooking with green hydrogen instead could help reduce that number significantly. And 1.8 billion people worldwide still lack safe drinking water at home.

“It’s such a simple design as well: in just a few steps, we can build a device that works well on water from a wide variety of sources,” said Mohamad Annuar.

“It’s so tolerant of pollutants, and the floating design allows the substrate to work in very cloudy or muddy water,” said Pornrungroj. “It’s a highly versatile system.”

“Our device is still a proof of principle, but these are the sorts of solutions we will need if we’re going to develop a truly circular economy and sustainable future,” said Reisner, who led the research. “The climate crisis and issues around pollution and health are closely related, and developing an approach that could help address both would be a game-changer for so many people.”

The research was supported in part by the European Commission’s Horizon 2020 programme, The European Research Council, the Cambridge Trust, the Petronas Education Sponsorship Programme, and the Winton Programme for the Physics of Sustainability. Erwin Reisner is a Fellow of St John’s College. Chanon Pornrungroj is a member of Darwin College, and Ariffin Mohamad Annuar is a member of Clare College.

Reference:
Chanon Pornrungroj, Ariffin Bin Mohamad Annuar et al. ‘Hybrid photothermal-photocatalyst sheets for solar-driven overall water splitting coupled to water purification.’ Nature Water (2023). DOI: 10.1038/s44221-023-00139-9




The Vice-Chancellor’s Dialogues: Is assisted dying compassionate, or dangerous for society?

Vice-Chancellor Professor Deborah Prentice chaired the first Vice-Chancellor’s Dialogues

source: www.cam.ac.uk

On Wednesday 8th November, Vice-Chancellor Professor Deborah Prentice chaired the first Vice-Chancellor’s Dialogues. The event launched a series of dialogues about some of the most difficult issues of our time.

There are two purposes to these events. The first is to establish whether there is any common ground between people who may seem to be far apart. If we are to make progress in legislation or in understanding the world we live in, we need to identify where we agree as well as where we disagree. The second is to ensure discussions involve the widest range of viewpoints – that nothing, within the law, is taboo and that freedom of speech and of thought, and of academic debate, is upheld.

The first event tackled, literally, a matter of life and death: the question of whether assisted dying is compassionate, or dangerous for society.

The speakers were:

  • Dr Jonathan Romain, who was appointed Chair of Dignity in Dying, the UK’s leading campaign for a change in the law on assisted dying, in June 2023
  • Dr Amy Proffitt, who spoke for Dying Well, the group promoting access to excellent care at the end of life and standing against the legalisation of assisted suicide
  • Dr Zoë Fritz, a Wellcome fellow in Society and Ethics at the University of Cambridge, and a Consultant Physician in Acute medicine at Addenbrooke’s Hospital. She works with colleagues in the Faculties of Law and Philosophy to ensure solutions are philosophically grounded and legally robust, as well as clinically practical and acceptable to all stakeholders.

The full recording can be viewed on the University YouTube channel.




Why do climate models underestimate polar warming? ‘Invisible clouds’ could be the answer

Polar Stratospheric Clouds, also called mother of pearl clouds

source: www.cam.ac.uk

Stratospheric clouds over the Arctic may explain the differences seen between the polar warming calculated by climate models and actual recordings, according to researchers from the University of Cambridge and UNSW Sydney.


The Earth’s average surface temperature has increased drastically since the start of the Industrial Revolution, but the warming effect seen at the poles is even more exaggerated. While existing climate models account for the increased heating at the Arctic and Antarctic, they often underestimate the warming in these regions. This is especially true for climates millions of years ago, when greenhouse gas concentrations were very high.

This is a problem because future climate projections are generated with these same models: if they do not produce enough warming for the past, we might underestimate polar warming – and therefore the associated risks, such as ice sheet or permafrost melting – for the future.

“During my PhD, I was drawn to the fact that the climate models we are using do not represent the magnitude of warming that happens in the Arctic,” said lead author Dr Deepashree Dutta from Cambridge’s Department of Geography, who carried out the work during her PhD at UNSW. “At the same time, we knew that the majority of these models do not represent the upper layers of the atmosphere very well. And we thought this might be a missing link.”

The team turned their focus to a key atmospheric element that is missing in most models — polar stratospheric clouds — and found that they can explain a large part of the missing warming in models.

Their results, published in the journal Nature Geoscience, show that there is still much to learn about the climate of the past, present and future.

Climate models are computer simulations of our global climate system that are built using our theoretical understanding of how the climate works. They can be used to recreate past conditions or predict future climate scenarios.

Climate models incorporate many factors that influence the climate, but they cannot include all real-world processes. One consequence of this is that climate models generally simulate polar climate change that is smaller than what has actually been observed.

“The more detail you include in the model, the more resources they require to run,” said co-author Dr Martin Jucker from UNSW. “It’s often a toss-up between increasing the horizontal or vertical resolution of the model. And as we live down here at the surface of the earth, the detail closer to the surface is often prioritised.”

In 1992, American paleoclimatologist Dr Lisa Sloan first suggested that the extreme warming at high latitudes during past warm periods may have been caused by polar stratospheric clouds.

Polar stratospheric clouds form at very high altitudes (15-25 km above the Earth’s surface), and at very low temperatures (over the poles). They are also called nacreous or mother-of-pearl clouds because of their bright and sometimes luminous hues, although they are not normally visible to the naked eye. 

These polar stratospheric clouds have a similar effect on climate as greenhouse gases – they trap heat that would otherwise be lost to space and warm the surface of the Earth. 

“These clouds form under complex conditions, which most climate models cannot reproduce. And we wondered if this inability to simulate these clouds may result in less surface warming at the poles than what we’ve observed in the real world,” said Dutta. 

Thirty years after Sloan’s research, Dutta wanted to test this theory using one of the few atmospheric models that incorporates polar stratospheric clouds, to see if it might explain the disparities in warming between observational data and climate models.

“I wanted to test this theory by running an atmospheric model that includes all necessary processes with conditions that resembled a time period over 50 million years ago, known as the early Eocene. It was a period of Earth’s history when the planet was very hot and the Arctic was ice-free throughout the year,” said Dutta. 

The Eocene was also a period characterised by high atmospheric methane levels, and the position of continents and mountains was different to today.

“Climate models are far too cold in the polar regions, when simulating these past hot climates, and this has been an enigma for the past thirty years,” said Jucker. “The early Eocene was a period in the Earth’s climate with extreme polar warming, so presented the perfect test for our climate models.”

The team found that the elevated methane levels during the Eocene resulted in an increase in polar stratospheric cloud formation. They found that under certain conditions, the local surface warming due to stratospheric clouds was up to 7 degrees Celsius during the coldest winter months. This temperature difference significantly reduces the gap between climate models and temperature evidence from climate archives.

By comparing future simulations to simulations of the Eocene, the researchers also discovered that it isn’t just methane that was needed to produce polar stratospheric clouds. “This is another key finding of this work,” said Dutta. “It’s not just methane, but it’s also the Earth’s continental arrangement, which plays an important role in forming these stratospheric clouds. Because if we input the same amount of methane for our future climate, we do not see the same increase in stratospheric clouds.”

The research has provided some of the answers to questions about the climate of the deep past, but what does that mean for future projections?

“We found that stratospheric clouds account for the accelerated warming at the poles that is often left out of our climate models, and of course this could potentially mean that our future projections are also not warm enough,” said Jucker. “But the good news is that these clouds are more likely to form under the continental arrangement that was present tens of millions of years ago, and is not found on Earth now. Therefore, we don’t expect such large increases in stratospheric clouds in the future.”

This new research has not only helped to provide a piece of the puzzle as to why temperature recordings in the Arctic are consistently warmer than climate models suggest – it has also provided new insights into the Earth’s past climate.

“Our study shows the value of increasing the detail of climate models, where we can,” said Dutta. “Although we already know a lot about these clouds theoretically, until we include them in our climate models, we won’t know the full scale of their impact.”

Reference:
Deepashree Dutta et al. ‘Early Eocene low orography and high methane enhance Arctic warming via polar stratospheric clouds.’ Nature Geoscience (2023). DOI: 10.1038/s41561-023-01298-w

Adapted from a UNSW press release.




Machine learning gives users ‘superhuman’ ability to open and control tools in virtual reality

HotGestures give users ‘superhuman’ ability to open and control tools in virtual reality


source: www.cam.ac.uk

Researchers have developed a virtual reality application where a range of 3D modelling tools can be opened and controlled using just the movement of a user’s hand. 


The researchers, from the University of Cambridge, used machine learning to develop ‘HotGestures’ – analogous to the hot keys used in many desktop applications.

HotGestures give users the ability to build figures and shapes in virtual reality without ever having to interact with a menu, helping them stay focused on a task without breaking their train of thought.

The idea of being able to open and control tools in virtual reality has been a movie trope for decades, but the researchers say that this is the first time such a ‘superhuman’ ability has been made possible. The results are reported in the journal IEEE Transactions on Visualization and Computer Graphics.

Virtual reality (VR) and related applications have been touted as game-changers for years, but outside of gaming, their promise has not fully materialised. “Users gain some qualities when using VR, but very few people want to use it for an extended period of time,” said Professor Per Ola Kristensson from Cambridge’s Department of Engineering, who led the research. “Beyond the visual fatigue and ergonomic issues, VR isn’t really offering anything you can’t get in the real world.”

Most users of desktop software will be familiar with the concept of hot keys – command shortcuts such as ctrl-c to copy and ctrl-v to paste. While these shortcuts remove the need to open a menu to find the right tool or command, they rely on the user having the correct command memorised.

“We wanted to take the concept of hot keys and turn it into something more meaningful for virtual reality – something that wouldn’t rely on the user having a shortcut in their head already,” said Kristensson, who is also co-Director of the Centre for Human-Inspired Artificial Intelligence.

Instead of hot keys, Kristensson and his colleagues developed ‘HotGestures’, where users perform a gesture with their hand to open and control the tool they need in 3D virtual reality environments.

For example, performing a cutting motion opens the scissor tool, and the spray motion opens the spray can tool. There is no need for the user to open a menu to find the tool they need, or to remember a specific shortcut. Users can seamlessly switch between different tools by performing different gestures during a task, without having to pause their work to browse a menu or to press a button on a controller or keyboard.

“We all communicate using our hands in the real world, so it made sense to extend this form of communication to the virtual world,” said Kristensson.

For the study, the researchers built a neural network gesture recognition system that can recognise gestures by performing predictions on an incoming hand joint data stream. The system was built to recognise ten different gestures associated with building 3D models: pen, cube, cylinder, sphere, palette, spray, cut, scale, duplicate and delete.
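
As a rough sketch of how such a recogniser might be structured – an illustration only, since the joint count, window length and architecture here are assumptions rather than details from the paper – a small recurrent network can map a window of hand-joint coordinates to one of the ten gesture classes:

import torch
import torch.nn as nn

N_JOINTS = 21      # assumed number of tracked joints per hand frame
N_GESTURES = 10    # pen, cube, cylinder, sphere, palette, spray, cut, scale, duplicate, delete

class GestureNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # Each frame is flattened to per-joint (x, y, z) coordinates
        self.rnn = nn.GRU(input_size=N_JOINTS * 3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_GESTURES)

    def forward(self, frames):             # frames: (batch, time, N_JOINTS * 3)
        _, last_hidden = self.rnn(frames)
        return self.head(last_hidden[-1])  # one logit per gesture

In use, a sliding window over the live joint stream would be scored repeatedly, with a confidence threshold so that ordinary hand movement is not mistaken for a command – the false-activation problem described below.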

The team carried out two small studies where participants used HotGestures, menu commands or a combination. The gesture-based technique provided fast and effective shortcuts for tool selection and usage. Participants found HotGestures to be distinctive, fast, and easy to use while also complementing conventional menu-based interaction. The researchers designed the system so that there were no false activations – the gesture-based system was able to correctly recognise what was a command and what was normal hand movement. Overall, the gesture-based system was faster than a menu-based system.

“There is no VR system currently available that can do this,” said Kristensson. “If using VR is just like using a keyboard and a mouse, then what’s the point of using it? It needs to give you almost superhuman powers that you can’t get elsewhere.”

The researchers have made the source code and dataset publicly available so that designers of VR applications can incorporate it into their products.

“We want this to be a standard way of interacting with VR,” said Kristensson. “We’ve had the tired old metaphor of the filing cabinet for decades. We need new ways of interacting with technology, and we think this is a step in that direction. When done right, VR can be like magic.”

The research was supported in part by the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).

Reference:
Zhaomou Song; John J Dudley; Per Ola Kristensson. ‘HotGestures: Complementing Command Selection and Use with Delimiter-Free Gesture-Based Shortcuts in Virtual Reality.’ IEEE Transactions on Visualization and Computer Graphics (2023). DOI: 10.1109/TVCG.2023.3320257




Experts predict ‘catastrophic ecosystem collapse’ of UK forests within the next 50 years if action not taken

Woodland

source: www.cam.ac.uk

Other threats to UK forests include competition with society for water, viral diseases, and extreme weather affecting forest management.


A team of experts from across Europe has produced a list of 15 overlooked and emerging issues that are likely to have a significant impact on UK forests over the next 50 years.

This is the first ‘horizon scanning’ exercise – a technique to identify relatively unknown threats, opportunities, and new trends – for UK forests. The aim is to help researchers, practitioners, policymakers, and society in general better prepare for the future and address threats before they become critical.

Dr Eleanor Tew, first author, visiting researcher at Cambridge’s Department of Zoology and Head of Forest Planning at Forestry England, said: “The next 50 years will bring huge changes to UK forests: the threats they face, the way that we manage them, and the benefits they deliver to society.”

Forestry England, a part of the Forestry Commission, collaborated with the University of Cambridge on the study, which was published today in the journal Forestry.

A panel of 42 experts, representing a range of professions, organisations, and geographies, reached out to their networks to seek overlooked and emerging issues that were likely to affect UK forests over the next half-century. The resulting 180-item longlist was then whittled down through a series of review exercises to a shortlist of 30 issues. In a final workshop, panellists identified the top 15 issues they believed were likely to have the greatest impact on UK forests in the next 50 years.

The research method did not support the overall ranking of the 15 issues in order of importance or likelihood of occurrence. However, when the issues were scored individually by the panel of experts, it was notable that ‘catastrophic forest ecosystem collapse’ was the most highly ranked issue, with 64% of experts ranking it as their top issue and 88% ranking it within their top three.

‘Catastrophic forest ecosystem collapse’ refers to multiple interrelated hazards that have a cascading effect on forests, leading to their total or partial collapse. This has already been witnessed in continental Europe and North America.

Tew said: “We hope the results from this horizon scanning exercise serve as an urgent call to action to build on, and dramatically upscale, action to increase forest resilience.”

Another issue identified was that droughts caused by climate change may lead to competition for water resources between forests and society. On the other hand, forests may help to mitigate the impact of floods caused by climate change.

Tree viral diseases were also identified as an issue. In the UK, pests and pathogens are increasing due to globalisation and climate change, with viruses and viroids (RNA molecules) being the largest group on the UK Plant Health Risk Register. However, little is known about how viral diseases affect forest tree species and indeed the wider ecosystem.

A further issue was the effect of climate change on forest management, with extreme weather leading to smaller windows of time when forestry can be carried out. Experts warn that the seasons for carrying out work such as harvesting and thinning are getting narrower as we see wetter winters and scorching summers.

However not all emerging issues are threats – some are new opportunities. For example, trees will be at the heart of future urban planning. Experts predict that ‘forest lungs’ will be created thanks to an increased understanding of the benefits of trees for society. They say there will likely be a greater blurring of boundaries between urban and rural areas, with an increase in green infrastructure and connectivity.

International commitments around nature are also likely to have repercussions at the local level. For example, the mandatory reporting of companies’ supply chain impacts on nature, such as through the new framework being developed by the Taskforce on Nature-related Financial Disclosures (TNFD), could create additional incentives for nature-friendly forest management.

Tew concluded: “These results are both concerning and exciting. However, we should be optimistic, remembering that these are possibilities and not certainties. Crucially, we have time to act ‒ by responding to the threats and embracing the opportunities, future generations can have resilient forests with all the benefits they offer.”

Senior author and pioneer of horizon scanning, Professor Bill Sutherland, from the Department of Zoology at the University of Cambridge said: “We are already seeing dramatic events in Europe’s forests whether fires, disease or bark beetles, whilst the importance of trees is increasingly recognised. Horizon scanning to identify future issues is key, especially as trees planted now will face very different circumstances as they mature in scores of years.”

This research was funded by Forestry England. The Forestry Commission is bringing the sector together in 2024 to look at next steps.

Reference:
Tew, E. et al. ‘A horizon scan of issues affecting UK forest management within 50 years.’ Forestry (2023). DOI: 10.1093/forestry/cpad047




Cambridge provides English learning platform for Ukraine

Students studying online.

source: www.cam.ac.uk

People in Ukraine will be able to improve their English using an online learning platform specially developed by the University of Cambridge, in collaboration with Cambridge University Press & Assessment, and technology companies Amazon Web Services and Catalyst IT.

It is part of the new Future Perfect programme – initiated by the President of Ukraine and being launched by the Ukrainian government – which aims to make English the official language of international communication in Ukraine and open up new professional and personal opportunities for Ukrainians.

The organisations combined their expertise following a request for help from the Ukrainian government to support Ukraine’s education sector and enhance foreign language learning for both teachers and students, many of whom have been displaced by the war.

As well as helping Ukraine to grow international relationships and enabling Ukrainians to make better use of other support they have received from the international community since Russia invaded in February 2022, the Future Perfect programme aims to contribute to the rebuilding of the economy after the war.

Thousands of college and university students are among the adult and young adult learners who Ukraine hopes will benefit from the English language learning platform. The platform, the first project under the umbrella of Future Perfect, is based on Cambridge University Press & Assessment’s Empower course, which provides a mix of engaging classroom materials and online learning.

Cambridge is committed to doing all it can to assist teachers and learners in Ukraine, making its educational excellence available to colleges and universities, and enabling students to continue their studies despite the unprecedented challenges Russia’s illegal invasion has created.

Professor Kamal Munir, Pro-Vice-Chancellor for University Community and Engagement at Cambridge University, said: “The University – as part of its Help for Ukraine package of educational support – acted as soon as the Ukrainian government asked for support with English language learning, drawing on expertise from across our departments.

“Key to creating a platform for such a specific audience, learning in such challenging circumstances, was the technical skills of teams within the academic University, and the knowledge and experience of colleagues at Cambridge University Press & Assessment. They have produced in months a resource that would normally take years to deliver.”

The first-of-its-kind project saw Cambridge teams – in partnership with e-learning experts Catalyst IT – combine academic expertise from the University and curriculum expertise from Cambridge University Press & Assessment to create the online platform and provide learning course materials.

Amazon Web Services (AWS) will host the platform on its cloud infrastructure. By using AWS Cloud, the platform will have the ability to dynamically scale to meet future demand for the course, enabling the course content to be available to users anytime from anywhere, all delivered from a highly secure environment. Leveraging the cloud means innovation can be a continuous cycle, ensuring the platform can accommodate future technology developments.

Cambridge will support the launch of the platform – which is being supplied free of charge – before it is handed over to the Digital Ministry of Ukraine.

Fran Woodward, Global Managing Director for English at Cambridge University Press & Assessment, said: “Future Perfect reflects a great ambition for Ukrainian education. This will open doors for Ukrainians who want to improve their English language skills, and will support new global economic opportunities. We are delighted to support English language education in Ukraine and we wish Ukrainian teachers and students every success.”

Valeriya Ionan, Ukraine’s Deputy Minister of Digital Transformation for European Integration, said: “The full-scale invasion re-emphasizes the importance of developing the skills of our people, and the value of inclusive education. We believe in the transformative power of education to facilitate the skills that can reduce the unemployment rate as English language proficiency is directly correlated to GDP. The Ministry of Digital Transformation of Ukraine expresses gratitude to Cambridge University and Amazon Web Services along with Catalyst IT as the technology leaders for the strategic support”.

Dmytro Zavgorodnii, Deputy Minister of Education and Science of Ukraine, for digital development, digital transformations and digitalization, said: “Speaking English multiplies opportunities for literally every citizen in Ukraine. For some, it is a chance to find a dream job and for others it is a tool to connect with people or events around the world. Regardless of your future or current occupation, English is essential. Thanks to Cambridge University, Amazon Web Service and Catalyst IT, we now have a well-timed approach to develop our population’s skills”.

Joey Murison, Managing Director of Catalyst IT, said: “It has been our pleasure to support this great initiative for the students of Ukraine. Our expertise as world leaders in the maintenance and management of online learning platforms for the higher education sector has enabled us to deliver the platform in record time. We look forward to providing our ongoing support now and into the future for the benefit of the students of Ukraine.”

Liam Maxwell, Director, Government Transformation, at Amazon Web Services said: “We’re pleased to collaborate on this initiative that will give Ukrainians the opportunity to enhance their English language skills. Building the Empower platform on the cloud will give the Ukrainian Government the flexibility to dynamically scale the environment to meet the demand for the course, and enables the content to be made available remotely and securely to students. We look forward to seeing the course launch, and hope it has a positive impact on the professional growth of the Ukrainian students who take part.”

Cambridge University Help for Ukraine

Cambridge University Help for Ukraine is a developing package of support announced by the University last year. It has also created fully funded residential placements in a wide range of subjects for students and academics, clinical placements for medical students, and help for academics still working in Ukraine.

About Cambridge University Press & Assessment

Cambridge University Press & Assessment supports millions of English language learners worldwide, working with tens of thousands of organisations in more than 130 countries and territories. Last year it was announced that other English language teaching and learning resources were being made available at no cost as part of Cambridge’s support for the Ukrainian education sector.




Chimpanzees use hilltops to conduct reconnaissance on rival groups

Chimpanzees are seen attentively listening to other chimpanzees heard at some distance in the West African forests of Côte d’Ivoire, studied as part of research by the Taï Chimpanzee Project

source: www.cam.ac.uk

Research on neighbouring chimpanzee communities in the forests of West Africa suggests a warfare tactic not previously seen beyond humans is regularly used by our closest evolutionary relatives.


Chimpanzees use high ground to conduct reconnaissance on rival groups, often before making forays into enemy territory at times when there is reduced risk of confrontation, a new study suggests.

Tactical use of elevated terrain in warfare situations is considered unique to humans – until now. For the first time, one of the oldest military strategies has been observed in our closest evolutionary relatives.

Researchers conducted a three-year study of two neighbouring chimpanzee groups in the West African forests of Côte d’Ivoire, tracking the primates as they traversed their respective territories, including an overlapping border area where skirmishes occasionally took place.

The team found that chimpanzees were more than twice as likely to climb hills when heading towards this contested frontier as when they were travelling into the heart of their own territory.*

While atop border hills, chimpanzees were more likely to refrain from noisily eating or foraging and spend time quietly resting – enabling them to hear distant sounds of rival groups, say researchers.

The further away the location of hostile chimpanzees, the greater the likelihood of an advance into dangerous territory upon descending the hill. This suggests that chimpanzees on high ground gauge the distance of rivals, and act accordingly to make incursions while avoiding costly fights.  

Other mammal species such as meerkats use high ground to keep watch for predators or call to mates. However, researchers say this is the first evidence for an animal other than humans making strategic use of elevation to assess the risks of “intergroup conflict”.

“Tactical warfare is considered a driver of human evolution,” said Dr Sylvain Lemoine, a biological anthropologist from the University of Cambridge’s Department of Archaeology, and lead author of the study published in the journal PLOS Biology.

“This chimpanzee behaviour requires complex cognitive abilities that help to defend or expand their territories, and would be favoured by natural selection.”

“Exploiting the landscape for territorial control is deeply rooted in our evolutionary history. In this use of war-like strategy by chimpanzees we are perhaps seeing traces of the small scale proto-warfare that probably existed in prehistoric hunter-gatherer populations.”

The study was conducted at the Taï Chimpanzee Project, where Lemoine worked during his PhD. The project is currently led by study senior author Dr Roman Wittig from CNRS in France.

Teams of researchers spend 8-12 hours a day following four groups that are habituated to the presence of humans. It is one of the few sites where data is collected simultaneously on multiple communities of wild chimpanzees. 

The project’s researchers carry GPS trackers, from which the study authors were able to reconstruct maps of the two neighbouring chimpanzee territories, including elevation data. These were matched to old French colonial maps to confirm topography.

Each group consisted of 30-40 adult chimpanzees at any one time. The study used over 21,000 hours of track logs from a total of 58 animals recorded between 2013 and 2016. 

To establish and protect their territory, chimpanzees perform regular tours of the periphery that form a sort of “border patrol”, says Lemoine. “Patrols are often conducted in subgroups that stay close and limit noise. As an observer, you get a sense that patrolling has begun. They move and stop at the same time, a bit like a hunt,” he said.

The type of hills near the border used for reconnaissance are known as “inselbergs”: isolated rocky outcrops that break up the forest canopy. Chimpanzees repeatedly returned to some of these inselbergs, where time on the summit was passed in a more muted state.

“These aren’t so much lookout points as listen-out points,” said Lemoine. “Chimpanzees drum on tree trunks and make excitable vocalisations called pant-hoots to communicate with group members or assert their territory. These sounds can be heard over a kilometre away, even in dense forest.”

“It may be that chimpanzees climb hilltops near the edge of their territory when they have yet to hear signs of rival groups. Resting quietly on an elevated rock formation is an ideal condition for the auditory detection of distant adversaries.”

Researchers analysed tactical movements in the half-hour after a stop longer than five minutes on a hill near the border, and compared them to movements after stops in low-lying border areas.

Following a hilltop recce, the likelihood of advancing into enemy territory increased from 40% when rivals were 500 metres away, to 50% when rivals were at 1000m, to 60% when rivals were at 3000m.    
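
As an aside for quantitatively minded readers, a trend like this can be summarised with a simple statistical model. The sketch below is purely illustrative: the observations are invented to mirror the reported percentages, and logistic regression is an assumed choice, not necessarily the method used in the study.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trials: distance to rivals (metres) and whether the
# group advanced after descending the hill (1 = advanced)
distance = np.array([[500]] * 10 + [[1000]] * 10 + [[3000]] * 10)
advanced = np.array([1] * 4 + [0] * 6     # ~40% advanced at 500 m
                    + [1] * 5 + [0] * 5   # ~50% at 1000 m
                    + [1] * 6 + [0] * 4)  # ~60% at 3000 m

model = LogisticRegression().fit(distance, advanced)
print(model.predict_proba([[2000]])[0, 1])  # estimated chance of advancing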

“Chimpanzees often expand their territory by encroaching and patrolling in that of their neighbours. Hilltop information-gathering will help them to do this while reducing risks of encountering any enemies,” said Lemoine. “The border zone between the two groups was in a state of flux.” 

More territory can boost food provision and mating chances, says Lemoine. His previous work suggests that larger chimpanzee groups live in bigger territories with reduced pressure from rivals, which in turn increases birth rates within communities.

The latest research suggests that chimpanzees use hilltop reconnaissance to avoid confrontation, and violence is relatively rare, says Lemoine. But fights, and even kidnappings and killings, did occur between rival group members.

“Occasionally, raiding parties of two or three males venture deep into enemy territory, which can lead to fighting. Confrontations between rival chimpanzees are extremely noisy. The animals go into an intimidating frenzy, screaming and defecating and gripping each other’s genitals.”

*Chimpanzees stopped on peripheral hills in 58% of movements heading towards the border, but in only 25% of the movements heading towards the centre of their territory.




AI trained to identify least green homes by Cambridge researchers

Street view images of Cambridge houses showing building features contributing to HtD identification

source: www.cam.ac.uk

First-of-its-kind AI model can help policymakers efficiently identify and prioritize houses for retrofitting and other decarbonizing measures.


‘Hard-to-decarbonize’ (HtD) houses are responsible for over a quarter of all direct housing emissions – a major obstacle to achieving net zero – but are rarely identified or targeted for improvement.

Now a new ‘deep learning’ model trained by researchers from Cambridge University’s Department of Architecture promises to make it far easier, faster and cheaper to identify these high priority problem properties and develop strategies to improve their green credentials.

Houses can be ‘hard to decarbonize’ for various reasons, including their age, structure, location, socio-economic barriers and availability of data. Policymakers have tended to focus mostly on generic buildings or specific hard-to-decarbonize technologies, but the study, published in the journal Sustainable Cities and Society, could help to change this.

Maoran Sun, an urban researcher and data scientist, and his PhD supervisor Dr Ronita Bardhan (Selwyn College), who leads Cambridge’s Sustainable Design Group, show that their AI model can classify HtD houses with 90% precision, and they expect this to rise as they add more data – work that is already underway.

Dr Bardhan said: “This is the first time that AI has been trained to identify hard-to-decarbonize buildings using open source data to achieve this.

“Policymakers need to know how many houses they have to decarbonize, but they often lack the resources to perform detailed audits on every house. Our model can direct them to high-priority houses, saving them precious time and resources.”

The model also helps authorities to understand the geographical distribution of HtD houses, enabling them to target and deploy interventions efficiently.

The researchers trained their AI model using data for their home city of Cambridge, in the United Kingdom. They fed in data from Energy Performance Certificates (EPCs) as well as data from street view images, aerial view images, land surface temperature and building stock. In total, their model identified 700 HtD houses and 635 non-HtD houses. All of the data used was open source.

Maoran Sun said: “We trained our model using the limited EPC data which was available. Now the model can predict for the city’s other houses without the need for any EPC data.”

Bardhan added: “This data is available freely and our model can even be used in countries where datasets are very patchy. The framework enables users to feed in multi-source datasets for identification of HtD houses.”
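To make the multi-source setup concrete, here is a minimal sketch of a classifier that fuses street-view imagery with tabular building attributes to predict HtD status. The architecture, feature names and tensor sizes are illustrative assumptions, not the study’s published model:

```python
# Minimal sketch of a multi-source "hard-to-decarbonize" (HtD) classifier:
# an image branch for street-view photos plus a tabular branch for
# open-source building attributes, trained on EPC-derived labels.
import torch
import torch.nn as nn

class HtDClassifier(nn.Module):
    def __init__(self, n_tabular: int = 8):
        super().__init__()
        # Image branch: a small CNN over street-view crops.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Tabular branch: e.g. building age, floor area, land surface temperature.
        self.mlp = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, 1)  # single logit: HtD vs non-HtD

    def forward(self, image, tabular):
        features = torch.cat([self.cnn(image), self.mlp(tabular)], dim=1)
        return self.head(features).squeeze(1)

# One illustrative training step on random stand-in data.
model = HtDClassifier()
images = torch.randn(4, 3, 128, 128)      # street-view patches
tabular = torch.randn(4, 8)               # building attributes
labels = torch.tensor([1., 0., 1., 0.])   # 1 = HtD (from EPC data)
loss = nn.BCEWithLogitsLoss()(model(images, tabular), labels)
loss.backward()
```

Once trained on the houses that do have EPC records, a model of this shape can score every other house in the city from imagery and open data alone, which is the property Sun describes above.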

Sun and Bardhan are now working on an even more advanced framework which will bring additional data layers relating to factors including energy use, poverty levels and thermal images of building facades. They expect this to increase the model’s accuracy but also to provide even more detailed information.

The model is already capable of identifying specific parts of buildings, such as roofs and windows, which are losing most heat, and whether a building is old or modern. But the researchers are confident they can significantly increase detail and accuracy.

They are already training AI models for other UK cities using thermal images of buildings, and are collaborating with a space-products organisation to benefit from higher-resolution thermal images from new satellites. Bardhan has been part of the UK Space Agency’s NSIP programme, where she collaborated with the Department of Astronomy and Cambridge Zero on using high-resolution thermal infrared space telescopes to monitor the energy efficiency of buildings globally.

Sun said: “Our models will increasingly help residents and authorities to target retrofitting interventions to particular building features like walls, windows and other elements.”

Bardhan explains that, until now, decarbonization policy decisions have been based on evidence derived from limited datasets, but she is optimistic about AI’s power to change this.

“We can now deal with far larger datasets. Moving forward with climate change, we need adaptation strategies based on evidence of the kind provided by our model. Even very simple street view photographs can offer a wealth of information without putting anyone at risk.”

The researchers argue that by making data more visible and accessible to the public, it will become much easier to build consensus around efforts to achieve net zero.

“Empowering people with their own data makes it much easier for them to negotiate for support,” Bardhan said.

She added: “There is a lot of talk about the need for specialised skills to achieve decarbonisation but these are simple data sets and we can make this model very user friendly and accessible for the authorities and individual residents.”

Cambridge as a study site

Cambridge is an atypical city but an informative site on which to base the initial model. Bardhan notes that Cambridge is relatively affluent, meaning that there is a greater willingness and financial ability to decarbonise houses.

“Cambridge isn’t ‘hard to reach’ for decarbonisation in that sense,” Bardhan said. “But the city’s housing stock is quite old and building bylaws prevent retrofitting and the use of modern materials in some of the more historically important properties. So it faces interesting challenges.”

The researchers will discuss their findings with Cambridge City Council. Bardhan previously worked with the Council to assess council houses for heat loss. They will also continue to work with colleagues at Cambridge Zero and the University’s Decarbonisation Network.

Reference

M Sun & R Bardhan, ‘Identifying Hard-to-Decarbonize houses from multi-source data in Cambridge, UK’, Sustainable Cities and Society (2023). DOI: 10.1016/j.scs.2023.105015




Cambridge, Intel and Dell join forces on UK’s fastest AI supercomputer

Dr Paul Calleja, Director of Dawn AI Service (left) and Professor Richard McMahon, Chair of Cambridge Research Computing Advisory Group and UKRI Dawn Principal Investigator (right) in front of Dawn.

source: www.cam.ac.uk

The Cambridge Open Zettascale Lab is hosting Dawn, the UK’s fastest artificial intelligence (AI) supercomputer, which has been built by the University of Cambridge Research Computing Services, Intel and Dell Technologies.

Dawn Phase 1 represents a huge step forward in AI and simulation capability for the UK, deployed and ready to use now – Paul Calleja

Dawn has been created via a highly innovative long-term co-design partnership between the University of Cambridge, UK Research & Innovation, the UK Atomic Energy Authority and global tech leaders Intel and Dell Technologies. This partnership brings highly valuable technology first-mover status and inward investment into the UK technology sector.

Dawn, supported by UK Research and Innovation (UKRI), will vastly increase the country’s AI and simulation compute capacity for both fundamental research and industrial use, accelerating research discovery and driving growth within the UK knowledge economy. It is expected to drive significant advancements in healthcare, green fusion energy development and climate modelling.

Dawn Phase 1 and the already announced Isambard AI supercomputer at the University of Bristol will join to form the AI Research Resource (AIRR), a UK national facility to help researchers maximise the potential of AI and support critical work into the potential and safe use of the technology.

Dr Paul Calleja, Director of Research Computing Services at the University of Cambridge, said: “Dawn Phase 1 represents a huge step forward in AI and simulation capability for the UK, deployed and ready to use now. Dawn was born from an innovative co-design partnership between University of Cambridge, UKAEA, Dell Technologies and Intel.

“The Phase 1 system plays an important role within a larger context, where this co-design activity is hoped to continue, aiming to deliver a Phase 2 supercomputer in 2024 which will boast 10 times the level of performance. If taken forward, Dawn Phase 2 would significantly boost the UK AI capability and continue this successful industry partnership.”

World-leading technical teams from the University, Intel and Dell Technologies built Dawn, which harnesses the power of both AI and high-performance computing (HPC) to solve some of the world’s most challenging and pressing problems.

Announcing this investment at the AI Safety Summit at Bletchley Park, Science, Innovation and Technology Secretary Michelle Donelan said: “Frontier AI models are becoming exponentially more powerful. At our AI Safety Summit in Bletchley Park, we have made it clear that Britain is grasping the opportunity to lead the world in adopting this technology safely so we can put it to work and lead healthier, easier and longer lives.

“This means giving Britain’s leading researchers and scientific talent access to the tools they need to delve into how this complicated technology works. That is why we are investing in building UK’s supercomputers, making sure we cement our place as a world-leader in AI safety.”

Professor Emily Shuckburgh, Director of Cambridge Zero and the Institute of Computing for Climate Science said: “The coupling of AI and simulation methods is a growing and increasingly essential part of climate research. It is central to data-driven predictions and equation discovery, both of which are at the fore in climate science.

“This incredible new resource – Dawn – at Cambridge will enable software engineers and researchers at the Institute of Computing for Climate Science to accelerate their work helping to address the global challenges associated with climate change.”

Dawn brings the UK closer to reaching the compute threshold of a quintillion floating point operations per second – one exaflop, better known as exascale. For perspective: every person on earth would have to make calculations 24 hours a day for more than four years to equal a second’s worth of processing power in an exascale system.
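That comparison is straightforward to sanity-check. A quick back-of-envelope calculation, assuming a world population of about 7.8 billion (an assumption; at 8 billion the figure comes out just under four years):

```python
# One exaflop = 1e18 floating-point operations per second, so one second
# of exascale computing is 1e18 operations in total.
EXAFLOP_SECOND = 1e18
population = 7.8e9                         # assumed world population
ops_per_person = EXAFLOP_SECOND / population
seconds_per_year = 365.25 * 24 * 3600
print(ops_per_person / seconds_per_year)   # ~4.1 years at one calculation per second
```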

Hosted at the Cambridge Open Zettascale Lab, Dawn is the fastest AI supercomputer deployed in the UK today and will support some of the UK’s largest-ever workloads across both academic research and industrial domains. Importantly, it is the UK’s first step on the road to developing a future exascale system.

Adam Roe, EMEA HPC technical director at Intel, said: “Dawn considerably strengthens the scientific and AI compute capability available in the UK, and it’s on the ground, operational today at the Cambridge Open Zettascale Lab.

“I’m very excited to see the sorts of early science this machine can deliver and continue to strengthen the Open Zettascale Lab partnership between Dell Technologies, Intel and the University of Cambridge, and further broaden that to the UK scientific and AI community.”

Tariq Hussain, Head of UK Public Sales, Dell Technologies, said: “Collaborations like [this one], alongside strong inward investment, are vital if we want compute to unlock the high-growth AI potential of the UK. It is paramount that the government invests in the right technologies and infrastructure to ensure the UK leads in AI and exascale-class simulation capability.

“It’s also important to embrace the full spectrum of the technology ecosystem, including GPU diversity, to ensure customers can tackle the growing demands of generative AI, industrial simulation modelling and ground-breaking scientific research.”

Dr Rob Akers, Director of Computing Programmes & Senior Fellow at UKAEA, added: “Dawn will form an essential part of a diverse UKRI supercomputing ecosystem, helping to promote high-fidelity simulation and AI capability and ensuring that UK science and engineering is first in the queue to exploit the latest innovation in disruptive HPC hardware. In the short term, Dawn will allow UKAEA’s fusion energy programme to form a powerful and exciting cross-Atlantic partnership with US labs exploiting the new 2-exaflop Aurora supercomputer at Argonne, Dawn’s ‘big sister’.

“Fusion has long been referred to as an ‘exascale grand challenge’. The exascale is finally upon us and I firmly believe that the many collaborations coalescing around Dawn will be a powerful ingredient for extracting the value promised by the exascale – for the UK to deliver fusion power to the grid in the 2040s, to realise Net Zero more generally, to seed high-value UK jobs in AI and ‘digital’, and to drive economic growth across the entire United Kingdom.”




Cancer drug could hold hope for treating inflammatory diseases including gout and heart diseases

The feet of a man suffering from gout.

source: www.cam.ac.uk

A cancer drug currently in the final stages of clinical trials could offer hope for the treatment of a wide range of inflammatory diseases, including gout, heart failure, cardiomyopathy, and atrial fibrillation, say scientists at the University of Cambridge.

We believe [our findings] could be important in preventing a number of common diseases that can cause pain and disability and in some cases can lead to life-threatening complications – Xuan Li

In a study published on 1 November in the Journal of Clinical Investigation, the researchers have identified a molecule that plays a key role in triggering inflammation in response to materials in the body seen as potentially harmful.

We are born with a defence system known as innate immunity, which acts as the first line of defence against harmful materials in the body. Some of these materials will come from outside, such as bacterial or viral infections, while others can be produced within the body.

Innate immunity triggers an inflammatory response, which aims to attack and destroy the perceived threat. But sometimes, this response can become overactive and can itself cause harm to the body.

One such example is gout, which occurs when urate crystals build up in joints, causing excessive inflammation that leads to intense pain. Another example is heart attack, where dead cells build up in the damaged heart – the body sees itself as being under attack, and an overly aggressive immune system fights back, causing collateral damage to the heart.

Several of these conditions are characterised by overactivation of a component of the innate immune response known as an inflammasome – specifically, the inflammasome NLRP3. Scientists at the Victor Phillip Dahdaleh Heart and Lung Research Institute at Cambridge have found a molecule that helps NLRP3 respond.

This molecule is known as PLK1. It is involved in a number of processes within the body, including helping to organise the tiny components of our cells known as the microtubule cytoskeleton. Microtubules behave like train tracks inside the cell, allowing important materials to be transported from one part of the cell to another.

Dr Xuan Li from the Department of Medicine at the University of Cambridge, the study’s senior author, said: “If we can get in the way of the microtubules as they try to organise themselves, then we can in effect slow down the inflammatory response, preventing it from causing collateral damage to the body. We believe this could be important in preventing a number of common diseases that can cause pain and disability and in some cases can lead to life-threatening complications.”

But PLK1 also plays another important role in the body – and this may hold the key to developing new treatments for inflammatory diseases.

For some time now, scientists have known that PLK1 is involved in cell division, or mitosis, a process which, when it goes awry, can lead to runaway cell division and the development of tumours. This has led pharmaceutical companies to test drugs that inhibit its activity as potential treatments for cancer. At least one of these drugs is in phase three clinical trials – the final stages of testing how effective a drug is before it can be granted approval.

When the Cambridge scientists treated mice that had developed inflammatory diseases with a PLK1 inhibitor, they showed that it prevented the runaway inflammatory response – and at a much lower dose than would be required for cancer treatment. In other words, inhibiting the molecule ‘calmed down’ NLRP3 in non-dividing cells, preventing the overly aggressive inflammatory response seen in these conditions.

The researchers are currently planning to test its use against inflammatory diseases in clinical trials.

“These drugs have already been through safety trials for cancer – and at higher doses than we think we would need – so we’re optimistic that we can minimise delays in meeting clinical and regulatory milestones,” added Dr Li.

“If we find that the drug is effective for these conditions, we could potentially see new treatments for gout and inflammatory heart diseases – as well as a number of other inflammatory conditions – in the not-too-distant future.”

The research was funded by the British Heart Foundation. Professor James Leiper, Associate Medical Director at the British Heart Foundation said: “This innovative research has uncovered a potential new treatment approach for inflammatory heart diseases such as heart failure and cardiomyopathy. It’s promising that drugs targeting PLK1 – that work by dampening down the inflammatory response – have already been proven safe and effective in cancer trials, potentially helping accelerate the drug discovery process.

“We hope that this research will open the door for new ways to treat people with heart diseases caused by overactive and aggressive immune responses, and look forward to more research to uncover how this drug could be repurposed.”

Reference
Baldrighi, M. et al. ‘PLK1 inhibition dampens NLRP3 inflammasome-elicited response in inflammatory disease models.’ Journal of Clinical Investigation (2023). DOI: 10.1172/JCI162129




Offset markets: new approach could help save tropical forests by restoring faith in carbon credits

Tropical forest in Tanzania

source: www.cam.ac.uk

A new way to price carbon credits could encourage desperately needed investment in forest preservation and boost vital progress towards net-zero.

Our new approach has the potential to address market concerns around nature-based solutions to carbon offsetting – Srinivasan Keshav

A new approach to valuing the carbon storage potential of natural habitats aims to help restore faith in offset schemes, by enabling investors to directly compare carbon credit pricing across a wide range of projects.

Current valuation methods for forest conservation projects have come under heavy scrutiny, leading to a crisis of confidence in carbon markets. This is hampering efforts to offset unavoidable carbon footprints, mitigate climate change, and scale up urgently needed investment in tropical forest conservation.

Measuring the value of carbon storage is not easy. Recent research revealed that as little as 6% of carbon credits from voluntary REDD+ schemes result in preserved forests. And the length of time these forests are preserved is critical to the climate benefits achieved.

Now, a team led by scientists at the University of Cambridge has invented a more reliable and transparent way of estimating the benefit of carbon stored because of forest conservation.  

The method is published today in the journal Nature Climate Change. In it, the researchers argue that saving tropical forests is not only vital for biodiversity, but also a much less expensive way of balancing emissions than most of the current carbon capture and storage technologies.

The new approach works a bit like a lease agreement: carbon credits are issued to tropical forest projects that store carbon for a predicted amount of time. The valuation is front-loaded, because more trees protected now means less carbon released to the atmosphere straight away.

The technique involves deliberately pessimistic predictions of when stored carbon might be released, so that the number of credits issued is conservative. But because forests can now be monitored by remote sensing, if projects do better than predicted – which they usually will – they can be rewarded through the issue of further credits.
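As a toy illustration of how such front-loaded, impermanence-aware crediting can work, the sketch below values a tonne stored for a conservatively predicted period as a fraction of a permanent credit, with top-up credits if monitoring later confirms longer storage. The discount rate and horizon are invented for the example; the paper’s actual equivalence calculation differs in detail:

```python
# Toy model of impermanence-aware carbon credits, in the spirit of (but
# not identical to) PACT accounting.
def impermanent_credit(tonnes: float, years_stored: int,
                       discount: float = 0.03, horizon: int = 100) -> float:
    """Value of storing `tonnes` of CO2 for `years_stored` years, as a
    fraction of 'permanent' storage over `horizon` years."""
    def pv(years):  # discounted value of one tonne-year per year
        return sum((1 - discount) ** t for t in range(years))
    return tonnes * pv(years_stored) / pv(horizon)

# A tonne pessimistically predicted to stay stored for 30 years earns a
# fraction of a permanent credit up front ...
print(impermanent_credit(1.0, 30))                                # ~0.63
# ... plus a top-up if remote sensing shows the forest still standing at 50 years.
print(impermanent_credit(1.0, 50) - impermanent_credit(1.0, 30))  # ~0.19
```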

The payments encourage local people to protect forests: the carbon finance they receive can help provide alternative livelihoods that don’t involve cutting down trees.

And by allowing for future payments, the new method generates incentives for safeguarding forests long after credits have been issued. This contrasts with the current approach, which passes on a burden for conservation to future generations without compensation for lost livelihoods.

The approach also allows different types of conservation projects to be compared in a like-for-like manner.

“Until now there hasn’t been a satisfactory way of directly comparing technological solutions with nature-based solutions for carbon capture. This has caused a lack of enthusiasm for investing in carbon credits linked to tropical forest protection,” said Dr Tom Swinfield, a researcher in the University of Cambridge’s Department of Zoology and senior author of the study.

He added: “Tropical forests are being cleared so quickly that if we don’t protect them now, we’re not going to make the vital progress we need towards net-zero. Buying carbon credits linked to their protection is one of the best ways to do this.”

Tropical forests play a key role in taking carbon dioxide out of the atmosphere, helping to reduce global warming and avert climate catastrophe. But the carbon they capture is not taken out of the atmosphere permanently: forests can be destroyed by pests, floods, fire, wind – and by human clearance.

This impermanence, and therefore the difficulty of reliably measuring the long-term climate benefit of tropical forest protection, has made it an unattractive proposition for investors wanting to offset their carbon emissions.

And this is despite it being a far cheaper investment than more permanent, technology-based methods of carbon capture and storage.

Protection of tropical forests, a nature-based solution to climate change, comes with additional benefits: helping to conserve biodiversity, and supporting the livelihoods of people living near the forests.

“Nature-based carbon solutions are highly undervalued right now because the market doesn’t know how to account for the fact that forests aren’t a permanent carbon storage solution. Our method takes away a lot of the uncertainties,” said Anil Madhavapeddy, a Professor in the University of Cambridge’s Department of Computer Science and Technology, who was involved in the study.

The new method, developed by scientists at the Universities of Cambridge and Exeter and the London School of Economics, is called ‘Permanent Additional Carbon Tonne’ (PACT) accounting, and can be used to value a wide range of nature-based solutions.

“Carbon finance is a way for us – the carbon emitters of the richer world – to direct funds towards rural communities in the tropics so they can get more out of the land they have, without cutting down more trees,” said Andrew Balmford, Professor of Conservation Science at the University of Cambridge and first author of the paper.

Co-author Srinivasan Keshav, Robert Sansom Professor of Computer Science at Cambridge added: “Our new approach has the potential to address market concerns around nature-based solutions to carbon offsetting, and lead to desperately needed investment.”

Conversion of tropical forest to agricultural land results in vast carbon emissions. Around 30% of all progress towards the ambitious net-zero commitments made at COP26 is reliant on better management of carbon in nature.

Other carbon credit investment options include technologies that remove carbon dioxide from the atmosphere and lock it deep in the Earth for hundreds of years. These permanent storage options may currently be easier to value, say the researchers, but they typically cost substantially more than nature-based solutions and do nothing to protect natural habitats that are vital in regulating the global climate and mitigating the extinction crisis.

The research was funded primarily by the Tezos Foundation. It was conducted by researchers at the Cambridge Centre for Carbon Credits.

Reference: Balmford, A. et al.: ‘Realising the social value of impermanent carbon credits.’ Nature Climate Change, October 2023. DOI: 10.1038/s41558-023-01815-0





Using lasers to ‘heat and beat’ 3D-printed steel could help reduce costs

Retrieval of a stainless steel part made by 3D printing

source: www.cam.ac.uk

Researchers have developed a new method for 3D printing metal that could help reduce costs and make more efficient use of resources.

This method could help reduce the costs of metal 3D printing, which could in turn improve the sustainability of the metal manufacturing industry – Matteo Seita

The method, developed by a research team led by the University of Cambridge, allows structural modifications to be ‘programmed’ into metal alloys during 3D printing, fine-tuning their properties without the ‘heating and beating’ process that’s been in use for thousands of years.

The new 3D printing method combines the best qualities of both worlds: the complex shapes that 3D printing makes possible, and the ability to engineer the structure and properties of metals that traditional methods allow. The results are reported in the journal Nature Communications.

3D printing has several advantages over other manufacturing methods. For example, it’s far easier to produce intricate shapes using 3D printing, and it uses far less material than traditional metal manufacturing methods, making it a more efficient process. However, it also has significant drawbacks.

“There’s a lot of promise around 3D printing, but it’s still not in wide use in industry, mostly because of high production costs,” said Dr Matteo Seita from Cambridge’s Department of Engineering, who led the research. “One of the main drivers of these costs is the amount of tweaking that materials need after production.”

Since the Bronze Age, metal parts have been made through a process of heating and beating. This approach, where the material is hardened with a hammer and softened by fire, allows the maker to form the metal into the desired shape and at the same time impart physical properties such as flexibility or strength.

“The reason why heating and beating is so effective is because it changes the internal structure of the material, allowing control over its properties,” said Seita. “That’s why it’s still in use after thousands of years.”

One of the major downsides of current 3D printing techniques is an inability to control the internal structure in the same way, which is why so much post-production alteration is required. “We’re trying to come up with ways to restore some of that structural engineering capability without the need for heating and beating, which would in turn help reduce costs,” said Seita. “If you can control the properties you want in metals, you can leverage the greener aspects of 3D printing.”

Working with colleagues in Singapore, Switzerland, Finland and Australia, Seita developed a new ‘recipe’ for 3D-printed metal that allows a high degree of control over the internal structure of the material as it is being melted by a laser.

By controlling the way that the material solidifies after melting, and the amount of heat that is generated during the process, the researchers can programme the properties of the end material. Normally, metals are designed to be strong and tough, so that they are safe to use in structural applications. 3D-printed metals are inherently strong, but also brittle.

The strategy the researchers developed gives full control over both strength and toughness, by triggering a controlled reconfiguration of the microstructure when the 3D-printed metal part is placed in a furnace at relatively low temperature. Their method uses conventional laser-based 3D printing technologies, but with a small tweak to the process.

“We found that the laser can be used as a ‘microscopic hammer’ to harden the metal during 3D printing,” said Seita. “However, melting the metal a second time with the same laser relaxes the metal’s structure, allowing the structural reconfiguration to take place when the part is placed in the furnace.”

Their 3D-printed steel, which was designed theoretically and validated experimentally, was made with alternating regions of strong and tough material, making its performance comparable to steel that’s been made through heating and beating.

“We think this method could help reduce the costs of metal 3D printing, which could in turn improve the sustainability of the metal manufacturing industry,” said Seita. “In the near future, we also hope to be able to bypass the low-temperature treatment in the furnace, further reducing the number of steps required before using 3D printed parts in engineering applications.”

The team included researchers from Nanyang Technological University, the Agency for Science, Technology and Research (A*STAR), the Paul Scherrer Institute, VTT Technical Research Centre of Finland, and the Australian Nuclear Science & Technology Organisation. Matteo Seita is a Fellow of St John’s College, Cambridge.

Reference:
Shubo Gao et al. ‘Additive manufacturing of alloys with programmable microstructure and properties.’ Nature Communications (2023). DOI: 10.1038/s41467-023-42326-y




Simple blood test can help diagnose bipolar disorder

Person providing a drop of blood for a medical test

source: www.cam.ac.uk

Researchers have developed a new way of improving diagnosis of bipolar disorder that uses a simple blood test to identify biomarkers associated with the condition.

The ability to diagnose bipolar disorder with a simple blood test could ensure that patients get the right treatment the first time – Jakub Tomasik

The researchers, from the University of Cambridge, used a combination of an online psychiatric assessment and a blood test to diagnose patients with bipolar disorder, many of whom had been misdiagnosed with major depressive disorder.

The researchers say the blood test on its own could diagnose up to 30% of patients with bipolar disorder, but that it is even more effective when combined with a digital mental health assessment.

Incorporating biomarker testing could help physicians differentiate between major depressive disorder and bipolar disorder, which have overlapping symptoms but require different pharmacological treatments.

Although the blood test is still a proof of concept, the researchers say it could be an effective complement to existing psychiatric diagnosis and could help researchers understand the biological origins of mental health conditions. The results are reported in the journal JAMA Psychiatry.

Bipolar disorder affects approximately one percent of the population – as many as 80 million people worldwide – but for nearly 40% of patients, it is misdiagnosed as major depressive disorder.

“People with bipolar disorder will experience periods of low mood and periods of very high mood or mania,” said first author Dr Jakub Tomasik, from Cambridge’s Department of Chemical Engineering and Biotechnology. “But patients will often only see a doctor when they’re experiencing low mood, which is why bipolar disorder frequently gets misdiagnosed as major depressive disorder.”

“When someone with bipolar disorder is experiencing a period of low mood, to a physician, it can look very similar to someone with major depressive disorder,” said Professor Sabine Bahn, who led the research. “However, the two conditions need to be treated differently: if someone with bipolar disorder is prescribed antidepressants without the addition of a mood stabiliser, it can trigger a manic episode.”

The most effective way to get an accurate diagnosis of bipolar disorder is a full psychiatric assessment. However, patients often face long waits to get these assessments, and they take time to carry out.

“Psychiatric assessments are highly effective, but the ability to diagnose bipolar disorder with a simple blood test could ensure that patients get the right treatment the first time and alleviate some of the pressures on medical professionals,” said Tomasik.

The researchers used samples and data from the Delta study, conducted in the UK between 2018 and 2020, to identify bipolar disorder in patients who had received a diagnosis of major depressive disorder within the previous five years and had current depressive symptoms. Participants were recruited online through voluntary response sampling.

More than 3000 participants were recruited, and they each completed an online mental health assessment of more than 600 questions. The assessment covered a range of topics that may be relevant to mental health disorders, including past or current depressive episodes, generalised anxiety, symptoms of mania, family history and substance abuse.

Of the participants who completed the online assessment, around 1000 were selected to send in a dried blood sample from a simple finger prick, which the researchers analysed for more than 600 different metabolites using mass spectrometry. After completing the Composite International Diagnostic Interview, a fully structured and validated diagnostic tool to establish mood disorder diagnoses, 241 participants were included in the study.

Analysis of the data showed a significant biomarker signal for bipolar disorder, even after accounting for confounding factors such as medication. The identified biomarkers were correlated primarily with lifetime manic symptoms and were validated in a separate group of patients who received a new clinical diagnosis of major depressive disorder or bipolar disorder during the study’s one-year follow-up period.

The researchers found that the combination of patient-reported information and the biomarker test significantly improved diagnostic outcomes for people with bipolar disorder, especially in those where the diagnosis was not obvious.

“The online assessment was more effective overall, but the biomarker test performs well and is much faster,” said Bahn. “A combination of both approaches would be ideal, as they’re complementary.”
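A minimal sketch of what combining the two complementary signals could look like, using a logistic model over two invented scores (the study’s actual features, models and validation differ; the in-sample AUCs here are purely illustrative):

```python
# Combine a questionnaire-derived score with a biomarker score to separate
# bipolar disorder (1) from unipolar depression (0) on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 241                                      # participants in the final analysis
y = rng.integers(0, 2, n)                    # 1 = bipolar, 0 = unipolar depression
questionnaire = y + rng.normal(0, 0.8, n)    # stand-in online assessment score
biomarker = y + rng.normal(0, 1.2, n)        # noisier stand-in metabolite score

for name, X in [("questionnaire", questionnaire[:, None]),
                ("biomarker", biomarker[:, None]),
                ("combined", np.column_stack([questionnaire, biomarker]))]:
    probs = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, probs):.2f}")  # combined is typically highest
```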

“We found that some patients preferred the biomarker test, because it was an objective result that they could see,” said Tomasik. “Mental illness has a biological basis, and it’s important for patients to know it’s not in their mind. It’s an illness that affects the body like any other.”

“In addition to the diagnostic capabilities of biomarkers, they could also be used to identify potential drug targets for mood disorders, which could lead to better treatments,” said Bahn. “It’s an exciting time to be in this area of research.”

A patent has been filed on the research by Cambridge Enterprise, the University’s commercialisation arm. The research was supported by the Stanley Medical Research Institute and Psyomics, a University spin-out company co-founded by Sabine Bahn.

Sabine Bahn is Professor of Neurotechnology at the Department of Chemical Engineering and Biotechnology and is a Fellow of Lucy Cavendish College, Cambridge.

Reference:
Jakub Tomasik et al. ‘Metabolomic Biomarker Signatures for Bipolar and Unipolar Depression.’ JAMA Psychiatry (2023). DOI: 10.1001/jamapsychiatry.2023.4096


