
Over Half a Million People Take Part In Largest Ever Study of Psychological Sex Differences and Autistic Traits


source: www.cam.ac.uk

Scientists at the University of Cambridge have completed the world’s largest ever study of typical sex differences and autistic traits. They tested and confirmed two long-standing psychological theories: the Empathising-Systemising theory of sex differences and the Extreme Male Brain theory of autism.

Big data is important to draw conclusions that are replicable and robust. This is an example of how scientists can work with the media to achieve big data science

David Greenberg

Working with the television production company Channel 4, they tested over half a million people, including over 36,000 autistic people. The results are published today in the Proceedings of the National Academy of Sciences.

The Empathising-Systemising theory predicts that women, on average, will score higher than men on tests of empathy, the ability to recognize what another person is thinking or feeling, and to respond to their state of mind with an appropriate emotion. Similarly, it predicts that men, on average, will score higher on tests of systemising, the drive to analyse or build rule-based systems.

The Extreme Male Brain theory predicts that autistic people, on average, will show a masculinised shift on these two dimensions: namely, that they will score lower than the typical population on tests of empathy and will score the same as if not higher than the typical population on tests of systemising.

Whereas both theories have been confirmed in previous studies of relatively modest samples, the new findings come from a massive sample of 671,606 people, which included 36,648 autistic people. They were replicated in a second sample of 14,354 people. In this new study, the scientists used very brief 10-item measures of empathy, systemising, and autistic traits.

Using these short measures, the team identified that in the typical population, women, on average, scored higher than men on empathy, and men, on average, scored higher than women on systemising and autistic traits. These sex differences were reduced in autistic people. On all these measures, autistic people’s scores, on average, were ‘masculinised’: that is, they had higher scores on systemising and autistic traits and lower scores on empathy, compared to the typical population.

The team also calculated the difference (or ‘d-score’) between each individual’s score on the systemising and empathy tests. A high d-score means a person’s systemising is higher than their empathy, and a low d-score means their empathy is higher than their systemising.
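The d-score described above is straightforward to compute. Below is a minimal sketch in Python, assuming raw 10-item scores are standardised before differencing; the study's exact normalisation may differ, and the example scores are hypothetical.

```python
import numpy as np

def d_score(systemising, empathy):
    """Difference between standardised systemising and empathy scores.

    Positive values mean systemising exceeds empathy; negative values
    mean the reverse. Illustrative only -- the published study's exact
    normalisation may differ.
    """
    s = np.asarray(systemising, dtype=float)
    e = np.asarray(empathy, dtype=float)
    s_z = (s - s.mean()) / s.std()
    e_z = (e - e.mean()) / e.std()
    return s_z - e_z

# Hypothetical 10-item scores for five respondents
sq = [7, 3, 9, 5, 6]
eq = [4, 8, 2, 6, 5]
print(d_score(sq, eq))  # respondents 0 and 2 lean towards systemising; 1 and 3 towards empathy
```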

They found that in the typical population, men, on average, had a shift towards a high d-score, whereas women, on average, had a shift towards a low d-score. Autistic individuals, on average, had a shift towards an even higher d-score than typical males. Strikingly, d-scores accounted for 19 times more of the variance in autistic traits than other variables, including sex.

Finally, men, on average, had higher autistic trait scores than women. Those working in STEM (Science, Technology, Engineering and Mathematics), on average, had higher systemising and autistic trait scores than those in non-STEM occupations. Conversely, those working in non-STEM occupations, on average, had higher empathy scores than those working in STEM.

In the paper, the authors discuss how it is important to bear in mind that differences observed in this study apply only to group averages, not to individuals. They underline that these data say nothing about an individual based on their gender, autism diagnosis, or occupation. To do that would constitute stereotyping and discrimination, which the authors strongly oppose.

Further, the authors reiterate that the two theories are applicable to only two dimensions of typical sex differences: empathy and systemising. They do not apply to all sex differences, such as aggression, and to extrapolate the theories beyond these two dimensions would be a misinterpretation.

Finally, the authors highlight that although autistic people on average struggle with ‘cognitive’ empathy – recognizing other people’s thoughts and feelings – they nevertheless have intact ‘affective’ empathy – they care about others. It is a common misunderstanding that autistic people struggle with all forms of empathy, which is untrue.

Dr Varun Warrier, from the Cambridge team, said: “These sex differences in the typical population are very clear. We know from related studies that individual differences in empathy and systemising are partly genetic, partly influenced by our prenatal hormonal exposure, and partly due to environmental experience. We need to investigate the extent to which these observed sex differences are due to each of these factors, and how these interact.”

Dr David Greenberg, from the Cambridge team, said: “Big data is important to draw conclusions that are replicable and robust. This is an example of how scientists can work with the media to achieve big data science.”

Dr Carrie Allison, from the Cambridge team, said: “We are grateful to both the general public and to the autism community for participating in this research. The next step must be to consider the relevance of these findings for education, and support where needed.”

Professor Simon Baron-Cohen, Director of the Autism Research Centre at Cambridge who proposed these two theories nearly two decades ago, said: “This research provides strong support for both theories. This study also pinpoints some of the qualities autistic people bring to neurodiversity. They are, on average, strong systemisers, meaning they have excellent pattern-recognition skills, excellent attention to detail, and an aptitude in understanding how things work. We must support their talents so they achieve their potential – and society benefits too.”

This study was supported by the Autism Research Trust, the Medical Research Council, Wellcome, and the Templeton World Charity Foundation, Inc. It was conducted in association with the NIHR CLAHRC for Cambridgeshire and Peterborough NHS Foundation Trust, and the NIHR Cambridge Biomedical Research Centre.

Reference
Greenberg, DM et al. Testing the Empathizing-Systemising theory of sex differences and the Extreme Male Brain theory of autism in half a million people. PNAS; 12 Nov 2018; DOI: 10.1073/pnas.1811032115

If you’d like to complete these measures and participate in studies at the Autism Research Centre, please register here.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

Ancient DNA Analysis Unlocks Secrets of Ice Age Tribes in The Americas

source: www.cam.ac.uk

Scientists have sequenced 15 ancient genomes spanning from Alaska to Patagonia, allowing them to track the movements of the first humans as they spread across the Americas at “astonishing” speed during the last Ice Age, and how they interacted with each other in the millennia that followed.

Our study proves that Spirit Cave and Lagoa Santa were actually genetically closer to contemporary Native Americans than to any other ancient or contemporary group sequenced to date

Eske Willerslev

The results have been published in the journal Science as part of a wide-ranging international study, led by the University of Cambridge, which genetically analysed the DNA of a series of well-known and controversial ancient remains across North and South America.

The research also uncovered a puzzling Australasian genetic signal in the 10,400-year-old Lagoa Santa remains from Brazil, revealing a previously unknown group of early South Americans – but the Australasian link left no genetic trace in North America.

Additionally, a legal battle over a 10,600-year-old skeleton known as the ‘Spirit Cave Mummy’ has ended after advanced DNA sequencing found it was related to a Native American tribe. The finding allowed the researchers to dismiss the longstanding ‘Paleoamerican’ hypothesis, first proposed in the 19th century, that a distinct group existed in North America before Native Americans.

“Spirit Cave and Lagoa Santa were very controversial because they were identified as so-called ‘Paleoamericans’ based on craniometry – it was determined that the shape of their skulls was different to current-day Native Americans,” said Professor Eske Willerslev, who holds positions at the Universities of Cambridge and Copenhagen, and led the study. “Our study proves that Spirit Cave and Lagoa Santa were actually genetically closer to contemporary Native Americans than to any other ancient or contemporary group sequenced to date.”

The scientific and cultural significance of the Spirit Cave remains, which were found in 1940 in a small rocky alcove in the Great Basin Desert, was not properly understood for 50 years. The preserved remains of the man in his forties were initially believed to be between 1,500 and 2,000 years old, but during the 1990s new textile and hair testing dated the skeleton at 10,600 years old.

The Fallon Paiute-Shoshone Tribe, a group of Native Americans based in Nevada near Spirit Cave, claimed cultural affiliation with the skeleton and requested immediate repatriation of the remains.

Their request was refused and the tribe sued the US government, a lawsuit that pitted tribal leaders against anthropologists, who argued the remains provided invaluable insights into North America’s earliest inhabitants and should continue to be displayed in a museum.

The deadlock continued for 20 years, until the tribe agreed that Professor Willerslev could carry out genome sequencing on DNA extracted from the Spirit Cave for the first time.

“I assured the tribe that my group would not do the DNA testing unless they gave permission, and it was agreed that if Spirit Cave was genetically a Native American, the mummy would be repatriated to the tribe,” said Professor Willerslev, who is a Fellow of St John’s College.

The team extracted DNA from the inside of the skull, proving that the skeleton was an ancestor of present-day Native Americans. Spirit Cave was returned to the tribe in 2016, and a private reburial ceremony was held earlier this year. The tribe was kept informed throughout the two-year project; two members visited the lab in Copenhagen to meet the scientists and were present when all of the DNA sampling was carried out.

The genome of the Spirit Cave skeleton has wider significance: it not only settled the legal and cultural dispute between the tribe and the Government, it also helped reveal how ancient humans moved and settled across the Americas. The scientists were able to track the movement of populations from Alaska to as far south as Patagonia, with groups often separating from each other and taking their chances travelling in small, isolated pockets.

Dr David Meltzer, from the Department of Anthropology, Southern Methodist University, Dallas, said: “A striking thing about the analysis of Spirit Cave and Lagoa Santa is their close genetic similarity which implies their ancestral population travelled through the continent at astonishing speed. That’s something we’ve suspected due to the archaeological findings, but it’s fascinating to have it confirmed by the genetics. These findings imply that the first peoples were highly skilled at moving rapidly across an utterly unfamiliar and empty landscape. They had a whole continent to themselves and they were travelling great distances at speed.”

The study also revealed surprising traces of Australasian ancestry in ancient South American Native Americans but no Australasian genetic link was found in North American Native Americans.

Dr Victor Moreno-Mayar, from the Centre for GeoGenetics, University of Copenhagen and first author of the study, said: “We discovered the Australasian signal was absent in Native Americans prior to the Spirit Cave and Lagoa Santa population split which means groups carrying this genetic signal were either already present in South America when Native Americans reached the region, or Australasian groups arrived later. That this signal has not been previously documented in North America implies that an earlier group possessing it had disappeared or a later arriving group passed through North America without leaving any genetic trace.”

Dr Peter de Barros Damgaard, from the Centre for GeoGenetics, University of Copenhagen, explained why scientists remain puzzled but optimistic about the Australasian ancestry signal in South America. He explained: “If we assume that the migratory route that brought this Australasian ancestry to South America went through North America, either the carriers of the genetic signal came in as a structured population and went straight to South America where they later mixed with new incoming groups, or they entered later. At the moment we cannot resolve which of these might be correct, leaving us facing extraordinary evidence of an extraordinary chapter in human history! But we will solve this puzzle.”

The population history during the millennia that followed initial settlement was far more complex than previously thought. Earlier accounts portrayed the peopling of the Americas as a simple series of north-to-south population splits, with little to no interaction between groups after their establishment.

The new genomic analysis presented in the study has shown that around 8,000 years ago, Native Americans were on the move again, but this time from Mesoamerica into both North and South America.

Researchers found traces of this movement in the genomes of all present-day indigenous populations in South America for which genomic data is available to date.

Dr Moreno-Mayar added: “The older genomes in our study not only taught us about the first inhabitants in South America but also served as a baseline for identifying a second stream of genetic ancestry, which arrived from Mesoamerica in recent millennia and that is not evident from the archaeological record. These Mesoamerican peoples mixed with the descendants of the earliest South Americans and gave rise to most contemporary groups in the region.”

Reference:
J. Victor Moreno-Mayar et al. ‘Early human dispersals within the Americas.’ Science (2018). DOI: 10.1126/science.aav2621

Adapted from a St John’s College press release.

Inset image: Skulls and other human remains from P.W. Lund’s Collection from Lagoa Santa, Brazil. Kept in the Natural History Museum of Denmark. Credit: Natural History Museum of Denmark



Selective Amnesia: How Rats and Humans Are Able To Actively Forget Distracting Memories

source: www.cam.ac.uk

Our ability to selectively forget distracting memories is shared with other mammals, suggests new research from the University of Cambridge. The discovery that rats and humans share a common active forgetting ability – and in similar brain regions – suggests that the capacity to forget plays a vital role in adapting mammalian species to their environments, and that its evolution may date back at least to the time of our common ancestor.

Quite simply, the very act of remembering is a major reason why we forget, shaping our memory according to how it is used

Michael Anderson

The human brain is estimated to include some 86 billion neurons (or nerve cells) and as many as 150 trillion synaptic connections, making it a powerful machine for processing and storing memories. We need to retrieve these memories to help us carry out our daily tasks, whether remembering where we left the car in the supermarket car park or recalling the name of someone we meet in the street. But the sheer scale of the experiences we could store in memory over our lives creates the risk of being overwhelmed with information. When we come out of the supermarket and think about where we left the car, for example, we only need to recall where we parked today, rather than being distracted by memories of every previous shopping trip.

Previous work by Professor Michael Anderson at the Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, showed that humans possess the ability to actively forget distracting memories, and that retrieval plays a crucial role in this process. His group has shown how intentional recall of a past memory is more than simply reawakening it; it actually leads us to forget other competing experiences that interfere with retrieval of the memory we seek.

“Quite simply, the very act of remembering is a major reason why we forget, shaping our memory according to how it is used,” says Professor Anderson.

“People are used to thinking of forgetting as something passive. Our research reveals that people are more engaged than they realise in actively shaping what they remember of their lives. The idea that the very act of remembering can cause forgetting is surprising and could tell us more about people’s capacity for selective amnesia.”

While this process improves the efficiency of memory, it can sometimes lead to problems. If the police interview a witness to a crime, for example, their repeated questioning about selected details might lead the witness to forget information that could later prove important.

Although the ability to actively forget has been seen in humans, it is unclear whether it occurs in other species. Could this ability be unique to our species, or at least to more intelligent mammals such as monkeys and great apes?

In a study published today in the journal Nature Communications, Professor Anderson, together with Pedro Bekinschtein and Noelia Weisstaub of Universidad Favaloro in Argentina, has shown that the ability to actively forget is not a peculiarly human characteristic: rats, too, share our capacity for selective forgetting and use a very similar brain mechanism, suggesting this is an ability shared among mammals.

To demonstrate this, the researchers devised an ingeniously simple task based on rats’ innate curiosity: when put into a new environment, rats actively explore it to learn more, forming memories of any new objects they find and investigate.

Building on this simple observation, the researchers allowed rats to explore two previously unseen objects (A and B) in an open arena; the objects included a ball, a cup, small toys and a soup can. Rats first explored object A for five minutes and were then removed from the arena; 20 minutes later, they were returned to the arena with object B, which they also explored for five minutes.

To test whether rats, like humans, show retrieval-induced forgetting, the rats next performed “retrieval practice” on one of the two objects (e.g. A), to see how this affected their later memory for the competitor object (B). During this retrieval practice phase, the researchers repeatedly placed the rat in the arena with the object they wanted the rat to remember (e.g. A), together with another object never before seen in the context of the arena. Rats instinctively prefer exploring novel objects, and so on these “retrieval practice” trials the rats clearly preferred to explore the new objects, implying that they had indeed remembered A and saw it as “old news”.

To find out how repeatedly retrieving A affected rats’ later memory for B, in a final phase conducted 30 minutes later, the researchers placed the rat into the arena with B and an entirely new object.  Strikingly, on this final test, the rats explored both B and the new object equally – by selectively remembering their experience with A over and over, rats had actively trained themselves to forget B.

In contrast, in control conditions in which the researchers skipped the retrieval practice phase and replaced it with an equal amount of relaxing time in the rats’ home cage, or an alternative memory storage task not involving retrieval, rats showed excellent memory for B.

Professor Anderson’s team then identified an area towards the front of the rat’s brain that controls this active forgetting mechanism. When a region at the front of the rat’s brain known as the medial prefrontal cortex was temporarily ‘switched off’ using the drug muscimol, the animal entirely lost its ability to selectively forget competing memories; despite undergoing the same “retrieval practice” task as before, rats now recognised B. In humans, the ability to selectively forget in this manner involves engaging an analogous region in the prefrontal cortex.

“Rats appear to have the same active forgetting ability as humans do – they forget memories selectively when those memories cause distraction,” says Professor Anderson. “And, crucially, they use a similar prefrontal control mechanism as we do. This discovery suggests that this ability to actively forget less useful memories may have evolved far back on the ‘Tree of Life’, perhaps as far back as our common ancestor with rodents some 100 million years ago.”

Professor Anderson says that now that we know that the brain mechanisms for this process are similar in rats and humans, it should be possible to study this adaptive forgetting phenomenon at a cellular – or even molecular – level. A better understanding of the biological foundations of these mechanisms may help researchers develop improved treatments to help people forget traumatic events.

The research was funded by the Medical Research Council, the National Agency of Scientific and Technological Promotion of Argentina and the International Brain Research Organization.

Reference
Bekinschtein, P et al. A retrieval-specific mechanism of adaptive forgetting in the mammalian brain. Nature Communications; 7 Nov 2018; DOI: 10.1038/s41467-018-07128-7



New Efficiency Record Set For Perovskite LEDs

source: www.cam.ac.uk

Researchers have set a new efficiency record for LEDs based on perovskite semiconductors, rivalling that of the best organic LEDs (OLEDs).

Compared to OLEDs, which are widely used in high-end consumer electronics, the perovskite-based LEDs, developed by researchers at the University of Cambridge, can be made at much lower costs, and can be tuned to emit light across the visible and near-infrared spectra with high colour purity.

The researchers have engineered the perovskite layer in the LEDs to show close to 100% internal luminescence efficiency, opening up future applications in display, lighting and communications, as well as next-generation solar cells.

These perovskite materials are of the same type as those found to make highly efficient solar cells that could one day replace commercial silicon solar cells. While perovskite-based LEDs have already been developed, they have not been nearly as efficient as conventional OLEDs at converting electricity into light.

Earlier hybrid perovskite LEDs, first developed by Professor Sir Richard Friend’s group at the University’s Cavendish Laboratory four years ago, were promising, but losses from the perovskite layer, caused by tiny defects in the crystal structure, limited their light-emission efficiency.

Now, Cambridge researchers from the same group and their collaborators have shown that by forming a composite layer of the perovskites together with a polymer, it is possible to achieve much higher light-emission efficiencies, close to the theoretical efficiency limit of thin-film OLEDs. Their results are reported in the journal Nature Photonics.

“This perovskite-polymer structure effectively eliminates non-emissive losses, the first time this has been achieved in a perovskite-based device,” said Dr Dawei Di from Cambridge’s Cavendish Laboratory, one of the corresponding authors of the paper. “By blending the two, we can basically prevent the electrons and positive charges from recombining via the defects in the perovskite structure.”

The perovskite-polymer blend used in the LED devices, known as a bulk heterostructure, is made of two-dimensional and three-dimensional perovskite components and an insulating polymer. When an ultra-fast laser is shone on the structures, pairs of electric charges that carry energy move from the 2D regions to the 3D regions in a trillionth of a second: much faster than earlier layered perovskite structures used in LEDs. Separated charges in the 3D regions then recombine and emit light extremely efficiently.

“Since the energy migration from 2D regions to 3D regions happens so quickly, and the charges in the 3D regions are isolated from the defects by the polymer, these mechanisms prevent the defects from getting involved, thereby preventing energy loss,” said Di.

“The best external quantum efficiencies of these devices are higher than 20% at current densities relevant to display applications, setting a new record for perovskite LEDs, which is a similar efficiency value to the best OLEDs on the market today,” said Baodan Zhao, the paper’s first author.

While perovskite-based LEDs are beginning to rival OLEDs in terms of efficiency, they still need better stability if they are to be adopted in consumer electronics. When perovskite-based LEDs were first developed, they had a lifetime of just a few seconds. The LEDs developed in the current research have a half-life close to 50 hours, which is a huge improvement in just four years, but still nowhere near the lifetimes required for commercial applications, which will require an extensive industrial development programme. “Understanding the degradation mechanisms of the LEDs is a key to future improvements,” said Di.

The research was funded by the Engineering and Physical Sciences Research Council (EPSRC) and the European Research Council (ERC).

Reference:
Baodan Zhao et al. ‘High-efficiency perovskite-polymer bulk heterostructure light-emitting diodes.’ Nature Photonics (2018). DOI: 10.1038/s41566-018-0283-4



Observation of Blood Vessel Cells Changing Function Could Lead To Early Detection of Blocked Arteries

source: www.cam.ac.uk

A study in mice has shown that it may be possible to detect the early signs of atherosclerosis, which leads to blocked arteries, by looking at how cells in our blood vessels change their function.

The muscle cells that line the blood vessels have long been known to multi-task. While their main function is pumping blood through the body, they are also involved in ‘patching up’ injuries in the blood vessels. Overzealous switching of these cells from the ‘pumping’ to the ‘repair’ mode can lead to atherosclerosis, resulting in the formation of ‘plaques’ in the blood vessels that block the blood flow.

Using state-of-the-art genomics technologies, an interdisciplinary team of researchers based in Cambridge and London has caught a tiny number of vascular muscle cells in mouse blood vessels in the act of switching and described their molecular properties. The researchers used an innovative methodology known as single-cell RNA-sequencing, which allowed them to track the activity of most genes in the genome in hundreds of individual vascular muscle cells.

Their findings, published today in Nature Communications, could pave the way for detecting the ‘switching’ cells in humans, potentially enabling the diagnosis and treatment of atherosclerosis at a very early stage in the future.

Atherosclerosis can lead to potentially serious cardiovascular diseases such as heart attack and stroke. Although there are currently no treatments that reverse atherosclerosis, lifestyle interventions such as improved diet and increased exercise can reduce the risk of the condition worsening; early detection can minimise this risk.

“We knew that although these cells in healthy tissues look similar to each other, they are actually quite a mixed bag at the molecular level,” explains Dr Helle Jørgensen, a group leader at the University of Cambridge’s Division of Cardiovascular Medicine, who co-directed the study. “However, when we got the results, a very small number of cells in the vessel really stood out. These cells lost the activity of typical muscle cell genes to various degrees, and instead expressed a gene called Sca1 that is best known to mark stem cells, the body’s ‘master cells’.”

The ability to detect the activity (or ‘expression’) of thousands of genes in parallel in these newly-discovered cells has been a game-changer, say the researchers.

“Single-cell RNA-sequencing has allowed us to see that in addition to Sca1, these cells expressed a whole set of other genes with known roles in the switching process,” says Lina Dobnikar, a computational biologist based at Babraham Institute and joint first author on the study. “While these cells did not necessarily show the properties of fully-switched cells, we could see that we caught them in the act of switching, which was not possible previously.”
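The kind of screen described here can be sketched with a toy expression matrix. This is only an illustrative outline: Myh11 and Acta2 are standard smooth-muscle marker genes, but the counts, thresholds and simple normalisation below are invented for the example and are not the study's actual analysis pipeline.

```python
import numpy as np

# Toy expression matrix: rows = cells, columns = genes.
genes = ["Myh11", "Acta2", "Sca1"]  # two smooth-muscle markers plus Sca1
counts = np.array([
    [120, 200, 0],   # typical muscle cell
    [115, 180, 1],   # typical muscle cell
    [10,  25,  85],  # candidate 'switching' cell: muscle genes down, Sca1 up
    [130, 210, 0],   # typical muscle cell
])

# Normalise each cell to counts-per-10k as a stand-in for real scRNA-seq
# normalisation, then flag cells with high Sca1 and low muscle-marker signal.
# Thresholds here are arbitrary, chosen only to make the toy example work.
cp10k = counts / counts.sum(axis=1, keepdims=True) * 1e4
sca1_high = cp10k[:, genes.index("Sca1")] > 100
muscle_low = cp10k[:, [0, 1]].sum(axis=1) < 5000
switching = sca1_high & muscle_low
print(np.flatnonzero(switching))  # indices of candidate switching cells
```

In a real analysis the matrix would span thousands of cells and the whole genome, and cells would be grouped by clustering rather than fixed thresholds, but the logic of singling out Sca1-expressing cells with reduced muscle-gene activity is the same.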

To confirm that these unusual cells originated from muscle cells, the team used another new technology, known as lineage labelling, which allowed the researchers to trace the history of a gene’s expression in each cell.

“Even when the cells have entirely shut down muscle cell genes, lineage labelling demonstrated that at some point either they or their ancestors were indeed the typical muscle cells,” says Annabel Taylor, a cell biologist in Jørgensen’s lab and joint first author on the study.

Knowing the molecular profile of these unusual cells has made it possible to study their behaviour in disease. The researchers confirmed that these cells become much more numerous in damaged blood vessels and in atherosclerotic plaques, as would be expected of switching cells.

“We were fortunate in that single-cell RNA-sequencing technologies had been rapidly evolving while we were working on the project,” says Dr Mikhail Spivakov, a genomics biologist and group leader at MRC London Institute of Medical Sciences, who co-directed the study with Jørgensen. Dr Spivakov carried out the work while he was a group leader at the Babraham Institute. “When we started out, looking at hundreds of cells was the limit, but for the analysis of atherosclerotic plaques we really needed thousands. By the time we got to doing this experiment, it was already possible.”

In the future, the team's findings may pave the way for catching atherosclerosis early and treating it more effectively.

“Theoretically, seeing an increase in the numbers of switching cells in otherwise healthy vessels should raise an alarm”, says Jørgensen. “Likewise, knowing the molecular features of these cells may help selectively target them with specific drugs. However, it is still early days. Our study was done in mice, where we could obtain large numbers of vascular muscle cells and modify their genomes for lineage labelling. Additional research is still required to translate our results into human cells first and then into the clinic.”

The research was funded by the British Heart Foundation and UK Research and Innovation.

Reference
Dobnikar, L, Taylor, AL, et al. Disease-relevant transcriptional signatures identified in individual smooth muscle cells from healthy vessels. Nature Communications; 1 Nov 2018; DOI: 10.1038/s41467-018-06891-x


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

Multi-Million Pound Initiative From Microsoft To Support AI Research At Cambridge

source: www.cam.ac.uk

The University of Cambridge is joining with Microsoft to help tackle the problem of ‘brain drain’ in AI and machine learning research.

By working together with industry on issues such as how best to use AI and machine learning, we can not only help solve complex issues for industry, but continue to support world-leading research and train the next generation of leaders in the field

Andy Neely

As part of the Microsoft Research – Cambridge University Machine Learning Initiative, Microsoft will help increase AI and machine learning research capacity and capability at Cambridge by supporting visiting researchers, postdoctoral researchers, PhD students and interns from the UK, EU and beyond.

The new Initiative builds on more than two decades of collaboration between the University and Microsoft Research Cambridge, and will be based in the University’s Department of Engineering. It will be formally announced today at the Microsoft Future Decoded Conference in London.

AI and machine learning have the potential to revolutionise how we interact with the world, but before these technologies can be widespread and used in industries such as healthcare, education and transportation, there are complex problems that need to be solved.

A shortage of skills in AI and machine learning, particularly at PhD level and above, has led many large tech companies to recruit from academia, leaving a shortage of research and teaching capacity at universities.

“By focusing on a two-way collaborative initiative for long-term growth, not short-term gain, we are taking a different approach to this problem. We are working with universities to build up AI and machine learning talent and research in the UK,” said Chris Bishop, Lab Director, Microsoft Research Cambridge. “Our researchers regularly work together on projects with global impact, and this initiative will help to build on the already strong links between the University of Cambridge and Microsoft.”

“Cambridge has a culture of ideas going back and forth between industry and academia, and this agreement with Microsoft is a prime example,” said Professor Andy Neely, Pro-Vice-Chancellor for Enterprise and Business Relations at Cambridge. “By working together with industry on issues such as how best to use AI and machine learning, we can not only help solve complex issues for industry, but continue to support world-leading research and train the next generation of leaders in the field.”

Earlier this year the Government and the AI sector agreed a Sector Deal to further boost the UK’s global reputation as a leader in developing AI technologies, ensuring the UK remains a go-to destination for AI innovation and investment.

Secretary of State for Digital, Culture, Media and Sport, Jeremy Wright, said: “The UK is a beacon for international talent and at the forefront of emerging technologies because of the ideas developed in our world-leading universities.

“This new collaboration between Microsoft and Cambridge University will help us continue to develop home-grown AI talent and supports the government’s modern Industrial Strategy and £1 billion AI sector deal. It is crucial that we do all we can to capitalise on our global advantage in this technology.”

Business Secretary Greg Clark said: “The UK has an unmatched heritage in AI and its application in emerging sectors and technologies.

“This partnership between one of the world’s leading universities and technology developer Microsoft is a great example of collaboration between business and academia. The UK’s leading research and innovation base is driving key parts of our modern Industrial Strategy, supported by the biggest increase in public research and development investment in the UK’s history.”



Studies Raise Questions Over How Epigenetic Information Is Inherited

source: www.cam.ac.uk

Evidence has been building in recent years that our diet, our habits or traumatic experiences can have consequences for the health of our children – and even our grandchildren. The explanation that has gained most currency for how this occurs is so-called ‘epigenetic inheritance’ – patterns of chemical ‘marks’ on or around our DNA that are hypothesised to be passed down the generations. But new research from the University of Cambridge suggests that this mechanism of non-genetic inheritance is likely to be very rare.

There’s been a lot of excitement and hype surrounding the extent to which our epigenetic information is passed on to subsequent generations, but our work suggests that it’s not as pervasive as was previously thought

Tessa Bertozzi

A second study, also from Cambridge, suggests, however, that one way that environmental effects are passed on may in fact be through molecules produced from the DNA known as RNA that are found in a father’s sperm.

The mechanism by which we inherit innate characteristics from our parents is well understood: we inherit half of our genes from our mother and half from our father. However, the mechanism whereby a ‘memory’ of the parent’s environment and behaviour might be passed down through the generations is not understood.

Epigenetic inheritance has proved a compelling and popular explanation. The human genome is made up of DNA – our genetic blueprint. But our genome is complemented by a number of ‘epigenomes’ that vary by cell type and developmental time point.  Epigenetic marks are attached to our DNA and dictate in part whether a gene is on or off, influencing the function of the gene. The best understood epigenetic modification is DNA methylation, which places a methyl group on one of the bases of DNA (the A, C, G or T that make up our genetic code).

One model in which DNA methylation is associated with epigenetic inheritance is a mouse mutant called Agouti Viable Yellow. The coat of this mouse can be completely yellow, completely brown, or a pattern of these two colours – yet, remarkably, despite their different coat colours, the mice are genetically identical.

The explanation of how this occurs lies with epigenetics. Next to one of the key genes for coat colour lies a section of genetic code known as a ‘transposable element’ – a small mobile DNA ‘cassette’ that is actually repeated many times in the mouse genome but here acts to regulate the coat colour gene.

As many of these transposable elements come from external sources – for example, from a virus’s genome – they could be dangerous to the host’s DNA. But organisms have evolved a way of controlling their movement through methylation, which is most often a silencing epigenetic mark.

In the case of the gene for coat colour, if methylation switches off the transposable element completely, the mouse will be brown; if acquisition of methylation fails completely, the mouse will be yellow. But this does not affect the genetic code itself, just the epigenetic landscape of that DNA segment.

And yet, a yellow-coated female is more likely to have yellow-coated offspring and a brown-coated female is more likely to have brown-coated offspring. In other words, the epigenetically regulated behaviour of the transposable element is somehow being inherited from parent to offspring.
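The logic described above can be sketched as a toy model. The numeric methylation thresholds here are purely illustrative; only the brown/yellow/mottled outcomes and the idea that genetically identical mice differ only in methylation come from the article.

```python
# Toy model of the Agouti Viable Yellow logic: the DNA sequence is identical
# in every mouse; only the methylation level (0.0-1.0, invented scale) of the
# transposable element next to the coat-colour gene differs.

def coat_colour(methylation: float) -> str:
    """Map methylation of the transposable element to a coat colour."""
    if methylation >= 0.9:    # element fully silenced: gene behaves normally
        return "brown"
    elif methylation <= 0.1:  # silencing failed: element drives the gene
        return "yellow"
    else:                     # partial silencing: patchy coat
        return "mottled"

# Genetically identical mice, different epigenetic states:
print(coat_colour(0.95))  # brown
print(coat_colour(0.02))  # yellow
print(coat_colour(0.50))  # mottled
```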

A team led by Professor Anne Ferguson-Smith at Cambridge’s Department of Genetics set out to examine this phenomenon in more detail, asking whether similar variably-methylated transposable elements existed elsewhere that could influence a mouse’s traits, and whether the ‘memory’ of these methylation patterns could be passed from one generation to the next. Their results are published in the journal Cell.

The researchers found that while these transposable elements were common throughout the genome – transposable elements comprise around 40% of a mouse’s total genome – the vast majority were completely silenced by methylation and hence had no influence on genes.

Only around one in a hundred of these sequences were variably-methylated. Some of these are able to regulate nearby genes, whereas others may regulate genes located further away in the genome, acting over long range.

When the team looked at the extent to which the methylation patterns on these regions could be passed down to subsequent generations, only one of the six regions they studied in detail showed evidence of epigenetic inheritance – and even then, the effect size was small. Furthermore, only methylation patterns from the mother, not the father, were passed on.

“One might have assumed that all the variably-methylated elements we identified would show memory of parental epigenetic state, as is observed for coat colour in Agouti Viable Yellow mice,” says Tessa Bertozzi, a PhD candidate and one of the study’s first authors. “There’s been a lot of excitement and hype surrounding the extent to which our epigenetic information is passed on to subsequent generations, but our work suggests that it’s not as pervasive as was previously thought.”

“In fact, what we showed was that methylation marks at these transposable elements are reprogrammed from one generation to the next,” adds Professor Ferguson-Smith. “There’s a mechanism that removes methylation from the vast majority of the genome and puts it back on again, once in the process of generating eggs and sperm and again before the fertilised egg implants into the uterus. How the methylation patterns at the regions we have identified get reconstructed after this genome-wide erasure is still something of a mystery.

“We know there are some genes – imprinted genes, for example – that do not get reprogrammed in this way in the early embryo. But these are exceptions, not the rule.”

Professor Ferguson-Smith says that there is evidence that some environmentally-induced information can somehow be passed down generations. For example, her studies in mice show that the offspring of a mother who is undernourished during pregnancy are at increased risk of type 2 diabetes and obesity – and their offspring will in turn go on to be obese and diabetic. Again, she showed that DNA methylation was not the culprit – so how does this occur?

Every sperm is scarred?

The answer may come from research at the Wellcome/Cancer Research UK Gurdon Institute, also at the University of Cambridge, in collaboration with the lab of Professor Isabelle Mansuy from the University of Zürich and Swiss Federal Institute of Technology. In a study carried out in mice and published in the journal Molecular Psychiatry, they report how the ‘memory’ of early life trauma can be passed down to the next generation via RNA molecules carried by sperm.

Dr Katharina Gapp from Eric Miska’s lab at the Gurdon Institute and the Mansuy lab have previously shown that trauma in postnatal life increases the risk of behavioural and metabolic disorders, not only in the directly exposed individuals but also in their subsequent offspring.

Now, the team has shown that the trauma can cause alterations in ‘long RNA’ (RNA molecules containing more than 200 nucleotides) in the father’s sperm, and that these contribute to the inter-generational effect. This complements earlier research that found alterations in ‘short RNA’ molecules (fewer than 200 nucleotides) in the sperm. RNA is a molecule that serves a number of functions: some of the long versions, called messenger RNAs, ‘translate’ DNA code into functional proteins, while others regulate functions within cells.

Using a set of behavioural tests, the team showed that specific effects on the resulting offspring mediated by long RNA included risk-taking, increased insulin sensitivity and overeating, whereas small RNA conveyed the depressive-like behaviour of despair.

Dr Gapp said: “While other research groups have recently shown that small RNAs contribute to inheritance of the effects of chronic stress or changes in nutrition, our study indicates that long RNA can also contribute to transmitting some of the effects of early life trauma. We have added another piece to the puzzle for potential interventions in transfer of information down the generations.”

References
Kazachenka, A, Bertozzi, TM et al. Identification, Characterization, and Heritability of Murine Metastable Epialleles: Implications for Non-genetic Inheritance. Cell; 25 Oct 2018; DOI: 10.1016/j.cell.2018.09.043

Gapp K et al. Alterations in sperm long RNA contribute to the epigenetic inheritance of the effects of postnatal trauma. Molecular Psychiatry; 30 Oct 2018; DOI: 10.1038/s41380-018-0271-6


Researcher Profile: Tessa Bertozzi

Epigenetics has become something of a buzzword in recent years. It is the study of chemical modifications to DNA that switch genes on and off without changing the underlying DNA sequence. But what particularly excites interest is the extent to which these modifications, which can be altered by our environment – our diet, our behaviour, for example – can be inherited alongside DNA.

“The unknowns far outweigh the knowns in the young field of epigenetics, which is part of what makes it such an exciting time,” explains Tessa Bertozzi, a PhD student in the lab of Professor Anne Ferguson-Smith at Cambridge.

Tessa grew up in Mexico before moving to Seattle, Washington and then to southern California. “I came across Anne’s research in one of my undergraduate courses and found it fascinating. I contacted her soon after that and four years later I’m a final-year PhD student in her lab at Cambridge!”

Professor Ferguson-Smith’s lab has recently identified regions of the mouse genome that show different methylation levels across genetically identical mice. Tessa focuses on the mechanisms underlying the reconstruction of this epigenetic variation across generations.

“I conduct breeding experiments with mice and use specialised sequencing technologies to look at their DNA methylation patterns. While I am often found at the bench or analysing data on my computer, I also spend time developing ideas at meetings, seminars, and conferences, as well as participating in outreach activities.”

Cambridge has been a hub for epigeneticists for a while now, she says. “It is very motivating to be surrounded by like-minded researchers eager to interact and collaborate. In fact, my PhD has relied heavily on a number of collaborations across Cambridge.

“The University attracts academics from all over the world, making it a vibrant international community of people with different backgrounds and experiences. I have met and interacted with incredibly interesting people over the years.”



New Efficiency Record Set For Perovskite LEDs

source: www.cam.ac.uk

Researchers have set a new efficiency record for LEDs based on perovskite semiconductors, rivalling that of the best organic LEDs (OLEDs).

Compared to OLEDs, which are widely used in high-end consumer electronics, the perovskite-based LEDs, developed by researchers at the University of Cambridge, can be made at much lower costs, and can be tuned to emit light across the visible and near-infrared spectra with high colour purity.

The researchers have engineered the perovskite layer in the LEDs to show close to 100% internal luminescence efficiency, opening up future applications in display, lighting and communications, as well as next-generation solar cells.

These perovskite materials are the same type as those used to make highly efficient solar cells that could one day replace commercial silicon solar cells. While perovskite-based LEDs have already been developed, they have not been nearly as efficient as conventional OLEDs at converting electricity into light.

Earlier hybrid perovskite LEDs, first developed by Professor Sir Richard Friend’s group at the University’s Cavendish Laboratory four years ago, were promising, but losses from the perovskite layer, caused by tiny defects in the crystal structure, limited their light-emission efficiency.

Now, Cambridge researchers from the same group and their collaborators have shown that by forming a composite layer of the perovskites together with a polymer, it is possible to achieve much higher light-emission efficiencies, close to the theoretical efficiency limit of thin-film OLEDs. Their results are reported in the journal Nature Photonics.

“This perovskite-polymer structure effectively eliminates non-emissive losses, the first time this has been achieved in a perovskite-based device,” said Dr Dawei Di from Cambridge’s Cavendish Laboratory, one of the corresponding authors of the paper. “By blending the two, we can basically prevent the electrons and positive charges from recombining via the defects in the perovskite structure.”

The perovskite-polymer blend used in the LED devices, known as a bulk heterostructure, is made of two-dimensional and three-dimensional perovskite components and an insulating polymer. When an ultra-fast laser is shone on the structures, pairs of electric charges that carry energy move from the 2D regions to the 3D regions in a trillionth of a second: much faster than earlier layered perovskite structures used in LEDs. Separated charges in the 3D regions then recombine and emit light extremely efficiently.

“Since the energy migration from 2D regions to 3D regions happens so quickly, and the charges in the 3D regions are isolated from the defects by the polymer, these mechanisms prevent the defects from getting involved, thereby preventing energy loss,” said Di.

“The best external quantum efficiencies of these devices are higher than 20% at current densities relevant to display applications, setting a new record for perovskite LEDs, which is a similar efficiency value to the best OLEDs on the market today,” said Baodan Zhao, the paper’s first author.
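External quantum efficiency (EQE) is conventionally the number of photons emitted from the device per electron injected. A minimal sketch of that arithmetic follows; the device numbers are made up, chosen only to land near the 20% figure quoted above.

```python
# External quantum efficiency: photons emitted externally per electron
# injected. The current and photon rate below are illustrative, not
# measurements from the paper.
ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs per electron

def external_quantum_efficiency(photons_out_per_s: float, current_A: float) -> float:
    electrons_in_per_s = current_A / ELEMENTARY_CHARGE
    return photons_out_per_s / electrons_in_per_s

# A hypothetical device driven at 1 mA that emits 1.3e15 photons per second:
eqe = external_quantum_efficiency(photons_out_per_s=1.3e15, current_A=1e-3)
print(f"EQE = {eqe:.1%}")  # roughly 21%
```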

While perovskite-based LEDs are beginning to rival OLEDs in terms of efficiency, they still need better stability if they are to be adopted in consumer electronics. When perovskite-based LEDs were first developed, they had a lifetime of just a few seconds. The LEDs developed in the current research have a half-life close to 50 hours – a huge improvement in just four years, but still nowhere near the lifetimes required for commercial applications, which will demand an extensive industrial development programme. “Understanding the degradation mechanisms of the LEDs is a key to future improvements,” said Di.

The research was funded by the Engineering and Physical Sciences Research Council (EPSRC) and the European Research Council (ERC).

Reference:
Baodan Zhao et al. ‘High-efficiency perovskite-polymer bulk heterostructure light-emitting diodes.’ Nature Photonics (2018). DOI: 10.1038/s41566-018-0283-4



Cambridge Partners in New €1 billion European Quantum Flagship

source: www.cam.ac.uk

The University of Cambridge is a partner in the €1 billion Quantum Flagship, an EU-funded initiative to develop quantum technologies across Europe.

The Flagships are the largest and most transformative investments in research of the European Union, and will cement the EU leadership in future and emerging technologies

Andrea Ferrari

The Quantum Flagship, which is being officially launched today in Vienna, is one of the most ambitious long-term research and innovation initiatives of the European Commission. It is funded under the Horizon 2020 programme, and will have a budget of €1 billion over the next ten years.

The Quantum Flagship is the third large-scale research and innovation initiative of this kind funded by the European Commission, after the Graphene Flagship – of which the University of Cambridge is a founding partner – and the Human Brain Project. The Quantum Flagship work in Cambridge is being coordinated by Professor Mete Atature of the Cavendish Laboratory and Professor Andrea Ferrari, Director of the Cambridge Graphene Centre.

Quantum technologies take advantage of the ability of particles to exist in more than one quantum state at a time. A quantum computer could enable us to make calculations that are well out of reach of even the most powerful supercomputers, while quantum secure communication could power ‘unhackable’ networks made safe by the laws of physics.
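A textbook illustration of the ideas above, unrelated to the Flagship's specific devices: the two-qubit Bell state puts a pair of qubits in a superposition whose measurement outcomes are perfectly correlated, which is the entanglement that quantum networks distribute.

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2) and check that the two qubits'
# measurement outcomes are perfectly correlated. Standard textbook quantum
# mechanics, shown here purely for illustration.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# np.kron forms the joint (tensor-product) state of the two qubits.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Probabilities of the four measurement outcomes 00, 01, 10, 11:
probs = np.abs(bell) ** 2
print(probs)  # [0.5 0.  0.  0.5] -- the qubits always agree
```

The mixed outcomes 01 and 10 have zero probability: measuring one qubit instantly fixes what the other will show, no matter how far apart they are.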

The long-term research goal is the so-called quantum web, where quantum computers, simulators and sensors are interconnected via quantum networks, distributing information and quantum resources such as coherence and entanglement.

The potential performance increase resulting from quantum technologies may yield unprecedented computing power, guarantee data privacy and communication security, and provide ultra-high precision synchronisation and measurements for a range of applications available to everyone, locally and in the cloud.

The new Quantum Flagship will bring together academic and industrial partners, with over 500 researchers working on solving these problems, and help turn the results into technological opportunities that can be taken up by industry.

Working in close partnership with universities and companies in the UK, Italy, Spain and Sweden, Cambridge will develop layered quantum materials and devices for scalable integrated photonic circuits, with applications in quantum communication and networks.

Cambridge is investigating and refining layered semiconductors just a few atoms thick, based on materials known as transition metal dichalcogenides (TMDs). Certain TMDs contain quantum light sources that can emit single photons of light, which could be used in quantum computing and sensing applications.

These quantum light emitters occur randomly in layered materials, as is the case for most other material platforms. Over the past three years, the Cambridge researchers have developed a technique to obtain large-scale arrays of these quantum emitters in different TMDs and on a variety of substrates, establishing a route to build quantum networks on compact chips. The Cambridge team has also shown how to electrically control emission from these devices.

Additionally, the researchers have found that TMDs can support complex quasi-particles, called quintons. Quintons could be a source of entangled photons – particles of light which are intrinsically linked, no matter how far apart they are – if they can be trapped in quantum emitters.

These findings are the basis of the work being done in the Quantum Flagship, aimed at the development of scalable on-chip devices for quantum integrated photonic circuits, to enable secure quantum communications and quantum sensing applications.

“Our goal is to bring some of the amazing properties of the layered materials platform into the quantum technologies realm for a number of applications,” said Atature. “Achieving compact integrated quantum photonic circuits is a challenge pursued globally and our patented layered materials technology offers solutions to this challenge. This is a great project that combines quantum physics, optoelectronics and materials science to produce technology for the future.”

“Quantum technology is a key investment area for Europe, and layered materials show great promise for the generation and manipulation of quantum light for future technological advances,” said Ferrari. “The Graphene Flagship led the way for these large European Initiatives, and we are pleased to be part of the new Quantum Flagship. The Flagships are the largest and most transformative investments in research of the European Union, and will cement the EU leadership in future and emerging technologies.”

Andrus Ansip, Commission Vice-President for the Digital Single Market, said: “Europe is determined to lead the development of quantum technologies worldwide. The Quantum Technologies Flagship project is part of our ambition to consolidate and expand Europe’s scientific excellence. If we want to unlock the full potential of quantum technologies, we need to develop a solid industrial base making full use of our research.”

Inset images: Mete Atature and Andrea Ferrari; Artist’s impression of on-chip quantum photonics architecture with single photon sources and nonlinear switches on optical waveguides, credit Matteo Barbone.



3D ‘Organ On a Chip’ Could Accelerate Search For New Disease Treatments

source: www.cam.ac.uk

Researchers have developed a three-dimensional ‘organ on a chip’ which enables real-time continuous monitoring of cells, and could be used to develop new treatments for disease while reducing the number of animals used in research.

Two-dimensional cell models have served the scientific community well, but we now need to move to three-dimensional cell models in order to develop the next generation of therapies

Róisín Owens

The device, which incorporates cells inside a 3D transistor made from a soft sponge-like material inspired by native tissue structure, gives scientists the ability to study cells and tissues in new ways. By enabling cells to grow in three dimensions, the device more accurately mimics the way that cells grow in the body.

The researchers, led by the University of Cambridge, say their device could be modified to generate multiple types of organs – a liver on a chip or a heart on a chip, for example – ultimately leading to a body on a chip which would simulate how various treatments affect the body as whole. Their results are reported in the journal Science Advances.

Traditionally, biological studies were (and still are) done in petri dishes, where specific types of cells are grown on a flat surface. While many of the medical advances made since the 1950s, including the polio vaccine, have originated in petri dishes, these two-dimensional environments do not accurately represent the native three-dimensional environments of human cells, and can, in fact, lead to misleading information and failures of drugs in clinical trials.

“Two-dimensional cell models have served the scientific community well, but we now need to move to three-dimensional cell models in order to develop the next generation of therapies,” said Dr Róisín Owens from Cambridge’s Department of Chemical Engineering and Biotechnology, and the study’s senior author.

“Three-dimensional cell cultures can help us identify new treatments and know which ones to avoid if we can accurately monitor them,” said Dr Charalampos Pitsalidis, a postdoctoral researcher in the Department of Chemical Engineering & Biotechnology, and the study’s first author.

3D cell and tissue cultures are an emerging field of biomedical research, enabling scientists to study the physiology of human organs and tissues in ways that have not been possible before. However, while these 3D cultures can be generated, technology that accurately assesses their functionality in real time has not been well developed.

“The majority of the cells in our body communicate with each other by electrical signals, so in order to monitor cell cultures in the lab, we need to attach electrodes to them,” said Dr Owens. “However, electrodes are pretty clunky and difficult to attach to cell cultures, so we decided to turn the whole thing on its head and put the cells inside the electrode.”

The device which Dr Owens and her colleagues developed is based on a ‘scaffold’ of a conducting polymer sponge, configured into an electrochemical transistor. The cells are grown within the scaffold and the entire device is then placed inside a plastic tube through which the necessary nutrients for the cells can flow. The use of the soft, sponge electrode instead of a traditional rigid metal electrode provides a more natural environment for cells and is key to the success of organ on chip technology in predicting the response of an organ to different stimuli.

Other organ on a chip devices need to be completely taken apart in order to monitor the function of the cells, but since the Cambridge-led design allows for real-time continuous monitoring, it is possible to carry out longer-term experiments on the effects of various diseases and potential treatments.

“With this system, we can monitor the growth of the tissue, and its health in response to external drugs or toxins,” said Pitsalidis. “Apart from toxicology testing, we can also induce a particular disease in the tissue, and study the key mechanisms involved in that disease or discover the right treatments.”

The researchers plan to use their device to develop a ‘gut on a chip’ and attach it to a ‘brain on a chip’ in order to study the relationship between the gut microbiome and brain function as part of the IMBIBE project, funded by the European Research Council.

The researchers have filed a patent for the device in France.

Reference:
C. Pitsalidis et al. ‘Transistor in a tube: a route to three-dimensional bioelectronics.’ Science Advances (2018). DOI: 10.1126/sciadv.aat4253

Researcher profile: Dr Charalampos Pitsalidis

Dr Charalampos Pitsalidis is a postdoctoral researcher in the Department of Chemical Engineering & Biotechnology, where he develops prototypes of miniaturised platforms that can be integrated with advanced cell cultures for drug screening. A physicist with a materials science background, he collaborates with biologists and chemists in the UK and around the world to develop and test drug screening platforms that help reduce the number of animals used in research.

“Animal studies remain the major means of drug screening in the later stages of drug development; however, they are increasingly questioned on grounds of ethics, cost and relevance. The reduction of animals in research is what motivates my work.

“I hope that one day I will have managed to make a small contribution in accelerating the drug discovery pipeline and towards the replacement, reduction and refinement of animal research,” he said. “I believe that in 2018, we have everything in our hands, huge technological advancements, and all we need is to develop better and more predictive tools for assessing various therapies. It is not impossible; it just requires a systematic and highly collaborative approach across multiple disciplines.”

He calls Cambridge a truly inspiring place to work. “The state-of-the-art facilities and world-class infrastructure with cutting-edge equipment allow us to conduct high-quality research,” he said. “On top of that, the highly collaborative environment among the various groups and the various departments support multidisciplinary research endeavours and well-balanced research. The strong university and entrepreneurial ecosystem in both high tech and biological science makes Cambridge an ideal place for innovative research in my field.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

Brexit: The Three Transition Options Open To The UK

source: www.cam.ac.uk

Will the UK agree to an extended transition period, keeping it bound by EU rules for longer after exiting the EU? Here, Professor Kenneth Armstrong outlines three “potential models” to extend the transition period, as explored in his new research paper published today.

A perpetual transition would be politically unacceptable… and conflict with EU law. It would, therefore, need an exit mechanism

Kenneth Armstrong

For some time now, both the United Kingdom and European Union have agreed that once the UK ceases to be a Member State on 29 March 2019, it will enter into a ‘stand-still’ period – during which the UK will continue to be bound by its existing EU obligations.

The rationale behind this is to avoid a ‘cliff-edge’ departure that would see tariffs and regulatory controls imposed on cross-border trade between the UK and the EU.

To the extent that there has been disagreement between the two sides, it has centred on terminology – the EU refers to this as a ‘transition period’ while the UK insists on calling it an ‘implementation period’ – and on duration: the UK sought a two-year period, whereas the EU was only willing to agree a transition ending on 31 December 2020 (coinciding with the end of the current budgetary ‘multi-annual framework’). The UK accepted the EU’s offer of a transition ending in December 2020.

However, the duration of the transition period has come back to the fore of the negotiations for two reasons.

The UK believes that the issue of how to avoid a hard border on the island of Ireland can only properly be resolved in the context of the negotiations on the future economic relationship. The UK had hoped that this might be negotiated in parallel with the withdrawal arrangements.

However, the EU has insisted that it is only the framework for future cooperation that can be discussed in the context of the withdrawal negotiations, meaning that the terms of a future economic relationship can only be agreed once the UK leaves. As long as the UK is in transition, the issue of frontier controls on the island of Ireland does not arise.

But with the transitional period ending at the end of 2020, EU negotiators have insisted on the need for a ‘backstop’ to ensure that, if transition ends without a deal that meets the commitments made in the 2017 Joint Report, a ‘hard border’ in Ireland will be avoided. It is the failure to reach agreement on a backstop which is making negotiators on both sides reconsider a time-limited transition period.

The second reason is that the pace of negotiations, coupled with deep disagreement over the UK Government’s ‘Chequers Plan’, suggests that the transition period as currently conceived will be too short to allow negotiations on a future relationship to be concluded. Taken together with the backstop issue, minds have turned to whether it would be prudent to extend transition.

In a recent European Policy Centre paper, Tobias Lock and Fabian Zuleeg make a strong case for the extension of transition, suggesting that a one-time one-year option to extend transition would be a workable solution.

In a new Research Paper, I have looked at three potential models for an extended transition:

  • A one-off option to extend transition for a year following the end of the initial transition period (the Lock and Zuleeg model)
  • A rolling or open-ended transition with an exit mechanism
  • An extended transition and implementation facility.

While Lock and Zuleeg make a good case, their proposal still risks a ‘second cliff-edge’ at the end of an extended transitional period if there is no agreement on a future relationship. A one-year optional extension may not give negotiators enough time to reach an agreement, and might not create sufficient confidence to avoid the need to negotiate a backstop.

The most obvious way to avoid a backstop would be to keep the UK in transition unless and until a new economic partnership between the UK and the EU was agreed (provided also that this met the commitments on the Irish border agreed in the 2017 Joint Report).

However, a perpetual transition would be politically unacceptable, difficult to manage in budgetary terms, and conflict with EU law. It would, therefore, need an exit mechanism. This could be modelled on Article 50 itself and allow either the UK or the EU to notify the other of their intention to end the transition period.

A compromise solution draws on the existing draft Agreement: it would allow transition to end once new agreements on customs and trade, and on foreign, security and defence policy, are agreed and become applicable. Unlike an open-ended transition, this facility would need a defined endpoint; a deadline of 31 December 2022 is proposed.

The aim would be to give negotiators the flexibility to agree new partnership arrangements, but with incentives to reach agreements early – avoiding the continued use of the transition and implementation facility. The UK and EU could depart transition well before the facility expired.

Kenneth Armstrong is Professor of European law and holds a Leverhulme Trust Major Research Fellowship for the project The Brexit Effect – Convergence, Divergence and Variation in UK Regulatory Policy.

The full Faculty of Law working paper can be viewed here. 



A Healthy Lifestyle Cuts Stroke Risk, Irrespective of Genetic Risk

source: www.cam.ac.uk

People at high genetic risk of stroke can still reduce their chance of having a stroke by sticking to a healthy lifestyle, in particular stopping smoking and not being overweight, finds a study in The BMJ today.

This drives home just how important a healthy lifestyle is for all of us, even those without an obvious genetic predisposition

Hugh Markus

Stroke is a complex disease caused by both genetic and environmental factors, including diet and lifestyle. But could adhering to a healthy lifestyle offset the effect of genetics on stroke risk?

An international team led by researchers at the University of Cambridge decided to find out by investigating whether a genetic risk score for stroke is associated with actual (“incident”) stroke in a large population of British adults.

They developed a genetic risk score based on 90 gene variants known to be associated with stroke, and applied it to 306,473 white men and women in the UK Biobank – a database of biological information from half a million British adults.

Participants were aged between 40 and 73 years and had no history of stroke or heart attack. Adherence to a healthy lifestyle was based on four factors: non-smoker, diet rich in fruit, vegetables and fish, not overweight or obese (body mass index less than 30), and regular physical exercise.

Hospital and death records were then used to identify stroke events over an average follow-up of seven years.

Across all categories of genetic risk and lifestyle, the risk of stroke was higher in men than women.

Risk of stroke was 35% higher among those at high genetic risk compared with those at low genetic risk, irrespective of lifestyle.

However, an unfavourable lifestyle was associated with a 66% increased risk of stroke compared with a favourable lifestyle, and this increased risk was present within any genetic risk category.

A high genetic risk combined with an unfavourable lifestyle profile was associated with a more than twofold increased risk of stroke compared with a low genetic risk and a favourable lifestyle.
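As a rough back-of-the-envelope check (a sketch only, assuming the genetic and lifestyle effects combine approximately multiplicatively and independently, an assumption the article itself does not state), the two reported relative risks are consistent with the "more than twofold" combined figure:

```python
# Relative risks reported in the article (illustrative arithmetic only).
rr_genetic = 1.35    # high vs low genetic risk (+35%)
rr_lifestyle = 1.66  # unfavourable vs favourable lifestyle (+66%)

# If the two effects were independent, relative risks would multiply.
combined = rr_genetic * rr_lifestyle
print(f"combined relative risk ~ {combined:.2f}")  # ~ 2.24, i.e. more than twofold
```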

These findings highlight the benefit for entire populations of adhering to a healthy lifestyle, independent of genetic risk, say the researchers. Among the lifestyle factors, the most significant associations were seen for smoking and being overweight or obese.

This is an observational study, so no firm conclusions can be drawn about cause and effect, and the researchers acknowledge several limitations, such as the narrow range of lifestyle factors, and that the results may not apply more generally because the study was restricted to people of European descent.

However, the large sample size enabled study of the combination of genetic risk and lifestyle in detail. As such, the researchers conclude that their findings highlight the potential of lifestyle interventions to reduce risk of stroke across entire populations, even in those at high genetic risk of stroke.

Professor Hugh Markus from the Department of Clinical Neurosciences at the University of Cambridge says: “This drives home just how important a healthy lifestyle is for all of us, even those without an obvious genetic predisposition. Some people are at an added disadvantage if ‘bad’ genes put them at a higher risk of stroke, but even so they can still benefit from not smoking and from having a healthy diet.”

The research was funded by the British Heart Foundation and the NIHR Cambridge Biomedical Research Centre.

Adapted from a press release by The BMJ.

Reference
Rutten-Jacobs, LCA, et al. Genetic risk, incident stroke, and the benefits of adhering to a healthy lifestyle: follow-up study of 306,473 UK Biobank participants. BMJ; 25 Oct 2018; DOI: 10.1136/bmj.k4168



Brain Training App Helps Reduce OCD Symptoms, Study Finds

source: www.cam.ac.uk

A ‘brain training’ app developed at the University of Cambridge could help people who suffer from obsessive compulsive disorder (OCD) manage their symptoms, which typically include excessive handwashing and contamination fears.

This technology will allow people to gain help at any time within the environment where they live or work, rather than having to wait for appointments

Barbara Sahakian

In a study published in the journal Scientific Reports, Baland Jalal and Professor Barbara Sahakian from the Department of Psychiatry, show how just one week of training can lead to significant improvements.

One of the most common types of OCD, affecting up to 46% of OCD patients, is characterised by severe contamination fears and excessive washing behaviour. Excessive washing can be harmful as sometimes OCD patients use spirits, surface cleansers or even bleach to clean their hands. The behaviours can have a serious impact on people’s lives, their mental health, their relationships and their ability to hold down jobs.

This repetitive and compulsive behaviour is also associated with ‘cognitive rigidity’ – in other words, an inability to adapt to new situations or new rules. Breaking out of compulsive habits, such as handwashing, requires cognitive flexibility so that the OCD patient can switch to new activities instead.

OCD is treated using a combination of medication such as Prozac and a form of cognitive behavioural therapy (‘talking therapy’) termed ‘exposure and response prevention’. This latter therapy often involves instructing OCD patients to touch contaminated surfaces, such as a toilet, but to refrain from then washing their hands.

These treatments are not particularly effective, however – as many as 40% of patients fail to show a good response to either treatment. This may be in part because often people with OCD have suffered for years prior to receiving a diagnosis and treatment. Another difficulty is that patients may fail to attend exposure and response prevention therapy as they find it too stressful to undertake.

For these reasons, Cambridge researchers developed a new treatment to help people with contamination fears and excessive washing. The intervention, which can be delivered through a smartphone app, involves patients watching videos of themselves washing their hands or touching fake contaminated surfaces.

Ninety-three healthy people who had indicated strong contamination fears as measured by high scores on the ‘Padua Inventory Contamination Fear Subscale’ participated in the study. The researchers used healthy volunteers rather than OCD patients in their study to ensure that the intervention did not potentially worsen symptoms.

The participants were divided into three groups: the first group watched videos on their smartphones of themselves washing their hands; the second group watched similar videos but of themselves touching fake contaminated surfaces; and the third, control group watched themselves making neutral hand movements on their smartphones.

After only one week of viewing their brief 30-second videos four times a day, participants in both of the first two groups – that is, those who had watched the handwashing video and those who had watched the exposure and response prevention video – showed reductions in OCD symptoms and greater cognitive flexibility compared with the neutral control group. On average, participants in the first two groups saw their Yale-Brown Obsessive Compulsive Scale (YBOCS) scores improve by around 21%. The YBOCS is the most widely used clinical scale for assessing the severity of OCD.

Importantly, completion rates for the study were excellent – all participants completed the one-week intervention, with participants viewing their video an average (mean) of 25 out of 28 times.
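That adherence figure follows directly from the viewing schedule described above (a trivial sketch using only the numbers stated in the article):

```python
# Viewing schedule from the study: four 30-second viewings per day for one week.
viewings_per_day = 4
days = 7
scheduled = viewings_per_day * days  # 28 scheduled viewings

mean_completed = 25  # mean number of viewings actually reported
adherence = mean_completed / scheduled
print(f"mean adherence: {mean_completed}/{scheduled} ~ {adherence:.0%}")  # ~ 89%
```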

Mr Jalal said: “Participants told us that the smartphone washing app allowed them to easily engage in their daily activities. For example, one participant said ‘if I am commuting on the bus and touch something contaminated and can’t wash my hands for the next two hours, the app would be a sufficient substitute’.”

Professor Sahakian said: “This technology will allow people to gain help at any time within the environment where they live or work, rather than having to wait for appointments. The use of smartphone videos allows the treatment to be personalised to the individual.

“These results, while very exciting and encouraging, require further research examining the use of these smartphone interventions in people with a diagnosis of OCD.”

The smartphone app is not currently available for public use. Further research is required before the researchers can show conclusively that it is effective at helping patients with OCD.

The research was funded by the Wellcome Trust, NIHR Cambridge Biomedical Research Centre, the Medical Research Council and the Wallitt Foundation.

Reference
Baland Jalal, Annette Bruhl, Claire O’Callaghan, Thomas Piercy, Rudolf N. Cardinal, Vilayanur S. Ramachandran and Barbara J. Sahakian. Novel smartphone interventions improve cognitive flexibility and obsessive-compulsive disorder symptoms in individuals with contamination fears. Scientific Reports; 23 Oct 2018; DOI: 10.1038/s41598-018-33142-2


Researcher profile: Baland Jalal

“Cambridge is the perfect place for the ‘idealistic scholar’ – those who believe they can re-write the science textbooks. The culture—like no other—embraces novel ideas, even if outlandish and far-fetched on the surface,” says Baland Jalal, a neuroscientist at the Behavioural and Clinical Neuroscience Institute and PhD candidate at Trinity College.

“It is no coincidence that the foremost scientists in history have set foot here, including my scientific hero Newton. One cannot help but feel inspired, as if part of a lineage of greatness—‘standing on the shoulders of giants’.”

Jalal considers himself fortunate to have been able to stand on the shoulders of proverbial giants throughout his research career. He received his initial training at the University of California in the laboratory of legendary neuroscientist VS Ramachandran.

“California was an enchanting experience. Rama and I would often go for long strolls on San Diego’s beaches where he would tell mesmerizing stories about the good-old-days when he was a Cambridge student and how he later invented his famous ‘mirror box’ for phantom limb pain. He was like a second father – a mentor who instilled in me a genuine love of science.”

Jalal now works with husband-and-wife team Professors Barbara Sahakian and Trevor Robbins, whom he describes as embodying “the ‘Cambridge spirit’ of innovation”. His work is ultimately about developing new psychiatric treatments. “This often involves taking an unorthodox and somewhat radical approach—thinking ‘outside the box’ so to speak,” he says. Ideas include the above treatment for OCD and a second treatment based on the ‘rubber hand illusion’, which makes a fake hand feel like one’s own.

His other area of interest is in sleep paralysis—being paralyzed from head to toe while seeing ghosts and space aliens when waking up from sleep. He has studied this peculiar phenomenon around the world and recently invented a novel meditation-relaxation therapy for this condition, called MR Therapy.

“I hope my research will lead to new therapies that can help people in distress around the world – especially folks in low-income countries who don’t have adequate access to health care. The feeling I have when someone tells me that my work has helped alleviate their anguish is – simply – indescribable.”



Study Unearths Britain’s First Speech Therapists

Joseph Priestley: theologian, scientist, clergyman and stammerer
source: www.cam.ac.uk

On International Stammering Awareness Day (22 October), a new study reveals that Britain’s first speech therapists emerged at least a century earlier than previously thought.

It is tempting to think that sympathy for stammering is a very recent phenomenon but a significant change in attitudes took hold in the eighteenth century

Elizabeth Foyster

Until now, historians had assumed that John Thelwall became Britain’s first speech therapist in the early nineteenth century.*

But Cambridge historian Elizabeth Foyster has discovered that James Ford was advertising his services in London as early as 1703, and that many other speech therapists emerged over the course of the eighteenth century.

Ford’s advert (pictured), published in the Post Man newspaper on 23 October 1703, states that “he removes Stammering, and other impediments in Speech”, as well as teaching “Foreigners to pronounce English like Natives”.

Ford had previously worked with the deaf and dumb but realised that there was more money to be made by offering other speech improvement services as a branch of education for wealthy children.

“In the eighteenth century, speaking well was crucial to being accepted in polite society and to succeeding in a profession,” said Foyster. “Speech impediments posed a major obstacle and the stress this caused often made a sufferer’s speech even worse. At the same time, wealthy parents were made to feel guilty and they started spending increasingly large sums to try to ‘cure’ their children.”

By 1703, Ford was based in Newington Green, in the suburbs of London, but twice a week he waited near the city’s Royal Exchange and Temple Bar to secure business from merchants, financiers and lawyers desperate to improve their children’s life chances.

By 1714, some of these families were seeking out the help of Jacob Wane, a therapist who drew on a 33-year personal struggle with the condition. And by the 1760s, several practitioners were competing for business in London.

“We have lost sight of these origins of speech therapy because historians have been looking to identify a profession which had agreed qualifications for entry, an organising body, scientific methods and standards, as we have today,” said Foyster. “In the eighteenth century, speech therapy was regarded as an art not a science. But with its attention to the individual, and the psychological as well as physiological causes of speech defects, we can see the roots of today’s speech therapy.”

Art and business

Foyster’s study, published in the journal Cultural and Social History, shows that speech specialists emerged in the early eighteenth century as new attention was given to the role of the nerves, emotions and psychological origins of speech impediments.

Prior to this, in the seventeenth century, the main cure on offer had involved painful physical intervention including the cutting of tongues. But as speech defects came to be understood as resulting from nervous disorders, entrepreneurial therapists stepped in to end the monopoly of the surgeons.

“These men, and some women, made no claim to medical knowledge,” Foyster says. “In fact, some were very keen to emphasise that they were nothing like the surgeons who had caused so much unnecessary pain. They described themselves as ‘Artists’ and their gentler methods were much more attractive to wealthy clients.”

These speech ‘artists’ jealously guarded their trade secrets but gave away some clues to their methods in print. Close attention was paid to the position of the lips, tongue and mouth; clients were given breathing and voice exercises to practise; and practitioners emphasised the importance of speaking slowly so that every sound could be articulated.

By the 1750s, London’s speech therapists had become masters of publicity, publishing books, placing advertisements in newspapers and giving lectures in universities and other venues. In 1752, Samuel Angier achieved the remarkable feat of lecturing to Cambridge academics on four occasions about speech impediments and the ‘art of pronunciation’, despite having never attended university himself.

Foyster has identified several successful speech therapy businesses, some of which were passed down from one generation to the next. Most of these were based in London but practitioners would often follow their clientele to fashionable resort towns such as Bath and Margate.

In 1761, Charles Angier became the third generation to take over his family’s business; and by the 1780s, he claimed to be able to remove all speech impediments within six to eight months if his pupils were ‘attentive’. By then, he was reported to be charging fifty guineas ‘for the Cure’ at a time when many Londoners were earning less than ten guineas a year.

To be successful, these entrepreneurs had to separate themselves from quackery. Some heightened their credibility by securing accreditation from respected physicians while others printed testimonials from satisfied clients beneath their newspaper advertisements.

Suffering and determination

Foyster’s study also sheds light on the appalling suffering and inspirational determination of stammerers in the eighteenth century, including some well-known figures.

Joseph Priestley (1733–1804), the theologian, scientist and clergyman (pictured), recalled that his worsening stammer made ‘preaching very painful, and took from me all chance of recommending myself to any better place’.

His fellow scientist, Erasmus Darwin, also suffered from a stammer, as did Darwin’s daughter, Violetta, and eldest son, Charles. In 1775, Darwin compiled detailed instructions to help his daughter overcome her stammer, which involved sounding out each letter and practising problematic words for weeks on end.

“It is tempting to think that sympathy for stammering is a very recent phenomenon but a significant change in attitudes took hold in the eighteenth century,” said Foyster. “While stammerers continued to be mocked and cruelly treated, polite society became increasingly compassionate, especially when someone demonstrated a willingness to seek specialist help.”

References:
Elizabeth Foyster, ‘“Fear of Giving Offence Makes Me Give the More Offence”: Politeness, Speech and Its Impediments in British Society, c.1660–1800’. Cultural and Social History (2018). DOI: 10.1080/14780038.2018.1518565
* Denyse Rockey, ‘The Logopaedic Thought of John Thelwall, 1764–1834: First British Speech Therapist’, British Journal of Disorders of Communication (1977). DOI: 10.3109/13682827709011313


History Shows Abuse of Children In Custody Will Remain An ‘Inherent Risk’ – Report

History shows abuse of children in custody will remain an ‘inherent risk’ – report

New research conducted for the current independent inquiry suggests that – despite recent policy improvements – cultures of child abuse are liable to emerge while youth custody exists, and keeping children in secure institutions should be limited as far as possible.

History tells us that it is impossible to ‘manage out’ the risk of abuse through improved policies alone

Caroline Lanskey

A new report on the history of safeguarding children detained for criminal offences in the UK has concluded that it is impossible to remove the potential for abuse in secure institutions, and that the use of custody for children should only be a “last resort”.

A team of criminologists and historians from the universities of Cambridge and Edinburgh were asked by HM Prison and Probation Service (HMPPS) to build a “collective memory” of the abuse cases and preventative policies that emerged in the youth wing of the UK’s secure estate between 1960 and 2016.

The research was commissioned to help prepare HMPPS to give evidence to the Independent Inquiry into Child Sexual Abuse. It covers physical and sexual abuse in secure children’s homes and training centres, young offender institutions such as Deerbolt and Feltham, and their predecessors: detention centres and borstals.

Drawing on often limited archival records – as well as inspection reports and previous findings – the research reveals how past safeguards broke down, failing to recognise children in custody as vulnerable.

Researchers found abuse was especially likely at times of overcrowding and budgetary constraint, and occurred despite contemporary beliefs that protective policies were working.

The historical overview goes beyond individual misconduct to show how whole institutions become “detached from their purpose”, with undertrained staff collectively drifting into “morally compromised” cultures where abusive acts appear acceptable even as procedure is followed.

The researchers say this “acculturation” at times extended to inspectorates and monitors overfamiliar with failing systems. They argue that it is vital to ensure effective complaints processes and protect whistle-blowers.

The report has been produced by Cambridge criminologists and Dr Lucy Delap and Professor Louise Jackson from the History and Policy network, and is published online today alongside a policy paper summarising the findings.

“History tells us that it is impossible to ‘manage out’ the risk of abuse through improved policies alone,” said report co-author Dr Caroline Lanskey, from Cambridge’s Institute of Criminology (IoC).

“The steep power imbalance between staff and children means there is a need to focus on staff culture, rather than only on detailed policy, in order to establish greater trust between staff and young people in a secure institution,” she said.

Until the 1990s safeguards against abuse were weak, and ineffective in many institutions, say researchers. Children were often left to “fend for themselves” in detention centres such as Medomsley, where reports of sexual abuse during the 1970s and 1980s have since come to light.

The research reveals major rifts in the mid-1970s between the external Board of Visitors – Medomsley’s main monitoring body – and the centre’s management over disciplinary approaches. Inspections of the time recorded that neither staff nor children “seem to know what the purpose of the centre really is…”

Inspectors were concerned with basic functions such as kitchen cleanliness. That the kitchen manager worked unsupervised, and hand-picked his team of children and young people, was not perceived as risky. This Medomsley manager was subsequently convicted of sexual offences.

“Inspectors and Boards of Visitors checked procedure, but they lacked the concepts and language to recognise that certain situations were potentially abusive. These blind spots persisted until at least the 1990s,” said Ben Jarman, a researcher at Cambridge’s IoC, who carried out the archival research.

The turn of the millennium saw a “new orthodoxy” in protective policies, combined with a spike in custodial sentences for children that wouldn’t decline again until 2010.

Part of this policy shift included the questioning of long-standing practices such as strip-searching and forms of restraint, and whether they amounted to abuse.

“Strip-searching before the 1990s seems to have been so routine and unremarkable that it’s hardly mentioned in the documentary record,” said Jarman. “As late as 1995, inspectors at Deerbolt reported without comment that staff believed more routine strip searches were required.”

However, by 2002 inspectors were expressing serious concerns about untargeted strip-searching. A 2005 inspection of Feltham described strip-searches as “degrading”, and an independent inquiry the following year argued that, in any other circumstances, such practices would “trigger a child protection investigation”.

The use of pain-inducing restraint has also become the subject of fierce debate and some policy change, following the deaths of two children in secure training centres in 2004.

Strip-searching and restraint are still used but much more carefully regulated. New monitoring systems attempt to take account of the ‘voice’ of children, who the report suggests have been recast as ‘users’ of custodial ‘services’.

Yet improved safeguards can inspire false confidence and mask the “corruption of care”, say researchers. The exposure by the BBC of violence and bullying by staff in Medway Secure Training Centre in 2016 came shortly after an inspection declaring safety there to be “good”.

“Investigations at Medway concluded that child protection failed despite the apparent compliance with safeguarding policies,” said Jarman. “Inadequately trained and under pressure to achieve contractual targets, some of the staff did not appear to understand that what they were doing was wrong.”

“We wouldn’t argue for fewer safeguards, but without a focus on staff culture, even the best policies can be circumvented when an abusive climate develops,” he added.

“The ever-present potential for abuse means that custody should be used for children only as a last resort, where there is no alternative,” the report concludes.

The full report, Safeguarding children in the secure estate: 1960–2016, is available here.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

Targeting Hard-To-Treat Cancers


source: www.cam.ac.uk

Cambridge leads a £10 million interdisciplinary collaboration to target the most challenging of cancers.

We are going to pierce through the body’s natural barriers and deliver anti-cancer drugs to the heart of the tumour.

George Malliaras

While the survival rate for most cancers has doubled over the past 40 years, some cancers such as those of the pancreas, brain, lung and oesophagus still have low survival rates.

Such cancers are now the target of an Interdisciplinary Research Collaboration (IRC) led by the University of Cambridge and involving researchers from Imperial College London, University College London and the Universities of Glasgow and Birmingham.

“Some cancers are difficult to remove by surgery and highly invasive, and they are also hard to treat because drugs often cannot reach them at high enough concentration,” explains George Malliaras, Prince Philip Professor of Technology in Cambridge’s Department of Engineering, who leads the IRC. “Pancreatic tumour cells, for instance, are protected by dense stromal tissue, and tumours of the central nervous system by the blood-brain barrier.”

The aim of the project, which is funded for six years by the Engineering and Physical Sciences Research Council, is to develop an array of new delivery technologies that can deliver almost any drug to any tumour in a large enough concentration to kill the cancerous cells.

Chemists, engineers, materials scientists and pharmacologists will focus on developing particles, injectable gels and implantable devices to deliver the drugs. Cancer scientists and clinicians from the Cancer Research UK Cambridge Centre and partner sites will devise and carry out clinical trials. Experts in innovative manufacturing technologies will ensure the devices can be manufactured and are robust enough to withstand surgical manipulation.

One technology the team will examine is the ability of advanced materials to self-assemble and entrap drugs inside metal-organic frameworks. These structures can carry enormous amounts of drugs, and be tuned both to target the tumour and to release the drug at an optimal rate.

“We are going to pierce through the body’s natural barriers,” says Malliaras, “and deliver anti-cancer drugs to the heart of the tumour.”



Cambridge Team Develops Technique To ‘Listen’ To a Patient’s Brain During Tumour Surgery


source: www.cam.ac.uk

Surgeons could soon eavesdrop on a patient’s brain activity during surgery to remove their brain tumour, helping improve the accuracy of the operation and reduce the risk of impairing brain function.

There’s been huge progress in brain imaging and electrophysiology – our understanding of the electricity within our bodies – so why not use this information to improve brain surgery?

Yaara Erez

Patients with low-grade gliomas – slow-spreading but potentially life-threatening brain tumours – will usually undergo surgery to have the tumour removed. But removing brain tissue is risky because there is no clear boundary between the brain and the tumour: the tumour infiltrates the brain. Removing tumour tissue can therefore mean removing vital parts of the brain, resulting in impairments of functions such as speech, movement and executive function (which enables the individual to plan, organise and execute tasks).

To minimise this risk, neurosurgeons open the patient’s skull and then waken them. A local anaesthetic means the patient will feel no pain, and the brain itself contains no pain receptors. The surgeon will probe the patient’s brain, applying mild electric pulses to tissue surrounding the tumour while asking them to perform a set of tasks. For example, the patient may be asked to count from one to five: if an electric pulse applied to a certain place in the brain affects their ability to perform this task, the surgeon will leave this tissue in place.

“As surgeons, we’re always trying to minimise the risk to patients and provide them with the best possible outcomes,” says Thomas Santarius, a neurosurgeon at Addenbrooke’s, Cambridge University Hospitals. “Operating on brain tumours is always a delicate balance between removing as much diseased tissue as possible to give patients better prognosis, while minimising the risk of damage to brain functions that will have a potentially massively detrimental impact on the patient’s life.”

While the current approach is considered the ‘gold standard’, it is not perfect. Applying the pulses to different parts of the brain takes time, and the approach may miss areas that are important for certain functions. The battery of cognitive tests that surgeons currently use is also limited: it does not, for example, test executive function.

Now, a team of scientists and clinicians from the University of Cambridge and Addenbrooke’s Hospital, led by Mr Santarius, Dr Yaara Erez and Mr Michael Hart, together with Pedro Coelho from Neurophys Ltd, has collaborated to develop a new approach that will enable patients to get a more accurate, personalised ‘read-out’ of their brain networks, and will provide surgeons with real-time feedback on the patient’s brain activity in theatre.

“At the moment, neurosurgeons only know about function in the average brain – they have no patient-specific information,” explains Dr Yaara Erez, a neuroscientist from the MRC Cognition and Brain Sciences Unit at the University of Cambridge. “But there’s been huge progress in brain imaging and electrophysiology – our understanding of the electricity within our bodies – so why not use this information to improve brain surgery? We are aiming to bring all this knowledge into the theatre, providing surgeons with integrated data and the best tools to support their work.”

Under this approach, patients would undergo a number of neuroimaging examinations using magnetic resonance imaging (MRI) before surgery, aimed at identifying not only the exact location of the tumour but also how different regions of their brain communicate with each other.

As part of this process, a 3D-printed copy of the patient’s brain will be used, showing where the tumour is located. This model is intended to help surgeons plan the surgery, discuss with the patient the potential risks from surgery and involve the patient in decisions over which tissue to remove.

“Doctors need to be able to talk through the options with patients, and we hope that using neuroimaging data and presenting this as a 3D model will help surgeons with the planning of surgery and ensure patients are better informed about the risks and benefits from surgery,” says Dr Erez.

During surgery, once the patient’s skull has been opened, the surgeon will place electrodes on the surface of the brain, to ‘listen’ to their brain activity. A computer algorithm will analyse this information as the patient performs a battery of cognitive tests, giving live feedback to the surgeon. This will enable the surgeon to predict more accurately the likely impact of removing a particular area of brain tissue.
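The article does not describe the team’s actual algorithm, but the general kind of analysis involved – summarising electrode recordings into power within chosen frequency bands, which can then be tracked as the patient performs each task – can be illustrated with a deliberately simplified sketch. The `band_power` function and the 10 Hz toy signal below are hypothetical, for illustration only, and are not the team’s method:

```python
import numpy as np

def band_power(samples: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of `samples` within the [lo, hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / fs)     # bin frequencies
    mask = (freqs >= lo) & (freqs <= hi)                  # select the band
    return float(spectrum[mask].mean())

# Toy "recording": a pure 10 Hz oscillation sampled at 1 kHz for one second.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 10.0 * t)

alpha = band_power(signal, fs, 8, 12)    # band containing the oscillation
gamma = band_power(signal, fs, 40, 80)   # band that should be near-silent
```

In a real intraoperative system, summaries like these would be computed continuously over short windows of multi-channel data and compared across task conditions; the toy example only shows how power concentrates in the band containing the underlying oscillation.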

In particular, executive function is difficult to test using electrical stimulation – in part because it involves networks of regions across the brain. Dr Erez hopes that a combination of improved cognitive tests and a more accurate understanding of an individual patient’s networks will enable surgeons to monitor potential impairment to executive function during surgery.

“This isn’t going to replace brain stimulation during surgery,” says Dr Erez, “but it will guide the surgeon and it will save time and make surgery more efficient, more accurate. It will also enable us to understand how patients’ brains adapt to the presence of a tumour and how well they recover from surgery. It involves equipment that is largely already in use in surgeries, so should be easy and cost effective to implement.”

So far, the team has obtained data from 12 patients – already a large amount to analyse, with a rich dataset from each patient collected before, during and after surgery. Although they are currently analysing this information offline, the data will help them find the best measures to provide the required information – including the ideal tasks for patients to perform – and then to optimise the analysis.

The research has only been possible because of the interaction between researchers and clinicians from a variety of disciplines, says Dr Erez. “At Cambridge, we have different groups of neuroscientists with a range of expertise from psychology and imaging to computer science working with clinicians and surgeons at the hospital.  Whatever we need, we can always find someone in Cambridge who knows how to do it!”

The research is supported by the Medical Research Council, the Royal Society and The Brain Tumour Charity.


Researcher profile: Dr Yaara Erez

Originally from Israel, Dr Yaara Erez is now a neuroscientist at the MRC Cognition and Brain Sciences Unit – a centre that not only has “a long history of great contributions to the theoretical and experimental foundations of cognitive psychology”, she says, but “is also famous for its truly lovely garden!”

Yaara’s background is in Computer Science and Psychology. She spent several years as a software developer before deciding to pursue a PhD in neuroscience, and she is now a Royal Society Dorothy Hodgkin Research Fellow. Her background is proving essential for understanding the inner workings of the brain.

“We process the information around us in an active way – we pay attention to what is relevant to us and filter out what we don’t need. We do that all the time, effortlessly and efficiently, but from a computational perspective it is a very complicated problem. We only have hints about how this is done in the brain.”

Yaara’s interest lies in the brain systems that allow us to behave flexibly, adapt our behaviour to changing circumstances, and select only the information that we need. These systems are involved in a wide range of cognitive function known as ‘executive function’, including problem-solving, keeping focus, switching focus and planning, all of which are essential to normal healthy life. “It’s important to understand these brain mechanisms because it may help us develop treatments for patients with different brain disorders that affect cognitive function, such as stroke, brain tumour, depression, and many more,” she says.

While Yaara’s research is basic science, she is interested in its clinical application and how the knowledge might be used to improve healthcare and treatments for patients. “I believe we can improve existing procedures so patients can have a high quality of life after brain surgery. We can and we should use our knowledge from basic neuroscience to improve treatments for patients.”

Her work uses a variety of techniques that involve different types of brain signals that she collects from healthy volunteers and patients with brain tumours. “This data is very complex, so requires detailed analysis, which I like. The combination of the data from the different techniques, and what we can learn from each of them, makes my work exciting and enables me to get the full picture.”

Yaara recalls the day she first saw a live brain surgery on an awake patient. “As a neuroscientist, I study the brain and know quite a lot about it, but seeing a real brain and how brief pulses of electrical stimulation immediately affect behaviour was a different level of experience and truly eye-opening.”

Cambridge, says Yaara, is the perfect place for her research. “There are people from all over the world, and they all bring their expertise, knowledge, and perspective. My research is multidisciplinary in its nature, and the combination of the different expertise of people in Cambridge makes it work. We also have great facilities here and are very fortunate to have such a great University Hospital as Addenbrooke’s as our local hospital.

“I enjoy meeting and working with people from all around the world, and the international community in Cambridge is amazing.”



Graphene May Exceed Bandwidth Demands of Future Telecommunications


source: www.cam.ac.uk

Researchers from the Cambridge Graphene Centre, together with industrial and academic collaborators within the European Graphene Flagship project, showed that integrated graphene-based photonic devices offer a solution for the next generation of optical communications.

The researchers have demonstrated how the properties of graphene – a two-dimensional form of carbon – enable ultra-wide-bandwidth communications coupled with low power consumption, which could radically change the way data is transmitted across optical communications systems.

This could make graphene-integrated devices the key ingredient in the evolution of 5G, the Internet-of-Things (IoT), and Industry 4.0. The findings are published in Nature Reviews Materials.

As conventional semiconductor technologies approach their physical limitations, researchers need to explore new technologies to realise the most ambitious visions of a future networked global society. Graphene promises a significant step forward in performance for the key components of telecommunications and data communications.

In their new paper, the researchers present a vision for the future of graphene-based integrated photonics, and provide strategies for improving power consumption, manufacturability and wafer-scale integration. The Graphene Flagship partners also provide a roadmap for graphene-based photonic devices that surpass the technological requirements for the evolution of the datacom and telecom markets driven by 5G, IoT, and Industry 4.0.

“Graphene integrated in a photonic circuit is a low-cost, scalable technology that can operate fibre links at very high data rates,” said study lead author Marco Romagnoli from CNIT, the National Interuniversity Consortium for Telecommunications in Italy.

Graphene photonics offers advantages in both performance and manufacturing over the state of the art. Graphene can deliver modulation, detection and switching performance that meets all the requirements for the next evolution in photonic device manufacturing.

Co-author Antonio D’Errico, from Ericsson Research, says that “graphene for photonics has the potential to change the perspective of Information and Communications Technology in a disruptive way. Our publication explains why, and how to enable new feature-rich optical networks.”

This industrial and academic partnership, comprising researchers in the Cambridge Graphene Centre, CNIT, Ericsson, Nokia, IMEC, AMO, and ICFO produced the vision for the future of graphene photonic integration.

“Collaboration between industry and academia is key for explorative work towards entirely new component technology,” said co-author Wolfgang Templ of Nokia Bell Labs. “Research in this phase bears significant risks, so it is important that academic research and industry research labs join the brightest minds to solve the fundamental problems. Industry can give perspective on the relevant research questions for potential in future systems. Thanks to a mutual exchange of information we can then mature the technology and consider all the requirements for a future industrialization and mass production of graphene-based components.”

“An integrated approach of graphene and silicon-based photonics can meet and surpass the foreseeable requirements of the ever-increasing data rates in future telecom systems,” said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre. “The advent of the Internet of Things, Industry 4.0 and the 5G era represent unique opportunities for graphene to demonstrate its ultimate potential.”

Reference: 
Marco Romagnoli et al. ‘Graphene-based integrated photonics for next-generation datacom and telecom.’ Nature Reviews Materials (2018). DOI: 10.1038/s41578-018-0040-9.



New Legal Tool Aims To Increase Openness, Sharing and Innovation In Global Biotechnology


source: www.cam.ac.uk

A new easy-to-use legal tool that enables exchange of biological material between research institutes and companies launches today.

The OpenMTA provides a new pathway for open exchange of DNA components – the basic building blocks for new engineering approaches in biology

Jim Haseloff

The OpenMTA is a Material Transfer Agreement (MTA) designed to foster a spirit of openness, sharing and innovation in global biotechnology. MTAs provide the legal frameworks within which research organisations lay down terms and conditions for sharing their materials – everything from DNA to plant seeds to patient samples.

Use of the OpenMTA allows redistribution and commercial use of materials, while respecting the rights of creators and promoting safe practice and responsible research. The new standardised framework also eases the administrative burden for technology transfer offices, negating the need to negotiate unique terms for individual transfers of widely-used material.

The OpenMTA launches today with a commentary published in the journal Nature Biotechnology. It provides a new way to openly exchange low-level “nuts and bolts” components for biological research and engineering, complementing existing, more restrictive arrangements for material transfer.

The OpenMTA was developed through a collaboration led by the San Francisco-based BioBricks Foundation and the UK-based OpenPlant Synthetic Biology Research Centre. OpenPlant is a joint initiative between the University of Cambridge, the John Innes Centre and the Earlham Institute, which aims to develop open technologies and responsible innovation for industrial biotechnology and sustainable agriculture.

Professor Jim Haseloff, University of Cambridge, UK, said: “The OpenMTA provides a new pathway for open exchange of DNA components – the basic building blocks for new engineering approaches in biology. It is a necessary step towards building a commons [commonly owned resource] that will underpin and democratise access to future biotechnological advances and sustainable industries.”

The collaboration brought together an international working group comprising researchers, technology transfer professionals, social scientists and legal experts to inform the creation of a legal framework that could improve sharing of biomaterials and increase innovation. The team identified five design goals on which to base the new agreement: access, attribution, reuse, redistribution and non-discrimination.  Additional design goals included issues of safety and, in particular, the sharing of biomaterials in an international context.

Dr Linda Kahl, Senior Counsel of the BioBricks Foundation, said: “We encourage organisations worldwide to sign the OpenMTA Master Agreement and start using it. In five years’ time my ideal is for the OpenMTA to be the default option for the transfer of research materials within and between academic research institutions and companies.

“Instead of automatically placing restrictions on materials, people will ask whether restrictions on use and redistribution are appropriate and instead use this tool to promote sharing and innovation in a way that does not compromise safety.”

Dr Colette Matthewman, Programme Manager for the OpenPlant Synthetic Biology Research Centre, said: “We hope to see the OpenMTA enable an international flow of non-proprietary tools between academic, government, NGO and industry researchers, to be used, reused and expanded upon to develop new tools and innovations.”

The agreement will facilitate the use, modification and redistribution of tools for innovation in academic and commercial research, and promote access for researchers in less privileged institutions and world regions.

Dr Fernán Federici, Millennium Institute for Integrative Biology (iBio), Santiago, Chile, said: “The OpenMTA will be particularly useful in Latin America, allowing researchers to redistribute materials imported from overseas sources, reducing shipping costs and waiting times for future local users. We are implementing it in an international project that requires sharing genetic tools among labs in four different continents. We believe the OpenMTA will support projects based on community-sourced resources and distributed repositories that lead to more fluid collaborations.”

The OpenPlant Synthetic Biology Research Centre is funded by the UK Biotechnology and Biological Sciences Research Council and the Engineering and Physical Sciences Research Council as part of the UK Synthetic Biology for Growth programme.

Adapted from a press release from the John Innes Centre. 

Reference
Kahl, L et al. ‘Opening options for material transfer.’ Nature Biotechnology (2018).



Austerity Cuts ‘Twice As Deep’ In England As Rest of Britain


source: www.cam.ac.uk

Research finds significant inequalities in cuts to council services across the country, with deprived areas in the north of England and London seeing the biggest drops in local authority spending since 2010.

Public finance is politics hidden in accounting columns

Mia Gray

A “fine-grained” analysis of local authority budgets across Britain since 2010 has found that the average reduction in service spending by councils was almost 24% in England compared to just 12% in Wales and 11.5% in Scotland.

While some areas – Glasgow, for example – experienced significant service loss, the new study suggests that devolved powers have allowed Scottish and Welsh governments to mitigate the harshest local cuts experienced in parts of England.

University of Cambridge researchers found that, across Britain, the most severe cuts to local service spending between 2010 and 2017 were generally associated with areas of “multiple deprivation”.

This pattern is clearest in England, where all 46 councils that cut spending by 30% or more are located. These local authorities tend to be more reliant on central government, with lower property values and fewer additional funding sources, as well as less ability to generate revenue through taxes.

The north was hit with the deepest cuts to local spending, closely followed by parts of London. The ten worst-affected councils include Salford, South Tyneside and Wigan, as well as the London boroughs of Camden and Hammersmith & Fulham. Westminster council saw a drop in service spending of 46% – the largest in Britain.

The research also shows a large swathe of southern England, primarily around the ‘home counties’, with low levels of reliance on central government and only relatively minor local service cuts. Northern Ireland was excluded from the study due to limited data.

The authors of the new paper, published today in the Cambridge Journal of Regions, Economy and Society, say the findings demonstrate how austerity has been pushed down to a local level, “intensifying territorial injustice” between areas.

They argue that initiatives claimed by government to ameliorate austerity, such as local retention of business taxes, will only fuel unfair competition and inequality between regions – as local authorities turn to ‘beggar-thy-neighbour’ policies in efforts to boost tax bases and buffer against austerity.

“The idea that austerity has hit all areas equally is nonsense,” said geographer Dr Mia Gray, who conducted the research with her Cambridge colleague Dr Anna Barford.

“Local councils rely to varying degrees on the central government, and we have found a clear relationship between grant dependence and cuts in service spending.

“The average cuts to local services have been twice as deep in England compared to Scotland and Wales. Cities have suffered the most, particularly in the old industrial centres of the north but also much of London,” said Gray.

“Wealthier areas can generate revenues from business tax, while others sell off buildings such as former back offices to plug gaping holes in council budgets.

“The councils in greatest need have the weakest local economies. Many areas with populations that are ageing or struggling to find employment have very little in the way of a public safety net.

“The government needs to decide whether it is content for more local authorities to essentially go bust, in the way we have already seen in Northamptonshire this year,” she said.

Local authorities with the largest drops in service spending, 2010–2017:

Westminster            -46%
Salford                -45%
South Tyneside         -44%
Slough                 -44%
Wigan                  -43%
Oldham                 -42%
Gateshead              -41%
Camden                 -39%
Hammersmith & Fulham   -38%
Kensington & Chelsea   -38%
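The ranking above can be reproduced with a short script. The figures are the rounded percentage changes quoted in the table; the variable names are illustrative:

```python
# Change in service spending, 2010-2017, for the ten worst-affected
# councils (rounded percentages from the table above).
spending_change = {
    "Westminster": -46,
    "Salford": -45,
    "South Tyneside": -44,
    "Slough": -44,
    "Wigan": -43,
    "Oldham": -42,
    "Gateshead": -41,
    "Camden": -39,
    "Hammersmith & Fulham": -38,
    "Kensington & Chelsea": -38,
}

# Rank councils by the depth of the cut (most severe first).
ranked = sorted(spending_change.items(), key=lambda item: item[1])
worst_council, worst_change = ranked[0]
print(f"Deepest cut: {worst_council} ({worst_change}%)")
```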

The latest study used data from the Institute for Fiscal Studies to conduct a spatial analysis of Britain’s local authority funding system.

Gray and Barford mapped the levels of central grant dependence across England’s councils, and the percentage fall of service spend by local authorities across Scotland, Wales and England between financial years 2009/2010 and 2016/2017.

Some of the local services hit hardest across the country include highways and transport, culture, adult social care, children and young people’s services, and environmental services.

The part of central government formerly known as the Department for Communities and Local Government saw its overall budget cut by a dramatic 53% between 2010 and 2016.

As budget cuts hit at the local level, ‘mandatory’ council services – those considered vital – were funded at the expense of ‘discretionary’ services. However, the researchers found the boundary between the two to be blurry.

“Taking care of ‘at risk’ children is a mandatory concern. However, youth centres and outreach services are considered non-essential and have been cut to the bone. Yet these are services that help prevent children becoming ‘at risk’ in the first place,” said Gray.

“There is a narrative at national and local levels that the hands of politicians are tied, but many of these funding decisions are highly political. Public finance is politics hidden in accounting columns.”

Gray points out that once local councils “go bust” and Section 114 notices are issued, as with Northamptonshire Council, administrators are sent in who then take financial decisions that supersede any democratic process.

The research has also contributed to the development of a new play from the Menagerie Theatre Company, in which audience members help guide characters through situations taken from the lives of those in austerity-hit Britain. The play opens tonight in Oxford, and will be performed in community venues across the country during October and November.

Gray added: “Ever since vast sums of public money were used to bail out the banks a decade ago, the British people have been told that there is no other choice but austerity imposed at a fierce and relentless rate.”

“We are now seeing austerity policies turn into a downward spiral of disinvestment in certain people and places. Local councils in some communities are shrunk to the most basic of services. This could affect the life chances of entire generations born in the wrong part of the country.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

European Research Network Aims To Tackle Problematic Internet Use


source: www.cam.ac.uk

A pan-European network to tackle problematic internet usage officially launches today with the publication of its manifesto, setting out the important questions that need to be addressed by the research community.

Despite dedicated research leading to some breakthroughs in our understanding of the psychology and biology that underpins these behaviours, we still don’t know enough about the risk factors for problematic internet use

Sam Chamberlain

As the internet has become an integral part of modern life, concern about its problematic use has grown across all age groups. It has provided a new environment in which a wide range of problematic behaviours may emerge, such as those relating to gaming, gambling, buying, pornography viewing, social networking, ‘cyber-bullying’ and ‘cyberchondria’, which can have mental and physical health consequences.

The newly created European Problematic Use of the Internet (EU-PUI) Research Network was formed in response to the emerging public health importance of problematic internet use and is funded through a €520,000 grant from COST (European Cooperation in Science and Technology). The network’s aims include identifying key genetic, psychological and social factors that lead people to disordered online behaviours including excessive video gaming, pornography viewing and use of social networks.

Professor Naomi Fineberg, Consultant Psychiatrist from the University of Hertfordshire and Chair of the new network, said: “Problematic Use of the Internet is a serious issue. Just about everyone uses the internet, but information on problem use is still lacking. Research has often been confined to individual countries, or problematic behaviours such as Internet gaming. So we don’t know the real scale of the problem, what causes problematic use, or whether different cultures are more prone to problematic use than others.”

The network, which includes 123 experts from 38 countries across Europe, has today published its manifesto in the journal European Neuropsychopharmacology, setting out the research priorities to help the scientific and clinical communities understand and tackle problematic internet use. These include:

  • Age- and culture-appropriate assessment tools to screen, diagnose and measure the severity of different forms of problematic internet use
  • Understanding its impact on health and quality of life
  • Clarifying the possible role of genetics and personality features
  • Consideration of the impact of social factors in its development
  • Developing and testing effective interventions, both to prevent and to treat its various forms
  • Identifying biomarkers, including digital markers, to improve early detection and intervention

Professor Fineberg adds: “There’s no doubt that some of the mental health problems we are looking at appear rather like addiction, such as online gambling or gaming. Some lean towards the OCD end of the spectrum, like compulsive social media checking. But we will need more than just psychiatrists and psychologists to help solve these problems. We need to bring together a range of experts, such as neuroscientists, geneticists, child and adult psychiatrists, those with lived experience of these problems and policymakers, in the decisions we make about the internet.

“We need to remember that the Internet is not a passive medium; we know that many programmes or platforms earn their money by keeping people involved and by encouraging continued participation; and they may need to be regulated – not just from a commercial viewpoint, but also from a public health perspective.”

Dr Sam Chamberlain, Consultant Psychiatrist from the University of Cambridge, who is leading research priorities for the network, added: “Despite dedicated research leading to some breakthroughs in our understanding of the psychology and biology that underpins these behaviours, we still don’t know enough about the risk factors for problematic internet use.

“The current level of evidence has to be increased to improve our ability to diagnose problems and predict an individual’s prognosis, as well as to develop effective interventions to help affected individuals and those at greatest risk.”

Reference
Fineberg, NA et al. Manifesto for a European Research Network into Problematic Usage of the Internet. European Neuropsychopharmacology; 9 Oct 2018; DOI: 10.1016/j.euroneuro.2018.08.004



Restoring Europe’s Endangered Landscapes For Life


source: www.cam.ac.uk

Cambridge Conservation Initiative (CCI) last week unveiled a programme to restore priority landscapes across Europe. The Endangered Landscapes Programme (ELP) will provide a demonstration of nature’s powers of recovery, and the benefits to habitats, species and people of restoring biodiversity and ecosystem processes to degraded land and seas.

Lisbet Rausing and Peter Baldwin have united Cambridge in common cause with our closest neighbours, reflecting our best selves, our best interests, and the best hope for future generations.

Professor Stephen J Toope

The programme represents a US$30 million (£23 million) investment from Arcadia, a charitable fund of Lisbet Rausing and Peter Baldwin, in partnership with CCI, a collaboration between nine conservation organisations and the University of Cambridge. By catalysing strategic partnerships between leaders in research, education, policy and practice, CCI aims to transform the global understanding and conservation of biodiversity and, through this, secure a sustainable future for biodiversity and society.

“We need to stop thinking about protected areas as isolated units in the landscape – we need to approach conservation at a landscape-scale if we are really going to make a difference. The Endangered Landscapes Programme is an ambitious attempt to apply ‘more, bigger, better and joined’, at a landscape-scale, right across Europe,” said Professor Sir John Lawton, Chair of the ELP Oversight and Selection Panel.

The ELP aims to deliver an ambitious vision for the future in which landscapes:

  • Support viable populations of native species with the capacity for landscape-scale movement;
  • Provide space for the natural functioning of ecological processes, so reducing or even eliminating the need for intensive management;
  • Are resilient to short- and longer-term change (such as climate fluctuations);
  • Provide sustainable cultural, social and economic benefits to people.

Included in the initial group of ELP-funded projects are plans to return predatory sandbar sharks and Mediterranean monk seals to the seas off the coast of Turkey; create opportunities for key species such as wolves, moose, European bison and greater spotted eagles to move more freely in the vast Prypiat Polesia area of Belarus and Ukraine; establish one of Europe’s largest wilderness areas in the Carpathian Mountains of Romania; and restore Caledonian pinewoods to some of the UK’s most spectacular landscapes in the Scottish Highlands.

“Lisbet Rausing and Peter Baldwin have united Cambridge in common cause with our closest neighbours, reflecting our best selves, our best interests, and the best hope for future generations,” said Professor Stephen J Toope, Vice-Chancellor of the University of Cambridge.

Alongside this ecological work, the projects include conservation enterprise programmes, based on nature-based businesses that will provide income, employment and cultural benefits for communities and landowners. CCI will support the project recipients and help drive the success of the ELP as a model for landscape-scale restoration throughout Europe by:

  • Supporting participatory planning and development of new and innovative landscape restoration initiatives;
  • Building capacity nationally and locally, by facilitating the transfer of skills and know-how between individuals and institutions;
  • Sharing knowledge, lessons and experience to help deliver strategies, policies and technical information required for creating sustainable landscapes;
  • Demonstrating to decision-makers the environmental, social and economic benefits that are possible from the recovery of nature and ecosystem processes.

The accelerating loss of the natural world represents one of the greatest challenges of the 21st century. Building a better future requires a better understanding of nature and its values to people, and the practical interventions required to support economic, social and political transitions towards more equitable and effective stewardship of the planet. It is the ambition of Arcadia and CCI that the ELP will not only be effective in achieving its own aims but that it will inspire others across Europe and the world to consider how they, too, can work to restore and improve landscapes for the future.

“Landscape-scale restoration ecology works. Nature is out there: waiting. Let’s invite her back in. Together we will restore and rewild, and thus protect, Europe – our home, our continent, our love,” said Dr Lisbet Rausing, Founder, Arcadia Fund.



Location, Location, Location: Researchers Develop Model To Predict Retail Failure


source: www.cam.ac.uk

Researchers have used a combination of location and transport data to predict the likelihood that a given retail business will succeed or fail.

One of the most important questions for any new business is the amount of demand it will receive.

Krittika D’Silva

Using information from ten different cities around the world, the researchers, led by the University of Cambridge, have developed a model that can predict with 80% accuracy whether a new business will fail within six months. The results will be presented at the ACM Conference on Pervasive and Ubiquitous Computing (Ubicomp), taking place this week in Singapore.

While the retail sector has always been risky, the past several years have seen a transformation of high streets as more and more retailers fail. The model built by the researchers could be useful for both entrepreneurs and urban planners when determining where to locate their business or which areas to invest in.

“One of the most important questions for any new business is the amount of demand it will receive. This directly relates to how likely that business is to succeed,” said lead author Krittika D’Silva, a Gates Scholar and PhD student at Cambridge’s Department of Computer Science and Technology. “What sort of metrics can we use to make those predictions?”

D’Silva and her colleagues used more than 74 million check-ins from the location technology platform Foursquare from Chicago, Helsinki, Jakarta, London, Los Angeles, New York, Paris, San Francisco, Singapore and Tokyo; and data from 181 million taxi trips from New York and Singapore.

Using this data, the researchers classified venues according to the properties of the neighbourhoods in which they were located, the visit patterns at different times of day, and whether a neighbourhood attracted visitors from other neighbourhoods.
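Features of this kind can be computed directly from raw check-in records. The snippet below is a minimal illustration rather than the study's actual pipeline: the record schema (`venue`, `hour`, `venue_area`, `visitor_area`) and the three features are assumptions chosen to mirror the description above.

```python
from collections import Counter
import math


def venue_features(checkins):
    """Compute simple per-venue features from check-in records.

    Each record is a dict with hypothetical keys: 'venue',
    'hour' (0-23), 'venue_area' (the venue's neighbourhood) and
    'visitor_area' (the visitor's home neighbourhood). This is a
    stand-in schema, not the Foursquare data format.
    """
    by_venue = {}
    for c in checkins:
        by_venue.setdefault(c["venue"], []).append(c)

    features = {}
    for venue, visits in by_venue.items():
        hours = Counter(c["hour"] for c in visits)
        total = sum(hours.values())
        # Entropy of visit times: high for venues popular around
        # the clock, low when visits cluster at a few hours.
        time_entropy = -sum(
            (n / total) * math.log2(n / total) for n in hours.values()
        )
        # Share of visitors drawn in from other neighbourhoods.
        outside = sum(
            1 for c in visits if c["visitor_area"] != c["venue_area"]
        )
        features[venue] = {
            "visits": total,
            "time_entropy": time_entropy,
            "outside_share": outside / total,
        }
    return features
```

A venue whose check-ins spread evenly across the day scores a high time entropy, while one visited only at lunchtime scores near zero.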

“We wanted to better understand the predictive power that metrics about a place at a certain point in time have,” said D’Silva.

Whether a business succeeds or fails depends on a number of controllable and uncontrollable factors. Controllable factors might include the quality or price of the store’s products, its opening hours and its customer satisfaction. Uncontrollable factors might include a city’s unemployment rate, overall economic conditions and urban policies.

“We found that even without information about any of these uncontrollable factors, we could still use venue-specific, location-related and mobility-based features in predicting the likely demise of a business,” said D’Silva.

The data showed that across all ten cities, venues that are popular around the clock, rather than just at certain times of day, are more likely to succeed. Additionally, venues that are in demand outside the typical popular hours of other venues in the neighbourhood tend to survive longer. The data also suggested that venues in diverse neighbourhoods, with multiple types of businesses, tend to survive longer.

While the ten cities had certain similarities, the researchers also had to account for their differences.

“The metrics that were useful predictors vary from city to city, which suggests that these factors affect cities in different ways,” said D’Silva. “As one example, the speed of travel to a venue is a significant metric only in New York and Tokyo. This could relate to the speed of transit in those cities, or perhaps to the rates of traffic.”

To test the predictive power of their model, the researchers first had to determine whether a particular venue had closed within the time window of their data set. They then ‘trained’ the model on a subset of venues, telling the model what the features of those venues were in the first time window and whether the venue was open or closed in a second time window. They then tested the trained model on another subset of the data to see how accurate it was.
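The train-and-test protocol just described can be sketched in a few lines. The nearest-centroid rule below is a placeholder classifier, not the model used in the paper, and venues are represented as hypothetical `(features, survived)` pairs: features measured in the first time window, paired with whether the venue was still open in the second.

```python
import random


def train_test_eval(venues, train_frac=0.7, seed=0):
    """Train on one subset of venues, test on another.

    `venues` is a list of (feature_vector, survived) pairs. A
    nearest-centroid rule stands in for the study's classifier:
    a test venue is predicted to survive if its features lie
    closer to the centroid of surviving training venues.
    Returns accuracy on the held-out subset.
    """
    rng = random.Random(seed)
    data = venues[:]
    rng.shuffle(data)
    cut = int(len(data) * train_frac)
    train, test = data[:cut], data[cut:]

    def centroid(rows):
        dim = len(rows[0][0])
        return [sum(f[i] for f, _ in rows) / len(rows) for i in range(dim)]

    open_c = centroid([v for v in train if v[1]])
    closed_c = centroid([v for v in train if not v[1]])

    def dist(a, b):
        # Squared Euclidean distance is enough for comparison.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = sum(
        (dist(f, open_c) < dist(f, closed_c)) == survived
        for f, survived in test
    )
    return correct / len(test)
```

The key point the protocol enforces is temporal separation: features come only from the first window, labels only from the second, so the model is never shown the outcome it is asked to predict.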

According to the researchers, their model shows that when deciding when and where to open a business, it is important to look beyond the static features of a given neighbourhood and to consider the ways that people move to and through that neighbourhood at different times of day. They now want to consider how these features vary across different neighbourhoods in order to improve the accuracy of their model.

Reference:
Krittika D’Silva et al. ‘The Role of Urban Mobility in Retail Business Survival.’ Paper presented at Ubicomp 2018, Singapore, 8-12 October 2018. http://ubicomp.org/ubicomp2018/program/program.html#s36



Scientists Develop Mouse ‘Embryo-Like Structures’ With Organisation Along Body’s Major Axes

source: www.cam.ac.uk

A team of scientists at the University of Cambridge has developed an artificial mouse embryo-like structure capable of forming the three major axes of the body. The technique, reported today in the journal Nature, could reduce the use of mammalian embryos in research.

We were surprised to see how far gastruloids develop, their complex organisation and the presence of early-stage tissues and organs

Alfonso Martinez Arias

The definitive architecture of the mammalian body is established shortly after the embryo implants into the uterus. This body plan has spatial references, or axes, that guide the emergence of tissues and organs: an antero-posterior axis defined by the head at one end and the tail at the other, an orthogonal dorso-ventral axis and a medio-lateral axis, which orientates the arrangement of internal organs like the liver, pancreas or the heart.

Studying the processes orchestrating the formation of early mammalian embryos is hampered by the difficulty in obtaining them. Earlier findings from the Cambridge group had shown that embryonic stem cells could self-organise in culture into structures with an antero-posterior polarity.

Now, in collaboration with researchers from the University of Geneva and the Swiss Federal Institute of Technology Lausanne (EPFL), they have extended the cultures to reveal a capacity of mouse stem cells to produce ‘pseudo-embryos’ that display some of the important characteristics of a normal mouse embryo. Established from only 300 embryonic stem cells, these structures, called ‘gastruloids’, exhibit developmental features and organisation comparable to the posterior part of a six- to ten-day-old embryo.

The study shows that gastruloids organise themselves with regard to the three main body axes, as they do in embryos, and follow similar patterns of gene expression. One example of this is the pattern of expression of Hox genes, an ensemble of genes that are expressed in a precise sequential order in the embryo and act as landmarks for different aspects of the body, including the position of different vertebrae or of limbs. This degree of organisation makes gastruloids a remarkable system for the study of the early stages of normal or abnormal embryonic development in mammals.

“These results significantly extend our earlier findings. We were surprised to see how far gastruloids develop, their complex organisation and the presence of early-stage tissues and organs,” says Professor Alfonso Martinez Arias, leader of the University of Cambridge team, at its Department of Genetics.

Professor Denis Duboule from the University of Geneva and at the EPFL explained, “To determine whether gastruloids organise themselves into bona fide embryonic structures, we characterised their level of genetic activity at different stages of development”.

The researchers identified and quantified the RNA transcribed from gastruloids and compared the expressed genes with those of mouse embryos at comparable stages of development, which showed there was a high degree of similarity.

“Gastruloids form structures similar to the posterior part of the embryo, from the base of the brain to the tail, whose development program is somewhat different from that of the head,” says Dr Leonardo Beccari, co-first author of the study, from the University of Geneva.

These embryo-like structures express genes characteristic of the various types of progenitor cells necessary for the constitution of future tissues.

“The complexity of gene expression profiles increases over time, with the appearance of markers from different embryonic cell lineages, much like the profiles observed in control embryos,” adds Dr Naomi Moris from the Cambridge team, co-first author of the article.

“The implementation of the Hox gene network over time, which mimics that of the embryo, particularly confirms the remarkably high level of self-organisation of gastruloids,” explains Mehmet Girgin, co-first author of the study and PhD student at the Institute of Bioengineering at EPFL.

The researchers say that these pseudo-embryos will provide an alternative to animal research, in accordance with the principle of the ‘3Rs’ (the reduction, replacement and refinement of the use of animals in research). The finding that so much of the development of an embryo can be recapitulated using stem cells will also increase researchers’ ability to study the genetic mechanisms underlying normal development and disease.

Earlier in the year, the group led by Professor Magdalena Zernicka-Goetz at the Department of Physiology, Development and Neuroscience at the University of Cambridge reported embryo-like structures capable of generating an anteroposterior axis, but only with the help of additional, extra-embryonic stem cells. The new work shows, surprisingly, that embryonic stem cells can self-organise all three axes independently of extra-embryonic tissues.

“It makes things much simpler for research,” says Professor Martinez Arias. “Not only do gastruloids self-organise to generate the three axes, but they also mimic the spatial and temporal patterns of embryos, without extra-embryonic tissue. This suggests that gastruloids can become a useful tool, particularly in understanding gene expression during development.”

This work was largely funded by the Biotechnology and Biological Sciences Research Council (BBSRC), the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs), the Engineering and Physical Sciences Research Council (EPSRC) and the European Research Council.

Adapted from a press release from the University of Geneva.

Reference
Beccari, L, Moris, N, Girgin, M, et al. Multi-axial self-organisation properties of mouse embryonic stem cells into gastruloids. Nature; 3 Oct 2018; DOI: 10.1038/s41586-018-0578-0

Image caption
Seven-day old gastruloid. The cell nuclei are marked in blue. Neural progenitor cells (green) are distributed along the antero-posterior axis. Progenitor cells of the tail bud (pink) are confined to the posterior extremity of the gastruloid and indicate the direction of its elongation. © Mehmet Girgin, EPFL



Sir Greg Winter wins the 2018 Nobel Prize in Chemistry

source: www.cam.ac.uk

Sir Greg Winter, of the University of Cambridge, has been jointly awarded the 2018 Nobel Prize in Chemistry, along with Frances Arnold and George Smith, for his pioneering work in using phage display for the directed evolution of antibodies, with the aim of producing new pharmaceuticals.

It came as a bit of a shock, and I felt a bit numb for a while. It’s almost like you’re in a different universe.

Greg Winter on finding out about his Nobel Prize award

The first pharmaceutical based on this method, adalimumab, was approved in 2002 and is used for rheumatoid arthritis, psoriasis and inflammatory bowel diseases. Since then, phage display has produced antibodies that can neutralise toxins, counteract autoimmune diseases and cure metastatic cancer.

The Royal Swedish Academy of Sciences announced the 2018 Prize this morning with one half to Frances H. Arnold and the other half jointly to George P. Smith and Sir Gregory P. Winter.

The Nobel Assembly said: “The 2018 Nobel Laureates in Chemistry have taken control of evolution and used it for purposes that bring the greatest benefit to humankind. Enzymes produced through directed evolution are used to manufacture everything from biofuels to pharmaceuticals. Antibodies evolved using a method called phage display can combat autoimmune diseases and in some cases cure metastatic cancer.”

Winter, the Master of Trinity College, is a genetic engineer and is best known for his research and inventions relating to humanised and human therapeutic antibodies. Sir Gregory is a graduate of Trinity College and was a Senior Research Fellow before becoming Master.

His research career has been based almost entirely in Cambridge at the Medical Research Council’s Laboratory of Molecular Biology and the Centre for Protein Engineering, and during this time he also founded three Cambridge biotech companies based on his inventions: Cambridge Antibody Technology (acquired by AstraZeneca), Domantis (acquired by GlaxoSmithKline) and Bicycle Therapeutics.

Winter becomes the 107th Affiliate of Cambridge to be awarded a Nobel Prize. Born in 1951 in Leicester, Sir Greg studied Natural Sciences at Trinity College, Cambridge, and was awarded his PhD, also from Cambridge, in 1977.

“It came as a bit of a shock, and I felt a bit numb for a while. It’s almost like you’re in a different universe,” said Winter, on hearing he had been jointly awarded the Prize.

“For a scientist, a Nobel Prize is the highest accolade you can get, and I’m so lucky because there are so many brilliant scientists and not enough Nobel Prizes to go around.”

The University’s Vice-Chancellor, Professor Stephen Toope, said: “I am thrilled to hear that Sir Greg Winter has been awarded this year’s Nobel Prize in Chemistry. Greg’s work has been vital in the development of new therapies for debilitating health conditions such as rheumatoid arthritis, and has led to breakthroughs in cancer care. These advances continue to transform the lives of people across the world.

“It gives me the greatest pleasure, on behalf of our community, to congratulate the University of Cambridge’s latest Nobel Prize winner.”

Patrick Maxwell, Regius Professor of Physic and Head of the School of Clinical Medicine at the University of Cambridge, said: “I am absolutely delighted that Sir Greg’s work has been recognised with a Nobel Prize. The work for which the prize is awarded was carried out on the Cambridge Biomedical Campus. It directly led to the power of monoclonal antibodies being harnessed for treatment of disease. His inventions really have produced silver bullets that have transformed the way medicine is practised.”

Professor Sir Alan Fersht, former Master of Gonville and Caius, collaborated with Winter on early protein engineering work. “Greg Winter is an outstandingly creative scientist of a practical bent,” he said.

“He has applied his skills and imagination to the benefit of humankind to create, amongst other inventions, novel engineered antibodies that have formed the basis of a new pharmaceutical industry to treat disease and cancer. It is a thoroughly worthy Nobel Prize.”

Professor Dame Carol Robinson, Royal Society of Chemistry president, said: “Today’s Nobel Prize in chemistry highlights the tremendous role of chemistry in contributing to many areas of our lives including pharmaceuticals, detergents, green catalysis and biofuels. It is a great advert for chemistry to have impact in so many areas.

“Directed evolution of enzymes and antibody technology are subjects that I have followed with keen interest; both are now transforming medicine. It would have been hard to predict the outcome of this research at the start – this speaks to the need for basic research.

“I am delighted to see these areas of chemistry recognised and congratulate all three Nobel Laureates.”

Frances H. Arnold, who also shared today’s Prize, conducted the first directed evolution of enzymes, which are proteins that catalyse chemical reactions. Since then, she has refined the methods that are now routinely used to develop new catalysts. The uses of Frances Arnold’s enzymes include more environmentally friendly manufacturing of chemical substances, such as pharmaceuticals, and the production of renewable fuels for a greener transport sector.

In 1985, George Smith developed an elegant method known as phage display, where a bacteriophage – a virus that infects bacteria – can be used to evolve new proteins.

More details on previous Cambridge winners can be found here: https://www.cam.ac.uk/research/research-at-cambridge/nobel-prize

You can read more about the 2018 Nobel Prize in Chemistry here: https://www.nobelprize.org/uploads/2018/10/popular-chemistryprize2018.pdf


Sir Greg Winter

Born in 1951 in Leicester, Greg studied Natural Sciences at Trinity College, Cambridge, and completed a PhD in 1977 at the Laboratory of Molecular Biology (LMB), Cambridge, where he worked on the amino acid sequence of tryptophanyl tRNA synthetase from the bacterium Bacillus stearothermophilus. Greg continued to specialise in protein and nucleic acid sequencing through post-doctoral research at the LMB and became a Group Leader in 1981.

In the 1980s, Greg became interested in the idea that all antibodies have the same basic structure, with only small changes making them specific for one target. Previously, Georges Köhler and César Milstein had won the Nobel Prize for their work at the LMB in discovering a method to isolate and reproduce individual, or monoclonal, antibodies from among the multitude of antibody proteins the immune system makes to seek and destroy foreign invaders attacking the body. However, these monoclonal antibodies had limited application in human medicine, because mouse monoclonal antibodies are rapidly inactivated by the human immune response, which prevents them from providing long-term benefits.

Greg Winter then pioneered a technique to “humanise” mouse monoclonal antibodies – a technique that was used in the development of Campath-1H by Cambridge scientists. This antibody now looks promising for the treatment of multiple sclerosis. Humanised monoclonal antibodies form the majority of antibody-based drugs on the market today and include several blockbuster antibodies, such as Keytruda, which was developed by LifeArc, the Medical Research Council’s technology transfer organisation, and works with the immune system to help fight certain cancers.

Greg then went on to develop methods for making fully human antibodies. This technique was used in the development of Humira (also known as adalimumab) by Cambridge Antibody Technology, an MRC spin-out company founded by Greg. Humira, the first fully human monoclonal antibody drug, was launched in 2003 as a treatment for rheumatoid arthritis. Today, monoclonal antibodies account for a third of all new treatments. These include therapeutic products for breast cancer, leukaemia, asthma, psoriasis and transplant rejection, and dozens more that are in late-stage clinical trials.

Greg has been presented with many awards for his work, including the 2013 Gairdner Foundation International Award, the MRC’s 2013 Millennium Medal, and the Cancer Research Institute’s William B. Coley Award in 1999. He is a Fellow of the Royal Society and of the Academy of Medical Sciences, was Deputy Director of the LMB from 2006 to 2011, and Acting Director from 2007 to 2008. He has been Master of Trinity College, Cambridge since 2012, and received a Knighthood for services to Molecular Biology in 2004.

