DNA Enzyme Shuffles Cell Membranes a Thousand Times Faster Than Its Natural Counterpart

source: www.cam.ac.uk

A new synthetic enzyme, crafted from DNA rather than protein, ‘flips’ lipid molecules within the cell membrane, triggering a signal pathway that could be harnessed to induce cell death in cancer cells.

This work highlights the enormous potential of synthetic DNA nanostructures for personalised drugs and therapeutics for a variety of health conditions.

Alexander Ohmann

Researchers at the University of Cambridge and the University of Illinois at Urbana-Champaign say their lipid-scrambling DNA enzyme is the first to outperform naturally occurring enzymes – and does so by three orders of magnitude. Their findings are published in the journal Nature Communications.

“Cell membranes are lined with a different set of molecules on the inside and outside, and cells devote a lot of resources to maintaining this,” said study leader Aleksei Aksimentiev, a professor of physics at Illinois. “But at some points in a cell’s life, the asymmetry has to be dismantled. Then the markers that were inside become outside, which sends signals for certain processes, such as cell death. There are enzymes in nature that do that called scramblases. However, in some other diseases where scramblases are deficient, this doesn’t happen correctly. Our synthetic scramblase could be an avenue for therapeutics.”

Aksimentiev’s group came upon DNA’s scramblase activity when looking at DNA structures that form pores and channels in cell membranes. They used the Blue Waters supercomputer at the National Center for Supercomputing Applications at Illinois to model the systems at the atomic level. They saw that when certain DNA structures insert into the membrane – in this case, a bundle of eight strands of DNA with cholesterol at the ends of two of the strands – lipids in the membrane around the DNA begin to shuffle between the inner and outer membrane layers.

To verify the scramblase activity predicted by the computer models, Aksimentiev’s group at Illinois partnered with Professor Ulrich Keyser’s group at Cambridge. The Cambridge group synthesised the DNA enzyme and tested it in model membrane bubbles, called vesicles, and then in human breast cancer cells.

“The results show very conclusively that our DNA nanostructure facilitates rapid lipid scrambling,” said co-first author Alexander Ohmann, a PhD student in Keyser’s group in Cambridge’s Cavendish Laboratory. “Most interestingly, the high flipping rate indicated by the molecular dynamics simulations seems to be of the same order of magnitude in experiments: up to a thousand-fold faster than what has previously been shown for natural scramblases.”

On its own, the DNA scramblase produces cell death indiscriminately, said Aksimentiev. The next step is to couple it with targeting systems that specifically seek out certain cell types, a number of which have already been developed for other DNA agents.

“We are also working to make these scramblase structures activated by light or some other stimulus, so they can be activated only on demand and can be turned off,” said Aksimentiev.

“Although we still have a long way to go, this work highlights the enormous potential of synthetic DNA nanostructures, with possible applications for personalised drugs and therapeutics for a variety of health conditions in the future,” said Ohmann, who has also written a blog post on their new paper.

The US National Science Foundation and the National Institutes of Health supported this work.

Reference: 
Alexander Ohmann et al. ‘A synthetic enzyme built from DNA flips 10⁷ lipids per second in biological membranes.’ Nature Communications (2018). DOI: 10.1038/s41467-018-04821-5

Adapted from a University of Illinois at Urbana-Champaign press release.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. Images, including our videos, are Copyright ©University of Cambridge and licensors/contributors as identified.  All rights reserved. We make our image and video content available in a number of ways – as here, on our main website under its Terms and conditions, and on a range of channels including social media that permit your use and sharing of our content under their respective Terms.

Mysterious 11,000-Year-Old Skull Headdresses Go On Display In Cambridge

source: www.cam.ac.uk

Three 11,500-year-old deer skull headdresses – excavated from a world-renowned archaeological site in Yorkshire – will go on display, one for the first time, at Cambridge University’s Museum of Archaeology and Anthropology (MAA) from today.

The most mysterious objects found at Star Carr are 33 deer skull headdresses. Only three similar objects have been discovered elsewhere – all in Germany.

Jody Joy

The headdresses are the star exhibits in A Survival Story – Prehistoric Life at Star Carr, which gives visitors a fascinating glimpse into life in Mesolithic-era Britain following the end of the last Ice Age.

At the time people were building their homes on the shore of Lake Flixton, five miles inland from what is now the North Yorkshire coast, Britain was still attached to continental Europe and the climate was warming rapidly.

As well as the spectacular headdresses, made of red deer skull and antlers, the exhibition features other Mesolithic-era objects such as axes and weapons used to hunt a range of animals such as red deer and elk.

Also going on display is a wooden paddle – used to transport settlers around the lake – as well as objects for making fire. Beads and pendants made of shale and amber also provide evidence of how people adorned themselves, as do objects used for making clothes from animal skins.

Most of the objects on display are from MAA. They were recovered from excavations conducted at the site by Cambridge archaeologist Professor Grahame Clark. More recently, excavations have been conducted by archaeologists from the Universities of Chester, Manchester and York.

It is also the first time so many of the artefacts belonging to MAA have been on display side-by-side. Many of the objects are very fragile and can rarely be moved, making this a unique opportunity to see such a wide selection of material from the Star Carr site.

Exhibition curator Dr Jody Joy said: “Star Carr is unique. Only a scattering of stone tools normally survive from so long ago; but the waterlogged ground there has preserved bone, antler and wooden objects. It’s here that archaeologists have found the remains of the oldest house in Britain, exotic jewellery and mysterious headdresses.

“This was a time before farming, before pottery, before metalworking – but the people who made their homes there returned to the same place for hundreds of years.

“The most mysterious objects found at Star Carr are 33 deer skull headdresses. Only three similar objects have been discovered elsewhere – all in Germany. Someone has removed parts of the antlers and drilled holes in the skulls, but archaeologists don’t know why. They may have been hunting disguises, they may have been used in ceremonies or dances. We can never know for sure, but this is why Star Carr continues to intrigue us.”

As well as the headdresses, archaeologists have discovered scatters of flint showing where people made stone tools, and antler points used to hunt and fish. Some 227 points were found at Star Carr – more than 90 per cent of all those ever discovered in Britain.

Closer to what was the lake edge (Lake Flixton has long since dried up), there is evidence of Mesolithic-era enterprise including wooden platforms used as walkways and jetties (the earliest known examples of carpentry in Europe) – where boats would have given access to the lake and its two islands.

Star Carr was first discovered in 1947 by an amateur archaeologist, and work at the site continues to this day. Unfortunately, recently excavated artefacts are showing signs of decay as changing land use around the site causes the peat, in which many artefacts have been preserved naturally for millennia, to dry out. It is now a race against time for archaeologists to discover more about the site before it is lost.

“Star Carr shows that although life was very different 11,500 years ago, people shared remarkably similar concerns to us,” added Joy. “They needed food, warmth and comfort. They made sense of the world through ritual and religion.

“The people of Star Carr were very adaptable and there is much we can learn from them as we too face the challenges of rapid climate change. There are still many discoveries to be made, but these precious archaeological remains are now threatened by the changing environment.

“As they are so old, the objects from Star Carr are very fragile and they must be carefully monitored and stored. As a result, few artefacts are normally on display. This is a rare opportunity to see so many of these objects side-by-side telling the story of this extraordinary site.”

A Survival Story – Prehistoric Life at Star Carr is on display at the Li Ka Shing Gallery at the Museum of Archaeology and Anthropology, Downing Street, Cambridge, from June 21 to December 30, 2019. Entry is free.


Low-Cost Plastic Sensors Could Monitor a Range of Health Conditions

source: www.cam.ac.uk

An international team of researchers have developed a low-cost sensor made from semiconducting plastic that can be used to diagnose or monitor a wide range of health conditions, such as surgical complications or neurodegenerative diseases.

This work opens up new directions in biosensing, where materials can be designed to interact with a specific metabolite, resulting in far more sensitive and selective sensors.

Anna-Maria Pappa

The sensor can measure the amount of critical metabolites, such as lactate or glucose, that are present in sweat, tears, saliva or blood, and, when incorporated into a diagnostic device, could allow health conditions to be monitored quickly, cheaply and accurately. The new device has a far simpler design than existing sensors, and opens up a wide range of new possibilities for health monitoring down to the cellular level. The results are reported in the journal Science Advances.

The device was developed by a team led by the University of Cambridge and King Abdullah University of Science and Technology (KAUST) in Saudi Arabia. Semiconducting plastics such as those used in the current work are being developed for use in solar cells and flexible electronics, but have not yet seen widespread use in biological applications.

“In our work, we’ve overcome many of the limitations of conventional electrochemical biosensors that incorporate enzymes as the sensing material,” said lead author Dr Anna-Maria Pappa, a postdoctoral researcher in Cambridge’s Department of Chemical Engineering and Biotechnology. “In conventional biosensors, the communication between the sensor’s electrode and the sensing material is not very efficient, so it’s been necessary to add molecular wires to facilitate and ‘boost’ the signal.”

To build their sensor, Pappa and her colleagues used a newly-synthesised polymer developed at Imperial College that acts as a molecular wire, directly accepting the electrons produced during electrochemical reactions. When the material comes into contact with a liquid such as sweat, tears or blood, it absorbs ions and swells, becoming merged with the liquid. This leads to significantly higher sensitivity compared to traditional sensors made of metal electrodes.

Additionally, when the sensors are incorporated into more complex circuits, such as transistors, the signal can be amplified and respond to tiny fluctuations in metabolite concentration, despite the tiny size of the devices.

In initial tests, the sensors were used to measure levels of lactate, which is useful in fitness applications or for monitoring patients following surgery. However, according to the researchers, the sensor can easily be modified to detect other metabolites, such as glucose or cholesterol, by incorporating the appropriate enzyme, and the concentration range it can detect can be adjusted by changing the device’s geometry.

“This is the first time that it’s been possible to use an electron accepting polymer that can be tailored to improve communication with the enzymes, which allows for the direct detection of a metabolite: this hasn’t been straightforward until now,” said Pappa. “It opens up new directions in biosensing, where materials can be designed to interact with a specific metabolite, resulting in far more sensitive and selective sensors.”

Since the sensor does not consist of metals such as gold or platinum, it can be manufactured at a lower cost and can be easily incorporated into flexible and stretchable substrates, enabling its use in wearable or implantable sensing applications.

“An implantable device could allow us to monitor the metabolic activity of the brain in real time under stress conditions, such as during or immediately before a seizure and could be used to predict seizures or to assess treatment,” said Pappa.

The researchers now plan to develop the sensor to monitor metabolic activity of human cells in real time outside the body. The Bioelectronic Systems and Technologies group where Pappa is based is focused on developing models that can closely mimic our organs, along with technologies that can accurately assess them in real-time. The developed sensor technology can be used with these models to test the potency or toxicity of drugs.

The research was funded by the Marie Curie Foundation, the KAUST Office of Sponsored Research, and the Engineering and Physical Sciences Research Council.

Reference:
A.M. Pappa et al. ‘Direct metabolite detection with an n-type accumulation mode organic electrochemical transistor.’ Science Advances (2018). DOI: 10.1126/sciadv.aat0911


Researcher Profile: Anna-Maria Pappa

I strongly believe that through diversity comes creativity, comes progress. I qualified as an engineer, and later earned my Master’s degree at Aristotle University of Thessaloniki in Greece. My PhD is in Bioelectronics from École des Mines de Saint-Étienne in France and leaving my comfort zone to study abroad proved to be an invaluable experience. I met people from different cultures and mindsets from all over the world, stretched my mind and expanded my horizons.

Now, I always look for those with different views.  I travel frequently for conferences and visit other laboratories across Europe, the United States and Saudi Arabia. When you work in a multidisciplinary field it is essential to establish and keep good collaborations: this is the only way to achieve the desired outcome.

Being part of a University where some of the world’s most brilliant scientists studied and worked is a great privilege. Cambridge combines a historic and traditional atmosphere with cutting-edge research in an open, multicultural society. The state-of-the-art facilities, the openness in innovation and strong collaborations provide a unique combination that can only lead to excellence.

As an engineer, creating solutions to important yet unresolved issues for healthcare is what truly motivates me. I’m working on a drug discovery platform using bioelectronics, and my work sets out to improve and accelerate drug discovery by providing novel technological solutions for drug screening and disease management. I hope my research will lead to a product that will impact healthcare. In the future, I imagine a healthcare system where the standard one-size-fits-all approach shifts to a more personalised and tailored model.

I’m a strong advocate for Women in STEMM, and in October 2017 I was awarded a L’Oréal-UNESCO For Women in Science Fellowship, an award that honours the contributions of women in science. For me, the award not only represents a scientific distinction but also gives me the unique opportunity, as an ambassador of science, to inspire and motivate young girls to follow the career they desire.

I think it’s absolutely vital, at every opportunity, for all of us to honour and promote girls and women in science. Unfortunately, women still struggle when it comes to joining male-dominated fields, and even more so when establishing themselves later in senior roles. We still face stereotypes and social restrictions, even if they are not as obvious today as they were in the past. A question I always ask girls during my outreach activities at schools is: ‘Do I look like a scientist?’ The answer I most often get is no! I think this misperception of what STEMM professionals look like, or of what they actually do on a daily basis, is what discourages girls early on from following STEMM careers. This needs to change.


Cambridge Researchers Join New Initiative On Urban Air Pollution

source: www.cam.ac.uk

Cambridge researchers are part of a cutting-edge project unveiled by Mayor of London Sadiq Khan last week to better understand Londoners’ exposure to air pollution and improve air quality in the capital.

Addressing air pollution in cities is a vital but complex challenge.

Rod Jones

As part of the initiative, a network of air quality sensors will be deployed across the capital, measuring pollution levels in tens of thousands of locations. Findings from the project will be shared with other cities across the UK and globally, including the C40 Cities Climate Leadership Group.

From July onwards, more than a hundred low-cost air quality sensors will be attached to lampposts and buildings in the worst-affected and most sensitive locations in the capital. These fixed sensors will be deployed alongside mobile sensors carried by Google Street View cars taking readings every 30 metres.

It is hoped that the resulting ‘hyperlocal’ network of sensors will create the world’s most sophisticated air monitoring system. Improving the monitoring of London air quality in this way should help identify those initiatives that make the biggest contributions to cutting air pollution.

Cambridge’s Department of Chemistry is a pioneer in the use of low-cost sensors for the measurement of air quality, and its researchers have used them in projects at Heathrow Airport, in Beijing and, most recently, in Dhaka. Their role in this project is to provide expertise in low-cost air quality sensors, and in the analysis and interpretation of results from the static and mobile sensor networks.

This initiative brings together a range of partners from academia, industry and charity. It will be run by a team of air quality experts led by the charity Environmental Defense Fund Europe, partnering with Air Monitors Ltd, Google Earth Outreach, Cambridge Environmental Research Consultants, the University of Cambridge, the National Physical Laboratory and the Environmental Defense Fund in the United States. King’s College London will also be undertaking a linked study focused on schools that will form part of the year-long project.

According to the Mayor’s office, London already has one of the best networks of air quality monitors of any city. However, it does not cover enough of the capital. More sensors and more data are needed to say for sure which actions to tackle pollution are working best.

More sensors will also help to explain how air quality changes not just because of the amount of traffic, but also because of other factors such as the weather and the topography of the capital.

Online maps showing data in real time will be created, giving Londoners information on just how dirty the air they breathe really is as they move around the city. New tools like this will help the capital take action to tackle the most dangerous environmental threat to Londoners’ health.

“This project will provide a step change in data collection and analysis that will enable London to evaluate the impact of both air quality and climate change policies and develop responsive interventions,” said Baroness Bryony Worthington, Executive Director for Environmental Defense Fund Europe. “A clear output of the project will be a revolutionary air monitoring model and intervention approach that can be replicated cost-effectively across other UK cities and globally.”

“Addressing air pollution in cities is a vital but complex challenge,” said Professor Rod Jones from the Department of Chemistry. “Many factors influence air quality and we are looking forward to working alongside our partners on this project as we know that by combining fixed and mobile monitors, and sampling air quality at so many locations, we will get a much more accurate picture of what is going on – I’m particularly excited by the potential of this project to be repeated in other megacities worldwide that have critical air pollution problems.”

Originally published on the Department of Chemistry website.

Why Life On Earth First Got Big

source: www.cam.ac.uk

Some of the earliest complex organisms on Earth – possibly some of the earliest animals to exist – got big not to compete for food, but to spread their offspring as far as possible.

Reproduction appears to have been the main reason that life on Earth got big when it did.

Emily Mitchell

The research, led by the University of Cambridge, found that the most successful organisms living in the oceans more than half a billion years ago were the ones that were able to ‘throw’ their offspring the farthest, thereby colonising their surroundings. The results are reported in the journal Nature Ecology and Evolution.

Prior to the Ediacaran period, between 635 and 541 million years ago, life forms were microscopic in size, but during the Ediacaran, large, complex organisms first appeared, some of which – such as the organisms known as rangeomorphs – grew as tall as two metres. These were some of the first complex organisms on Earth, and although they look like ferns, they may have been some of the first animals to exist, though it’s difficult for scientists to be entirely sure. Ediacaran organisms do not appear to have had mouths, organs or means of moving, so they are thought to have absorbed nutrients from the water around them.

As Ediacaran organisms got taller, their body shapes diversified, and some developed stem-like structures to support their height.

In modern environments, such as forests, there is intense competition between organisms for resources such as light, so taller trees and plants have an obvious advantage over their shorter neighbours. “We wanted to know whether there were similar drivers for organisms during the Ediacaran period,” said Dr Emily Mitchell of Cambridge’s Department of Earth Sciences, the paper’s lead author. “Did life on Earth get big as a result of competition?”

Mitchell and her co-author Dr Charlotte Kenchington from Memorial University of Newfoundland in Canada examined fossils from Mistaken Point in south-eastern Newfoundland, one of the richest sites of Ediacaran fossils in the world.

Earlier research hypothesised that increased size was driven by the competition for nutrients at different water depths. However, the current work shows that the Ediacaran oceans were more like an all-you-can-eat buffet.

“The oceans at the time were very rich in nutrients, so there wasn’t much competition for resources, and predators did not yet exist,” said Mitchell, who is a Henslow Research Fellow at Murray Edwards College. “So there must have been another reason why life forms got so big during this period.”

Since Ediacaran organisms were not mobile and were preserved where they lived, it’s possible to analyse whole populations from the fossil record. Using spatial analysis techniques, Mitchell and Kenchington found that there was no correlation between height and competition for food. Different types of organisms did not occupy different parts of the water column to avoid competing for resources – a process known as tiering.
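
The published study used dedicated spatial statistics, but the underlying question – does a fossil’s height correlate with how crowded its immediate neighbourhood is? – can be illustrated with a toy calculation. The sketch below is hypothetical: the positions, heights and neighbourhood radius are invented, and it is not the authors’ method.

```python
# Toy illustration (not the study's analysis): compare each fossil's height
# with the number of neighbours within an arbitrary radius on a simulated
# bedding plane, then report the correlation between the two.
import numpy as np

rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(200, 2))                   # positions in metres (invented)
height = rng.lognormal(mean=-1.0, sigma=0.5, size=200)   # frond heights in metres (invented)

radius = 0.5                                             # assumed neighbourhood radius (m)
dists = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
neighbours = (dists < radius).sum(axis=1) - 1            # exclude each point itself

r = np.corrcoef(height, neighbours)[0, 1]
print(f"height vs. local crowding correlation: {r:+.2f}")
```

A correlation near zero in real data would point the same way as the study’s conclusion: being taller did not buy these organisms relief from competition for food.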

“If they were competing for food, then we would expect to find that the organisms with stems were highly tiered,” said Kenchington. “But we found the opposite: the organisms without stems were actually more tiered than those with stems, so the stems probably served another function.”

According to the researchers, one likely function of stems would be to enable the greater dispersion of offspring, which rangeomorphs produced by expelling small propagules. The tallest organisms were surrounded by the largest clusters of offspring, suggesting that the benefit of height was not more food, but a greater chance of colonising an area.

“While taller organisms would have been in faster-flowing water, the lack of tiering within these communities shows that their height didn’t give them any distinct advantages in terms of nutrient uptake,” said Mitchell. “Instead, reproduction appears to have been the main reason that life on Earth got big when it did.”

Despite their success, rangeomorphs and other Ediacaran organisms disappeared at the beginning of the Cambrian period about 540 million years ago, a period of rapid evolutionary development when most major animal groups first appear in the fossil record.

The research was funded by the Natural Environment Research Council, the Cambridge Philosophical Society, Murray Edwards College and Newnham College, Cambridge.

Reference
Emily G. Mitchell and Charlotte G. Kenchington. ‘The utility of height for the Ediacaran organisms of Mistaken Point.’ Nature Ecology and Evolution (2018). DOI: 10.1038/s41559-018-0591-6

Inset image: 
A close-up view of the Mistaken Point ‘E’ surface community. Credit: Emily Mitchell. 


New 3D Imaging Analysis Technique Could Lead To Improved Arthritis Treatment

source: www.cam.ac.uk

An algorithm to monitor the joints of patients with arthritis, which could change the way that the severity of the condition is assessed, has been developed by a team of engineers, physicians and radiologists led by the University of Cambridge.

Using this technique, we’ll hopefully be able to identify osteoarthritis earlier, and look at potential treatments before it becomes debilitating.

Tom Turmezei

The technique, which detects tiny changes in arthritic joints, could enable greater understanding of how osteoarthritis develops and allow the effectiveness of new treatments to be assessed more accurately, without the need for invasive tissue sampling. The results are published in the journal Scientific Reports.

Osteoarthritis is the most common form of arthritis in the UK. It develops when the articular cartilage that coats the ends of bones, and allows them to glide smoothly over each other at joints, is worn down, resulting in painful, immobile joints. Currently there is no recognised cure and the only definitive treatment is surgery for artificial joint replacement.

Osteoarthritis is normally identified on an x-ray by a narrowing of the space between the bones of the joint due to a loss of cartilage. However, x-rays do not have enough sensitivity to detect subtle changes in the joint over time.

“In addition to their lack of sensitivity, two-dimensional x-rays rely on humans to interpret them,” said lead author Dr Tom Turmezei from Cambridge’s Department of Engineering. “Our ability to detect structural changes to identify disease early, monitor progression and predict treatment response is frustratingly limited by this.”

The technique developed by Turmezei and his colleagues uses images from a standard computerised tomography (CT) scan, which isn’t normally used to monitor joints, but produces detailed images in three dimensions.

The semi-automated technique, called joint space mapping (JSM), analyses the CT images to identify changes in the space between the bones of the joint in question, a recognised surrogate marker for osteoarthritis. After developing the algorithm with tests on human hip joints from bodies that had been donated for medical research, they found that it exceeded the current ‘gold standard’ of joint imaging with x-rays in terms of sensitivity, showing that it was at least twice as good at detecting small structural changes. Colour-coded images produced using the JSM algorithm illustrate the parts of the joint where the space between bones is wider or narrower.
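
The full JSM pipeline is described in the paper; purely as a rough sketch of the underlying idea, the hypothetical Python below estimates a joint-space width map as the distance from the surface of one segmented bone to the other bone in a CT volume. The function name, the synthetic masks and the voxel spacing are illustrative assumptions, not the authors’ implementation.

```python
# Rough sketch of the idea behind joint space mapping: measure, at each
# surface voxel of one bone, the distance to the other bone. Illustrative
# only; not the published JSM algorithm.
import numpy as np
from scipy import ndimage

def joint_space_width_map(bone_a_mask, bone_b_mask, voxel_size_mm=(0.7, 0.7, 0.7)):
    """Return joint-space width in mm, sampled on the surface of bone A."""
    # Distance from every voxel to the nearest voxel of bone B, in millimetres.
    dist_to_b = ndimage.distance_transform_edt(~bone_b_mask, sampling=voxel_size_mm)

    # Surface of bone A: voxels of A that are removed by a one-voxel erosion.
    surface_a = bone_a_mask & ~ndimage.binary_erosion(bone_a_mask)

    # Width map: the distance to bone B, evaluated only on bone A's surface.
    return np.where(surface_a, dist_to_b, np.nan)

# Synthetic example: two blocks of 'bone' separated by a small gap.
a = np.zeros((40, 40, 40), dtype=bool); a[5:18, 10:30, 10:30] = True
b = np.zeros((40, 40, 40), dtype=bool); b[21:34, 10:30, 10:30] = True
widths = joint_space_width_map(a, b, voxel_size_mm=(1.0, 1.0, 1.0))
print(f"minimum joint space: {np.nanmin(widths):.1f} mm")
```

Colour-coding such a width map, voxel by voxel, gives the kind of per-region picture of wider and narrower joint space described above; detecting change over time then amounts to comparing maps from successive scans.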

“Using this technique, we’ll hopefully be able to identify osteoarthritis earlier, and look at potential treatments before it becomes debilitating,” said Turmezei, who is now a consultant at the Norfolk and Norwich University Hospital’s Department of Radiology. “It could be used to screen at-risk populations, such as those with known arthritis, previous joint injury, or elite athletes who are at risk of developing arthritis due to the continued strain placed on their joints.”

While CT scanning is regularly used in the clinic to diagnose and monitor a range of health conditions, CT of joints has not yet been approved for use in research trials. According to the researchers, the success of the JSM algorithm demonstrates that 3D imaging techniques have the potential to be more effective than 2D imaging. In addition, CT can now be used with very low doses of radiation, meaning that it can be safely used more frequently for the purposes of ongoing monitoring.

“We’ve shown that this technique could be a valuable tool for the analysis of arthritis, in both clinical and research settings,” said Turmezei. “When combined with 3D statistical analysis, it could also be used to speed up the development of new treatments.”

Tom Turmezei acknowledges the Wellcome Trust for research funding. Ken Poole acknowledges the support of the Cambridge NIHR Biomedical Research Centre.

Reference
T.D. Turmezei et al. ‘A new quantitative 3D approach to imaging of structural joint disease.’ Scientific Reports (2018). DOI: 10.1038/s41598-018-27486-y 


Study Identifies Key Challenges When Communicating Potential Policies

source: www.cam.ac.uk

A team of Cambridge researchers sets out to define a new science for policy communications, with ambitions of finding the “Goldilocks zone” between too much and not enough information when informing both legislators and the public on complex issues.

Too much complexity risks a lack of understanding or simply being ignored. However, a brief and easy-to-digest communication may well lack the depth and detail necessary for making an informed decision.

Cameron Brick

Researchers have trawled through what little evidence currently exists on effectively communicating policy options, and point out four communication challenges that are problematic and often overlooked – yet should be required information for those making decisions that affect the lives of millions.

These include the need to highlight both the “winners and losers” of any policy decision, and to find ways of representing trade-offs between, say, financial and ecological or health outcomes. The findings are published today in the Springer Nature journal Palgrave Communications.

Recent decades have seen significant progress in producing information summaries that allow people to better understand how personal health choices affect their lives, say researchers.

However, they argue that similarly clear and concise materials are rarely available for legislators – and all of us citizens – on the potential outcomes of policies with stakes far beyond the individual.

Aiming to create a new science for communicating policy options, a team based at Cambridge’s Winton Centre for Risk and Evidence Communication point out the difficulty of finding the optimal balance between “comprehensibility and coverage” of policy options when informing decision-makers.

“Too much complexity risks a lack of understanding or simply being ignored. However, a brief and easy-to-digest communication may well lack the depth and detail necessary for making an informed decision,” said the Winton Centre’s Dr Cameron Brick, lead author of the new study.

He describes this as the “core tension” at the heart of communicating any policy option. “We certainly see this with Brexit, for example: oversimplifications that don’t provide the full story competing with dense explanations that people struggle to understand.”

“The ideal communication would provide appropriate detail in a quickly and easily understood format to help citizens and policymakers apply their own values to decisions. We want to find out if there is a template that can help achieve this balance.”

In this first analysis from the recently established Winton Centre, Brick and colleagues reviewed policy communications across a wide variety of areas – from taxes to health, climate change and international trade – as well as guidance and evidence for communication effectiveness.

The spectrum of material ranged from a fairly impenetrable seventy-page report on the possibilities for the Heathrow third runway to colourful postcards emblazoned with a single statistic. All were trying to be balanced sources of information to support decision-making, yet none appear to have checked what effect their presentation had on their readers.

Policy decisions have enormous impacts, and citizens and voters need trusted and balanced sources of evidence. However, the team found surprisingly little evidence on effectively communicating policy options.

By comparing materials designed to inform personal choices with those covering policy choices, they identified four main characteristics that make communicating potential policies particularly difficult and are often neglected.

  • Policies almost inevitably create winners and losers, because some groups – whether demographic or regional – become better off than others. It is difficult to summarise the effects on different groups so that audiences can weigh those outcomes.
  • Policies are full of trade-offs – e.g. as financial costs go up pollution goes down – yet each is measured differently. Presenting multiple outcomes with different metrics that allow for easy comparison is a tricky communications problem.
  • Individual choices rarely go beyond our own lifespans. Yet some policy choices can affect generations, and even have different effects as time goes on – another challenge for a quick summary to capture.
  • Expected policy outcomes come with particularly large uncertainties from complex shifts of future social and political events and therefore generally cannot be predicted confidently.

Brick and colleagues point out that including more detail in policy options exacerbates the tension between in-depth coverage of the issues on the one hand, and the ability of audiences to get the gist of the communications on the other – and yet nobody appears to have worked on finding the sweet spot between amount of detail and ease of understanding.

“There is no standard model yet for how to tackle these four challenges, but we hope communicators devise effective strategies as the research progresses,” said Brick.

“We want to try and define that Goldilocks zone between too much information and not enough so that policymakers can see when key information is missing, and people can make choices that fit their values.”

As part of the current study, they used three pieces of policy communication from major organisations such as the UK’s Education Endowment Foundation and the Intergovernmental Panel on Climate Change to illustrate attempts to provide nonpartisan and detailed policy option summaries.

Brick and colleagues will be building on this initial work by conducting rigorous research on policy communications material, including one-on-one surveying with various demographics, and large-scale data collection through online surveys.

Professor Sir David Spiegelhalter, Chairman of the Winton Centre, added: “At the Winton Centre, we are interested in helping people judge the benefits and harms of alternative policies or regulations that are being suggested.”

“The idea of our Centre is to help communicate evidence in a way that is balanced, transparent and doesn’t try to coerce people into thinking or acting in a particular way.”

Reference:
Cameron Brick et al. ‘Winners and losers: communicating the potential impacts of policies.’ Palgrave Communications (2018). DOI: 10.1057/s41599-018-0121-9


‘Photographing Tutankhamun’ Exhibition Reveals Historical Context Behind Pioneering Images

source: www.cam.ac.uk

Iconic photography taken during the decade-long excavation of King Tutankhamun’s tomb has gone on display at Cambridge University’s Museum of Archaeology and Anthropology (MAA).

The exhibition gives a fresh take on one of the most famous archaeology discoveries from the last 100 years.

Christina Riggs

The exhibition Photographing Tutankhamun has been curated by University of East Anglia (UEA) Egyptologist Dr Christina Riggs and gives a different view on the ‘golden age’ of archaeology and photography in the Middle East.

The exhibition highlights the work of famous Egyptologist and archaeological photographer Harry Burton and the iconic images he captured during the lengthy excavation of the Pharaoh’s tomb in Egypt’s Valley of the Kings. Dr Riggs is the first person to study the entire archive of excavation photographs, as well as the first to consider them from the viewpoint of photographic history in the Middle East.

She said: “The exhibition gives a fresh take on one of the most famous archaeology discoveries from the last 100 years. It questions the influence photography has on our perception and provides insight on the historical context of the discovery – a time when archaeology liked to present itself as a science that only Europeans and Americans could do.

“Through the eyes of the camera lens, the exhibition demonstrates the huge input from the Egyptian government and the hundreds of Egyptians working alongside the likes of Harry Burton and Howard Carter. This refreshing approach helps us understand what Tutankhamun meant to Egyptians in the 1920s – and poses the important question of what science looks like and who does it.”

As part of her project, Dr Riggs studied some 1,400 photographs by Burton in the Metropolitan Museum of Art, New York. Burton worked for the museum for most of his life, and his personal correspondence in their archives has offered important new insights into his work on the tomb of Tutankhamun – including some of the technical problems, personal tensions, and political issues behind the scenes.

Dr Chris Wingfield, Senior Curator (Archaeology) at MAA, said: “With strong collections of historic photographs documenting the history of archaeology and anthropology, we at MAA are excited about hosting an exhibition that explores the important ways in which photography contributed to – you could even say created – the field. We continue to train Egyptologists and archaeologists here in Cambridge, so this exhibition is an opportunity to think about how these disciplines were practised in the past, and to help shape their futures.”

More than two dozen images have been created especially for the exhibition using digital scans from Burton’s original glass-plate negatives, including some never seen before. Also on display are newspaper and publicity materials from the 1920s and beyond, which show how the photographs were used in print. The scans have been made by The Griffith Institute at the University of Oxford, which is home to excavator Howard Carter’s own records of the excavation, including around 1,800 negatives and a set of photo albums.

The exhibition comes to Cambridge from The Collection in Lincoln, where it debuted in November 2017. It runs from June 14-September 23, 2018. Entry is free.


Cambridge Launches UK’s First Quantum Network

source: www.cam.ac.uk

The UK’s first quantum network was launched today in Cambridge, enabling ‘unhackable’ communications, made secure by the laws of physics, between three sites around the city.

The development of the UK Quantum Network has already led to a much greater understanding of the potential of this technology in secure applications in a range of fields.

Ian White

The ‘metro’ network provides secure quantum communications between the Electrical Engineering Division at West Cambridge, the Department of Engineering in the city centre and Toshiba Research Europe Ltd (TREL) on the Cambridge Science Park.

Quantum links are so secure because they rely on particles of light, or photons, to transmit encryption keys through the optical fibre. Should an attacker attempt to intercept the communication, the key itself changes through the laws of quantum mechanics, rendering the stolen data useless.

Researchers have been testing the ultra-secure network for the last year, providing stable generation of quantum keys at rates between two and three megabits per second. These keys are used to securely encrypt data, both in transit and in storage. Performance has exceeded expectations, with the highest recorded sustained generation of keys in field trials that include encryption of data in multiple 100-gigabit channels.
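
As a back-of-the-envelope illustration of what those key rates mean in practice – and not a description of the network’s actual key-management scheme – the sketch below assumes the quantum keys are consumed as 256-bit symmetric session keys protecting a single 100-gigabit channel.

```python
# Illustrative key budget: how far a 2 Mbit/s quantum key rate stretches if
# it is used to refresh 256-bit symmetric session keys on a 100 Gbit/s link.
# The key and link rates come from the article; everything else is assumed.
key_rate_bps = 2e6        # lower end of the reported 2-3 megabits per second
session_key_bits = 256    # assumed size of one symmetric session key
link_rate_bps = 100e9     # one 100-gigabit data channel

keys_per_second = key_rate_bps / session_key_bits
bits_encrypted_per_key = link_rate_bps / keys_per_second

print(f"~{keys_per_second:,.0f} fresh session keys available per second")
print(f"~{bits_encrypted_per_key / 8e6:.1f} MB of traffic per key at full line rate")
```

In other words, even at full line rate the link could in principle rotate its session key thousands of times per second, which is the sense in which a megabit-class quantum key rate keeps pace with multiple 100-gigabit encrypted channels.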

The Cambridge network is a project of the Quantum Communications Hub, a consortium of eight UK universities, as well as private sector companies and public sector stakeholders. The network was built by Hub partners including the University’s Electrical Engineering Division and TREL, who also supplied the Quantum Key Distribution (QKD) systems. Further input came from ADVA, who supplied the optical transmission equipment, and the University’s Granta Backbone Network, which provided the optical fibre.

The UK Quantum Network is funded by the Engineering and Physical Sciences Research Council (EPSRC) through the UK’s National Quantum Technologies Programme. It brings together concentrations of research excellence and innovation, facilitating greater collaboration between the two in development of applications that exploit the unique formal guarantee of security provided by quantum physics.

“Through this network, we can further improve quantum communications technologies and interoperability, explore and develop applications and services, and also demonstrate these to potential end users and future customers,” said Professor Timothy Spiller of the University of York, and Director of the Quantum Communications Hub.

“The development of the UK Quantum Network has already led to a much greater understanding of the potential of this technology in secure applications in a range of fields, in addition to bringing new insights into the operation of the systems in practice,” said Professor Ian White from Cambridge’s Department of Engineering. “I have no doubt that the network will bring many benefits in the future to researchers, developers and users.”

“Working with the Quantum Communications Hub, Cambridge and ADVA has allowed us to develop an interface for delivering quantum keys to applications,” said Dr Andrew Shields, Assistant Director of Toshiba Research Europe Ltd. “In the coming years, the network will be an important resource for developing new applications and use cases.”

“Development of the network has brought together in the Quantum Communications Hub partnership many world-class researchers and facilities from both UK universities and industry,” said Dr Liam Blackwell, Head of Quantum Technologies at EPSRC. “This is a reflection of EPSRC’s commitment to investing in UK leadership in advanced research and innovation in quantum technologies.”


Test Can Identify Patients In Intensive Care At Greatest Risk of Life-Threatening Infections

source: www.cam.ac.uk

Patients in intensive care units are at significant risk of potentially life-threatening secondary infections, including from antibiotic-resistant bacteria such as MRSA and C. difficile. Now, a new test could identify those at greatest risk – and speed up the development of new therapies to help at-risk patients.

In the long term, this will help us target therapies at those most at risk, but it will be immediately useful in helping identify individuals to take part in clinical trials of new treatments

Andrew Conway Morris

Infections in intensive care units (ICU) tend to be caused by organisms, such as multi-resistant gram-negative bacteria found in the gut, that are resistant to frontline antibiotics. Treating such infections means relying on broad spectrum antibiotics, which run the risk of breeding further drug-resistance, or antibiotics that have toxic side-effects.

Estimates of the proportion of patients in ICU who will develop a secondary infection range from one in three to one in two; around half of these will be pneumonia. However, some people are more susceptible than others to such infections – evidence suggests that the key may lie in malfunction of the immune system.

In a study published in the journal Intensive Care Medicine, a team of researchers working across four sites in Edinburgh, Sunderland and London has identified markers on three immune cells that correlate with an increased risk of secondary infection. The team was led by researchers at the Universities of Cambridge and Edinburgh and biotech company BD Bioscience.

“These markers help us create a ‘risk profile’ for an individual,” explains Dr Andrew Conway Morris from the Department of Medicine at the University of Cambridge. “This tells us who is at greatest risk of developing a secondary infection.

“In the long term, this will help us target therapies at those most at risk, but it will be immediately useful in helping identify individuals to take part in clinical trials of new treatments.”

Clinical trials for interventions to prevent secondary infections have met with mixed success, in part because it has been difficult to identify and recruit those patients who are most susceptible, say the researchers. Using this new test should help fine tune the selection of clinical trial participants and improve the trials’ chances of success.

The markers identified are found on the surface of key immune cells: neutrophils (frontline immune cells that attack invading pathogens), T-cells (part of our adaptive immune system that seek and destroy previously-encountered pathogens), and monocytes (a type of white blood cell).

The researchers tested the correlation of the presence of these markers with susceptibility to a number of bacterial and fungal infections. An individual who tests positive for all three markers would be at two to three times greater risk of secondary infection compared with someone who tests negative for the markers.
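
Purely as an illustration of how a three-marker risk profile of this kind might be encoded in software – the marker names and the mapping from marker count to risk band are assumptions for the sketch, not values from the INFECT study – a minimal version could look like this:

```python
# Hypothetical three-marker risk profile; the thresholds and band labels
# are illustrative, not taken from the study.
from dataclasses import dataclass

@dataclass
class ImmuneMarkerPanel:
    neutrophil_marker_positive: bool   # dysfunction marker on neutrophils
    t_cell_marker_positive: bool       # dysfunction marker on T-cells
    monocyte_marker_positive: bool     # dysfunction marker on monocytes

    def positive_count(self) -> int:
        return sum([self.neutrophil_marker_positive,
                    self.t_cell_marker_positive,
                    self.monocyte_marker_positive])

    def risk_band(self) -> str:
        n = self.positive_count()
        if n == 3:
            # The article reports roughly 2-3x the risk of a marker-negative patient.
            return "highest risk"
        return "intermediate risk" if n > 0 else "baseline risk"

patient = ImmuneMarkerPanel(True, True, True)
print(patient.positive_count(), patient.risk_band())
```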

The markers do not indicate which secondary infection an individual might get, but rather that they are more susceptible in general.

“As intensive care specialists, our priority is to prevent patients developing secondary infections and, if they do, to ensure they get the best treatment,” says Professor Tim Walsh from the University of Edinburgh, senior author on the study.

The Immune Failure in Critical Therapy (INFECT) Study examined data from 138 individuals in ICUs and replicated findings from a pilot study in 2013.

A key part of enabling this study was to standardise how the research could be carried out across multiple sites, say the researchers. They used a technique known as flow cytometry, which involves labelling components of the cells with fluorescent markers and then shining a laser on them so that they give off light at different wavelengths. This has previously been difficult to standardise, but the researchers successfully developed a protocol for use across the sites, ensuring they could recruit patients from all four study sites.

The study was funded by Innovate UK, BD Bioscience and the National Institute of Academic Anaesthesia.

Reference
Conway Morris, A et al. Cell surface signatures of immune dysfunction risk stratify critically ill patients: INFECT Study. Intensive Care Medicine; June 2018; DOI: 10.1007/s00134-018-5247-0


Researcher Profile: Dr Andrew Conway Morris

Dr Andrew Conway Morris is an intensive care specialist at Addenbrooke’s Hospital, part of Cambridge University Hospitals. It was the hospital’s location on the Cambridge Biomedical Campus that attracted him back to the city where he had been born and raised.

“I moved to Cambridge in order to take advantage of the fantastic opportunities to work with some of the world’s leading scientists, as well as develop collaborations with the growing biotech and pharmaceutical cluster centred around Addenbrooke’s Hospital,” he says.

Conway Morris undertook his undergraduate medical education in Glasgow before moving to Edinburgh to train in Anaesthesia and Intensive Care Medicine. His PhD in Edinburgh focused on the dysfunction of immune cells known as neutrophils in critically ill patients, and on the development of new diagnostic tests for secondary pneumonia.

He is now a Wellcome-funded Senior Research Associate in the John Farman Intensive Care Unit at Addenbrooke’s, where he is trying to find new ways to prevent and treat infections in hospitalised and critically-ill patients.

“I carry out my work using a combination of human cell models and animal models of pneumonia and aim to develop new therapies for infection that do not rely on antibiotics,” he says. “I also have a clinical project evaluating a new molecular diagnostic test for pneumonia, which aims to deliver more rapid and accurate tests for infection.”

Outside of work, it is his children that keep him occupied. “I have two boys who occupy most of my free time – both are football-mad – and I help run a local youth football team,” he adds.


Mother’s Attitude Towards Baby During Pregnancy May Have Implications For Child’s Development

source: www.cam.ac.uk

Mothers who ‘connect’ with their baby during pregnancy are more likely to interact in a more positive way with their infant after it is born, according to a study carried out at the University of Cambridge. Interaction is important for helping infants learn and develop.

Although we found a relationship between a mother’s attitude towards her baby during pregnancy and her later interactions, this link was only modest. This suggests it is likely to be a part of the jigsaw, rather than the whole story

Sarah Foley

Researchers at the Centre for Family Research carried out a meta-analysis, reviewing all published studies in the field, in an attempt to demonstrate conclusively whether there was a link between the way parents think about their child during pregnancy and their behaviour towards them postnatally.

The results of their work, which draws data from 14 studies involving 1,862 mothers and fathers, are published in the journal Developmental Review.

Studies included in the meta-analysis examined parents’ thoughts and feelings about their child during pregnancy through interviews and questionnaires. For example, in interviews expectant parents were considered to have a ‘balanced’ representation of their child if they showed positive anticipation of their relationship with the child or showed ‘mind-mindedness’ – a propensity to see their child as an individual, with its own thoughts and feelings. This contrasted with parents who had a ‘distorted’ representation of their child, giving a narrow, idealised description of their child and incomplete or inconsistent descriptions of them.

Once the child had been born, researchers in these studies would observe the interactions between parent and child. One measure they were looking for was ‘sensitivity’ – the ability to notice, interpret and respond in a timely and appropriate manner to children’s signals, for example if the baby was upset.

Combining the results from all 14 studies, the Cambridge team showed a modest association between positive thoughts and feelings about the infant during pregnancy and later interaction with the infant, but only in mothers.
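
To illustrate the mechanics of pooling results across studies (not the paper’s actual data or model), the sketch below shows the basic inverse-variance approach a meta-analysis uses: each study’s effect size is weighted by its precision. The effect sizes and standard errors are invented for illustration.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling, the basic idea
# behind combining effect sizes across studies in a meta-analysis.
# The effect sizes and standard errors below are hypothetical, NOT the
# values reported in the Developmental Review paper.

hypothetical_studies = [
    # (effect size, standard error) for each study
    (0.25, 0.10),
    (0.10, 0.08),
    (0.18, 0.12),
]

weights = [1 / se**2 for _, se in hypothetical_studies]            # precision weights
pooled = sum(w * es for (es, _), w in zip(hypothetical_studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
# A pooled estimate that is positive but small corresponds to the 'modest
# association' described in the text; the published analysis used more
# sophisticated (random-effects) models.
```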

“Studies have shown that parent-child interaction is crucial for a child’s development and learning, so we wanted to understand if there were prenatal signs that might predict a parent’s behaviour,” says Dr Sarah Foley, the study’s first author, who carried out the research as part of her PhD.

“Although we found a relationship between a mother’s attitude towards her baby during pregnancy and her later interactions, this link was only modest. This suggests it is likely to be a part of the jigsaw, rather than the whole story.”

Research has also shown that increased awareness of the baby during pregnancy is associated with healthy behaviours during pregnancy, such as giving up smoking or attending antenatal appointments.

The findings suggest that interventions to encourage expectant mothers to ‘connect’ with their baby could help foster more positive interactions after birth. While more work is needed to determine what form such interventions might take, options might include the midwife encouraging the mother to think about what her baby may be like, or asking the mother to imagine activities she and her baby might like to do together.

“This is a relatively new area of research, but could have important implications for children’s development,” adds Dr Foley. “We need more research in this area, but hope it will inform new interventions that could help new mothers engage more with their children.”

Dr Foley says there may be a number of factors that contribute to low levels of attachment with the baby during pregnancy. These include: previous experience of miscarriage, depression or anxiety, a mother’s relationship with her own parents, or cultures in which focusing on the baby is considered inappropriate. However, she says, the paucity of evidence means it is difficult to determine which of these factors would impact on prenatal thoughts about the infant, which might in turn influence the quality of later interaction with the infant.

The study was funded by the Economic and Social Research Council.

Reference
Foley, S and Hughes, C. Great expectations? Do mothers’ and fathers’ prenatal thoughts and feelings about the infant predict parent-infant interaction quality? A meta-analytic review. Developmental Review; June 2018; DOI: 10.1016/j.dr.2018.03.007


Researcher Profile: Dr Sarah Foley

Sarah and her cousin’s baby, Sophia Murphy

“Working with children throws up lots of unexpected and fun moments,” says Dr Sarah Foley. “One day you’re being splashed while standing on a toilet-seat filming bath-times, the next you’re catching YouTube-worthy vomiting action shots and being used as a climbing frame by one child to ensure you can film another!”

Sarah has just completed an ESRC-funded PhD at the Centre for Family Research, working with Professor Claire Hughes. She has spent several years at Cambridge now, having completed her undergraduate degree in Social and Political Sciences at St Catharine’s College. The Centre, she says, “is an incredibly stimulating academic environment with immense support and lively discussions over cake on a Friday morning!”

Her doctoral research looked at expectant mothers’ and fathers’ thoughts and feelings in the last trimester of pregnancy as predictors of their adjustment to parenthood and subsequent parenting over the first two years of life. “Despite an increase in fathers’ involvement in childcare, the majority of research remains focused on mothers,” she says.

Her current research involves, in part, looking at parents’ expectations of their roles and division of childcare, and the consequences when these expectations are not met. This is timely in light of recent changes to parental leave in the UK and societal shifts in notions of the involved father, she says.

Sarah’s research is part of the ESRC-funded New Fathers and Mothers Study, a longitudinal study of 200 first-time parents from Cambridge and 200 from the Netherlands and New York.

“The children in the study are turning three this year and we’re busy seeing how they are getting on at nursery,” she explains. “This typically involves me getting down on the floor and testing the children’s social understanding and thinking skills through a variety of fun tasks.”

She hopes that her research will lead to changes in antenatal education and early parent support that promote discussion of parents’ thoughts and feelings about parenthood and their future infant. In November 2017, as part of the ESRC Festival of Social Science, she ran a free ante-natal class for new parents that discussed the realities of parenthood, the importance of self-care and simple parenting tips rather than simply focusing on birth plans.

“The journey through parenthood is filled with joy, but also elements of confusion, and sometimes pain. Crucially, parents should not feel alone and I hope that through greater dissemination of my research findings, through classes or perhaps a book or an app, we can support new parents and encourage more ‘honest conversations’ about parenthood.”



Genome-Editing Tool Could Increase Cancer Risk In Cells, Say Researchers

Genome-editing tool could increase cancer risk in cells, say researchers

source: www.cam.ac.uk

More research needs to be done to understand whether CRISPR-Cas9 – molecular ‘scissors’ that make gene editing a possibility – may inadvertently increase cancer risk in cells, according to researchers from the University of Cambridge and the Karolinska Institutet.

We don’t want to sound alarmist, and are not saying that CRISPR-Cas9 is bad or dangerous. This is clearly going to be a major tool for use in medicine, so it’s important to pay attention to potential safety concerns

Jussi Taipale

The team, led by Professor Jussi Taipale, now at the Department of Biochemistry, Cambridge, found that CRISPR-Cas9 triggers a mechanism designed to protect cells from DNA damage, making gene editing more difficult. Cells that lack this mechanism are easier to edit than normal cells. As a result, genome-edited cell populations can become enriched for cells that are missing an important mechanism protecting them against DNA damage.

Discovered in bacteria, the CRISPR-Cas9 system is part of the armoury that bacteria use to protect themselves from the harmful effects of viruses. Today it is being co-opted by scientists worldwide as a way of removing and replacing gene defects.

One part of the CRISPR-Cas9 system acts like a GPS locator that can be programmed to go to an exact place in the genome. The other part – the ‘molecular scissors’ – cuts both strands of the faulty DNA so that it can be replaced with DNA that does not have the defect.

However, in a study published today in the journal Nature Medicine, researchers found unexpected consequences from using CRISPR-Cas9.

“We managed to edit cancer cells easily, but when we tried to edit normal, healthy cells, very little happened,” explains Dr Emma Haapaniemi from the Karolinska Institutet, Sweden, the study’s first author.

“When we looked at this further, we found that cutting the genome with CRISPR-Cas9 induced the activation of a protein known as p53, which acts like a cell’s alarm system, signalling that DNA is damaged, and opens the cellular ‘first aid kit’ that repairs damage to the DNA. The triggering of this system makes editing much more difficult.”

In fact, this process went further, leading to strong selection for cells that lacked the p53 pathway. The absence of p53 makes cells more likely to become tumorous, as DNA damage can no longer be corrected; around half of all tumour cells are missing this pathway.
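
To see how this selection effect works in principle, the short calculation below uses hypothetical editing efficiencies – not figures from the study – to show how a pool of successfully edited cells can end up enriched for p53-deficient cells.

```python
# Illustrative arithmetic only: hypothetical numbers showing how selection
# during editing can enrich for p53-deficient cells. These are NOT the
# efficiencies measured in the Nature Medicine study.

p53_null_fraction = 0.01    # assume 1% of starting cells lack functional p53
edit_rate_p53_null = 0.80   # assumed editing success in p53-deficient cells
edit_rate_normal = 0.08     # assumed editing success in normal cells

edited_null = p53_null_fraction * edit_rate_p53_null
edited_normal = (1 - p53_null_fraction) * edit_rate_normal

fraction_null_after = edited_null / (edited_null + edited_normal)
print(f"p53-deficient share among edited cells: {fraction_null_after:.1%}")
# ~9% versus 1% at the start: roughly a nine-fold enrichment under these
# assumed rates, which is the kind of shift the researchers warn about.
```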

“CRISPR-Cas9 is a very promising biological tool, both for research purposes and for potential life-saving medical treatments, and so has understandably led to great excitement within the scientific community,” says Professor Taipale, who led the work while at the Karolinska Institutet.

“We don’t want to sound alarmist, and are not saying that CRISPR-Cas9 is bad or dangerous. This is clearly going to be a major tool for use in medicine, so it’s important to pay attention to potential safety concerns. Like with any medical treatment, there are always side effects or potential harm and this should be balanced against the benefits of the treatment.”

The team found that by decreasing activity of p53 in a cell, they could more efficiently edit healthy cells. While this might also decrease the risk of selecting for p53-deficient cells, it could leave cells temporarily vulnerable to mutations that cause cancer.

Professor Taipale says that once they better understand how the DNA response is triggered by the cut, it may be possible to prevent this mechanism kicking in, reducing the selective advantage of cells deficient in p53.

“Although we don’t yet understand the mechanisms behind the activation of p53, we believe that researchers need to be aware of the potential risks when developing new treatments,” he adds. “This is why we decided to publish our findings as soon as we discovered that cells edited with CRISPR-Cas9 can go on to become cancerous.”

The research was supported by the Knut and Alice Wallenberg Foundation, Cancerfonden, Barncancerfonden and the Academy of Finland.

A second team, at the Novartis Research Institute in Boston, MA, has independently obtained similar results, which are also published today in the same journal.

Reference
Haapaniemi, E et al. CRISPR/Cas9-genome editing induces a p53-mediated DNA damage response. Nature Medicine; 11 June 2018; DOI: 10.1038/s41591-018-0049-z



Cost and Scale of Field Trials For Bovine TB Vaccine May Make Them Unfeasible

Cost and scale of field trials for bovine TB vaccine may make them unfeasible

source: www.cam.ac.uk

Field trials for a vaccine to protect cattle against bovine tuberculosis (bovine TB) would need to involve 500 herds – potentially as many as 75,000-100,000 cattle – to demonstrate cost effectiveness for farmers, concludes a study published today in the journal eLife.

Our results highlight the enormous scale of trials that would be necessary to evaluate BCG alongside continuing testing in the field. Such trials would be hugely expensive, and it isn’t even clear whether enough farms could be recruited

Andrew Conlan

Instead, the researchers suggest that the scale and cost of estimating the effect of a vaccine on transmission could be dramatically reduced by using smaller, less expensive experiments in controlled settings – using as few as 200 animals.

Bovine TB is an infectious disease that affects livestock and wildlife in many parts of the world. In the UK, it is largely spread between infected cattle; badgers are also involved, transmitting to and receiving infection from cattle. Culls to keep badger populations small and reduce the likelihood of infecting cattle have proven controversial both with the public and among scientists.

The UK has a policy of ‘test and slaughter’: cattle are screened with the tuberculin test and infected animals are slaughtered. A vaccine (BCG) exists, but it can cause some vaccinated cattle to falsely test positive, and for this reason its use in cattle is currently illegal in Europe. Researchers are trying to develop a so-called ‘DIVA test’ (‘Differentiates Infected from Vaccinated Animals’) that minimises the number of false positives, but no such test is yet licensed for use in the UK.

The European Union has said it would consider relaxing its laws against bovine TB vaccination if the UK government were able to prove that a vaccine is effective on farms. Any field trials would need to follow requirements set by the European Food Safety Authority (EFSA).

In research published today, a team of researchers led by the University of Cambridge has shown using mathematical modelling that satisfying two key EFSA requirements would have profound implications for the likely benefits and necessary scale of any field trials.

The first of these requirements is that vaccination must be used only as a supplement, rather than replacement, to the existing test-and-slaughter policy. But use of vaccination as a supplement means that a successful vaccine which reduces the overall burden and transmission of disease may nonetheless provide only limited benefit for farmers – false positives could still result in animals being slaughtered and restrictions being placed on a farm.

The second of the EFSA requirements is that field trials must demonstrate the impact of vaccination on transmission rather than just protecting individual animals.

The team’s models suggest that a three-year trial with 100 herds should be sufficient to demonstrate that vaccination protects individual cattle. Such a trial would be viable within the UK. However, demonstrating the impact of vaccination on transmission would be almost impossible, because the spread of bovine TB in the UK is slow and unpredictable.

If BCG were to be licensed for use in cattle in the UK, vaccination would be at the discretion of individual farmers. Farmers would have to bear the costs of vaccination and testing, as well as the cost of any period spent under restrictions if animals test positive. This means that they would be less interested in the benefit to individual cattle and more interested in the benefits at the herd level. Because of herd immunity, even a vaccine that is not 100% effective in every individual animal can have an overall protective effect on the herd.

Trying to demonstrate an economic benefit for farmers would prove challenging. Using their models, the researchers show that herd-level effectiveness would be exceptionally difficult to estimate from partially-vaccinated herds, requiring a sample size in excess of 2,000 herds. The number of herds required could be reduced by a ‘three-arm design’ that includes fully-vaccinated, partially-vaccinated and unvaccinated control herds; however, such a design would still require around 500 fully-vaccinated herds plus controls – presenting potential logistical and financial barriers – and would still carry a high risk of failure.

Instead, the researchers propose a natural transmission experiment involving housing a mixture of vaccinated and unvaccinated cattle with a number of infected cattle. Such an experiment, they argue, could provide robust evaluation of both the efficacy and mode of action of vaccination using as few as 200 animals. This would help screen any prospective vaccines before larger, more expensive and otherwise riskier trials in the field.
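
The logic of such a transmission experiment can be illustrated with a simple attack-rate comparison, sketched below with hypothetical group sizes and infection counts rather than values from the eLife paper: vaccine efficacy is estimated as one minus the relative risk of infection in vaccinated versus unvaccinated animals.

```python
# Minimal sketch of how efficacy might be estimated from a controlled
# transmission experiment: vaccinated and unvaccinated cattle are housed
# with infected animals and their attack rates are compared.
# Group sizes and infection counts are hypothetical, not from the study.

unvaccinated_total, unvaccinated_infected = 100, 40
vaccinated_total, vaccinated_infected = 100, 12

attack_unvacc = unvaccinated_infected / unvaccinated_total
attack_vacc = vaccinated_infected / vaccinated_total

efficacy = 1 - attack_vacc / attack_unvacc   # 1 minus the relative risk
print(f"Estimated vaccine efficacy: {efficacy:.0%}")
# With ~200 animals in total the confidence intervals would still be wide,
# but far fewer animals are needed than for a 500-herd field trial.
```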

“We already know that the BCG vaccine has the potential to protect cattle from bovine TB infection,” says Dr Andrew Conlan from the Department of Veterinary Medicine at the University of Cambridge, the study’s first author. “Our results highlight the enormous scale of trials that would be necessary to evaluate BCG alongside continuing testing in the field.

“Such trials would be hugely expensive, and it isn’t even clear whether enough farms could be recruited. This scale could be dramatically reduced by using smaller scale natural transmission studies.”

Based on current knowledge of the likely efficacy of BCG, the researchers say their models do not predict a substantial benefit of vaccination at the herd level when used as a supplement to ongoing test-and-slaughter. Ruling out the use of vaccination as a replacement, rather than a supplement, to test-and-slaughter will inevitably limit the effectiveness and perceived benefits for farmers.

“If we could consider replacing test-and-slaughter with vaccination, then the economics becomes much more attractive, particularly for lower-income countries,” says Professor James Wood, Head of Cambridge’s Department of Veterinary Medicine. “Then, we would no longer need to carry out expensive testing, but could instead rely on passive surveillance through the slaughterhouses.”

The study was funded by the UK Department for Environment, Food and Rural Affairs (Defra) and the Alborada Trust.

Reference
Conlan, AJK, et al. The intractable challenge of evaluating cattle vaccination as a control for bovine Tuberculosis. eLife; 5 June 2018; DOI: 10.7554/eLife.27694.001


Researcher Profile: Dr Andrew Conlan

It may seem surprising to find a physicist in the Department of Veterinary Medicine, but this was how Dr Andrew Conlan began his career at the University of Edinburgh. He is now an applied mathematician and statistician in Cambridge’s Disease Dynamics Unit, engaged in work which he describes as “intensively multi-disciplinary”, requiring him to work across multiple environments with medics, veterinarians, farmers, policymakers – and even school children.

Andrew’s research sets out to use mathematics to predict the spread of infectious disease within populations and to provide evidence to inform policy on the control of infectious diseases in humans and animals. His work centres on controlling the spread of diseases such as bovine TB and human diseases including measles, whooping cough, scarlet fever, norovirus and meningitis.

“Policy decisions on the control of infectious diseases often have to be made quickly based on limited information and data,” he says. “I believe that government policy on infectious disease control should be based on evidence and good science.”

Although much of his research is office-based, involving analysing data, writing computational models and occasionally pen-and-paper work, he also does a lot of work with schools, working with pupils on research projects and delivering lessons on disease transmission.

“I’ve been involved in running citizen science projects for many years now, which have led to several peer-reviewed papers on how social contact networks in schools could be useful for predicting the spread of infectious disease,” he explains (while, ironically, nursing a cold picked up from his son, who had in turn picked it up at nursery). “I dreamed it up over a tea break with my colleague Ken Eames. At the time very little work had been done on contact patterns in school-age children, as they are a potentially vulnerable population that is difficult to access. We thought that getting them to do the research themselves and take ownership would be a way to address it – and it worked!”



Scientists Create ‘Genetic Atlas’ of Proteins In Human Blood

Scientists create ‘genetic atlas’ of proteins in human blood

source: www.cam.ac.uk

An international team of researchers led by scientists at the University of Cambridge and MSD has created the first detailed genetic map of human proteins, the key building blocks of biology. These discoveries promise to enhance our understanding of a wide range of diseases and aid development of new drugs.

Compared to genes, proteins have been relatively understudied in human blood, even though they are the ‘effectors’ of human biology, are disrupted in many diseases, and are the targets of most medicines

Adam Butterworth

The study, published today in the journal Nature, characterised the genetic underpinnings of the human plasma ‘proteome’, identifying nearly 2,000 genetic associations with almost 1,500 proteins. Previously, only a small fraction of this knowledge existed, mainly because researchers could measure only a few blood proteins simultaneously in a robust manner.

The researchers used a new technology called SOMAscan, developed by the company SomaLogic, to measure 3,600 proteins in the blood of 3,300 people. They then analysed the DNA of these individuals to see which regions of their genomes were associated with protein levels, yielding a four-fold increase on previous knowledge.
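
The core statistical step – testing whether a genetic variant is associated with the measured level of a protein – can be illustrated with a toy regression like the one below. The data are simulated and the effect size is an assumption; the study itself ran genome-wide scans across roughly 3,600 proteins with appropriate covariate adjustment.

```python
# Toy illustration of testing one genetic variant against one protein level
# (a protein quantitative trait locus, or pQTL, test). Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 3300                                        # roughly the study's sample size
genotype = rng.binomial(2, 0.3, size=n)         # 0/1/2 copies of the minor allele
protein = 0.2 * genotype + rng.normal(size=n)   # assumed true per-allele effect of 0.2

slope, intercept, r, p_value, se = stats.linregress(genotype, protein)
print(f"Per-allele effect: {slope:.2f} (p = {p_value:.1e})")
# A genome-wide scan repeats this kind of test across millions of variants
# and keeps associations passing a stringent significance threshold.
```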

“Compared to genes, proteins have been relatively understudied in human blood, even though they are the ‘effectors’ of human biology, are disrupted in many diseases, and are the targets of most medicines,” says Dr Adam Butterworth from the Department of Public Health and Primary Care at the University of Cambridge, a senior author of the study. “Novel technologies are now allowing us to start addressing this gap in our knowledge.”

One of the uses for this genetic map is to identify particular biological pathways that cause disease, exemplified in the paper by pinpointing specific pathways that lead to Crohn’s disease and eczema.

“Thanks to the genomics revolution over the past decade, we’ve been good at finding statistical associations between the genome and disease, but the difficulty has been then identifying the disease-causing genes and pathways,” says Dr James Peters, one of the study’s principal authors. “Now, by combining our database with what we know about associations between genetic variants and disease, we are able to say a lot more about the biology of disease.”

In some cases, the researchers identified multiple genetic variants influencing levels of a protein. By combining these variants into a ‘score’ for that protein, they were able to identify new associations between proteins and disease. For example, MMP12, a protein previously associated with lung disease, was also found to be related to heart disease. However, whereas higher levels of MMP12 are associated with lower risk of lung disease, the opposite is true in heart disease and stroke. This could be important, as drugs developed to target this protein for treating lung disease patients could inadvertently increase the risk of heart disease.
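
The idea of a genetic ‘score’ for a protein can be sketched as a weighted sum of the protein-raising alleles a person carries; the variant names and weights below are hypothetical, chosen only to show the arithmetic.

```python
# Minimal sketch of a genetic score for a protein: a weighted sum of the
# number of effect alleles a person carries at each associated variant.
# Variant identifiers and weights here are hypothetical.

variant_weights = {"rs_hypothetical_1": 0.30,   # per-allele effect on protein level
                   "rs_hypothetical_2": 0.15,
                   "rs_hypothetical_3": -0.10}

person_genotypes = {"rs_hypothetical_1": 2,     # copies of the effect allele carried
                    "rs_hypothetical_2": 1,
                    "rs_hypothetical_3": 0}

score = sum(variant_weights[v] * person_genotypes[v] for v in variant_weights)
print(f"Genetic score for the protein: {score:.2f}")
# Comparing such scores with disease outcomes across many people is one way
# to ask whether genetically higher protein levels track with disease risk.
```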

MSD scientists were instrumental in highlighting how the proteomic genetic data could be used for drug discovery. For example, in addition to highlighting potential side-effects, findings of the study can further aid drug development through novel insights on protein targets of new and existing drugs. By linking drugs, proteins, genetic variation and diseases, the team has suggested existing drugs that could potentially also be used to treat a different disease, and increased confidence that certain drugs currently in development might be successful in clinical trials.

The researchers are making all of their results openly available for the global community to use.

“Our database is really just a starting point,” says first author Benjamin Sun, also from the Department of Public Health and Primary Care. “We’ve given some examples in this study of how it might be used, but now it’s over to the research community to begin using it and finding new applications.”

Caroline Fox MD, Vice President and Head of Genetics and Pharmacogenomics at MSD, adds: “We are so pleased to participate in this collaboration, as it is a great example of how a public private partnership can be leveraged for research use in the broader scientific community.”

The research was funded by MSD*, National Institute for Health Research, NHS Blood and Transplant, British Heart Foundation, Medical Research Council, UK Research and Innovation, and SomaLogic.

Professor Metin Avkiran, Associate Medical Director at the British Heart Foundation, said: “Although our DNA provides our individual blueprint, it is the variations in the structure, function and amount of the proteins encoded by our genes which determine our susceptibility to disease and our response to medicines. This study provides exciting new insight into how proteins in the blood are controlled by our genetic make-up and opens up opportunities for developing new treatments for heart and circulatory disease.”

* MSD (trademark of Merck & Co., Inc., Kenilworth, NJ USA)

Reference
Sun, BB et al. Genomic atlas of the human plasma proteome. Nature; 7 June 2018; DOI: 10.1038/s41586-018-0175-2


Researcher Profile: Benjamin Sun

“My work involves analysing big ‘omic’ data,” says Benjamin Sun, a clinical medical student on the MB-PhD programme at Cambridge. By this, he means data from genomic and proteomic studies, for example – terabytes of ‘big data’ that require the use of supercomputer clusters to analyse.

“My aim is to understand how variation in the human genome affects protein levels in blood, which I hope will allow us to better understand processes behind diseases and help inform drug targeting.”

Benjamin did pre-clinical training at Cambridge before intercalating – taking time out of his medical training to study a PhD, funded by an MRC-Sackler Scholarship, at the Department of Public Health and Primary Care.

“Having completed my PhD, I am currently spending the final two years of my programme at the Clinical School to complete my medical degree. My aim is to become an academic clinician like many of the inspiring figures here at Cambridge. Balancing clinical work with research can sometimes be tough, but it is definitely highly rewarding.”



‘Carbon Bubble’ Coming That Could Wipe Trillions From The Global Economy – Study

‘Carbon bubble’ coming that could wipe trillions from the global economy – study

source: www.cam.ac.uk

Macroeconomic simulations show rates of technological change in energy efficiency and renewable power are likely to cause a sudden drop in demand for fossil fuels, potentially sparking a global financial crisis. Experts call for a “carefully managed” shift to low-carbon investments and policies to deflate this “carbon bubble”.

Individual nations cannot avoid the situation by ignoring the Paris Agreement or burying their heads in coal and tar sands

Jorge Viñuales

Fossil fuel stocks have long been a safe financial bet. With price rises projected until 2040* and governments prevaricating or rowing back on the Paris Agreement, investor confidence is set to remain high.

However, new research suggests that the momentum behind technological change in the global power and transportation sectors will lead to a dramatic decline in demand for fossil fuels in the near future.

The study indicates that this will now happen regardless of apparent market certainty or the adoption of climate policies – or lack thereof – by major nations.

Detailed simulations produced by an international team of economists and policy experts show this fall in demand has the potential to leave vast reserves of fossil fuels as “stranded assets”: abruptly shifting from high to low value sometime before 2035.

Such a sharp slump in fossil fuel prices could cause a huge “carbon bubble” built on long-term investments to burst. According to the study, the equivalent of between one and four trillion US dollars could be wiped off the global economy in fossil fuel assets alone. By comparison, a loss of US$0.25 trillion triggered the crash of 2008.

Publishing their findings today in the journal Nature Climate Change, researchers from Cambridge University (UK), Radboud University (NL), the Open University (UK), Macau University, and Cambridge Econometrics, argue that there will be clear economic winners and losers as a consequence.

Japan, China and many EU nations currently rely on high-cost fossil fuel imports to meet energy needs. They could see national expenditure fall and – with the right investment in low-carbon technologies – a boost to Gross Domestic Product as well as increased employment in sustainable industries.

However, major carbon exporters with relatively high production costs, such as Canada, the United States and Russia, would see domestic fossil fuel industries collapse. Researchers warn that losses will only be exacerbated if incumbent governments continue to neglect renewable energy in favour of carbon-intensive economies.

The study repeatedly ran simulations to gauge the outcomes of numerous combinations of global economic and environmental change. It is the first time that the evolution of low-carbon technologies has been mapped from historical data and incorporated into ‘integrated assessment modeling’.
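
The press release does not give the model equations, but technology uptake of the kind fitted to historical data is often approximated by an S-shaped (logistic) curve. The sketch below shows that general form with made-up parameters; it is not the study’s calibration.

```python
# Illustrative S-shaped (logistic) diffusion curve of the kind often used to
# describe technology uptake over time. Parameters are invented and are not
# the values fitted in the Nature Climate Change study.
import math

def logistic_share(year, midpoint=2030.0, steepness=0.25, ceiling=0.8):
    """Fraction of the market captured by a low-carbon technology (assumed curve)."""
    return ceiling / (1 + math.exp(-steepness * (year - midpoint)))

for year in (2020, 2025, 2030, 2035, 2040):
    print(year, f"{logistic_share(year):.0%}")
# Once uptake reaches the steep part of the curve, fossil fuel demand can
# fall quickly, which is the mechanism behind the projected asset stranding.
```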

“Until now, observers mostly paid attention to the likely effectiveness of climate policies, but not to the ongoing and effectively irreversible technological transition,” said Dr Jean-François Mercure, study lead author from Cambridge University’s Centre for Environment, Energy and Natural Resource Governance (C-EENRG) and Radboud University.

Prof Jorge Viñuales, study co-author from Cambridge University and founder of C-EENRG, said: “Our analysis suggests that, contrary to investor expectations, the stranding of fossil fuels assets may happen even without new climate policies. This suggests a carbon bubble is forming and it is likely to burst.”

“Individual nations cannot avoid the situation by ignoring the Paris Agreement or burying their heads in coal and tar sands,” he said. “For too long, global climate policy has been seen as a prisoner’s dilemma game, where some nations can do nothing and get a ‘free ride’ on the efforts of others. Our results show this is no longer the case.”

However, one of the most alarming economic possibilities suggested by the study comes with a sudden push for climate policies – a ‘two-degree target’ scenario – combined with declines in fossil fuel demand but continued levels of production. This could see an initial US$4 trillion of fossil fuel assets vanish off the balance sheets.

“If we are to defuse this time-bomb in the global economy, we need to move promptly but cautiously,” said Hector Pollitt, study co-author from Cambridge Econometrics and C-EENRG. “The carbon bubble must be deflated before it becomes too big, but progress must also be carefully managed.”

One of the factors that may contribute to the tumult created by fossil fuel asset stranding is what’s known as a “sell-out” by OPEC (Organisation of the Petroleum Exporting Countries) nations in the Middle East.

“If OPEC nations maintain production levels as prices drop, they will crowd out the market,” said Pollitt. “OPEC nations will be the only ones able to produce fossil fuels at the low costs required, and exporters such as the US and Canada will be unable to compete.”

Viñuales observes that China is poised to gain most from fossil fuel stranding. “China is already a world leader in renewable energy technologies, and needs to deploy them domestically to tackle dangerous levels of pollution. Additionally, stranding would take a higher toll on some of its main geopolitical competitors. China has a strong incentive to push for climate policies.”

The study authors suggest that economic damage from adherence to fossil fuels may lead to political upheaval of the kind we are perhaps already seeing. “Mass unemployment from carbon-based industries could feed public disenchantment and populist politics,” Viñuales said.

The authors argue that initial actions should include the diversifying of energy supplies as well as investment portfolios. “Divestment from fossil fuels is both a prudential and necessary thing to do,” said Mercure. “Investment and pension funds need to evaluate how much of their money is in fossil fuel assets and reassess the risk they are taking.”

“A useful step would be to expand financial disclosure requirements, making companies and financial managers reveal assets at risk from fossil fuel decline, so that it becomes reflected in asset prices,” Mercure added.

*International Energy Agency. World Energy Outlook (OECD/IEA, 2016).



First Peoples: Two Ancient Ancestries ‘Reconverged’ With Settling of South America

First Peoples: two ancient ancestries ‘reconverged’ with settling of South America

source: www.cam.ac.uk

New research using ancient DNA finds that a population split which occurred after people first arrived in North America was maintained for millennia, with the two groups mixing again only before or during the expansion of humans into the southern continent.

The lab-based science should only be a part of the research. We need to work with Indigenous communities in a more holistic way

Dr Christiana Scheib

Recent research has suggested that the first people to enter the Americas split into two ancestral branches, the northern and southern, and that the “southern branch” gave rise to all populations in Central and South America.

Now, a study shows for the first time that, deep in their genetic history, the majority – if not all – of the Indigenous peoples of the southern continent retain at least some DNA from the “northern branch”: the direct ancestors of many Native communities living today in the Canadian east.

The latest findings, published today in the journal Science, reveal that, while these two populations may have remained separate for millennia – long enough for distinct genetic ancestries to emerge – they came back together before or during the expansion of people into South America.

The new analyses of 91 ancient genomes from sites in California and Canada also provide further evidence that the first peoples separated into two populations between 18,000 and 15,000 years ago. This would have happened during or after the migration from Siberia across the now-submerged land bridge and along the coast.

Ancient genomes from sites in Southwest Ontario show that, after the split, Indigenous ancestors representing the northern branch migrated eastwards to the Great Lakes region. This population may have followed the retreating glacial edges as the Ice Age began to thaw, say researchers.

The study also adds to evidence that the prehistoric people associated with Clovis culture – named for 13,000-year-old stone tools found near Clovis, New Mexico, and once believed to be ancestral to all Native Americans – originated from ancient peoples representing the southern branch.

This southern population likely continued down the Pacific coast, inhabiting islands along the way. Ancient DNA from the Californian Channel Islands shows that initial populations were closely related to the Clovis people.

Yet contemporary Central and South American genomes reveal a “reconvergence” of these two branches deep in time. The scientific team, led by the universities of Cambridge, UK, and Illinois Urbana-Champaign, US, say there must have been one or a number of “admixture” events between the two populations around 13,000 years ago.

They say that the blending of lineages occurred either in North America prior to expansion south, or as people migrated ever deeper into the southern continent, most likely following the western coast down.

“It was previously thought that South Americans, and indeed most Native Americans, derived from one ancestry related to the Clovis people,” said Dr Toomas Kivisild, co-senior author of the study from Cambridge’s Department of Archaeology.

“We now find that all native populations in North, Central and South America also draw genetic ancestry from a northern branch most closely related to Indigenous peoples of eastern Canada. This cannot be explained by activity in the last few thousand years. It is something altogether more ancient,” he said.

Dr Ripan S. Malhi, co-senior author from Illinois Urbana-Champaign, said: “Working in partnership with Indigenous communities, we can now learn more about the intricacies of ancestral histories in the Americas through advances in paleogenomic technologies. We are starting to see that previous models of ancient populations were unrealistically simple.”

Present-day Central and South American populations analysed in the study were found to have a genetic contribution from the northern branch ranging from 42% to as high as 71% of the genome.

Surprisingly, the highest proportion of northern branch ancestry in South America was found far to the south, in southern Chile – in the same area as the Monte Verde archaeological site, one of the oldest known human settlements in the Americas (over 14,500 years old).

“It’s certainly an intriguing finding, although currently circumstantial – we don’t have ancient DNA to corroborate how early this northern ancestral branch arrived,” said Dr Christiana Scheib, first author of the study, who conducted the work while at the University of Cambridge.

“It could be evidence for a vanguard population from the northern branch deep in the southern continent that became isolated for a long time – preserving a genetic continuity.

“Prior to 13,000 years ago, expansion into the tip of South America would have been difficult due to massive ice sheets blocking the way. However, the area in Chile where the Monte Verde site is located was not covered in ice at this time,” she said.

“In populations living today across both continents we see much higher genetic proportions of the southern, Clovis-related branch. Perhaps they had some technology or cultural practice that allowed for faster expansion. This may have pushed the northern branch to the edges of the landmass, as well as leading to admixture encounters.”

While consultation efforts varied in this study from community-based partnerships to more limited engagement, the researchers argue that more must be done to include Indigenous communities in ancient DNA studies in the Americas.

The researchers say that genomic analysis of ancient people can have adverse consequences for linked Indigenous communities. Engagement work can help avoid unintended harm to the community and ensure that Indigenous peoples have a voice in research.

“The lab-based science should only be a part of the research. We need to work with Indigenous communities in a more holistic way,” added Scheib, who has recently joined the University of Tartu’s Institute of Genomics, where Kivisild also holds an affiliation.

“From the analysis of a single tooth, paleogenomics research can now offer information on ancient diet and disease as well as migration. By developing partnerships that incorporate ideas from Native communities, we can potentially generate results that are of direct interest and use to the Indigenous peoples involved,” she said.



Graphene Paves The Way To Faster High-Speed Communications

Graphene paves the way to faster high-speed communications

source: www.cam.ac.uk

Researchers have created a technology that could lead to new devices for faster, more reliable ultra-broad bandwidth transfers, and demonstrated how electrical fields boost the non-linear optical effects of graphene.

Graphene never ceases to surprise us when it comes to optics and photonics.

Andrea Ferrari

Graphene, among other materials, can capture particles of light called photons, combine them, and produce a more powerful optical beam. This is due to a physical phenomenon called optical harmonic generation, which is characteristic of nonlinear materials. Nonlinear optical effects can be exploited in a variety of applications, including laser technology, material processing and telecommunications.
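
As a worked illustration of the physics (not a detail taken from the paper): in third harmonic generation, three photons at frequency ω combine into a single photon at 3ω, i.e. at one third of the original wavelength. The example below assumes a 1550 nm telecom-band input, chosen purely for illustration.

```python
# Worked illustration of third harmonic generation: three photons at
# frequency w combine into one photon at 3w, i.e. one third the wavelength.
# The 1550 nm input is a common telecom wavelength chosen for illustration;
# it is not necessarily the wavelength used in the experiments.

input_wavelength_nm = 1550.0
third_harmonic_nm = input_wavelength_nm / 3
print(f"Third harmonic of {input_wavelength_nm:.0f} nm light: {third_harmonic_nm:.0f} nm")
# 1550 nm (infrared) -> ~517 nm (green): each output photon carries three
# times the energy of an input photon.
```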

Although all materials should demonstrate this behaviour, the efficiency of the process is typically small and cannot be controlled externally. Now, researchers from the University of Cambridge, Politecnico di Milano and the Istituto Italiano di Tecnologia (IIT) have demonstrated not only that graphene shows a good nonlinear optical response, but also that the strength of this effect can be controlled using an electric field. Their results are reported in the journal Nature Nanotechnology. All three institutions are partners in the Graphene Flagship, a pan-European project dedicated to bringing graphene and related materials to commercial applications.

Graphene – a form of carbon just a single atom thick – has a unique combination of properties that make it useful for applications from flexible electronics and fast data communication, to enhanced structural materials and water treatments. It is highly electrically and thermally conductive, as well as strong and flexible.

Researchers envision the creation of new graphene optical switches, which could also harness new optical frequencies to transmit data along optical cables, increasing the amount of data that can be transmitted. Currently, most commercial devices using nonlinear optics are only used in spectroscopy. Graphene could pave the way towards the fabrication of new devices for ultra-broad bandwidth applications.

“Our work shows that the third harmonic generation efficiency in graphene can be increased by over 10 times by tuning an applied electric field,” said lead author Giancarlo Soavi, of the Cambridge Graphene Centre.

“The authors found again something unique about graphene: tuneability of third harmonic generation over a broad wavelength range,” said Professor Frank Koppens from ICFO (The Institute of Photonic Sciences) in Barcelona and leader of one of the Graphene Flagship work packages. “As more and more applications are all-optical, this work paves the way to a multitude of technologies.”

Professor Andrea C. Ferrari, Science and Technology Officer of the Graphene Flagship, and Chair of its Management Panel, said: “Graphene never ceases to surprise us when it comes to optics and photonics. The Graphene Flagship has put significant investment to study and exploit the optical properties of graphene. This collaborative work could lead to optical devices working on a range of frequencies broader than ever before, thus enabling a larger volume of information to be processed or transmitted.”

Reference: 
Giancarlo Soavi et al. ‘Broadband, electrically tuneable, third harmonic generation in graphene.’ Nature Nanotechnology (2018). DOI: 10.1038/s41565-018-0145-8

Adapted from a Cambridge Graphene Centre press release



Multiple Metals – and Possible Signs of Water – Found In Unique Exoplanet

Multiple metals – and possible signs of water – found in unique exoplanet

An international team of researchers have identified ‘fingerprints’ of multiple metals in one of the least dense exoplanets ever found.

The detection of a trace element such as lithium in a planetary atmosphere is a major breakthrough.

Nikku Madhusudhan

The team, from the University of Cambridge and the Instituto de Astrofísica de Canarias (IAC) in Spain, used the Gran Telescopio Canarias (GTC) to observe WASP-127b, a giant gaseous planet with partly clear skies and strong signatures of metals in its atmosphere. The results have been accepted for publication in the journal Astronomy & Astrophysics.

WASP-127b has a radius 1.4 times that of Jupiter, but only 20% of its mass. Such a planet has no analogue in our solar system and is rare even among the thousands of exoplanets discovered to date. It takes just over four days to orbit its parent star and its surface temperature is around 1400 K (1127°C).
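
A back-of-the-envelope calculation using just these two figures shows why the planet is considered one of the least dense known: bulk density scales as mass divided by radius cubed, so 20% of Jupiter’s mass spread over 1.4 times its radius gives well under a tenth of Jupiter’s density.

```python
# Back-of-the-envelope bulk density of WASP-127b from the figures in the
# text: ~20% of Jupiter's mass and ~1.4 times Jupiter's radius.
jupiter_density_g_cm3 = 1.33           # Jupiter's mean density

mass_ratio = 0.20                      # WASP-127b mass / Jupiter mass
radius_ratio = 1.4                     # WASP-127b radius / Jupiter radius

density_ratio = mass_ratio / radius_ratio**3          # density scales as M / R^3
density_g_cm3 = density_ratio * jupiter_density_g_cm3

print(f"~{density_ratio:.2f} of Jupiter's density, about {density_g_cm3:.2f} g/cm^3")
# Roughly 0.07 of Jupiter's density (~0.1 g/cm^3) -- far less dense than water.
```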

The observations of WASP-127b reveal the presence of a large concentration of alkali metals in its atmosphere, allowing simultaneous detections of sodium, potassium and lithium for the first time in an exoplanet. The sodium and potassium absorptions are very broad, which is characteristic of relatively clear atmospheres. According to modelling work done by the researchers, the skies of WASP-127b are approximately 50% clear.

“The particular characteristics of this planet allowed us to perform a detailed study of its rich atmospheric composition,” said Dr Guo Chen, a postdoctoral researcher at IAC and the study’s first author. “The presence of lithium is important to understand the evolutionary history of the planetary system and could shed light on the mechanisms of planet formation.”

The planet’s host star, WASP-127, is also lithium rich, which could point to an AGB star – a bright red giant thousands of times brighter than the sun – or a supernova having enriched the cloud of material from which this system originated.

The researchers also found possible signs of water. “While this detection is not statistically significant, as water features are weak in the visible range, our data indicate that additional observations in the near-infrared should be able to detect it,” said co-author Enric Pallé, also from IAC.

The results demonstrate the potential of ground-based telescopes for the study of planetary atmospheres. “The detection of a trace element such as lithium in a planetary atmosphere is a major breakthrough and motivates new follow-up observations and detailed theoretical modelling to corroborate the findings,” said co-author Dr Nikku Madhusudhan, from Cambridge’s Institute of Astronomy.

Astronomers are just starting to probe the atmospheres of exoplanets with ground-based telescopes, but the authors believe that WASP-127b will also become a reference exoplanet for future studies with space telescopes such as the James Webb Space Telescope, the successor to Hubble. These future studies will reveal the detailed nature of WASP-127b as a benchmark for this new class of very low-density exoplanets.

The WASP-127b observations were conducted using the OSIRIS instrument of the GTC, from the Roque de los Muchachos Observatory, in Garafía (La Palma). The Observatories of the Instituto de Astrofísica de Canarias (IAC) and the Gran Telescopio CANARIAS (GTC) are part of the Spanish Unique Scientific and Technical Infrastructures (ICTS) network.

Reference:
G. Chen et al. ‘The GTC exoplanet transit spectroscopy survey. IX. Detection of Haze, Na, K, and Li in the super-Neptune WASP-127b.’ Astronomy & Astrophysics (in press). DOI:10.1051/0004-6361/201833033



Cambridge and LMU Announce Plans For Strategic Partnership

Cambridge and LMU announce plans for strategic partnership

At the signing of the Memorandum of Understanding
source: www.cam.ac.uk

Two of Europe’s leading research universities have announced the first step towards plans for a unique ‘strategic partnership’ – underlining the vital and ongoing relationship between British universities and their peer institutions across the EU in a post-Brexit landscape.

Collaboration and openness to the world are essential to achieving our academic and civic missions.

Stephen Toope

The University of Cambridge and the Ludwig-Maximilians-Universität München (LMU) put pen to paper on a memorandum of understanding that will see the two institutions forge ever-closer links in education and research across a broad range of disciplines in the Sciences, Humanities and Medicine.

Senior leaders from Cambridge and LMU – which boast nearly 150 Nobel Laureates between them – came together over two days in Cambridge for meetings led by both the President of LMU, Professor Bernd Huber, and Cambridge Vice-Chancellor Professor Stephen Toope.

At the conclusion of the visit, officials from Cambridge and LMU signed the memorandum of understanding, which indicates the desire to develop a joint programme of strategic importance to both institutions. A full programme will be formulated by the end of the year, with a formal launch expected to take place in early 2019.

It is intended that the partnership will include joint research activities, the exchange of academic staff, postdoctoral and PhD candidates, as well as masters and undergraduate students, joint teaching initiatives, and training for the next generation of scholars. The partnership will be cross-disciplinary, covering broad areas in the Humanities and Cultural Studies, Law, Economics and Social Sciences, Natural Sciences, as well as Medicine, and will develop over the course of an initial five-year funding period.

Professor Chris Young, Head Elect of the School of Arts and Humanities, and Cambridge’s academic lead for the strategic partnership, said: “The LMU is Germany’s leading university in Germany’s leading city.

“Its outstanding scholarship and rich network of associated institutes and industrial partnerships make it the perfect bridge to Bavaria, Germany and Europe. There are already myriad collaborations between colleagues at both universities, and this exciting new partnership will intensify and augment these for years to come.”

Professor Thomas Ackermann, Dean of the Faculty of Law and LMU’s Director for the strategic partnership, said: “The University of Cambridge is one of the world’s leading institutions in education, learning, and research. The strategic partnership between our universities will pave the way towards a new level of cooperation. Together with my colleague, Chris Young, we will explore an interesting array of activities to ensure the program will be a great success for both universities.”

Cambridge Vice-Chancellor, Professor Stephen Toope, said: “No single institution can provide, on its own, the answers to the great challenges of these turbulent times. Collaboration and openness to the world are essential to achieving our academic and civic missions. Our partnership with LMU, one of Europe’s finest universities, creates exciting opportunities to work together to address tough issues and provide our students with a richer education.”

“The strategic partnership with the University of Cambridge, one of the leading universities in Europe and the world, will bring an exciting stimulus to research and learning at LMU,” said LMU President Professor Bernd Huber. “Our new partnership ensures that collaboration and exchange which are vital for academic innovation can continue to be pursued regardless of Brexit.”



Online Atlas Explores North-South Divide In Childbirth And Child Mortality During Victorian Era

Online atlas explores north-south divide in childbirth and child mortality during Victorian era

source: www.cam.ac.uk

A new interactive online atlas, which illustrates when, where and possibly how fertility rates began to fall in England and Wales during the Victorian era, has been made freely available from today.

In 1851, more than one in five children born in parts of Greater Manchester did not survive to their first birthday. In parts of Surrey and Sussex however, the infant mortality rate at the same time was less than a third that number.

Alice Reid

The Populations Past website is part of the Atlas of Victorian Fertility Decline research project based at the University of Cambridge, in collaboration with the University of Essex. It displays various demographic and socio-economic measures calculated from census data gathered between 1851 and 1911, a period of immense social and economic change during which the population of England and Wales more than doubled, from just under 18 million to over 36 million, and industrialisation and urbanisation both increased rapidly.

The atlas allows users to select and view maps of a variety of measures including age structure, migration status, marriage, fertility, child mortality and household composition. Users can zoom in to an area on the map and compare side-by-side maps showing different years or measures.

The maps reveal often stark regional divides. “Geography plays a major role in pretty much every indicator we looked at,” said Dr Alice Reid from Cambridge’s Department of Geography, who led the project. “In 1851, more than one in five children born in parts of Greater Manchester did not survive to their first birthday. In parts of Surrey and Sussex however, the infant mortality rate at the same time was less than a third that number.”

While there are broad north-south divides in most of the maps, patterns at a local level were more complicated: in the northern urban-industrial centres such as Manchester, infant and child mortality were high, while many rural areas of the north had mortality rates as low as rural areas of the south. And in London, there is a sharp east/west divide in fertility, infant mortality, the number of live-in servants, and many other variables.

The researchers also found that different types of industry were often associated with different types of families: in coal mining areas where there was little available work for women, women married young and often ended up with large families. In contrast, women in the textile-producing areas of Lancashire and Yorkshire had more opportunities to earn a wage, and perhaps consequently, had fewer children on average.

There are also big differences over time. The period saw a sharp drop in the number of women who continued to work after marriage, for instance. In 1851, more than a third of married women were in work across large sections of the country, but by 1911, only a tiny fraction of married women worked outside the home, apart from the textile-producing areas of the Northwest.

“This might be associated with the rise of the culture of female domesticity: the idea that a woman’s place is in the home,” said Reid.

Across the Western world, fertility rates have declined over the past 150 years. Gaining a historical perspective of how and why these trends have developed can help improve understanding of the way in which modern societies are shaped.

Between 1851 and 1911, England and Wales changed from countries with variable fertility and mortality rates to countries where both were low. Child mortality and fertility fell from the 1870s, together with a fall in illegitimacy, but infant mortality did not start to fall until the dawn of the twentieth century.

As part of the project on fertility decline, the researchers have investigated fertility in more detail. For the first time, they have been able to calculate age-specific fertility rates for more than 2000 sub-districts across England and Wales during this era, and their results challenge views on the way that fertility fell.

“It’s long been thought that the fall in fertility was achieved when couples decided how many children they wanted at the outset of their marriage, and stopped reproducing once they had reached that number,” said Reid. “While this may have happened in more recent fertility transitions, such as in South-East Asia and Latin America, when reliable contraception was widely available, it was not a realistic scenario in the Victorian era.”

“We don’t find age patterns of fertility which would be produced by this type of ‘stopping’ behaviour during the Victorian fertility decline,” said Reid’s collaborator Dr Eilidh Garrett from the University of Essex. “Such behaviour would show up as a larger reduction of fertility among older women, but instead, women of all ages appear to have been reducing their fertility.”
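
For readers unfamiliar with the measure, an age-specific fertility rate is simply the number of births to women in an age group divided by the number of women in that group. The sketch below uses invented numbers purely to illustrate the distinction Reid and Garrett draw: ‘stopping’ behaviour would cut fertility mainly among older women, whereas the pattern they report is a reduction at all ages.

```python
# Illustrative only: the age groups are standard five-year bands, but every number
# here is invented to show the *shape* of the two patterns, not the project's data.

age_groups = ["20-24", "25-29", "30-34", "35-39", "40-44"]
births = [220, 260, 210, 140, 50]        # hypothetical births to women in each group, one year
women  = [1000, 1000, 1000, 1000, 1000]  # hypothetical women enumerated in each group

asfr_baseline = [b / w for b, w in zip(births, women)]

# 'Stopping': couples cease childbearing once a target family size is reached,
# so fertility falls mostly among older women.
stopping_factors = [1.0, 0.95, 0.8, 0.5, 0.2]
asfr_stopping = [r * f for r, f in zip(asfr_baseline, stopping_factors)]

# Across-the-board reduction: every age group lowers its fertility by a similar
# proportion, closer to the pattern reported for the Victorian decline.
asfr_uniform = [r * 0.75 for r in asfr_baseline]

for ag, b, s, u in zip(age_groups, asfr_baseline, asfr_stopping, asfr_uniform):
    print(f"{ag}: baseline {b:.3f}   stopping {s:.3f}   uniform reduction {u:.3f}")
```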

As well as the interactive maps, the Populations Past site provides a variety of resources for researchers, teachers and students at all levels. The research was funded by the Economic & Social Research Council and the Isaac Newton Trust.



Six Months Of Herceptin Could Be As Effective As 12 Months For Some Women

Six months of Herceptin could be as effective as 12 months for some women

source: www.cam.ac.uk

For women with HER2-positive early-stage breast cancer, taking Herceptin for six months could be as effective as taking it for 12 months in preventing relapse and death, and could reduce side effects, finds new research.

We are confident that this will mark the first steps towards a reduction of Herceptin treatment to six months in many women with HER2-positive breast cancer.

Helena Earl

The PERSEPHONE trial, a £2.6 million study funded by the NIHR with translational research funded by Cancer Research UK, recruited over 4,000 women and compared a six-month course of Herceptin treatment with the current standard of 12 months for women with HER2-positive early-stage breast cancer.

This is the largest trial of its kind examining the impact of shortening the duration of Herceptin treatment. Over the last five years, the NIHR has invested £46.5 million in funding and supporting breast cancer research.

Herceptin has been a major breakthrough, prolonging and saving the lives of women with breast cancers that carry the HER2 receptor on the surface of their cancer cells. Around 15 out of every 100 women with early breast cancer have HER2-positive disease.

Herceptin is a targeted therapy that works by attaching to the HER2 receptors, preventing the cancer cells from growing and dividing. It rapidly became the standard of care, and a 12-month treatment course was adopted on the basis of clinical research. However, a further clinical study suggested that a shorter duration could be as effective, significantly reducing side effects and cost both to patients and to the NHS.

The trial, led by a team from the University of Cambridge and the Warwick Clinical Trials Unit at the University of Warwick, found that 89.4% of patients taking six months of treatment were free of disease after four years, compared with 89.8% of patients taking treatment for 12 months. These results suggest that taking Herceptin for six months is as effective as 12 months for many women.

In addition, only 4% of women in the six-month arm stopped taking the drug early because of heart problems, compared with 8% in the 12-month arm. Women also received chemotherapy (anthracycline-based, taxane-based or a combination of both) while enrolled in the trial.
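
Read as simple arithmetic on the headline percentages (a back-of-the-envelope restatement, not the trial's formal non-inferiority analysis), the absolute difference in disease-free survival is small while the difference in cardiac-related discontinuation is substantial:

```python
# Absolute-difference arithmetic on the quoted headline figures only;
# this is not the trial's statistical analysis.

dfs_6m, dfs_12m = 89.4, 89.8        # % of patients disease-free at four years
cardiac_6m, cardiac_12m = 4.0, 8.0  # % stopping treatment early because of heart problems

print(f"Difference in 4-year disease-free survival: {dfs_12m - dfs_6m:.1f} percentage points")
print(f"Difference in cardiac-related early discontinuation: {cardiac_12m - cardiac_6m:.1f} percentage points")
print(f"Relative reduction in cardiac-related discontinuation with 6 months: {(cardiac_12m - cardiac_6m) / cardiac_12m:.0%}")
```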

Lead study author Professor Helena Earl, Professor of Clinical Cancer Medicine, University of Cambridge and Cancer Research UK Cambridge Centre, said: “The PERSEPHONE trials team, patient advocates who have worked with us on the study and our investigators are very excited by these results. We are confident that this will mark the first steps towards a reduction of Herceptin treatment to six months in many women with HER2-positive breast cancer.

“However, any proposed reduction in effective cancer treatment will always be complex and very challenging, and women currently taking the medication should not change their treatment without seeking advice from their doctor. There is more research to be done to define as precisely as possible the particular patients who could safely reduce their treatment duration. We are poised to do important translational research analysing blood and tissue samples collected within the trial to look for biomarkers to identify subgroups of different risk where shorter/longer durations might be tailored.”

Professor Hywel Williams, Director of the NIHR Health Technology Assessment Programme that funded the PERSEPHONE study said: “This is a hugely important clinical trial that shows that more is not always better. Women will now have the potential to avoid unnecessary side effects of longer treatment without losing any benefit. In turn, this should help save vital funds for the NHS and prompt more studies in other situations where the optimum duration of treatment is not known. It is unlikely that research like this would ever be done by industry, so I am delighted that the NIHR are able to fund valuable research that has a direct impact on patients.”

Professor Charles Swanton, Cancer Research UK’s chief clinician, said: “This is a critically important study that the breast cancer field has been eagerly awaiting. Targeted therapies, while effective, come at a huge health economic cost to the NHS as well as potentially causing side effects such as heart problems. Despite years of research, we haven’t been able to establish the optimal duration of Herceptin treatment, either to delay cancer coming back or to cure patients with early HER2+ breast cancer following surgery.

“The exciting early key findings from this study show that 6 months of Herceptin might be as effective as 12 months, and it may also be safer and with fewer side effects. By analysing tumour and blood samples, the researchers will now try to understand which patients can stop Herceptin at 6 months and which patients need extended therapy.”

Maggie Wilcox, President of Independent Cancer Patients' Voice (ICPV) and patient lead for the PERSEPHONE trial, said: “I am delighted to have been part of this landmark trial, which is an important step towards reducing the length of treatment while not changing effectiveness. Most trials add novel treatments to standard practice, whilst this one set out to reduce the duration of Herceptin. The collection of patient-reported experiences throughout the trial will greatly inform future practice and benefit patients. ICPV is working with the PERSEPHONE team to help disseminate these exciting results.”

The results of the PERSEPHONE trial will be presented at the upcoming 2018 ASCO Annual Meeting in Chicago. The full report, which will include analysis of the impact of treatment length on quality of life and a detailed cost-effectiveness analysis, will be published in the NIHR Journals Library.

Press release from the NIHR



Managed Hunting Can Help Maintain Animal Populations

Managed hunting can help maintain animal populations

source: www.cam.ac.uk

Researchers studying the hunting of ibex in Switzerland over the past 40 years have shown how hunts, when tightly monitored, can help maintain animal populations at optimal levels.

Our results emphasise the importance of continuous monitoring of hunting practices, especially in regions where hunters can choose animals based on certain traits.

Ulf Büntgen

The international team of researchers, led by the University of Cambridge and the Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), studied the hunting of Alpine ibex – a type of wild goat with long, curved horns – in the eastern Swiss canton of Graubünden. By examining the horn size of more than 8,000 ibex harvested between 1978 and 2013, they determined whether average horn growth or body weight had changed over the last 40 years.

Their results, published in the Journal of Animal Ecology, reveal that, unsurprisingly, ibex with longer-than-average horns are more likely to be shot than animals of the same age with shorter horns. However, due to the tight controls placed on the hunt by the Swiss authorities, hunters tend to shoot as few animals as possible to avoid violating the rules and incurring large fines.

Hunting for specific traits can place selective pressure on certain species, resulting in a negative evolutionary response. In their study, the researchers investigated whether the targeting of ibex with large horns would lead to a lower average horn size across the entire population.

They found that while even tightly-managed hunts cannot prevent hunters from targeting longer-horned animals, there were no long-term changes in the horn length of male ibex in Graubünden. This is most likely because the number of ibex removed from the population by hunters is too small to have an evolutionary effect.
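
To give a flavour of the kind of check involved, the minimal sketch below regresses a horn measurement on harvest year and asks whether the slope differs from zero. The data are simulated and the slope calculation is ordinary least squares; the published study analysed more than 8,000 harvested animals with far more careful statistical modelling that accounts for age and other factors.

```python
# Minimal trend check with simulated data: is there a long-term change in mean
# horn length across harvest years? (Not the authors' analysis.)
import random
import statistics

random.seed(1)
years = list(range(1978, 2014))
# Simulated mean horn length (cm) per harvest year: a stable average plus noise,
# mimicking the "no long-term change" result reported for Graubünden.
horn_length = [70 + random.gauss(0, 1.5) for _ in years]

# Ordinary least-squares slope of horn length on year
mean_y = statistics.mean(years)
mean_h = statistics.mean(horn_length)
slope = sum((y - mean_y) * (h - mean_h) for y, h in zip(years, horn_length)) / \
        sum((y - mean_y) ** 2 for y in years)

print(f"Estimated trend: {slope:.3f} cm per year "
      f"({slope * (2013 - 1978):+.2f} cm over the 1978-2013 study period)")
```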

“Our most important finding is that ibex hunting over the last 40 years has not had a negative effect on the constitution of the animals,” said WSL’s Kurt Bollmann, the study’s senior author.

“The good news for hunting and nature conservation is that horn growth in Graubünden’s ibex has not reduced over the decades and their average body weight has also remained stable,” said Professor Ulf Büntgen from Cambridge’s Department of Geography, the study’s first author.

“We are happy that the knowledge gained in practice about our ibex herds has now been scientifically proven and that ibex hunting in Graubünden can be described as sustainable,” said co-author Hannes Jenny from the Graubünden Hunting and Fishing Authority.

While hunters often select animals based on their age and gender or the quality of the meat and their worthiness as a trophy, the hunting authorities would like to keep the size of individual herds at a level where the surrounding forests can provide them with enough food during the winter. Regardless of these conflicting interests, the most important point from a wildlife biology perspective is that hunting does not negatively affect the wild ibex population in the long term.

The Alpine ibex is a species that was once extinct in Switzerland, and its reintroduction is now regarded as a major success story for Swiss conservation. Alpine ibex have a long lifespan (17 years on average) and relatively low reproductive performance, so the Swiss hunt is closely monitored to maintain the animal population. In Graubünden, where around 40% of Switzerland’s ibex live, each hunter may only bring down one female and one male in a particular age group every 10 years. If a hunter violates this requirement, for example by shooting an older animal with longer horns, they have to pay a fine and the animal is confiscated by the authorities.

“Our results also emphasise the importance of continuous monitoring of hunting practices, especially in regions where hunters can choose animals based on certain traits,” said Büntgen.

The researchers are currently developing a more comprehensive dataset, which will compare the evolutionary pressures on ibex in regions where hunting is allowed against regions where it is prohibited.

Reference:
Ulf Büntgen et al. ‘Horn growth variation and hunting selection of the Alpine ibex.’ Journal of Animal Ecology (2018). DOI: 10.1111/1365-2656.12839

Adapted from a WSL press release.

 



Taming the Multiverse: Stephen Hawking’s Final Theory About the Big Bang

Taming the multiverse: Stephen Hawking’s final theory about the big bang

source: cam.ac.uk

Professor Stephen Hawking’s final theory on the origin of the universe, which he worked on in collaboration with Professor Thomas Hertog from KU Leuven, has been published in the Journal of High Energy Physics.

We are not down to a single, unique universe, but our findings imply a significant reduction of the multiverse, to a much smaller range of possible universes.

Stephen Hawking

The theory, which was submitted for publication before Hawking’s death earlier this year, is based on string theory and predicts the universe is finite and far simpler than many current theories about the big bang say.

Professor Hertog, whose work has been supported by the European Research Council, first announced the new theory at a conference at the University of Cambridge in July of last year, organised on the occasion of Professor Hawking’s 75th birthday.

Modern theories of the big bang predict that our local universe came into existence with a brief burst of inflation – in other words, a tiny fraction of a second after the big bang itself, the universe expanded at an exponential rate. It is widely believed, however, that once inflation starts, there are regions where it never stops. It is thought that quantum effects can keep inflation going forever in some regions of the universe so that globally, inflation is eternal. The observable part of our universe would then be just a hospitable pocket universe, a region in which inflation has ended and stars and galaxies formed.

“The usual theory of eternal inflation predicts that globally our universe is like an infinite fractal, with a mosaic of different pocket universes, separated by an inflating ocean,” said Hawking in an interview last autumn. “The local laws of physics and chemistry can differ from one pocket universe to another, which together would form a multiverse. But I have never been a fan of the multiverse. If the scale of different universes in the multiverse is large or infinite, the theory can’t be tested.”

In their new paper, Hawking and Hertog say this account of eternal inflation as a theory of the big bang is wrong. “The problem with the usual account of eternal inflation is that it assumes an existing background universe that evolves according to Einstein’s theory of general relativity and treats the quantum effects as small fluctuations around this,” said Hertog. “However, the dynamics of eternal inflation wipes out the separation between classical and quantum physics. As a consequence, Einstein’s theory breaks down in eternal inflation.”

“We predict that our universe, on the largest scales, is reasonably smooth and globally finite. So it is not a fractal structure,” said Hawking.

The theory of eternal inflation that Hawking and Hertog put forward is based on string theory: a branch of theoretical physics that attempts to reconcile gravity and general relativity with quantum physics, in part by describing the fundamental constituents of the universe as tiny vibrating strings. Their approach uses the string theory concept of holography, which postulates that the universe is a large and complex hologram: physical reality in certain 3D spaces can be mathematically reduced to 2D projections on a surface.

Hawking and Hertog developed a variation of this concept of holography to project out the time dimension in eternal inflation. This enabled them to describe eternal inflation without having to rely on Einstein’s theory. In the new theory, eternal inflation is reduced to a timeless state defined on a spatial surface at the beginning of time.

“When we trace the evolution of our universe backwards in time, at some point we arrive at the threshold of eternal inflation, where our familiar notion of time ceases to have any meaning,” said Hertog.

Hawking’s earlier ‘no boundary theory’ predicted that if you go back in time to the beginning of the universe, the universe shrinks and closes off like a sphere, but this new theory represents a step away from the earlier work. “Now we’re saying that there is a boundary in our past,” said Hertog.

Hertog and Hawking used their new theory to derive more reliable predictions about the global structure of the universe. They predicted the universe that emerges from eternal inflation on the past boundary is finite and far simpler than the infinite fractal structure predicted by the old theory of eternal inflation.

Their results, if confirmed by further work, would have far-reaching implications for the multiverse paradigm. “We are not down to a single, unique universe, but our findings imply a significant reduction of the multiverse, to a much smaller range of possible universes,” said Hawking.

This makes the theory more predictive and testable.

Hertog now plans to study the implications of the new theory on smaller scales that are within reach of our space telescopes. He believes that primordial gravitational waves – ripples in spacetime – generated at the exit from eternal inflation constitute the most promising “smoking gun” to test the model. The expansion of our universe since the beginning means such gravitational waves would have very long wavelengths, outside the range of the current LIGO detectors. But they might be heard by the planned European space-based gravitational wave observatory, LISA, or seen in future experiments measuring the cosmic microwave background.
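
As a rough illustration of the scales involved, a gravitational wave’s wavelength follows from dividing the speed of light by its frequency. The detector band edges below are approximate, commonly quoted values and are not taken from the paper:

```python
# Rough wavelength arithmetic (lambda = c / f) for gravitational-wave detectors.
# Band edges are approximate, commonly quoted values, not figures from the paper.

c = 3.0e8  # speed of light, m/s

bands = {
    "LIGO (ground-based)": (10.0, 1.0e3),         # roughly 10 Hz up to the kHz range
    "LISA (planned, space-based)": (1.0e-4, 0.1)  # roughly 0.1 mHz to 0.1 Hz
}

for name, (f_low, f_high) in bands.items():
    longest = c / f_low    # lowest frequency gives the longest wavelength
    shortest = c / f_high  # highest frequency gives the shortest wavelength
    print(f"{name}: wavelengths from about {shortest:.0e} m to {longest:.0e} m")
```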

Reference:
S.W. Hawking and Thomas Hertog. ‘A Smooth Exit from Eternal Inflation?’ Journal of High Energy Physics (2018). DOI: 10.1007/JHEP04(2018)147



Greenhouse Gas ‘Feedback Loop’ Discovered in Freshwater Lakes

Greenhouse gas ‘feedback loop’ discovered in freshwater lakes

source: cam.ac.uk

Latest research finds that plant debris in lake sediment affects methane emissions. The flourishing reed beds created by changing climates could double the already significant methane production of the world’s northern lakes.

The warming climates that promote the growth of aquatic plants have the potential to trigger a damaging feedback loop in natural ecosystems

Andrew Tanentzap

A new study of the chemical reactions that occur when organic matter decomposes in freshwater lakes has revealed that debris from trees suppresses the production of methane, while debris from reed beds actually promotes production of this harmful greenhouse gas.

As vegetation in and around bodies of water continues to change, with forest cover being lost while global warming causes wetland plants to thrive, the many lakes of the northern hemisphere – already a major source of methane – could almost double their emissions in the next fifty years.

The researchers say that the findings suggest the discovery of yet another “feedback loop” in which environmental disruption and climate change trigger the release of ever more greenhouse gas that further warms the planet, similar to the concerns over the methane released by fast-melting arctic permafrost.

“Methane is a greenhouse gas at least twenty-five times more potent than carbon dioxide. Freshwater ecosystems already contribute as much as 16% of the Earth’s natural methane emissions, compared to just 1% from all the world’s oceans,” said study senior author Dr Andrew Tanentzap, from the University of Cambridge’s Department of Plant Sciences.

“We believe we have discovered a new mechanism that has the potential to cause increasingly more greenhouse gases to be produced by freshwater lakes. The warming climates that promote the growth of aquatic plants have the potential to trigger a damaging feedback loop in natural ecosystems.”

The researchers point out that the current methane emissions of freshwater ecosystems alone offset around a quarter of all the carbon soaked up by land plants and soil: the natural ‘carbon sink’ that draws CO2 out of the atmosphere and stores it.

Up to 77% of the methane emissions from an individual lake are the result of the organic matter shed primarily by plants that grow in or near the water. This matter gets buried in the sediment found toward the edge of lakes, where it is consumed by communities of microbes. Methane gets generated as a byproduct, which then bubbles up to the surface.

Working with colleagues from Canada and Germany, Tanentzap’s group found that the levels of methane produced in lakes vary enormously depending on the type of plants contributing organic matter to the lake sediment. The study, funded by the UK’s Natural Environment Research Council, is published today in the journal Nature Communications.

To test how organic matter affects methane emissions, the scientists took lake sediment and added three common types of plant debris: from deciduous trees that shed their leaves annually, from evergreen coniferous trees that shed needles, and from cattails (often known in the UK as ‘bulrushes’) – a common aquatic plant that grows in the shallows of freshwater lakes.

These sediments were incubated in the lab for 150 days, during which time the scientists siphoned off and measured the methane produced. They found that the bulrush sediment produced over 400 times as much methane as the coniferous sediment, and almost 2,800 times as much as the deciduous sediment.

Unlike the cattail debris, the chemical makeup of the organic matter from trees appears to trap large quantities of carbon within the lake sediment – carbon that would otherwise combine with hydrogen and get released as methane into the atmosphere.

To confirm their findings, the researchers also “spiked” the three samples with the microbes that produce methane to gauge the chemical reaction. While the forest-derived sediment remained unchanged, the sample containing the bulrush organic matter doubled its methane production.

“The organic matter that runs into lakes from the forest trees acts as a latch that suppresses the production of methane within lake sediment. These forests have long surrounded the millions of lakes in the northern hemisphere, but are now under threat,” said Dr Erik Emilson, first author of the study, who has since left Cambridge to work at Natural Resources Canada.

“At the same time, changing climates are providing favourable conditions for the growth and spread of aquatic plants such as cattails, and the organic matter from these plants promotes the release of even more methane from the freshwater ecosystems of the global north.”

Using species distribution models for the Boreal Shield, an area that covers central and eastern Canada and “houses more forests and lakes than just about anywhere on Earth”, the researchers calculated that the number of lakes colonised by just the common cattail (Typha latifolia) could double in the next fifty years – causing current levels of lake-produced methane to increase by at least 73% in this part of the world alone.
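
The projection can be thought of as a weighted sum over lake types. The toy sketch below uses entirely hypothetical lake counts and per-lake emission factors (it is not the study’s species distribution modelling, which produced the 73% figure) and simply shows how doubling the number of cattail-colonised lakes feeds through into a percentage rise in total emissions:

```python
# Toy illustration of how cattail colonisation scales regional methane emissions.
# All inputs are hypothetical placeholders, not values from the study.

total_lakes = 1000     # hypothetical number of lakes in a region
colonised_now = 200    # hypothetical number currently colonised by cattails
emission_forest = 1.0  # relative methane emission of a forest-fringed lake
emission_cattail = 5.0 # hypothetical relative emission of a cattail-colonised lake

def regional_emissions(colonised):
    forested = total_lakes - colonised
    return forested * emission_forest + colonised * emission_cattail

now = regional_emissions(colonised_now)
future = regional_emissions(colonised_now * 2)  # colonised lakes double

print(f"Projected increase in regional lake methane emissions: {(future - now) / now:.0%}")
```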

Added Tanentzap: “Accurately predicting methane emissions is vital to the scientific calculations used to try and understand the pace of climate change and the effects of a warmer world. We still have limited understanding of the fluctuations in methane production from plants and freshwater lakes.”



Brain Cholesterol Associated with Increased Risk of Alzheimer’s Disease

Brain cholesterol associated with increased risk of Alzheimer’s disease

source: cam.ac.uk

Researchers have shown how cholesterol – a molecule normally linked with cardiovascular diseases – may also play an important role in the onset and progression of Alzheimer’s disease.

The question for us now is not how to eliminate cholesterol from the brain, but about how to control cholesterol’s role in Alzheimer’s disease through the regulation of its interaction with amyloid-beta.

Michele Vendruscolo

The international team, led by the University of Cambridge, has found that in the brain, cholesterol acts as a catalyst, triggering the formation of toxic clusters of amyloid-beta, a protein that is a central player in the development of Alzheimer’s disease.

The results, published in the journal Nature Chemistry, represent another step towards a possible treatment for Alzheimer’s disease, which affects millions worldwide. The study’s identification of a new pathway in the brain where amyloid-beta sticks together, or aggregates, could represent a new target for potential therapeutics.

It is unclear if the results have any implications for dietary cholesterol, as cholesterol does not cross the blood-brain barrier. Other studies have also found an association between cholesterol and the condition, since some genes which process cholesterol in the brain have been associated with Alzheimer’s disease, but the mechanism behind this link is not known.

The Cambridge researchers found that cholesterol, which is one of the main components of cell membranes in neurons, can trigger amyloid-beta molecules to aggregate. The aggregation of amyloid-beta eventually leads to the formation of amyloid plaques, in a toxic chain reaction that leads to the death of brain cells.

While the link between amyloid-beta and Alzheimer’s disease is well-established, what has baffled researchers to date is how amyloid-beta starts to aggregate in the brain, as it is typically present at very low levels.

“The levels of amyloid-beta normally found in the brain are about a thousand times lower than we require to observe it aggregating in the laboratory – so what happens in the brain to make it aggregate?” said Professor Michele Vendruscolo of Cambridge’s Centre for Misfolding Diseases, who led the research.

Using a kinetic approach developed over the last decade by the Cambridge team and their collaborators at Lund University in Sweden, the researchers found in in vitro studies that the presence of cholesterol in cell membranes can act as a trigger for the aggregation of amyloid-beta.

“It’s exciting to see that the kinetic analysis approach that we have developed over the past few years is now allowing us to explore increasingly complex systems, including protein-lipid interactions which are likely to be central for the initiation of aberrant protein aggregation,” said co-author Professor Tuomas Knowles.

Since amyloid-beta is normally present in such small quantities in the brain, the molecules don’t normally find each other and stick together. Amyloid-beta does attach itself to lipid molecules, however, which are sticky and insoluble. In the case of Alzheimer’s disease, the amyloid-beta molecules stick to the lipid cell membranes that contain cholesterol. Once stuck close together on these cell membranes, the amyloid-beta molecules have a greater chance to come into contact with each other and start to aggregate – in fact, the researchers found that cholesterol speeds up the aggregation of amyloid-beta by a factor of 20.
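
A crude way to see how a 20-fold faster nucleation step shortens the lag phase of aggregation is a generic two-step autocatalytic model, often used as a minimal description of sigmoidal aggregation kinetics. The sketch below is only that simplified picture, with arbitrary rate constants; it is not the detailed kinetic framework developed by the Cambridge and Lund teams:

```python
# Generic two-step autocatalytic aggregation model (a minimal sketch, not the
# authors' kinetic framework):
#     dM/dt = (k_nucleation + k_growth * M) * (M_total - M)
# integrated with simple Euler steps. Making the nucleation rate 20 times larger,
# mimicking the reported cholesterol effect, shortens the time to half-aggregation.
# All rate constants and concentrations are arbitrary illustrative values.

def half_time(k_nuc, k_growth=1.0, m_total=1.0, dt=0.001, t_max=200.0):
    """Return the time at which half of the protein has aggregated."""
    m, t = 0.0, 0.0
    while t < t_max:
        m += (k_nuc + k_growth * m) * (m_total - m) * dt
        t += dt
        if m >= 0.5 * m_total:
            return t
    return float("inf")

k_baseline = 1e-4                    # arbitrary nucleation rate without the catalyst
t_without = half_time(k_baseline)
t_with = half_time(20 * k_baseline)  # nucleation accelerated 20-fold

print(f"Half-time without acceleration: {t_without:.1f} (arbitrary time units)")
print(f"Half-time with 20x faster nucleation: {t_with:.1f} (arbitrary time units)")
```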

So what, if anything, can be done to control cholesterol in the brain? According to Vendruscolo, it’s not cholesterol itself that is the problem. “The question for us now is not how to eliminate cholesterol from the brain, but about how to control cholesterol’s role in Alzheimer’s disease through the regulation of its interaction with amyloid-beta,” he said. “We’re not saying that cholesterol is the only trigger for the aggregation process, but it’s certainly one of them.”

Because cholesterol is insoluble, it is never left on its own, either in the blood or the brain, while travelling towards its destination in lipid membranes: it has to be carried by dedicated transport proteins, such as ApoE, variants of which have already been identified as major risk factors for Alzheimer’s disease. As we age, these protein carriers, as well as other proteins that control the balance, or homeostasis, of cholesterol in the brain, become less effective. In turn, the homeostasis of amyloid-beta and hundreds of other proteins in the brain is broken. By targeting the newly-identified link between amyloid-beta and cholesterol, it could be possible to design therapeutics which maintain cholesterol homeostasis, and consequently amyloid-beta homeostasis, in the brain.

“This work has helped us narrow down a specific question in the field of Alzheimer’s research,” said Vendruscolo. “We now need to understand in more detail how the balance of cholesterol is maintained in the brain in order to find ways to inactivate a trigger of amyloid-beta aggregation.”

Co-author Professor Chris Dobson, also a member of the Centre for Misfolding Diseases and Master of St John’s College, added: “This study has added significantly to our understanding of the molecular basis of aggregation of amyloid-beta, which is associated with Alzheimer’s disease. It shows how interdisciplinary studies fostered by the Centre for Misfolding Diseases and our international collaborators can play a major part in working out how to develop potential therapeutic strategies to reduce the risk of the onset and progression of this highly debilitating and increasingly common disease.”

Dr Tim Shakespeare of the Alzheimer’s Society said: “Previous research has shown people with high cholesterol levels in mid-life are slightly more likely to develop dementia, but until now we didn’t know why. This study has demystified the link. The findings suggest managing cholesterol levels in the brain could be a target for future treatments, but it’s still unclear whether there’s any effect from our diet.”

Dr David Reynolds of Alzheimer’s Research UK, said: “Around 20 per cent of the body’s total cholesterol is found in the brain. Cholesterol in our diet can have a big impact on heart health and maintaining a healthy blood supply to the brain can help to keep dementia risk as low as possible.”

Reference:
Johnny Habchi et al. ‘Cholesterol catalyses amyloid-β42 aggregation through a heterogeneous nucleation pathway in the presence of lipid membranes.’ Nature Chemistry (2018). DOI: 10.1038/s41557-018-0031-x

