‘Fourth Strand’ of European Ancestry Originated With Hunter-Gatherers Isolated By Ice Age

source: www.cam.ac.uk

Populations of hunter-gatherers weathered the Ice Age in apparent isolation in the Caucasus mountain region for millennia, later mixing with other ancestral populations, from which emerged the Yamnaya culture that would bring this Caucasus hunter-gatherer lineage to Western Europe.

This Caucasus pocket is the fourth major strand of ancient European ancestry, one that we were unaware of until now

Andrea Manica

The first sequencing of ancient genomes extracted from human remains that date back to the Late Upper Palaeolithic period over 13,000 years ago has revealed a previously unknown “fourth strand” of ancient European ancestry.

This new lineage stems from populations of hunter-gatherers that split from western hunter-gatherers shortly after the ‘out of Africa’ expansion some 45,000 years ago and went on to settle in the Caucasus region, where southern Russia meets Georgia today.

Here these hunter-gatherers largely remained for millennia, becoming increasingly isolated as the Ice Age culminated in the last ‘Glacial Maximum’ some 25,000 years ago. They weathered this in the relative shelter of the Caucasus mountains until the eventual thaw allowed movement and brought them into contact with other populations, likely from further east.

This led to a genetic mixture that resulted in the Yamnaya culture: horse-borne Steppe herders that swept into Western Europe around 5,000 years ago, arguably heralding the start of the Bronze Age and bringing with them metallurgy and animal herding skills, along with the Caucasus hunter-gatherer strand of ancestral DNA – now present in almost all populations from the European continent.

The research was conducted by an international team led by scientists from Cambridge University, Trinity College Dublin and University College Dublin. The findings are published today in the journal Nature Communications.

“The question of where the Yamnaya come from has been something of a mystery up to now,” said one of the lead senior authors Dr Andrea Manica, from Cambridge’s Department of Zoology.

“We can now answer that as we’ve found that their genetic make-up is a mix of Eastern European hunter-gatherers and a population from this pocket of Caucasus hunter-gatherers who weathered much of the last Ice Age in apparent isolation. This Caucasus pocket is the fourth major strand of ancient European ancestry, one that we were unaware of until now,” he said.

Professor Daniel Bradley, leader of the Trinity team, said: “This is a major new piece in the human ancestry jigsaw, the influence of which is now present within almost all populations from the European continent and many beyond.”

Previously, ancient Eurasian genomes had revealed three ancestral populations that contributed to contemporary Europeans in varying degrees, says Manica.

Following the ‘out of Africa’ expansion, some hunter-gatherer populations migrated north-west, eventually colonising much of Europe from Spain to Hungary, while other populations settled around the eastern Mediterranean and Levant, where they would develop agriculture around 10,000 years ago. These early farmers then expanded into and colonised Europe.

Lastly, at the start of the Bronze Age around 5,000 years ago, there was a wave of migration from central Eurasia into Western Europe – the Yamnaya.

However, the sequencing of ancient DNA recovered from two separate burials in Western Georgia – one over 13,000 years old, the other almost 10,000 years old – has enabled scientists to reveal that the Yamnaya owed half their ancestry to a previously unknown and genetically distinct hunter-gatherer source: the fourth strand.

By reading the DNA, the researchers were able to show that the lineage of this fourth Caucasus hunter-gatherer strand diverged from the western hunter-gatherers just after the expansion of anatomically modern humans into Europe from Africa.

The Caucasus hunter-gatherer genome showed a continued mixture with the ancestors of the early farmers in the Levant area, which Manica says makes sense given the relative proximity. This ends, however, around 25,000 years ago – just before the time of the last glacial maximum, or peak Ice Age.

At this point, Caucasus hunter-gatherer populations shrank and their genes homogenised, a sign of breeding between individuals with increasingly similar DNA. This pattern persisted for thousands of years as these populations remained in apparent isolation in the shelter of the mountains – possibly cut off from other major ancestral populations for as long as 15,000 years – until migrations began again as the Glacial Maximum receded, and the Yamnaya culture ultimately emerged.
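The link between a small, isolated population and genetic homogenisation can be made concrete with a textbook population-genetics calculation (this is an illustration of genetic drift in general, not an analysis from the paper): under drift, expected heterozygosity decays each generation by a factor of (1 − 1/2N), so smaller populations lose diversity much faster.

```python
# Toy illustration (not from the paper): expected heterozygosity under
# genetic drift decays as H_t = H_0 * (1 - 1/(2N))^t, so a small,
# isolated population homogenises far faster than a large one.

def heterozygosity(h0, n, generations):
    """Expected heterozygosity after `generations` of drift in a
    population of effective size n."""
    return h0 * (1.0 - 1.0 / (2.0 * n)) ** generations

h0 = 0.5
# ~15,000 years of isolation at ~25 years per generation -> ~600 generations
gens = 600
for n in (500, 5000, 50000):
    print(f"N={n:>6}: H after {gens} generations = {heterozygosity(h0, n, gens):.3f}")
```

The effective population sizes here are arbitrary round numbers chosen only to show the trend: the smallest population loses roughly half its diversity over the isolation period, while the largest barely changes.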

“We knew that the Yamnaya had this big genetic component that we couldn’t place, and we can now see it was this ancient lineage hiding in the Caucasus during the last Ice Age,” said Manica.

While the Caucasus hunter-gatherer ancestry would eventually be carried west by the Yamnaya, the researchers found it also had a significant influence further east. A similar population must have migrated into South Asia at some point, says Eppie Jones, a PhD student from Trinity College who is the first author of the paper.

“India is a complete mix of Asian and European genetic components. The Caucasus hunter-gatherer ancestry is the best match we’ve found for the European genetic component found right across modern Indian populations,” Jones said. Researchers say this strand of ancestry may have flowed into the region with the bringers of Indo-Aryan languages.

The widespread nature of the Caucasus hunter-gatherer ancestry following its long isolation makes sense geographically, says Professor Ron Pinhasi, a lead senior author from University College Dublin. “The Caucasus region sits almost at a crossroads of the Eurasian landmass, with arguably the most sensible migration routes both west and east in the vicinity.”

He added: “The sequencing of genomes from this key region will have a major impact on the fields of palaeogenomics and human evolution in Eurasia, as it bridges a major geographic gap in our knowledge.”

David Lordkipanidze, Director of the Georgian National Museum and co-author of the paper, said: “This is the first sequence from Georgia – I am sure soon we will get more palaeogenetic information from our rich collections of fossils.”

Inset image: the view from the Satsurblia cave in Western Georgia, where a human right temporal bone dating from over 13,000 years ago was discovered. DNA extracted from this bone was used in the new research.

Reference:
E.R. Jones et al. ‘Upper Palaeolithic genomes reveal deep roots of modern Eurasians.’ Nature Communications (2015). DOI: 10.1038/ncomms9912


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Climate Change Sentiment Could Hit Global Investment Portfolios in the Short Term

source: www.cam.ac.uk

A new report by the University of Cambridge Institute for Sustainability Leadership (CISL) reveals that global investment portfolios could lose up to 45 per cent of their value as a consequence of short-term shifts in climate change sentiment.

No investor is immune from the risks posed by climate change, even in the short run

Jake Reynolds

The report, “Unhedgeable Risk: How climate change sentiment impacts investment,” concluded that about half of this potential loss could be avoided through portfolio reallocation, while the other half is “unhedgeable”, meaning that investors cannot necessarily protect themselves from losses unless action on climate change is taken at a system level.

“This new research indicates that no investor is immune from the risks posed by climate change, even in the short run,” said Jake Reynolds, Director, Sustainable Economy at the Cambridge Institute for Sustainability Leadership. “However, it is surprisingly difficult to distinguish between risks that can be addressed by an individual investor through smart hedging strategies, and ones that are systemic and require much deeper transformations in the economy to deal with. That’s what this report attempts to do.”

While existing studies have analysed the direct, physical effects of climate change on long-term economic performance, this new report, commissioned by CISL and the Investment Leaders Group, looks at the short-term risks stemming from how investors react to climate-related information, from policy decisions and technology uptake, to market confidence and weather events.

Reynolds continued, “What’s new about this study is its focus on the potential short-term impacts which could surface at any time. Major events, such as the outcome of the upcoming United Nations climate talks in Paris in December, can send signals which drive market sentiment – sometimes slowly, sometimes rapidly – and this study allows us to model the implications.”

The study modelled the impact of three sentiment scenarios on four typical investment portfolios.

The scenarios tested were:

1. Two Degrees, limiting average temperature increase to two degrees Celsius (as recommended by the Intergovernmental Panel on Climate Change [IPCC]) and collectively making relatively good progress towards sustainability, and future socio-economic development goals.

2. Baseline, where past trends continue (i.e. the ‘business-as-usual’, or BAU, scenario) and where there is no significant change in the willingness of governments to step up action on climate change.

3. No Mitigation, oriented towards economic growth without any special consideration of environmental challenges, rather the hope that pursuing self-interest will allow adaptive responses to any climate change impacts as they arise.

The portfolio structures modelled were:

1. High Fixed Income, comprising 84 per cent fixed income, 12 per cent equity and 4 per cent cash; mimicking the strategies of insurance companies.

2. Conservative, comprising 60 per cent sovereign and corporate bonds and 40 per cent equity; mimicking certain pension funds.

3. Balanced, comprising 50 per cent equity, 47 per cent fixed income and 3 per cent commodities; mimicking certain pension funds.

4. Aggressive, comprising 60 per cent equity, 35 per cent fixed income and 5 per cent commodities; mimicking certain pension funds.

Each scenario was linked to a series of economic and market confidence factors used to explore macroeconomic effects within a global economic model. In turn these were cascaded down to portfolio level through an industry sector analysis. The factors included alternative levels of carbon taxation, fossil energy investment, green investment, energy and food prices, energy demand, market confidence, and housing prices.
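The portfolio arithmetic at the bottom of this cascade is easy to sketch: a portfolio's loss under a sentiment shock is the weighted sum of the losses of its asset classes. The weights below are the four structures listed above; the shock returns are purely hypothetical numbers for illustration, not figures from the report's model.

```python
# Toy sketch (hypothetical shock numbers, not the report's results):
# a portfolio's return under a sentiment shock is the weighted average
# of per-asset-class returns.

# The four portfolio structures described in the report (weights sum to 1).
portfolios = {
    "High Fixed Income": {"fixed_income": 0.84, "equity": 0.12, "cash": 0.04},
    "Conservative":      {"fixed_income": 0.60, "equity": 0.40},
    "Balanced":          {"equity": 0.50, "fixed_income": 0.47, "commodities": 0.03},
    "Aggressive":        {"equity": 0.60, "fixed_income": 0.35, "commodities": 0.05},
}

# Purely illustrative shock returns for one hypothetical sentiment scenario.
shock = {"equity": -0.30, "fixed_income": -0.05, "commodities": -0.15, "cash": 0.0}

def portfolio_return(weights, returns):
    """Weighted-average return of the portfolio under the given shock."""
    return sum(w * returns[asset] for asset, w in weights.items())

for name, weights in portfolios.items():
    print(f"{name:>17}: {portfolio_return(weights, shock):+.1%}")
```

Even with made-up numbers, the structure explains the report's qualitative finding: the equity-heavy Aggressive portfolio is hit hardest by an equity-led shock, while the High Fixed Income portfolio is most insulated.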

The study found that shifts in climate change sentiment could cause global economic growth to slow over a 5-10 year period in both the Two Degrees and No Mitigation scenarios as a consequence of economic adjustment. In the longer term, however, economic growth picks up most quickly along a Two Degrees (low carbon) pathway, with annual growth rates of 3.5 per cent not only exceeding the baseline (2.9 per cent), but significantly exceeding the No Mitigation scenario (2.0 per cent).
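The gap between those annual growth rates is easier to appreciate when compounded over time. A quick back-of-the-envelope calculation, using only the rates quoted above and a horizon chosen purely for illustration:

```python
# Compounding the annual growth rates quoted in the study over an
# illustrative 20-year horizon: relative economy size is (1 + g)**t.

rates = {"Two Degrees": 0.035, "Baseline": 0.029, "No Mitigation": 0.020}
years = 20

for name, g in rates.items():
    print(f"{name:>13}: economy at {(1 + g) ** years:.2f}x after {years} years")
```

Over two decades the 1.5-percentage-point gap between the Two Degrees and No Mitigation pathways compounds into a markedly larger economy, which is why the report treats the long-run growth differential as significant.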

This is consistent with recent comments by the Governor of the Bank of England about the risk of “potentially huge” losses to financial markets due to climate change in the short term, and the “merit” of stress testing elements of the financial system to understand and deal with climate risks.

Urban Angehrn, Chief Investment Officer of Zurich Insurance Group and member of the Investment Leaders Group, echoed this view: “As an insurer we understand that the potential human impact and economic cost of extreme weather and climate events are vast. Multiplied by population growth, coastal migration and urbanisation, the threat seems even larger. We see it as our responsibility to help our customers and communities to build resilience against such events. As investors, the tools to help us translate that threat into investment decisions are – at present – limited. This report provides us with a meaningful basis to discuss investment strategies that tackle climate risk. It will help us go beyond the significant commitments that Zurich has already made.”

Under the Two Degrees scenario, the Aggressive portfolio suffers the largest loss in the short term, but it recovers relatively quickly and generates returns above and beyond the baseline projection levels by the end of the modelling period. In contrast, under a No Mitigation scenario, a Conservative portfolio with a 40 per cent weighting to equities (typical of a pension fund) could suffer permanent losses of more than 25 per cent within five years of the shock.

“Far from being a lost cause, investors can ‘climate proof’ their investments to a significant extent by understanding how climate change sentiment could filter through to returns,” said Scott Kelly, research associate at the Centre for Risk Studies, University of Cambridge Judge Business School, and one of the authors of the report. “However, almost half the risk is ‘unhedgeable’ in the sense that it cannot be addressed by individual investors. System-wide action is necessary to deal with this in the long-term interests of savers.”

The report offers a series of insights for investors, regulators and policy makers including:

  • Seeing climate change as a short-term financial risk as well as a long-term economic threat.
  • Recognising the value of “stress testing” investment portfolios for a wide range of sustainability risks (not only climate risks) to understand their financial impacts, and how to manage them.
  • Pinpointing areas of “unhedgeable” risk where system-wide action is required to address risks that cannot be escaped by individual investors.
  • Using capital flows to improve the resilience and carbon profile of the asset base, especially in emerging markets.
  • Identifying significant gaps in knowledge where new research is required, including of an interdisciplinary nature.

Originally published on the CISL website.




Ketchup and Traffic Jams: the Maths of Soft Matter

source: www.cam.ac.uk

The class of materials known as soft matter – which includes everything from mayonnaise to molten plastic – is the subject of the inaugural lecture by Michael Cates, Cambridge’s Lucasian Professor of Mathematics.

Having now understood what’s going on in these active systems, we hope to design better versions that can be used to create a wide range of new materials

Michael Cates

Good things come to those who wait – according to a marketing slogan for Heinz ketchup from the 1980s. But why is the ketchup so difficult to get out of the bottle? The reason is that ketchup is in two minds: whether to pour like a liquid or stay put like a solid. It is one example of soft matter – a huge class of materials which behave in complex and nonlinear ways.

We interact with soft matter every day: toothpaste, chocolate, shampoo and mayonnaise are all examples, which can behave either as liquids or solids depending on the circumstances. Soft matter can also be found in laptop screens, advanced batteries, and in the processing of functional ceramics and plastic LEDs. Cambridge researchers have developed new mathematical models to describe why these materials behave the way they do, which could help improve them for both domestic and high-tech applications.

Soft matter is the focus of the inaugural lecture by Professor Michael Cates, who was elected as the University of Cambridge’s 19th Lucasian Professor of Mathematics earlier this year. His lecture, which will be held on Wednesday 4 November, will cover how mathematical models can explain how soft materials can suddenly convert from liquid-like to solid-like behaviour, through a process resembling an internal traffic jam.

Cates’ research aims to understand better why these materials behave as they do, allowing improved control for a range of future applications, including the design of entirely new materials with tailored properties.

In his lecture, Cates will discuss the ‘jamming’ behaviour of colloids and dense suspensions. Both are types of soft matter with an internal structure something like tiny ping-pong balls dispersed in a liquid. Recently, researchers have created ‘active’ colloids in which the ping-pong balls are self-propelled, like tiny rockets. When their propulsion is switched on, these particles form tight clusters, despite the fact that there are no attractive forces between them.

“The question in this case is what causes the clustering? More generally, how does the internal structure of various types of soft matter affect the way they behave?” said Cates. After considering other explanations – including the idea that the clusters arise by a process like the flocking of birds – Cates concluded that each cluster is effectively a sort of traffic jam.

As every driver knows, a smooth distribution of moving cars becomes unstable at high density, leading to the formation of traffic jams. These can be triggered by even a single driver lightly tapping the brakes, and the new mathematical model explains the spontaneous ‘clumping’ of active colloids in very similar terms.
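The braking rule behind such jams can be illustrated with a minimal exclusion model (a standard toy for traffic flow, offered here as an illustration rather than the model in Cates' work): particles on a ring that only hop forward when the site ahead is free, so one blocked particle forces those behind it to wait.

```python
import random

# Toy 1D "traffic" model: particles on a ring try to hop forward each
# step, but a hop fails if the site ahead is occupied (the 'braking'
# rule by which a slowdown propagates backwards as a jam).

random.seed(0)

L = 100          # ring of 100 sites
N = 40           # 40 particles
occupied = set(random.sample(range(L), N))

def step(occupied):
    """One parallel update: each particle hops one site forward unless
    the target site is taken, in which case it waits."""
    new = set()
    for x in sorted(occupied):
        target = (x + 1) % L
        if target not in occupied and target not in new:
            new.add(target)
        else:
            new.add(x)  # blocked: the particle waits behind its neighbour
    return new

for _ in range(500):
    occupied = step(occupied)

print(f"{len(occupied)} particles remain on the ring")
```

Because a particle is blocked whenever its neighbour ahead has not yet moved, a single stopped particle delays the one behind it on the next step, which is the same backward-propagating mechanism the paragraph describes for cars.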

“Having now understood what’s going on in these active systems, we hope to design better versions that can be used to create a wide range of new materials,” Cates said.

Cates and his colleagues have also looked at very dense suspensions, such as paints, molten chocolate or wet sand. Previous mathematical models have assumed that the particles in a dense suspension are hard and smooth, like ball bearings.

“The approximation of hard, smooth particles – though it has served us well for 25 years – does not predict the observed behaviour in these cases,” said Cates. “So we needed to figure out what physics was missing. And we’ve found the answer: a better description of friction between the particles.”

When a dense suspension flows in response to stress, the particles have to push past each other. So long as the stress is low, they easily slide past, with little friction between them. But when stress is increased, friction between the particles also increases. This smooth change in friction can trigger another jamming transition: the suspension suddenly gets much thicker when pushed too hard.
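This stress-induced jamming can be sketched with a toy stress-dependent jamming-fraction model in the spirit of recent friction-based theories of shear thickening (the functional form and all parameter values below are illustrative assumptions, not numbers from the lecture): as stress grows, more contacts become frictional, which lowers the volume fraction at which the suspension jams.

```python
import math

# Toy sketch (illustrative parameters, not from the lecture): friction
# between particles switches on as stress grows, pulling the jamming
# volume fraction down, so viscosity rises and can diverge under load.

PHI_FRICTIONLESS = 0.64   # jamming fraction with no frictional contacts
PHI_FRICTIONAL = 0.58     # jamming fraction when contacts are frictional
SIGMA_STAR = 1.0          # stress scale at which friction activates

def jamming_fraction(stress):
    """Fraction of frictional contacts grows with stress, moving the
    jamming point from the frictionless to the frictional value."""
    f = math.exp(-SIGMA_STAR / stress) if stress > 0 else 0.0
    return f * PHI_FRICTIONAL + (1.0 - f) * PHI_FRICTIONLESS

def relative_viscosity(phi, stress):
    """Viscosity diverges as the solids fraction approaches the
    (stress-dependent) jamming fraction."""
    phi_j = jamming_fraction(stress)
    if phi >= phi_j:
        return float("inf")  # jammed: the suspension cannot flow
    return (1.0 - phi / phi_j) ** -2

phi = 0.60  # a fraction that flows at low stress but jams at high stress
for stress in (0.1, 1.0, 10.0):
    print(f"stress={stress:5.1f}: relative viscosity = {relative_viscosity(phi, stress)}")
```

The same solids fraction flows easily at low stress, thickens sharply at intermediate stress, and jams outright when pushed too hard, which is the sudden transition the paragraph describes.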

“In many dense suspensions, the aim is to maximise the amount of solids they contain without losing the ability to flow,” said Cates. “In paints, for example, this reduces both drying time and solvent vapour emissions. Now that we know how much friction matters, we can think of new ways to improve flow by reducing friction, so that we can pack more particles in. Allowing the particles to glide past each other by reducing friction is like solving the age-old problem of getting the ketchup out of the glass bottle.”

The Lucasian Professorship, established in 1663, has an exceptionally long and distinguished history. Previous holders include Isaac Newton (1669-1702) and, more recently, Paul Dirac (1932-1969), James Lighthill (1969-1979), Stephen Hawking (1979-2009) and Michael Green (2009-2013).

Professor Cates’ lecture will take place at 5pm on Wednesday 4 November at the Department of Applied Mathematics and Theoretical Physics.



Bringing Ukraine to the Screen

source: www.cam.ac.uk

Over the past eight years, the University of Cambridge has become Britain’s pre-eminent showcase for documentary and feature films from and about Ukraine.

Documentary cinema fosters an open dialogue about human rights and social justice in Ukraine and around the world.

Rory Finnin

Today and tomorrow (November 6/7), the Annual Cambridge Festival of Ukrainian Film once again offers UK audiences a unique opportunity to experience some of the best of Ukrainian cinema. Free and open to the public, the event is organised by Cambridge Ukrainian Studies, an academic centre in the Department of Slavonic Studies at Cambridge.

Since 2008 the Festival has premiered prize-winning new releases as well as provocative forgotten masterpieces; invigorated silent classics with live piano accompaniments; made world headlines with a documentary about Stalin’s man-made famine of 1932-33; and hosted contemporary Ukrainian filmmakers, film scholars, preservationists and musicians who have educated and engaged with well over a thousand attendees.

This year Cambridge Ukrainian Studies is partnering with the Docudays UA International Documentary Human Rights Film Festival to bring six powerful new documentaries to local audiences. Docudays UA was launched in Kyiv in 2003 as a non-profit organisation dedicated to the development of documentary cinema and to the flourishing of democratic civil society in Ukraine.

Many of the films in the Festival programme confront the tumult of revolution and war in today’s Ukraine with an uncommon honesty, sensitivity and maturity. They offer the viewer the perspectives of the volunteer doctor, the wounded veteran and the soldier preparing to leave home for war. Other films in the programme meditate upon the passing of generations in a Ukraine very far from today’s headlines: the village and countryside.

“We are very proud and very honoured to collaborate with Docudays UA in this year’s Cambridge Festival of Ukrainian Film,” said Dr Rory Finnin, Head of the Department of Slavonic Studies and Director of the Cambridge Ukrainian Studies programme. “We share their passion for documentary cinema and their belief in its ability to foster an open dialogue about human rights and social justice in Ukraine and around the world.”

“For the Cambridge Festival of Ukrainian Film we have chosen both full-length and short documentaries produced during the last two years,” explained Darya Bassel, Docudays Programme Coordinator. “With these screenings we hope to bring Ukraine and its documentary scene closer to international audiences and to create space for a discussion of problems relevant not only for Ukraine but for the whole world.”

Admission to the Eighth Annual Cambridge Festival of Ukrainian Film on 6-7 November 2015 is free and open to the public, but online registration is required. The screenings of Maidan Is Everywhere; The Medic Leaves Last; Living Fire; Post Maidan; This Place We Call Home; and Twilight take place in the Winstanley Theatre of Trinity College, Cambridge. Wine receptions follow both the November 6 and 7 screenings.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Graphene Means Business – Two-Dimensional Material Moves From the Lab to the UK Factory Floor

source: www.cam.ac.uk

A major showcase of companies developing new technologies from graphene and other two-dimensional materials took place this week at the Cambridge Graphene Centre.

Cambridge is very well-placed in the network of UK, European and global initiatives targeting the development of new products and devices based on graphene and related materials

Andrea Ferrari

More than 40 companies, mostly from the UK, are in Cambridge this week to demonstrate some of the new products being developed from graphene and other two-dimensional materials.

Graphene is a two-dimensional material made up of sheets of carbon atoms. With its combination of exceptional electrical, mechanical and thermal properties, graphene has the potential to revolutionise industries ranging from healthcare to electronics.

On Thursday, the Cambridge Graphene Technology Day – an exhibition of graphene-based technologies organised by the Cambridge Graphene Centre, together with its partner companies – took place, showcasing new products based on graphene and related two-dimensional materials.

Products and prototypes on display included flexible displays, printed electronics and graphene-based heaters, all of which have potential for consumer applications. Other examples included concrete and road surfacing incorporating graphene, which would mean lighter and stronger infrastructure, and roads that need to be resurfaced far less often, greatly lowering the costs to local governments.

“At the Cambridge Graphene Technology Day we saw several real examples of graphene making its way from the lab to the factory floor – creating jobs and growth for Cambridge and the UK,” said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre and of the EPSRC Centre for Doctoral Training in Graphene Technology. “Cambridge is very well-placed in the network of UK, European and global initiatives targeting the development of new products and devices based on graphene and related materials.”

Cambridge has a long history of research on carbon-based materials and their applications, stretching from the identification of the graphite structure in 1924 through diamond, diamond-like carbon, conducting polymers and carbon nanotubes, with a proven track record of taking carbon research from the lab to the factory floor.

Cambridge is also one of the leading centres in graphene technology. Dr Krzysztof Koziol from the Department of Materials Science & Metallurgy sits on the management board of the EPSRC Centre for Doctoral Training in Graphene Technology. He is developing hybrid electrical wires made from copper and graphene to increase the amount of electric current they can carry, as well as functional graphene heaters, anti-corrosion coatings, and graphene inks which can be used to draw printed circuit boards directly onto paper and other surfaces.

Koziol has established a spin-out company, Cambridge Nanosystems, which produces graphene in high volumes for industrial applications. The company, co-founded by recent Cambridge graduate Catharina Paulkner, has recently established a partnership with a major auto manufacturer to start developing graphene-based applications for cars.

Other researchers affiliated with the Cambridge Graphene Centre include Professor Clare Grey of the Department of Chemistry, who is part of the Cambridge Graphene Centre Management Board. She is incorporating graphene and related materials into next-generation batteries and has recently demonstrated a breakthrough in lithium-air batteries by exploiting graphene. Professor Mete Atature from the Department of Physics is one of the supervisors of the Centre for Doctoral Training in Graphene Technology. He uses two-dimensional materials for research in quantum optics, including the possibility of a computer network based on quantum mechanics, which would be far more secure and more powerful than classical computers.

“The Cambridge Graphene Centre is a great addition to the Cambridge technology and academic cluster,” said Chuck Milligan, CEO of FlexEnable, which is developing technology for flexible displays and other electronic components. “We are proud to be a partner of the Centre and support its activities. Graphene and other two-dimensional materials are very relevant to flexible electronics for displays and sensors, and we are passionate about taking technology from labs to the factory floor. Our unique manufacturing processes for flexible electronics, together with the exponential growth expected in the flexible display and Internet of Things sensor markets, provide enormous opportunity for this exciting class of materials. It is for this reason that today we placed in the Cambridge Graphene Centre laboratories a semi-automatic, large-area EVG spray coater. This valuable tool, donated to the University, will be a good match between the research area of solution-processable graphene and FlexEnable’s long-term technological vision.”

FlexEnable is supporting efforts to scale the graphene technology for use in tomorrow’s factories. The company has donated a large area deposition machine to the University, which is used for depositing large amounts of graphene onto various substrates.

“The University is at the heart of the largest, most vibrant technology cluster in Europe,” said Professor Sir Leszek Borysiewicz, the University’s Vice-Chancellor. “Our many partnerships with industry support the continued economic success of the region and the UK more broadly, and the Cambridge Graphene Centre is an important part of that – working with industry to bring these promising materials to market.”

Professor David Cardwell, Head of the Cambridge Engineering Department, indicated the planned development in Cambridge of a scale-up centre, where research will be nurtured towards higher technology readiness levels in collaboration with UK industry. “The Cambridge Graphene Centre is a direct and obvious link to this scale-up initiative, which will offer even more exciting opportunities for industry-university collaborations,” he said.

Among the many local companies with an interest in graphene technologies are FlexEnable, the R&D arm of global telecommunications firm Nokia, printed electronics pioneer Novalia, Cambridge Nanosystems, Cambridge Graphene, and Aixtron, which specialises in the large-scale production of graphene powders, inks and films for a variety of applications.

Underpinning this commercial R&D effort in Cambridge and the East of England is public and private investment in the Cambridge Graphene Centre via the Graphene Flagship, part funded by the European Union. The flagship is a pan-European consortium, with a fast-growing number of industrial partners and associate members.




First Evidence of ‘Ghost Particles’

source: www.cam.ac.uk

A major international collaboration has seen its first neutrinos – so-called ‘ghost particles’ – in the experiment’s newly built detector.

This is an important step towards the much larger Deep Underground Neutrino Experiment (DUNE)

Mark Thomson

An international team of scientists at the MicroBooNE physics experiment in the US, including researchers from the University of Cambridge, has detected its first neutrino candidates, also known as ‘ghost particles’. The detection represents a milestone for the project, which involved years of hard work and a 40-foot-long particle detector filled with 170 tons of liquid argon.

Neutrinos are subatomic, almost weightless particles that interact only via gravity and the weak nuclear force. Because they don’t interact with light, they can’t be seen. Neutrinos carry no electric charge and travel through the universe almost entirely unaffected by natural forces. They are considered a fundamental building block of matter. The 2015 Nobel Prize in Physics was awarded for the discovery of neutrino oscillations, a phenomenon that is of great importance to the field of elementary particle physics.

“It’s nine years since we proposed, designed, built, assembled and commissioned this experiment,” said Bonnie Fleming, MicroBooNE co-spokesperson and a professor of physics at Yale University. “That kind of investment makes seeing first neutrinos incredible.”

Following a 13-week shutdown for maintenance, Fermilab’s accelerator complex near Chicago on Thursday delivered a proton beam, which is used to make the neutrinos, to the laboratory’s experiments. After the beam was turned on, scientists analysed the data recorded by MicroBooNE’s particle detector to find evidence of its first neutrino interactions.

Scientists at the University of Cambridge have been working on advanced image reconstruction techniques that contributed to the ability to identify the rare neutrino interactions in the MicroBooNE data.

The MicroBooNE experiment aims to study how neutrinos interact and change within a distance of 500 meters. The detector will help scientists reconstruct the results of neutrino collisions as finely detailed, three-dimensional images. MicroBooNE findings also will be relevant for the forthcoming Deep Underground Neutrino Experiment (DUNE), which will examine neutrino transitions over longer distances.

“Future neutrino experiments will use this technology,” said Sam Zeller, Fermilab physicist and MicroBooNE co-spokesperson. “We’re learning a lot from this detector. It’s important not just for us, but for the whole physics community.”

“This is an important step towards the much larger Deep Underground Neutrino Experiment (DUNE)”, said Professor Mark Thomson of Cambridge’s Cavendish Laboratory, co-spokesperson of the DUNE collaboration and member of MicroBooNE. “It is the first time that fully automated pattern recognition software has been used to identify neutrino interactions from the complex images in a detector such as MicroBooNE and the proposed DUNE detector.”

Adapted from a Fermilab press release.



Cambridge Chemists Make Breakthrough With “Ultimate” Battery Which Can Power a Car From London to Edinburgh

Cambridge chemists make breakthrough with “ultimate” battery which can power a car from London to Edinburgh

source: http://www.independent.co.uk/

Scientists have made a breakthrough at Cambridge University by solving issues related to a battery that, in theory, could enable a car to drive from London to Edinburgh on a single charge.

A research paper published in the journal Science details how the team at Cambridge University overcame obstacles in the development of lithium-air batteries. These batteries, touted as the “ultimate battery”, could in theory store ten times more energy than lithium-ion batteries.

But until now, unwanted chemical reactions and problems with efficiency associated with lithium-air batteries have plagued efforts by scientists to develop them.

The researchers at Cambridge claim to have solved a number of these issues; if the team’s laboratory experiment can be turned into a commercial product, it would enable a car to drive from London to Edinburgh on a single charge.

[Image: A driver demonstrates a miniature electric car, in 1985]

Professor Clare Grey, one of the paper’s senior authors, said: “What we’ve achieved is a significant advance for this technology and suggests whole new areas for research – we haven’t solved all the problems inherent to this chemistry, but our results do show routes forward towards a practical device.”

But the report’s authors warn that a practical lithium-air battery remains at least a decade away: several practical challenges need to be addressed before the batteries become a viable alternative to gasoline.

Prof Grey added: “While there are still plenty of fundamental studies that remain to be done, to iron out some of the mechanistic details, the current results are extremely exciting – we are still very much at the development stage, but we’ve shown that there are solutions to some of the tough problems associated with this technology.”

Breaking the Mould: Untangling the Jelly-Like Properties of Diseased Proteins

Breaking the mould: Untangling the jelly-like properties of diseased proteins

source: www.cam.ac.uk

Scientists at the University of Cambridge have identified a new property of essential proteins which, when it malfunctions, can cause the build up, or ‘aggregation’, of misshaped proteins and lead to serious diseases.

Our approach shows the importance of considering the mechanisms of diseases as not just biological, but also physical processes

Peter St George-Hyslop

A common characteristic of neurodegenerative diseases – such as Alzheimer’s, Parkinson’s and Huntington’s disease – is the build-up of ‘misfolded’ proteins, which cause irreversible damage to the brain. For example, Alzheimer’s disease sees the build-up of beta-amyloid ‘plaques’ and tau ‘tangles’.

In the case of some forms of motor neurone disease (also known as amyotrophic lateral sclerosis, or ALS) and frontotemporal dementia, it is the build-up of ‘assemblies’ of misshapen FUS protein and several other RNA-binding proteins that is associated with disease. However, the assembly of these RNA-binding proteins differs in several ways from the conventional protein aggregates seen in Alzheimer’s and Parkinson’s disease, and as a result the significance of the build-up of these proteins, and how it occurs, has until now been unclear.

FUS is an RNA-binding protein, which has a number of important functions in regulating RNA transcription (the first step in DNA expression) and splicing in the nucleus of cells. FUS also has functions in the cytoplasm of cells involved in regulating the translation of RNA into proteins. There are several other similar RNA binding proteins: a common feature of all of them is that in addition to having domains to bind RNA they also have domains where the protein appears to be unfolded or unstructured.

In a study published today in the journal Neuron, scientists at the University of Cambridge examined FUS’s physical properties to demonstrate how the protein’s unfolded domain enables it to undergo reversible ‘phase transitions’. In other words, it can change back and forth from a fully soluble ‘monomer’ form into distinct localised accumulations that resemble liquid droplets and then further condense into jelly-like structures that are known as hydrogels. During these changes, the protein ‘assemblies’ capture and release RNA and other proteins. In essence this process allows cellular machinery for RNA transcription and translation to be condensed in high concentrations within restricted three-dimensional space without requiring a limiting membrane, thereby helping to easily regulate these vital cellular processes.

Using the nematode worm C. elegans as a model of ALS and frontotemporal dementia, the team was then able to also show that this process can become irreversible. Mutated FUS proteins cause the condensation process to go too far, forming thick gels that are unable to return to their soluble state. As a result, these irreversible gel-like assemblies trap other important proteins, preventing them carrying out their usual functions. One consequence is that it affects the synthesis of new proteins in nerve cell axons (the trunk of a nerve cell).

Importantly, the researchers also showed that by disrupting the formation of these irreversible assemblies (for example, by targeting with particular small molecules), it is possible to rescue the impaired motility and prolong the worm’s lifespan.

Like jelly on a plate

The behaviour of FUS can be likened to that of a jelly, explains Professor Peter St George-Hyslop from the Cambridge Institute for Medical Research.

When first made, jelly is runny, like a liquid. As it cools in the fridge, it begins to set, initially becoming slightly thicker than water but still runny, as the gelatin molecules form into longer, fibre-like chains known as fibrils. If you dropped a droplet of this nearly-set jelly into water, it would (at least briefly) remain distinct from the surrounding water – a ‘liquid droplet’ within a liquid.

As the jelly cools further in the fridge, the gelatin fibres condense more, and it eventually becomes a firmly set jelly that can be flipped out of the mould onto a plate. This set jelly is a ‘hydrogel’, a loose meshwork of protein (gelatin) fibrils that is dense enough to hold the water inside the spaces between its fibres. The set jelly holds the water in a constrained 3D space – and depending on the recipe, there may be some other ‘cargo’ suspended within the jelly, such as bits of fruit (in the case of FUS this ‘cargo’ might be ribosomes, other proteins, enzymes or RNA, for example).

When the jelly is stored in a cool room, the fruit is retained in the jelly. This means the fruit (or ribosomes, etc) can be moved around the house and eventually put on the dinner table (or in the case of FUS, be transported to parts of a cell with unique protein synthesis requirements).

If the jelly is re-warmed, it melts and releases its fruit, which then float off‎. But if the liquid molten jelly is put back in the fridge and re-cooled, it re-makes a firm hydrogel again, and the fruit is once again trapped. In theory, this cycle of gel-melt-gel-melt can be repeated endlessly.

However, if the jelly is left out, the water will slowly evaporate and the jelly condenses down, changing from a soft, easily-melted jelly to a thick, rubbery one. (In fact, jelly is often sold as a dense cube like this.) In this condensed jelly, the meshwork of protein fibrils is much closer together, and it becomes increasingly difficult to get the condensed jelly to melt (you would have to pour boiling water on it). Because the condensed jelly is not easily meltable in this state, any cargo (fruit, ribosomes, etc.) within it essentially becomes irreversibly trapped.

In the case of FUS and other RNA binding proteins, the ‘healthy’ proteins only very rarely spontaneously over-condense. However, disease-causing mutations make these proteins much more prone to spontaneously ‎condense down into thick fibrous gels, trapping their cargo (in this case the ribosomes, etc), which then become unavailable for use.

So essentially, this new research shows that the ability of some proteins to self-assemble into liquid droplets and (slightly more viscous) jellies/hydrogel is a useful property that allows cells to transiently concentrate cellular machinery into a constrained 3D space in order to perform key tasks, and then disassemble and disperse the machinery when not needed. It is probably faster and less energy-costly than doing the same thing inside intracellular membrane-bound vesicles – but that same property can go too far, leading to disease.

Professor St George-Hyslop says: “We’ve shown that a particular group of proteins can regulate vital cellular processes by their distinct ability to transition between different states. But this essential property also makes them vulnerable to forming more fixed structures if mutated, disrupting their normal function and causing disease.

“The same principles are likely to be at play in other more common forms of these diseases due to mutation in other related binding proteins. Understanding what is in these assemblies should provide further targets for disease treatments.

“Our approach shows the importance of considering the mechanisms of diseases as not just biological, but also physical processes. By bringing together people from the biological and physical sciences, we’ve been able to better understand how misshapen proteins build up and cause disease.”

The research was funded in the UK by the Wellcome Trust, the Medical Research Council and the National Institute for Health Research; in Canada by the Canadian Institutes of Health Research; and in the US by the National Institutes of Health.

Reference
Murakami, T et al. ALS/FTD mutation-induced phase transition of FUS liquid droplets and reversible hydrogels into irreversible hydrogels impairs RNP granule function. Neuron; 29 Oct 2015



New Design Points a Path To The ‘Ultimate’ Battery

New design points a path to the ‘ultimate’ battery

source: www.cam.ac.uk

Researchers have successfully demonstrated how several of the problems impeding the practical development of the so-called ‘ultimate’ battery could be overcome.

What we’ve achieved is a significant advance for this technology and suggests whole new areas for research

Clare Grey

Scientists have developed a working laboratory demonstrator of a lithium-oxygen battery which has very high energy density, is more than 90% efficient, and, to date, can be recharged more than 2000 times, showing how several of the problems holding back the development of these devices could be solved.

Lithium-oxygen, or lithium-air, batteries have been touted as the ‘ultimate’ battery due to their theoretical energy density, which is ten times that of a lithium-ion battery. Such a high energy density would be comparable to that of gasoline – and would enable an electric car with a battery that is a fifth the cost and a fifth the weight of those currently on the market to drive from London to Edinburgh on a single charge.

However, as is the case with other next-generation batteries, there are several practical challenges that need to be addressed before lithium-air batteries become a viable alternative to gasoline.

Now, researchers from the University of Cambridge have demonstrated how some of these obstacles may be overcome, and developed a lab-based demonstrator of a lithium-oxygen battery which has higher capacity, increased energy efficiency and improved stability over previous attempts.

Their demonstrator relies on a highly porous, ‘fluffy’ carbon electrode made from graphene (comprising one-atom-thick sheets of carbon atoms), and additives that alter the chemical reactions at work in the battery, making it more stable and more efficient. While the results, reported in the journal Science, are promising, the researchers caution that a practical lithium-air battery still remains at least a decade away.

“What we’ve achieved is a significant advance for this technology and suggests whole new areas for research – we haven’t solved all the problems inherent to this chemistry, but our results do show routes forward towards a practical device,” said Professor Clare Grey of Cambridge’s Department of Chemistry, the paper’s senior author.

Many of the technologies we use every day have been getting smaller, faster and cheaper each year – with the notable exception of batteries. Apart from the possibility of a smartphone which lasts for days without needing to be charged, the challenges associated with making a better battery are holding back the widespread adoption of two major clean technologies: electric cars and grid-scale storage for solar power.

“In their simplest form, batteries are made of three components: a positive electrode, a negative electrode and an electrolyte,” said Dr Tao Liu, also from the Department of Chemistry and the paper’s first author.

In the lithium-ion (Li-ion) batteries we use in our laptops and smartphones, the negative electrode is made of graphite (a form of carbon), the positive electrode is made of a metal oxide, such as lithium cobalt oxide, and the electrolyte is a lithium salt dissolved in an organic solvent. The action of the battery depends on the movement of lithium ions between the electrodes. Li-ion batteries are light, but their capacity deteriorates with age, and their relatively low energy densities mean that they need to be recharged frequently.

Over the past decade, researchers have been developing various alternatives to Li-ion batteries, and lithium-air batteries are considered the ultimate in next-generation energy storage because of their extremely high energy density. However, previous attempts at working demonstrators have suffered from low efficiency, poor rate performance and unwanted chemical reactions, and could only be cycled in pure oxygen.

What Liu, Grey and their colleagues have developed uses a very different chemistry than earlier attempts at a non-aqueous lithium-air battery, relying on lithium hydroxide (LiOH) instead of lithium peroxide (Li2O2). With the addition of water and the use of lithium iodide as a ‘mediator’, their battery showed far less of the chemical reactions which can cause cells to die, making it far more stable after multiple charge and discharge cycles.

By precisely engineering the structure of the electrode, changing it to a highly porous form of graphene, adding lithium iodide, and changing the chemical makeup of the electrolyte, the researchers were able to reduce the ‘voltage gap’ between charge and discharge to 0.2 volts. A small voltage gap equals a more efficient battery – previous versions of a lithium-air battery have only managed to get the gap down to 0.5 – 1.0 volts, whereas 0.2 volts is closer to that of a Li-ion battery, and equates to an energy efficiency of 93%.
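The arithmetic linking the voltage gap to efficiency can be sketched as the ratio of discharge voltage to charge voltage. The ~2.8 V discharge plateau assumed below is purely illustrative and not a figure taken from the paper:

```python
# Illustrative sketch: round-trip voltaic efficiency of a cell,
# estimated as discharge voltage / charge voltage.
# The 2.8 V discharge plateau is an assumed value for illustration.

def voltaic_efficiency(discharge_v: float, voltage_gap: float) -> float:
    """Charge voltage = discharge voltage + gap, so a smaller
    gap means a more efficient battery."""
    return discharge_v / (discharge_v + voltage_gap)

# A 0.2 V gap on an assumed ~2.8 V plateau gives roughly the 93%
# efficiency quoted in the article; a 0.5-1.0 V gap gives far less.
print(round(voltaic_efficiency(2.8, 0.2) * 100))  # -> 93
print(round(voltaic_efficiency(2.8, 0.5) * 100))  # -> 85
```

This is only a back-of-the-envelope check on why a smaller voltage gap translates into a more efficient battery, not the analysis used in the study.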

The highly porous graphene electrode also greatly increases the capacity of the demonstrator, although only at certain rates of charge and discharge. Other issues that still have to be addressed include finding a way to protect the metal electrode so that it doesn’t form spindly lithium metal fibres known as dendrites, which can cause batteries to explode if they grow too much and short-circuit the battery.

Additionally, the demonstrator can only be cycled in pure oxygen, while the air around us also contains carbon dioxide, nitrogen and moisture, all of which are generally harmful to the metal electrode.

“There’s still a lot of work to do,” said Liu. “But what we’ve seen here suggests that there are ways to solve these problems – maybe we’ve just got to look at things a little differently.”

“While there are still plenty of fundamental studies that remain to be done, to iron out some of the mechanistic details, the current results are extremely exciting – we are still very much at the development stage, but we’ve shown that there are solutions to some of the tough problems associated with this technology,” said Grey.

The authors acknowledge support from the US Department of Energy, the Engineering and Physical Sciences Research Council (EPSRC), Johnson Matthey and the European Union via Marie Curie Actions and the Graphene Flagship. The technology has been patented and is being commercialised through Cambridge Enterprise, the University’s commercialisation arm.

Reference:
Liu, T et. al. ‘Cycling Li-O2 Batteries via LiOH Formation and Decomposition.’ Science (2015). DOI: 10.1126/science.aac7730



Entanglement at Heart of ‘Two-For-One’ Fission in Next-Generation Solar Cells

Entanglement at heart of ‘two-for-one’ fission in next-generation solar cells

source: www.cam.ac.uk

The mechanism behind a process known as singlet fission, which could drive the development of highly efficient solar cells, has been directly observed by researchers for the first time.

Harnessing the process of singlet fission into new solar cell technologies could allow tremendous increases in energy conversion efficiencies in solar cells

Alex Chin

An international team of scientists have observed how a mysterious quantum phenomenon in organic molecules takes place in real time, which could aid in the development of highly efficient solar cells.

The researchers, led by the University of Cambridge, used ultrafast laser pulses to observe how a single particle of light, or photon, can be converted into two energetically excited particles, known as spin-triplet excitons, through a process called singlet fission. If singlet fission can be controlled, it could enable solar cells to double the amount of electrical current that can be extracted.

In conventional semiconductors such as silicon, when one photon is absorbed it leads to the formation of one free electron that can be harvested as electrical current. However, certain materials undergo singlet fission instead, where the absorption of a photon leads to the formation of two spin-triplet excitons.

Working with researchers from the Netherlands, Germany and Sweden, the Cambridge team confirmed that this ‘two-for-one’ transformation involves an elusive intermediate state in which the two triplet excitons are ‘entangled’, a feature of quantum theory that causes the properties of each exciton to be intrinsically linked to that of its partner.

By shining ultrafast laser pulses – just a few quadrillionths of a second long – on a sample of pentacene, an organic material which undergoes singlet fission, the researchers were able to directly observe this entangled state for the first time, and showed how molecular vibrations both make it detectable and drive its creation through quantum dynamics. The results are reported today (26 October) in the journal Nature Chemistry.

“Harnessing the process of singlet fission into new solar cell technologies could allow tremendous increases in energy conversion efficiencies in solar cells,” said Dr Alex Chin from the University’s Cavendish Laboratory, one of the study’s co-authors. “But before we can do that, we need to understand how exciton fission happens at the microscopic level. This is the basic requirement for controlling this fascinating process.”

The key challenge for observing real-time singlet fission is that the entangled spin-triplet excitons are essentially ‘dark’ to almost all optical probes, meaning they cannot be directly created or destroyed by light. In materials like pentacene, the first stage – which can be seen – is the absorption of light that creates a single, high-energy exciton, known as a spin singlet exciton. The subsequent fission of the singlet exciton into two less energetic triplet excitons gives the process its name, but the ability to see what is going on vanishes as the process takes place.

To get around this, the team employed a powerful technique known as two-dimensional spectroscopy, which involves hitting the material with a co-ordinated sequence of ultrashort laser pulses and then measuring the light emitted by the excited sample. By varying the time between the pulses in the sequence, it is possible to follow in real time how energy absorbed by previous pulses is transferred and transformed into different states.

Using this approach, the team found that when the pentacene molecules were vibrated by the laser pulses, certain changes in the molecular shapes cause the triplet pair to become briefly light-absorbing, and therefore detectable by later pulses. By carefully filtering out all but these frequencies, a weak but unmistakable signal from the triplet pair state became apparent.

The authors then developed a model which showed that when the molecules are vibrating, they possess new quantum states that simultaneously have the properties of both the light-absorbing singlet exciton and the dark triplet pairs. These quantum ‘superpositions’, which are the basis of Schrödinger’s famous thought experiment in which a cat is – according to quantum theory – in a state of being both alive and dead at the same time, not only make the triplet pairs visible, but also allow fission to occur directly from the moment light is absorbed.

“This work shows that optimised fission in real materials requires us to consider more than just the static arrangements and energies of molecules; their motion and quantum dynamics are just as important,” said Dr Akshay Rao, from the University’s Cavendish Laboratory. “It is a crucial step towards opening up new routes to highly efficient solar cells.”

The research was supported by the European LaserLab Consortium, Royal Society, and the Netherlands Organization for Scientific Research. The work at Cambridge forms part of a broader initiative to harness high tech knowledge in the physical sciences to tackle global challenges such as climate change and renewable energy. This initiative is backed by the UK Engineering and Physical Sciences Research Council (EPSRC) and the Winton Programme for the Physics of Sustainability.

Reference:
Bakulin, Artem et. al. ‘Real-time observation of multiexcitonic states in ultrafast singlet fission using coherent 2D electronic spectroscopy.’ Nature Chemistry (2015). DOI: 10.1038/nchem.2371



 

Social Yeast Cells Prefer to Work With Close Relatives to Make Our Beer, Bread & Wine

Social yeast cells prefer to work with close relatives to make our beer, bread & wine

source: www.cam.ac.uk

Baker’s yeast cells living together in communities help feed each other, but leave incomers from the same species to die from starvation, according to new research from the University of Cambridge.

The cell-cell cooperation we uncovered plays a significant role in allowing yeast to help us to produce our food, beer and wine

Kate Campbell

The findings, published today in the open access journal eLife, could lead to new biotechnological production systems based on metabolic cooperation. They could also be used to inhibit cell growth by blocking the exchange of metabolites between cells. This could be a new strategy to combat fungal pathogens or tumour cells.

“The cell-cell cooperation we uncovered plays a significant role in allowing yeast to help us to produce our food, beer and wine,” says first author Kate Campbell.

“It may also be crucial for all eukaryotic life, including animals, plants and fungi.”

Yeast metabolism has been exploited for thousands of years by mankind for brewing and baking. Yeast metabolize sugar and secrete a wide array of small molecules during their life cycle, from alcohols and carbon dioxide to antioxidants and amino acids. Although much research has shown yeast to be a robust metabolic workhorse, only more recently has it become clear that these single-celled organisms assemble in communities, in which individual cells may play a specialised function.

For the new study funded by the Wellcome Trust and European Research Council, researchers at the University of Cambridge and the Francis Crick Institute found cells to be highly efficient at exchanging some of their essential building blocks (amino acids and nucleobases, such as the A, T, G and C constituents of DNA) in what they call metabolic cooperation. However, they do not do so with every kind of yeast cell: they share nutrients with cells descendant from the same ancestor, but not with other cells from the same species when they originate from another community.

Using a synthetic biology approach, the team led by Dr Markus Ralser at the Department of Biochemistry started with a metabolically competent yeast mother cell, genetically manipulated so that its daughters progressively lose essential metabolic genes. They used it to grow a heterogeneous, multi-generation population of yeast in which individual cells are deficient for various nutrients.

Campbell then tested whether cells lacking a metabolic gene can survive by sharing nutrients with their family members. When living within their community setting, these cells could continue to grow and survive. This meant that cells were being kept alive by neighbouring cells, which still had their metabolic activity intact, providing them with a much needed nutrient supply. Eventually, the colony established a composition where the majority of cells did help each other out. When cells of the same species but derived from another community were introduced, social interactions did not establish and the foreign cells died from starvation.

When the successful community was compared to other yeast strains which had no metabolic deficiencies, the researchers found no pronounced differences in how the two communities grew and produced biomass. This implies that sharing was so efficient that any disadvantage was cancelled out.

The implications of these results may therefore be substantial for industries in which yeast are used to produce biomolecules of interest. This includes biofuels, vaccines and food supplements. The research might also help to develop therapeutic strategies against pathogenic fungi, such as the yeast Candida albicans, which form cooperative communities to overcome our immune system.

Reference

Kate Campbell, Jakob Vowinckel, Michael Muelleder, Silke Malmsheimer, Nicola Lawrence, Enrica Calvani, Leonor Miller-Fleming, Mohammad T. Alam, Stefan Christen, Markus A. Keller, and Markus Ralser

Self-establishing communities enable cooperative metabolite exchange in a eukaryote eLife 2015, http://dx.doi.org/10.7554/eLife.09943



 

Plague in Humans ‘Twice as Old’ But Didn’t Begin as Flea-Borne, Ancient DNA Reveals

Plague in humans ‘twice as old’ but didn’t begin as flea-borne, ancient DNA reveals

source: www.cam.ac.uk

New research dates plague back to the early Bronze Age, showing it had been endemic in humans across Eurasia for millennia prior to the first recorded global outbreak, and that ancestral plague mutated into its bubonic, flea-borne form between the 2nd and 1st millennium BC.

These results show that ancient DNA has the potential not only to map our history and prehistory, but also to discover how disease may have shaped it

Eske Willerslev

New research using ancient DNA has revealed that plague has been endemic in human populations for more than twice as long as previously thought, and that the ancestral plague would have been predominantly spread by human-to-human contact – until genetic mutations allowed Yersinia pestis (Y. pestis), the bacteria that causes plague, to survive in the gut of fleas.

These mutations, which may have occurred near the turn of the 1st millennium BC, gave rise to the bubonic form of plague that spreads at terrifying speed through flea – and consequently rat – carriers. The bubonic plague caused the pandemics that decimated global populations, including the Black Death, which wiped out half the population of Europe in the 14th century.

Before its flea-borne evolution, however, researchers say that plague was in fact endemic in the human populations of Eurasia at least 3,000 years before the first plague pandemic in historical records (the Plague of Justinian in 541 AD).

They say the new evidence that Y. pestis bacterial infection in humans actually emerged around the beginning of the Bronze Age suggests that plague may have been responsible for major population declines believed to have occurred in the late 4th and early 3rd millennium BC.

The work was conducted by an international team including researchers from the universities of Copenhagen, Denmark, and Cambridge, UK, and the findings are published today in the journal Cell.

“We found that the Y. pestis lineage originated and was widespread much earlier than previously thought, and we narrowed the time window as to when and how it developed,” said senior author Professor Eske Willerslev, who recently joined Cambridge University’s Department of Zoology from the University of Copenhagen.

“The underlying mechanisms that facilitated the evolution of Y. pestis are present even today. Learning from the past may help us understand how future pathogens may arise and evolve,” he said.

Researchers analysed ancient genomes extracted from the teeth of 101 adults dating from the Bronze Age and found across the Eurasian landmass.

They found Y. pestis bacteria in the DNA of seven of the adults, the oldest of whom died 5,783 years ago – the earliest evidence of plague. Previously, direct molecular evidence for Y. pestis had not been obtained from skeletal material older than 1,500 years.

However, six of the seven plague samples were missing two key genetic components found in most modern strains of plague: a “virulence gene” called ymt, and a mutation in an “activator gene” called pla.

The ymt gene protects the bacteria from being destroyed by the toxins in flea guts, so that it multiplies, choking the flea’s digestive tract. This causes the starving flea to frantically bite anything it can, and, in doing so, spread the plague.

The mutation in the pla gene allows Y. pestis bacteria to spread across different tissues, turning the localised lung infection of pneumonic plague into one of the blood and lymph nodes.

Researchers concluded these early strains of plague could not have been carried by fleas without ymt. Nor could they cause bubonic plague – which affects the lymphatic immune system, and inflicts the infamous swollen buboes of the Black Death – without the pla mutation.

Consequently, the plague that stalked populations for much of the Bronze Age must have been pneumonic, which directly affects the respiratory system and causes desperate, hacking coughing fits just before death. Breathing around infected people leads to inhalation of the bacteria, the crux of its human-to-human transmission.

Study co-author Dr Marta Mirazón-Lahr, from Cambridge’s Leverhulme Centre for Human Evolutionary Studies (LCHES), points out that a study earlier this year from Willerslev’s Copenhagen group showed the Bronze Age to be a highly active migratory period, which could have led to the spread of pneumonic plague.

“The Bronze Age was a period of major metal weapon production and, it is thought, increased warfare, which is compatible with emerging evidence of large population movements at the time. If pneumonic plague was carried as part of these migrations, it would have had devastating effects on the small groups they encountered,” she said.

“Well-documented cases have shown the pneumonic plague’s chain of infection can go from a single hunter or herder to ravaging an entire community in two to three days.”

The most recent of the seven ancient genomes to reveal Y. pestis in the new study has both of the key genetic mutations, indicating an approximate timeline for the evolution that spawned flea-borne bubonic plague.

“Among our samples, the mutated plague strain is first observed in Armenia in 951 BC, yet is absent in the next most recent sample from 1686 BC – suggesting bubonic strains evolved and became fixed in the late 2nd and very early 1st millennium BC,” said Mirazón-Lahr.

“However, the 1686 BC sample is from the Altai mountains near Mongolia. Given the distance between Armenia and Altai, it’s also possible that the Armenian strain of bubonic plague has a longer history in the Middle East, and that historical movements during the 1st millennium BC exported it elsewhere.”

The Books of Samuel in the Bible describe an outbreak of plague among the Philistines in 1320 BC, complete with swellings in the groin, which the World Health Organization has argued fits the description of bubonic plague. Mirazón-Lahr suggests this may support the idea of a Middle Eastern origin for the plague’s highly lethal genetic evolution.

Co-author Professor Robert Foley, also from Cambridge’s LCHES, suggests that the lethality of bubonic plague may have required the right population demography before it could thrive.

“Every pathogen has a balance to maintain. If it kills a host before it can spread, it too reaches a ‘dead end’. Highly lethal diseases require certain demographic intensity to sustain them.

“The endemic nature of pneumonic plague was perhaps more adapted for an earlier Bronze Age population. Then, as Eurasian societies grew in complexity and trading routes continued to open up, maybe the conditions started to favour the more lethal form of plague,” Foley said.

“The Bronze Age is the edge of history, and ancient DNA is making what happened at this critical time more visible,” he said.

Willerslev added: “These results show that the ancient DNA has the potential not only to map our history and prehistory, but also discover how disease may have shaped it.”

Inset image: Map showing where the remains of the Bronze Age plague victims were found.

Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/plague-in-humans-twice-as-old-but-didnt-begin-as-flea-borne-ancient-dna-reveals

New Microscopic Imaging Technology Reveals Origins of Leukaemia

New microscopic imaging technology reveals origins of leukaemia

source: www.cam.ac.uk

Scientists at the Cambridge Institute for Medical Research at the University of Cambridge and the Medical Research Council Laboratory of Molecular Biology have taken advantage of revolutionary developments in microscopic imaging to reveal the origins of leukaemia.

Many forms of blood cancer can be traced back to defects in the basic housekeeping processes in our cells’ maturation

Alan Warren

The researchers studied tiny protein-producing factories, called ribosomes, isolated from cells. They capitalised on improvements made at the LMB to a high-powered imaging technique known as single particle cryo-electron microscopy.

The microscopes, capable of achieving detail near to the atomic level, enabled the team to link the molecular origins of a rare inherited leukaemia predisposition disorder, ‘Shwachman-Diamond Syndrome’, and a more common form of acute leukaemia to a common pathway involved in the construction of ribosomes.

Cryo-EM map showing the large ribosomal subunit (cyan), eIF6 (yellow) and the SBDS protein (magenta) that is deficient in the inherited leukaemia predisposition disorder Shwachman-Diamond syndrome. Credit: Alan Warren, University of Cambridge

The research, funded by the blood cancer charity Bloodwise and the Medical Research Council (MRC), is published online in the journal Nature Structural and Molecular Biology.

Ribosomes are the molecular machinery in cells that produce proteins by ‘translating’ the instructions contained in DNA via an intermediary messenger molecule. Errors in this process are known to play a part in the development of some bone marrow disorders and leukaemias. Until now scientists have been unable to study ribosomes at a high enough resolution to understand exactly what goes wrong.

Ribosomes are constructed in a series of discrete steps, like an assembly line. One of the final assembly steps involves the release of a key building block that allows the ribosome to become fully functional. The research team showed that a corrupted mechanism underlying this fundamental late step prevents proper assembly of the ribosome.

This provides an explanation for how cellular processes go awry in both Shwachman-Diamond syndrome and one in 10 cases of T-cell acute lymphoblastic leukaemia. This form of leukaemia, which affects around 60 children and young teenagers a year in the UK, is harder to treat than the more common B-cell form.

The findings from the Cambridge scientists, who worked in collaboration with scientists at the University of Rennes in France, open up the possibility that a single drug designed to target this molecular fault could be developed to treat both diseases.

Professor Alan Warren, from the Cambridge Institute of Medical Research at the University of Cambridge, said: “We are starting to find that many forms of blood cancer can be traced back to defects in the basic housekeeping processes in our cells’ maturation. Pioneering improvements to electron microscopes pave the way for the creation of a detailed map of how these diseases develop, in a way that was never possible before.”

Single particle cryo-electron microscopy preserves the ribosomes at sub-zero temperatures to allow the collection and amalgamation of multiple images of maturing ribosomes in different orientations to ultimately provide more detail.

The technique has been refined in the MRC Laboratory of Molecular Biology by the development of new ‘direct electron detectors’ to better sense the electrons, yielding images of unprecedented quality. Methods to correct for beam-induced sample movements and new classification methods that can separate out several different structures within a single sample have also been developed.
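The averaging principle behind single particle analysis can be illustrated with a toy calculation (this is a schematic of the averaging idea only, not of the LMB's actual reconstruction pipeline): the particle signal is identical in every image while the noise is random, so averaging N aligned images improves the signal-to-noise ratio by roughly √N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "particle": a bright disc on a dark background, 32x32 pixels.
size = 32
y, x = np.mgrid[:size, :size]
particle = (((x - size / 2) ** 2 + (y - size / 2) ** 2) < 25).astype(float)

def snr_after_averaging(n_images, noise_sigma=5.0):
    """Average n noisy, pre-aligned copies of the particle; return the SNR."""
    stack = particle + rng.normal(0.0, noise_sigma, size=(n_images, size, size))
    average = stack.mean(axis=0)
    residual_noise = average - particle
    return particle.std() / residual_noise.std()

# SNR grows roughly as sqrt(n): about 10x better with 100 images than with 1.
for n in (1, 100, 400):
    print(n, round(snr_after_averaging(n), 3))
```

In real cryo-EM the images must first be aligned and sorted into orientation classes before averaging; the sketch assumes that step is already done.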

Dr Matt Kaiser, Head of Research at Bloodwise, said: “New insights into the biology of blood cancers and disorders that originate in the bone marrow have only been made possible by the latest advances in technology. While survival rates for childhood leukaemia have improved dramatically over the years, this particular form of leukaemia is harder to treat and still relies on toxic chemotherapy. These findings will offer hope that new, more targeted, treatments can be developed.”

The research received additional funding from a Federation of European Biochemical Societies (FEBS) Long term Fellowship, the SDS patient charity Ted’s Gang and the Cambridge NIHR Biomedical Research Centre.

Adapted from a press release by Bloodwise

Reference
Weis, F. et al. Mechanism of eIF6 release from the nascent 60S ribosomal subunit. Nature Structural and Molecular Biology; 19 Oct 2015



– See more at: http://www.cam.ac.uk/research/news/new-microscopic-imaging-technology-reveals-origins-of-leukaemia

New Graphene Based Inks For High-Speed Manufacturing of Printed Electronics

New graphene based inks for high-speed manufacturing of printed electronics

source: www.cam.ac.uk

A low-cost, high-speed method for printing electronics using graphene and other conductive materials could open up a wide range of commercial applications.

Being able to produce conductive inks that could effortlessly be used for printing at a commercial scale at a very high speed will open up all kinds of different applications for graphene and other similar materials

Tawfique Hasan

A low-cost, high-speed method for printing graphene inks using a conventional roll-to-roll printing process, like that used to print newspapers and crisp packets, could open up a wide range of practical applications, including inexpensive printed electronics, intelligent packaging and disposable sensors.

Developed by researchers at the University of Cambridge in collaboration with Cambridge-based technology company Novalia, the method allows graphene and other electrically conducting materials to be added to conventional water-based inks and printed using typical commercial equipment. It is the first time that graphene has been used for printing on a large-scale commercial printing press at high speed.

Graphene is a two-dimensional sheet of carbon atoms, just one atom thick. Its flexibility, optical transparency and electrical conductivity make it suitable for a wide range of applications, including printed electronics. Although numerous laboratory prototypes have been demonstrated around the world, widespread commercial use of graphene is yet to be realised.

“We are pleased to be the first to bring graphene inks close to real-world manufacturing. There are lots of companies that have produced graphene inks, but none of them has done it on a scale close to this,” said Dr Tawfique Hasan of the Cambridge Graphene Centre (CGC), who developed the method. “Being able to produce conductive inks that could effortlessly be used for printing at a commercial scale at a very high speed will open up all kinds of different applications for graphene and other similar materials.”

“This method will allow us to put electronic systems into entirely unexpected shapes,” said Chris Jones of Novalia. “It’s an incredibly flexible enabling technology.”

Hasan’s method, developed at the University’s Nanoscience Centre, works by suspending tiny particles of graphene in a ‘carrier’ solvent mixture, which is added to conductive water-based ink formulations. The ratio of the ingredients can be adjusted to control the liquid’s properties, allowing the carrier solvent to be easily mixed into a conventional conductive water-based ink to significantly reduce the resistance. The same method works for materials other than graphene, including metallic, semiconducting and insulating nanoparticles.

Currently, printed conductive patterns use a combination of poorly conducting carbon with other materials, most commonly silver, which is expensive. Silver-based inks cost £1000 or more per kilogram, whereas this new graphene ink formulation would be 25 times cheaper. Additionally, silver is not recyclable, while graphene and other carbon materials can easily be recycled. The new method uses cheap, non-toxic and environmentally friendly solvents that can be dried quickly at room temperature, reducing energy costs for ink curing. Once dry, the ‘electric ink’ is also waterproof and adheres to its substrate extremely well.
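The cost comparison is easy to make explicit (both figures come from the paragraph above; the result is just their quotient):

```python
# Figures quoted above: silver-based ink costs £1000 or more per kilogram,
# and the graphene formulation is described as 25 times cheaper.
silver_cost_per_kg = 1000      # £/kg, lower bound from the article
ratio = 25                     # "25 times cheaper"

graphene_cost_per_kg = silver_cost_per_kg / ratio
print(f"implied graphene ink cost: ~£{graphene_cost_per_kg:.0f}/kg or less")
```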

The graphene-based inks have been printed at a rate of more than 100 metres per minute, which is in line with commercial production rates for graphics printing, and far faster than earlier prototypes. Two years ago, Hasan and his colleagues produced a prototype of a transparent and flexible piano using graphene-based inks, which took between six and eight hours to make. Through the use of this new ink, more versatile devices on paper or plastic can be made at a rate of 300 per minute, at a very low cost. Novalia has also produced a printed DJ deck and an interactive poster, which functions as a drum kit using the same method.

Hasan and PhD students Guohua Hu, Richard Howe and Zongyin Yang of the Hybrid Nanomaterials Engineering group at CGC, in collaboration with Novalia, tested the method on a typical commercial printing press, which required no modifications in order to print with the graphene ink. In addition to the new applications the method will open up for graphene, it could also initiate entirely new business opportunities for commercial graphics printers, who could diversify into the electronics sector.

“The UK, and the Cambridge area in particular, has always been strong in the printing sector, but mostly for graphics printing and packaging,” said Hasan, a Royal Academy of Engineering Research Fellow and a University Lecturer in the Engineering Department. “We hope to use this strong local expertise to expand our functional ink platform. In addition to cheaper printable electronics, this technology opens up potential application areas such as smart packaging and disposable sensors, which to date have largely been inaccessible due to cost.”

In the short to medium term, the researchers hope to use their method to make printed, disposable biosensors, energy harvesters and RFID tags.

The research was supported by grants from the Engineering and Physical Sciences Research Council’s Impact Acceleration Account and a Royal Academy of Engineering Research Fellowship. The technology is being commercialised by Cambridge Enterprise, the University’s commercialisation arm.



– See more at: http://www.cam.ac.uk/research/news/new-graphene-based-inks-for-high-speed-manufacturing-of-printed-electronics

Using Experts ‘Inexpertly’ Leads to Policy Failure, Warn Researchers

Using experts ‘inexpertly’ leads to policy failure, warn researchers

source: www.cam.ac.uk

Evidence shows that experts are frequently fallible, say leading risk researchers, and policy makers should not act on expert advice without using rigorous methods that balance subjective distortions inherent in expert estimates.

The cost of ignoring these techniques – of using experts inexpertly – is less accurate information and so more frequent, and more serious, policy failures

William Sutherland and Mark Burgman

The accuracy and reliability of expert advice is often compromised by “cognitive frailties”, and needs to be interrogated with the same tenacity as research data to avoid weak and ill-informed policy, warn two leading risk analysis and conservation researchers in the journal Nature today.

While many governments aspire to evidence-based policy, the researchers say the evidence on experts themselves actually shows that they are highly susceptible to “subjective influences” – from individual values and mood, to whether they stand to gain or lose from a decision – and, while highly credible, experts often vastly overestimate their objectivity and the reliability of peers.

The researchers caution that conventional approaches to informing policy – seeking advice from well-regarded individuals or assembling expert panels – need to be balanced with methods that alleviate the effects of psychological and motivational bias.

They offer a straightforward framework for improving expert advice, and say that experts should provide and assess evidence on which decisions are made – but not advise decision makers directly, which can skew impartiality.

“We are not advocating replacing evidence with expert judgements, rather we suggest integrating and improving them,” write professors William Sutherland and Mark Burgman from the universities of Cambridge and Melbourne respectively.

“Policy makers use expert evidence as though it were data. So they should treat expert estimates with the same critical rigour that must be applied to data,” they write.

“Experts must be tested, their biases minimised, their accuracy improved, and their estimates validated with independent evidence. Put simply, experts should be held accountable for their opinions.”

Sutherland and Burgman point out that highly regarded experts are routinely shown to be no better than novices at making judgements.

However, several processes have been shown to improve performances across the spectrum, they say, such as ‘horizon scanning’ – identifying all possible changes and threats – and ‘solution scanning’ – listing all possible options, using both experts and evidence, to reduce the risk of overlooking valuable alternatives.

To get better answers from experts, they need better, more structured questions, say the authors. “A seemingly straightforward question, ‘How many diseased animals are there in the area?’ for example, could be interpreted very differently by different people. Does it include those that are infectious and those that have recovered? What about those yet to be identified?” said Sutherland, from Cambridge’s Department of Zoology.

“Structured question formats that extract upper and lower boundaries and degrees of confidence, and that force consideration of alternative theories, are important for guarding against slides into group-think, or against individuals being ascribed greater credibility based on appearance or background,” he said.
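A minimal sketch of how such structured estimates might be pooled (the format and every number below are hypothetical, not the authors' published protocol): each expert supplies lower and upper bounds, a best estimate and a confidence, and the intervals are rescaled to a common confidence level before averaging so that under- and over-confident experts become comparable.

```python
# Each (hypothetical) expert gives a lower bound, an upper bound, a best
# estimate, and a confidence that the true value lies inside their interval.
experts = [
    {"low": 200, "best": 350, "high": 600, "confidence": 0.8},
    {"low": 100, "best": 250, "high": 400, "confidence": 0.5},
    {"low": 300, "best": 450, "high": 700, "confidence": 0.9},
]

def standardise(e, target=0.9):
    """Linearly widen an expert's interval to a common confidence level.

    This simple rescaling can overshoot plausible limits; real protocols
    typically clip the result to the feasible range.
    """
    scale = target / e["confidence"]
    return (e["best"] - (e["best"] - e["low"]) * scale,
            e["best"],
            e["best"] + (e["high"] - e["best"]) * scale)

intervals = [standardise(e) for e in experts]
# Pool by simple averaging of the standardised lows, bests and highs.
pooled = tuple(sum(v) / len(v) for v in zip(*intervals))
print("pooled (low, best, high):", pooled)
```

Structured elicitations of this kind often also insert a discussion round between the first answers and a final re-estimation, which this sketch omits.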

When seeking expert advice, all parties must be clear about what they expect of each other, says Burgman, Director of the Centre of Excellence for Biosecurity Risk Analysis. “Are policy makers expecting estimates of facts, predictions of the outcome of events, or advice on the best course of action?”

“Properly managed, experts can help with estimates and predictions, but providing advice assumes the expert shares the same values and objectives as the decision makers. Experts need to stick to helping provide and assess evidence on which such decisions are made,” he said.

Sutherland and Burgman have created a framework of eight key ways to improve the advice of experts. These include using groups – not individuals – with diverse, carefully selected members well within their expertise areas.

They also caution against being bullied or “starstruck” by the over-assertive or heavyweight. “People who are less self-assured will seek information from a more diverse range of sources, and age, number of qualifications and years of experience do not explain an expert’s ability to predict future events – a finding that applies in studies from geopolitics to ecology,” said Sutherland.

Added Burgman: “Some experts are much better than others at estimation and prediction. However, the only way to tell a good expert from a poor one is to test them. Qualifications and experience don’t help to tell them apart.”

“The cost of ignoring these techniques – of using experts inexpertly – is less accurate information and so more frequent, and more serious, policy failures,” write the researchers.



– See more at: http://www.cam.ac.uk/research/news/using-experts-inexpertly-leads-to-policy-failure-warn-researchers

New Insights into the Dynamics of Past Climate Change

New insights into the dynamics of past climate change

source: www.cam.ac.uk

A new study finds that changing climate in the polar regions can affect conditions in the rest of the world far quicker than previously thought.

Other studies have shown that the overturning circulation in the Atlantic has faced a slowdown during the last few decades. The scientific community is only beginning to understand what it would mean for global climate should this trend continue, as predicted by some climate models

Julia Gottschalk

A new study of the relationship between ocean currents and climate change has found that they are tightly linked, and that changes in the polar regions can affect the ocean and climate on the opposite side of the world within one to two hundred years, far quicker than previously thought.

The study, by an international team of scientists led by the University of Cambridge, examined how changes in ocean currents in the Atlantic Ocean were related to climate conditions in the northern hemisphere during the last ice age, by examining data from ice cores and fossilised plankton shells. It found that variations in ocean currents and abrupt climate events in the North Atlantic region were tightly linked in the past, and that changes in the polar regions affected the ocean circulation and climate on the opposite side of the world.

The researchers determined that as large amounts of fresh water were emptied into the North Atlantic as icebergs broke off the North American and Eurasian ice sheets, the deep and shallow currents in the North Atlantic rapidly slowed down, which led to the formation of sea ice around Greenland and the subsequent cooling of the Northern Hemisphere. It also strongly affected conditions in the South Atlantic within a matter of one to two hundred years. The results, published in the journal Nature Geoscience, show how climate events in the Northern Hemisphere were tightly coupled with changes in the strength of deep ocean currents in the Atlantic Ocean, and how that may have affected conditions across the globe.

During the last ice age, which took place from 70,000 to 19,000 years ago, the climate in the Northern Hemisphere toggled back and forth between warm and cold states roughly every 1000 to 6000 years. These events, known as Dansgaard-Oeschger events, were first identified in data from Greenland ice cores in the early 1990s, and had far-reaching impacts on the global climate.

The ocean, which covers 70% of the planet, is a huge reservoir of carbon dioxide and heat. It stores about 60 times more carbon than the atmosphere, and can release or take up carbon on both short and long timescales. As changes happen in the polar regions, they are carried around the world by ocean currents, both at the surface and in the deep ocean. These currents are driven by winds and by differences in ocean temperature and salinity, and are efficient at distributing heat and carbon around the globe. Ocean currents therefore have a strong influence on whether regions of the world are warm (such as Europe) or cold (such as Antarctica), as they modulate the effects of solar radiation. They also influence whether CO2 is stored in the ocean or the atmosphere, which is very important for global climate variability.

“Other studies have shown that the overturning circulation in the Atlantic has faced a slowdown during the last few decades,” said Dr Julia Gottschalk of Cambridge’s Department of Earth Sciences, the paper’s lead author. “The scientific community is only beginning to understand what it would mean for global climate should this trend continue, as predicted by some climate models.”

Analysing new data from marine sediment cores taken from the deep South Atlantic, between the southern tip of South America and the southern tip of Africa, the researchers discovered that during the last ice age, deep ocean currents in the South Atlantic varied essentially in unison with Greenland ice-core temperatures. “This implies that a very rapid transmission process must have operated, that linked rapid climate change around Greenland with the otherwise sluggish deep Atlantic Ocean circulation,” said Gottschalk, who is a Gates Cambridge Scholar. Best estimates of the delay between these two records suggest that the transmission happened within about 100 to 200 years.
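A delay like this is typically estimated by cross-correlating the two dated records and reading off the lag at the correlation peak. The sketch below illustrates the idea on purely synthetic series (the signals, noise level and 160-year lag are all invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for two proxy records, sampled every 20 years: a
# "northern" record, and a "southern" record that repeats it 160 years
# later with extra measurement noise.
dt_years = 20
north = rng.normal(size=500)
true_lag = 8                                   # samples, i.e. 160 years
south = np.roll(north, true_lag) + rng.normal(scale=0.5, size=north.size)

# Cross-correlate the mean-removed series; the lag at the peak is the
# best estimate of the north-to-south transmission delay.
n = north - north.mean()
s = south - south.mean()
xcorr = np.correlate(s, n, mode="full")
lags = np.arange(-n.size + 1, n.size)
estimated_lag_years = lags[np.argmax(xcorr)] * dt_years
print("estimated lag:", estimated_lag_years, "years")
```

Real proxy records add complications this sketch ignores, chiefly uneven sampling and dating uncertainty in each core's age model.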

Digging through metres of ocean mud recovered from depths of 3,800 metres, the team studied the dissolution of fossil plankton shells, which is closely linked to the chemical signature of different water masses. Water masses originating in the North Atlantic are less corrosive than water masses from the South Atlantic.

“Periods of very intense North Atlantic circulation and higher Northern Hemisphere temperatures increased the preservation of microfossils in the sediment cores, whereas those with slower circulation, when the study site was primarily influenced from the south, were linked with decreased carbonate ion concentrations at our core site which led to partial dissolution,” said co-author Dr Luke Skinner, also from Cambridge’s Department of Earth Sciences.

To better understand the physical mechanisms of this rapid ocean adjustment, the data were compared with a climate model simulation covering the same period. “The data from the model simulation were so close to the deep ocean sediment data that we knew immediately we were on the right track,” said co-author Dr Laurie Menviel from the University of New South Wales, Australia, who conducted the model simulation.

The timescales of these large-scale adjustments found in the palaeoceanographic data agree extremely well with those predicted by the model. “Waves between layers of different density in the deep ocean are responsible for quickly transmitting signals from North to South. This is a paradigm shift in our understanding of how the ocean works,” said Axel Timmermann, Professor of Oceanography at the University of Hawaii.

Although conditions at the end of the last ice age were very different to those of today, the findings could shed light on how changing conditions in the polar regions may affect ocean currents. However, much more research is needed in this area. The study’s findings could help test and improve climate models that are run for both past and future conditions.

The sediment cores were recovered by Dr Claire Waelbroeck and colleagues aboard the French research vessel Marion Dufresne.

The research was supported by the Gates Cambridge Trust, the Natural Environmental Research Council of the UK, the Royal Society, the European Research Council, the Australian Research Council and the National Science Foundation of the United States of America.

Reference:
Gottschalk, J. et al. Abrupt changes in the southern extent of North Atlantic Deep Water during Dansgaard-Oeschger events. Nature Geoscience (2015). DOI: 10.1038/ngeo2558

 



How Hallucinations Emerge From Trying to Make Sense of an Ambiguous World

How hallucinations emerge from trying to make sense of an ambiguous world

source: www.cam.ac.uk

Why are some people prone to hallucinations? According to new research from the University of Cambridge and Cardiff University, hallucinations may come from our attempts to make sense of the ambiguous and complex world around us.

Take a look at the black and white image. It probably looks like a meaningless pattern of black and white blotches. But now take a look at the image at the bottom of this article and then return to the black and white picture: it’s likely that you can now make sense of the black and white image. It is this ability that scientists at Cardiff University and the University of Cambridge believe could help explain why some people are prone to hallucinations.

A bewildering and often very frightening experience in some mental illnesses is psychosis – a loss of contact with external reality. This often results in a difficulty in making sense of the world, which can appear threatening, intrusive and confusing. Psychosis is sometimes accompanied by drastic changes in perception, to the extent that people may see, feel, smell and taste things that are not actually there – so-called hallucinations. These hallucinations may be accompanied by beliefs that others find irrational and impossible to comprehend.

In research published today in the journal Proceedings of the National Academy of Sciences (PNAS), a team of researchers based at Cardiff University and the University of Cambridge explore the idea that hallucinations arise due to an enhancement of our normal tendency to interpret the world around us by making use of prior knowledge and predictions.

In order to make sense of and interact with our physical and social environment, we need appropriate information about the world around us, for example the size or location of a nearby object. However, we have no direct access to this information and are forced to interpret potentially ambiguous and incomplete information from our senses. This challenge is overcome in the brain – for example in our visual system – by combining ambiguous sensory information with our prior knowledge of the environment to generate a robust and unambiguous representation of the world around us. For example, when we enter our living room, we may have little difficulty discerning a fast-moving black shape as the cat, even though the visual input was little more than a blur that rapidly disappeared behind the sofa: the actual sensory input was minimal and our prior knowledge did all the creative work.
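The textbook way to formalise this trade-off (a standard Bayesian toy model, not the specific model used in the study) treats both the prior expectation and the sensory signal as Gaussians and weights each by its precision: the more ambiguous the input, the more the prior dominates the resulting percept.

```python
# Precision-weighted combination of a prior belief and a noisy sensory
# measurement, both Gaussian. All numbers are purely illustrative.

def combine(prior_mean, prior_var, sense_mean, sense_var):
    """Posterior of two Gaussians: a precision-weighted average of the means."""
    w_prior = (1 / prior_var) / (1 / prior_var + 1 / sense_var)
    post_mean = w_prior * prior_mean + (1 - w_prior) * sense_mean
    post_var = 1 / (1 / prior_var + 1 / sense_var)
    return post_mean, post_var

# Ambiguous input (high sensory variance): the prior dominates, mean ~9.
print(combine(prior_mean=10, prior_var=1, sense_mean=0, sense_var=9))
# Clear input (low sensory variance): the data dominate, mean ~1.
print(combine(prior_mean=10, prior_var=9, sense_mean=0, sense_var=1))
```

An over-weighted prior in this scheme pulls percepts toward expectations even when the sensory input disagrees, which is the intuition behind the hallucination account described above.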

“Vision is a constructive process – in other words, our brain makes up the world that we ‘see’,” explains first author Dr Christoph Teufel from the School of Psychology at Cardiff University. “It fills in the blanks, ignoring the things that don’t quite fit, and presents to us an image of the world that has been edited and made to fit with what we expect.”

“Having a predictive brain is very useful – it makes us efficient and adept at creating a coherent picture of an ambiguous and complex world,” adds senior author Professor Paul Fletcher from the Department of Psychiatry at the University of Cambridge. “But it also means that we are not very far away from perceiving things that aren’t actually there, which is the definition of a hallucination.

“In fact, in recent years we’ve come to realise that such altered perceptual experiences are by no means restricted to people with mental illness. They are relatively common, in a milder form, across the entire population. Many of us will have heard or seen things that aren’t there.”

In order to address the question of whether such predictive processes contribute to the emergence of psychosis, the researchers worked with 18 individuals who had very early signs of psychosis and had been referred to a mental health service run by the Cambridgeshire and Peterborough NHS Foundation Trust, led by Dr Jesus Perez, one of the co-authors on the study. They examined how these individuals, as well as a group of 16 healthy volunteers, were able to use predictions in order to make sense of ambiguous, incomplete black and white images, similar to the one shown above.

The volunteers were asked to look at a series of these black and white images, some of which contained a person, and then to say for a given image whether or not it contained a person. Because of the ambiguous nature of the images, the task was very difficult at first. Participants were then shown a series of full colour original images, including those from which the black and white images had been derived: this information could be used to improve the brain’s ability to make sense of the ambiguous image. The researchers reasoned that, since hallucinations may come from a greater tendency to superimpose one’s predictions on the world, people who were prone to hallucinations would be better at using this information because, in this task, such a strategy would be an advantage.

The researchers found a larger performance improvement in people with very early signs of psychosis in comparison to the healthy control group. This suggested that people from the clinical group were indeed relying more strongly on the information that they had been given to make sense of the ambiguous pictures.

When the researchers presented the same task to a larger group of 40 healthy people, they found a continuum in task performance that correlated with the participants’ scores on tests of psychosis-proneness. In other words, the shift in information processing that favours prior knowledge over sensory input during perception can be detected even before the onset of early psychotic symptoms.
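
The analysis logic can be sketched in a few lines: compute each participant's improvement (accuracy after seeing the colour templates minus accuracy before) and correlate it with a psychosis-proneness score. The numbers below are hypothetical, invented purely for illustration, and `pearson_r` is a hand-rolled helper, not the study's actual statistical pipeline.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: (accuracy before templates, accuracy after, proneness score)
participants = [
    (0.52, 0.61, 10),
    (0.50, 0.68, 25),
    (0.55, 0.79, 40),
    (0.48, 0.80, 55),
]
improvements = [after - before for before, after, _ in participants]
proneness = [score for _, _, score in participants]
r = pearson_r(improvements, proneness)
print(f"correlation between improvement and proneness: {r:.2f}")
```

A positive correlation here mirrors the paper's finding that greater reliance on prior knowledge tracks psychosis-proneness; the real study of course used validated psychometric instruments and significance testing rather than a toy helper like this.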

“These findings are important because they tell us that the emergence of key symptoms of mental illness can be understood in terms of an altered balance in normal brain functions,” says Naresh Subramaniam from the Department of Psychiatry at the University of Cambridge. “Importantly, they also suggest that these symptoms and experiences do not reflect a ‘broken’ brain but rather one that is striving – in a very natural way – to make sense of incoming data that are ambiguous.”

The study was carried out in collaboration with Dr Veronika Dobler and Professor Ian Goodyer from the Department of Child and Adolescent Psychiatry at the University of Cambridge. The research was funded by the Wellcome Trust and the Bernard Wolfe Health Neuroscience Fund. It was carried out within the Cambridgeshire and Peterborough NHS Foundation Trust. Additional support for the Behavioural and Clinical Neuroscience Institute at the University of Cambridge came from the Wellcome Trust and the Medical Research Council.

Reference
Teufel, C et al. Shift towards prior knowledge confers a perceptual advantage in early psychosis and psychosis-prone healthy individuals. PNAS; 12 Oct 2015


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Ancient Genome From Africa Sequenced for the First Time


Source: www.cam.ac.uk

DNA from 4,500-year-old Ethiopian skull reveals a huge migratory wave of West Eurasians into the Horn of Africa around 3,000 years ago had a genetic impact on modern populations right across the African continent.

The sequencing of ancient genomes is still so new, and it’s changing the way we reconstruct human origins

Andrea Manica

The first ancient human genome from Africa to be sequenced has revealed that a wave of migration back into Africa from Western Eurasia around 3,000 years ago was up to twice as significant as previously thought, and affected the genetic make-up of populations across the entire African continent.

The genome was taken from the skull of a man buried face-down 4,500 years ago in a cave called Mota in the highlands of Ethiopia – a cave cool and dry enough to preserve his DNA for thousands of years. Previously, ancient genome analysis has been limited to samples from northern and arctic regions.

The latest study is the first time an ancient human genome has been recovered and sequenced from Africa, the source of all human genetic diversity. The findings are published today in the journal Science.

The ancient genome predates a mysterious migratory event which occurred roughly 3,000 years ago, known as the ‘Eurasian backflow’, when people from regions of Western Eurasia such as the Near East and Anatolia suddenly flooded back into the Horn of Africa.

The genome enabled researchers to run a millennia-spanning genetic comparison and determine that these Western Eurasians were closely related to the Early Neolithic farmers who had brought agriculture to Europe 4,000 years earlier.

By comparing the ancient genome to DNA from modern Africans, the team have been able to show that not only do East African populations today have as much as 25% Eurasian ancestry from this event, but that African populations in all corners of the continent – from the far West to the South – have at least 5% of their genome traceable to the Eurasian migration.

Researchers describe the findings as evidence that the ‘backflow’ event was of far greater size and influence than previously thought. The wave of migration, perhaps equivalent to over a quarter of the then population of the Horn of Africa, hit the area and its genetic legacy then dispersed across the whole continent.

“Roughly speaking, the wave of West Eurasian migration back into the Horn of Africa could have been as much as 30% of the population that already lived there – and that, to me, is mind-blowing. The question is: what got them moving all of a sudden?” said Dr Andrea Manica, senior author of the study from the University of Cambridge’s Department of Zoology.
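
The arithmetic linking the size of the wave to the ancestry seen today can be illustrated with a toy two-population mixing model. This is a back-of-envelope sketch of my own, not the study's method, which relied on genome-wide comparisons:

```python
# If migrants equivalent to ~30% of the resident population arrive and the
# two groups mix evenly, the expected Eurasian ancestry fraction in the
# merged population is migrants / (residents + migrants).
residents = 1.0   # resident Horn of Africa population (normalised)
migrants = 0.30   # migration wave, ~30% of the resident population

eurasian_fraction = migrants / (residents + migrants)
print(round(eurasian_fraction, 2))  # 0.23 – close to the ~25% seen in East Africa today
```

Even this crude model shows why a wave of that size leaves such a large genetic footprint.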

Previous work on ancient genetics in Africa had involved trying to work back through the genomes of current populations, attempting to eliminate modern influences. “With an ancient genome, we have a direct window into the distant past. One genome from one individual can provide a picture of an entire population,” said Manica.

The cause of the West Eurasian migration back into Africa is currently a mystery, with no obvious climatic reasons. Archaeological evidence does, however, show the migration coincided with the arrival of Near Eastern crops into East Africa such as wheat and barley, suggesting the migrants helped develop new forms of agriculture in the region.

The researchers say it’s clear that the Eurasian migrants were direct descendants of, or a very close population to, the Neolithic farmers that had brought agriculture from the Near East into West Eurasia around 7,000 years ago, and then migrated into the Horn of Africa some 4,000 years later. “It’s quite remarkable that genetically-speaking this is the same population that left the Near East several millennia previously,” said Eppie Jones, a geneticist at Trinity College Dublin who led the laboratory work to sequence the genome.

While the genetic make-up of the Near East has changed completely over the last few thousand years, the closest modern equivalents to these Neolithic migrants are Sardinians, probably because Sardinia is an isolated island, says Jones. “The farmers found their way to Sardinia and created a bit of a time capsule. Sardinian ancestry is closest to the ancient Near East.”


View looking out from the Mota cave in the Ethiopian highlands

“Genomes from this migration seeped right across the continent, way beyond East Africa, from the Yoruba on the western coast to the Mbuti in the heart of the Congo – who show as much as 7% and 6% of their genomes respectively to be West Eurasian,” said Marcos Gallego Llorente, first author of the study, also from Cambridge’s Zoology Department.

“Africa is a total melting pot. We know that the last 3,000 years saw a complete scrambling of population genetics in Africa. So being able to get a snapshot from before these migration events occurred is a big step,” Gallego Llorente said.

The ancient Mota genome allows researchers to jump to before another major African migration: the Bantu expansion, when speakers of an early Bantu language flowed out of West Africa and into central and southern areas around 3,000 years ago. Manica says the Bantu expansion may well have helped carry the Eurasian genomes to the continent’s furthest corners.

The researchers also identified genetic adaptations for living at altitude, and a lack of genes for lactose tolerance – all genetic traits shared by the current populations of the Ethiopian highlands. In fact, the researchers found that modern inhabitants of these highlands are direct descendants of the Mota man.

Finding high-quality ancient DNA involves a lot of luck, says Dr Ron Pinhasi, co-senior author from University College Dublin. “It’s hard to get your hands on remains that have been suitably preserved. The denser the bone, the more likely you are to find DNA that’s been protected from degradation, so teeth are often used, but we found an even better bone – the petrous.” The petrous bone is a thick part of the temporal bone at the base of the skull, just behind the ear.

“The sequencing of ancient genomes is still so new, and it’s changing the way we reconstruct human origins,” added Manica. “These new techniques will keep evolving, enabling us to gain an ever-clearer understanding of who our earliest ancestors were.”

The study was conducted by an international team of researchers, with permission from Ethiopia’s Ministry of Culture and Authority for Research and Conservation of Cultural Heritage.



Calling For Help: Damaged Nerve Cells Communicate With Stem Cells


Source: www.cam.ac.uk

Nerve cells damaged in diseases such as multiple sclerosis (MS), ‘talk’ to stem cells in the same way that they communicate with other nerve cells, calling out for ‘first aid’, according to new research from the University of Cambridge.

This is the first time that we’ve been able to show that damaged nerve fibres communicate with stem cells using synaptic connections – the same connections they use to ‘talk to’ other nerve cells

Thora Karadottir

The study, published today in the journal Nature Communications, may have significant implications for the development of future medicines for disorders that affect the myelin sheath, the insulation that surrounds and protects our nerve cells.

For our brain and central nervous system to work, electrical signals must travel quickly along nerve fibres. This is achieved by insulating the nerve fibres with a fatty substance called myelin. In diseases such as MS, the myelin sheath around nerve fibres is lost or damaged, causing physical and mental disability.

Stem cells – the body’s master cells, which can develop into almost any type of cell – can act as ‘first aid kits’, repairing damage to the body. In our nervous system, these stem cells are capable of producing new myelin, which, in the case of MS, for example, can help recover lost function. However, myelin repair often fails, leading to sustained disability. To understand why repair fails in disease, and to design novel ways of promoting myelin repair, researchers at the Wellcome Trust-Medical Research Council Stem Cell Institute at the University of Cambridge studied how this repair process works.

When nerve fibres lose myelin, they stay active but conduct signals at a much lower speed than healthy fibres. Using electrical recording techniques, a team of researchers led by Dr Thora Karadottir discovered that the damaged nerve fibres then form connections with stem cells. These connections are the same kind of synapses that link nerve cells to one another. The new synaptic connections enable the damaged fibres to communicate directly with the stem cells by releasing glutamate, a chemical that the stem cells can sense via receptors. This communication is critical for directing the stem cells to produce new myelin: when the researchers inhibited the nerve fibres’ activity, their ability to communicate, or the stem cells’ ability to sense the communication, the repair process failed.

“This is the first time that we’ve been able to show that damaged nerve fibres communicate with stem cells using synaptic connections – the same connections they use to ‘talk to’ other nerve cells,” says Dr Karadottir. “Armed with this new knowledge, we can start looking into ways to enhance this communication to promote myelin repair in disease.”

Dr Helene Gautier from the Department of Physiology, Development and Neuroscience, adds: “So far, the majority of the available treatments only slow down the damage. Our research opens the possibility to enhance repair and potentially treat the most devastating forms of MS and other demyelinating diseases.”

Reference
Gautier, HOB et al. Neuronal activity regulates remyelination via glutamate signalling to oligodendrocyte progenitors. Nature Communications; 6 Oct 2015



Bacteria in the World’s Oceans Produce Millions of Tonnes of Hydrocarbons Each Year


Source: www.cam.ac.uk

Scientists have calculated that millions of tonnes of hydrocarbons are produced annually by photosynthetic bacteria in the world’s oceans.

This cycle is like an insurance policy – the hydrocarbon-producing and hydrocarbon-degrading bacteria exist in equilibrium with each other

David Lea-Smith

An international team of researchers, led by the University of Cambridge, has estimated the amount of hydrocarbons – the primary ingredient in crude oil – that are produced by a massive population of photosynthetic marine microbes, called cyanobacteria. These organisms in turn support another population of bacteria that ‘feed’ on these compounds.

In the study, conducted in collaboration with researchers from the University of Warwick and MIT, and published today (5 October) in the journal Proceedings of the National Academy of Sciences of the USA, the scientists measured the amount of hydrocarbons in a range of laboratory-grown cyanobacteria and used the data to estimate the amount produced in the oceans.

Although each individual cell contains minuscule quantities of hydrocarbons, the researchers estimated that the amount produced by two of the most abundant cyanobacteria in the world – Prochlorococcus and Synechococcus – is more than two million tonnes in the ocean at any one time. This indicates that these two groups alone produce between 300 and 800 million tonnes of hydrocarbons per year, yet the concentration at any time in unpolluted areas of the oceans is tiny, thanks to other bacteria that break down the hydrocarbons as they are produced.
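
These two figures together imply a very fast turnover, which simple arithmetic makes vivid. This is my own back-of-envelope calculation from the numbers quoted above, not a figure from the paper:

```python
# ~2 million tonnes of hydrocarbons present in the ocean at any one time,
# against 300-800 million tonnes produced per year, means the standing stock
# is consumed and replaced by hydrocarbon-degrading bacteria every few days.
standing_stock_mt = 2.0             # million tonnes in the ocean at once
annual_production_mt = (300, 800)   # million tonnes produced per year

for production in annual_production_mt:
    turnover_days = standing_stock_mt / production * 365
    print(f"{production} Mt/yr -> stock turns over every {turnover_days:.1f} days")
```

On these assumptions the entire oceanic pool of cyanobacterial hydrocarbons is recycled roughly every one to two and a half days, which is why concentrations stay tiny despite the enormous annual production.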

“Hydrocarbons are ubiquitous in the oceans, even in areas with minimal crude oil pollution, but what hadn’t been recognised until now is the likely quantity produced continually by living oceanic organisms,” said Professor Christopher Howe from Cambridge’s Department of Biochemistry, the paper’s senior author. “Based on our laboratory studies, we believe that at least two groups of cyanobacteria are responsible for the production of massive amounts of hydrocarbons, and this supports other bacteria that break down the hydrocarbons as they are produced.”

The scientists argue that the cyanobacteria are key players in an important biogeochemical cycle, which they refer to as the short-term hydrocarbon cycle. The study suggests that the amount of hydrocarbons produced by cyanobacteria dwarfs the amount of crude oil released into the seas by natural seepage or accidental oil spills.

However, the hydrocarbons produced by cyanobacteria are continually broken down by other bacteria, keeping the overall concentrations low. When an event such as an oil spill occurs, hydrocarbon-degrading bacteria are known to spring into action, with their numbers rapidly expanding, fuelled by the sudden local increase in their primary source of energy.

The researchers caution that their results do not in any way diminish the enormous harm caused by oil spills. Although some microorganisms are known to break down hydrocarbons in oil spills, they cannot repair the damage done to marine life, seabirds and coastal ecosystems.

“Oil spills cause widespread damage, but some parts of the marine environment recover faster than others,” said Dr David Lea-Smith, a postdoctoral researcher in the Department of Biochemistry, and the paper’s lead author. “This cycle is like an insurance policy – the hydrocarbon-producing and hydrocarbon-degrading bacteria exist in equilibrium with each other, and the latter multiply if and when an oil spill happens. However, these bacteria cannot reverse the damage to ecosystems which oil spills cause.”

The researchers stress the need to test if their findings are supported by direct measurements on cyanobacteria growing in the oceans. They are also interested in the possibility of harnessing the hydrocarbon production potential of cyanobacteria industrially as a possible source of fuel in the future, although such work is at a very early stage.

Reference
Lea-Smith, D et al. Contribution of cyanobacterial alkane production to the ocean hydrocarbon cycle. PNAS (2015). DOI: 10.1073/pnas.1507274112



Doctors Liken Keeping Patients Alive Unnecessarily to Torture


Source: www.cam.ac.uk

Elizabeth Dzeng’s research shows doctors’ moral distress surrounding futile end of life treatments.

Doctors’ words – ‘torture’, ‘gruesome’, ‘abuse’, ‘mutilate’ and ‘cruel’ – evoke images more fitting of penal regimes than hospitals. The moral toll exacted upon these physicians is evident in descriptions such as feeling ‘violated’ and ‘traumatised’.

Elizabeth Dzeng

Trainee doctors think they are being asked to prolong some patients’ lives unnecessarily and describe such cases as being tantamount to torture and abuse, according to a new study.

The research, led by Elizabeth Dzeng, a Gates Cambridge Scholar at the University of Cambridge, is the first to focus on US doctors’ moral distress surrounding resuscitation and treatments that they believe may be futile at the end of life. It also has implications for the UK, as more power over end of life issues is handed from doctors to patients’ representatives.

The qualitative study, titled “Moral Distress Amongst American Physician Trainees Regarding Futile Treatments at the End of Life: A Qualitative Inquiry”, is published in the Journal of General Internal Medicine.

Dzeng conducted in-depth interviews with 22 physician trainees in internal medicine at three accredited medical centres in the US, asking how they reacted and responded to ethical challenges arising in the context of perceived futile treatments at the end of life, and how they felt about their ethical implications.

The study found that doctors who are required to perform procedures, such as resuscitation, that they feel are futile or harmful have significant moral qualms that they are prolonging suffering rather than providing care. Some cope with these qualms by developing detached and dehumanising attitudes towards patients.

One said of a patient: “It felt horrible, like I was torturing him. He was telling us we were torturing him. I did not think we were doing the right things.”

Another said: “I agree with giving the patient’s choice, but oftentimes it’s the family member. If the patient says, ‘Torture me, I want everything done,’ fine. The family member is doing it for other reasons. Like guilt; they can’t let go.”

Ways of coping ranged from formal and informal conversations with colleagues and superiors about the emotional and ethical challenges of providing care at the end of life to a tendency among some to dehumanise the patient.

One trainee said: “We’re abusing a body and I get that, but as long as I remember I’m only abusing a body and not a person, it’s okay. Frequently when it’s an inappropriate code, that’s what’s happening.”

Trainees also said that the hierarchical nature of their relationship with other doctors meant they felt powerless to have any influence on decisions.

Dzeng [2011] is completing a PhD on medical sociology and ethics and is also a fellow in General Internal Medicine at the Johns Hopkins School of Medicine. She said: “Our study sheds light on a significant cause of moral distress amongst physician trainees when they feel obligated to provide treatments at the end of life that they believe to be futile or harmful. Their words – ‘torture’, ‘gruesome’, ‘abuse’, ‘mutilate’ and ‘cruel’ – evoke images more fitting of penal regimes than hospitals. The moral toll exacted upon these physicians is evident in descriptions such as feeling ‘violated’ and ‘traumatised’. Previous research shows that moral distress can have significant negative effects on job satisfaction, psychological and physical well-being and self-image, resulting in burnout and thoughts of quitting.”

The paper is part of a larger study investigating the influence of institutional cultures and policies on physicians’ and trainees’ views on resuscitation orders. A previous paper found that in hospitals with cultures and policies which prioritised patient autonomy over the patients’ best interest, physicians tended to give patients a menu of choices without guidance or recommendations over whether resuscitation would be beneficial or merely prolong suffering. It explored the effect of moves towards greater patient autonomy over end of life decisions. Although this was an understandable reaction to the paternalistic approach often adopted by doctors in the past, the paper voiced fears that the pendulum may have swung too far, to the detriment of patients themselves.

 



Exploiting the Government’s Education Data Could Help to Bridge the UK Skills Gap


Source: www.cam.ac.uk

Analysing graduate earnings using anonymous administrative data can show how earnings vary for graduates and indicate which skills are in short supply, says Cambridge education professor Anna Vignoles.

Providing information is not enough to change policy, but without good data any policy development is likely to be ineffective

Anna Vignoles

Fully exploiting the Government’s education data could help to bridge the skills gap that is holding back UK businesses, Cambridge expert Professor Anna Vignoles has said at a Rustat Conference session on the application of Big Data, held at Jesus College.

The UK’s skills gap has been highlighted by both the Confederation of British Industry (CBI) and the Chartered Institute of Management Accountants (CIMA) this year. The CBI reported that over half of employers (55%) are not confident there will be enough people available in the future with the necessary skills to fill their high-skilled jobs.

In the last ten years, the Government has allowed researchers to access some of its educational data under secure conditions. Academics including Vignoles have recently mapped the journeys taken by students from the age of four right through to employment.

During a session on the application of Big Data and data driven business models, Vignoles argued that researchers could now use earnings data to determine which skills are in greater demand in the labour market, and feed this back into policy to ensure that the education system teaches the skills needed by UK companies.

“Many firms have difficulties recruiting people with the right skills, and are having to pay a big premium for some skills. Although we can survey firms about their needs, the results can be misleading, not least because only a select group of companies may respond,” said Vignoles.

“I have been working with colleagues to accurately analyse graduate earnings, using anonymous Government administrative data. This type of analysis can show how earnings vary for different types of graduates, and so indicate which skills are in short supply.

“For example, let’s say that the next stage of our research reveals that graduates with strong analytical skills are in demand. This data could inform students, universities and policy makers, and may result in courses offering more training in analytical skills. More graduates will then have the analytical skills needed by businesses, and the skills gap should start to close.

“Of course, providing information is not enough to change policy, but without good data any policy development is likely to be ineffective. The UK is world-leading when it comes to education data, but it is only recently that a Big Data approach has been used to look at graduate earnings. Fully exploiting the Government’s education data could help to bridge the UK skills gap.

“However there should always be strict limitations on the way data is used to ensure that people’s privacy is protected. We need to have an informed debate about the extent to which members of the public are happy for data collected by the state to be used in this way.”

Vignoles sits on the steering group of the University of Cambridge’s Big Data Research Initiative. This brings together researchers to address challenges presented by access to unprecedented volumes of data, as well as important issues around law, ethics and economics, in order to apply Big Data to solve challenging problems for society.

Rustat Conferences are held three times a year at Jesus College, Cambridge, with this conference focusing on Big Data. Other sessions explored the Internet of Things, sharing data and respecting individual rights without disrupting new business models, and the legal aspects of Big Data. Rustat Conferences offer an opportunity for decision-makers from the frontlines of politics, business, finance, the media and education to discuss vital issues with leading academic experts.

 

Mindfulness Study to Look at Benefits in Helping Build Resilience To Stress Among University Students


Source: www.cam.ac.uk

Students at the University of Cambridge are to be offered free, eight-week mindfulness training to help build resilience against stress as part of a new research project launched to coincide with the start of term.

University life can be stressful at times for students, as they develop the skills to live and study independently

Géraldine Dufour

The study, which could see over 500 students receive mindfulness training, aims to measure its effectiveness in managing stress amongst students, particularly at exam time, and whether it helps in other factors such as sleep and wellbeing. It will also explore whether the training affects students’ use of mental health treatment and support services.

Mindfulness involves the use of meditation techniques and self-awareness. Originally developed to help patients with chronic pain cope with their condition, it is now a recognised – and clinically-proven – way of helping individuals cope with depression, anxiety and stress.

Géraldine Dufour, Head of Counselling at the University, says: “University life can be stressful at times for students, as they develop the skills to live and study independently. Developing resilience and the skills to cope with stress is key so that students can make the most of life in the collegiate university and when they leave. The university counselling service offers many opportunities for students to develop their skills through an extensive programme of workshops, groups and individual counselling. We believe mindfulness could be a powerful tool to help them, in addition to the other counselling services we offer. This research project will help us determine if mindfulness is a good use of resources.”

From October, undergraduates and postgraduates at the University of Cambridge will be invited to register for a free, eight-week mindfulness training course called Mindfulness Skills for Students, which will be led by Dr Elizabeth English, the University’s Mindfulness Practitioner. The course is a group-based training programme based on the course book Mindfulness: A Practical Guide to Finding Peace in a Frantic World, by Mark Williams and Danny Penman, and adapted for Cambridge students. It consists of one 90-minute session and seven 75-minute sessions. Participants are also requested to do some home practice and reading every week.

Students will be allocated at random to two groups – one to receive training immediately, the second to receive it after a twelve-month deferral. All students – both those who take the course and those whose training is deferred – will record their stress levels using a smartphone app during the exam period, while activity monitors will record their physical activity and sleep patterns.
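
The allocation step might look like the sketch below: a simple 1:1 randomisation with a deferred (waitlist) control arm. This is an illustration under assumed details; the study's actual randomisation procedure is not described in this article.

```python
import random

def allocate(student_ids, seed=0):
    """Randomly split students 1:1 into immediate-training and deferred arms."""
    rng = random.Random(seed)   # fixed seed only to make this example reproducible
    ids = list(student_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"immediate": ids[:half], "deferred": ids[half:]}

groups = allocate(range(8))
print(len(groups["immediate"]), len(groups["deferred"]))  # 4 4
```

The deferred group serves as a control during the first year and still receives the training afterwards, which is a common way to run a trial without withholding a potentially beneficial intervention.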

“The academic year provides a very real ‘natural experiment’,” says Dr Julieta Galante from the Department of Psychiatry, who will carry out the research together with Professor Peter Jones. “Students receive training, practice at home, then face a ‘pressure point’ – their exams. We hope that our study will help us answer the question of whether the provision of mindfulness training, which we know to be effective in other settings, can help students throughout the year and particularly at exam time.”

The level of support available to students at Cambridge exceeds that found at most other universities. The University Counselling Service, one of the best funded in the country, includes counsellors as well as mental health advisors and supplements the support available to students from specialist staff in the colleges, such as college nurses and chaplains. In the previous academic year, over 1,500 people were seen for counselling – around one in 12 of the student population. Its Mindfulness Skills for Students programme is believed to be the largest such programme at any university.

Students wishing to register for the evaluation study of the Mindfulness Training Programme should visit the mindfulness website or email mindfulstudentstudy@medschl.cam.ac.uk.


Maintaining Healthy DNA Delays Menopause


Source: www.cam.ac.uk

An international study of nearly 70,000 women has identified more than forty regions of the human genome involved in governing the age at which a woman goes through menopause. The study, led by scientists at the Universities of Cambridge and Exeter, found that two thirds of those regions contain genes that act to keep DNA healthy by repairing the small amounts of damage that can accumulate with age.

We have known for some time that the age at which women go through menopause is partly determined by genes. This study now tells us that there are likely hundreds of genes involved

John Perry

The findings, published today (September 28) in the journal Nature Genetics, suggest that the reproductive cells or ‘eggs’ in a woman’s ovaries (known as oocytes) that repair damaged DNA more efficiently survive longer. This results in a later age at menopause, which marks the end of a woman’s reproductive lifetime. Previous research has shown that DNA is regularly damaged both by ageing and by toxic substances such as cigarette smoke – hence women who smoke go through menopause 1-2 years earlier on average than non-smokers.

Our cells have many mechanisms to detect and repair such damage, but cells die when too much damage accumulates. DNA is also damaged and repaired during the production of eggs – therefore these genes might also act to enhance a woman’s pool of eggs which is set in early life.

In a collaboration involving scientists from 177 institutions worldwide, the authors undertook a genome-wide association study of almost 70,000 women of European ancestry.

“Many women today are choosing to have babies later in life, but they may find it difficult to conceive naturally because fertility starts to diminish at least 10 years before menopause,” said Dr Anna Murray from the University of Exeter, and the paper’s senior author. “Our research has substantially increased our understanding of how reproductive ageing in women happens, which we hope will lead to the development of new treatments to avoid early menopause.”

Dr John Perry from the Medical Research Council (MRC) Epidemiology Unit at the University of Cambridge co-led the study, and said: “We have known for some time that the age at which women go through menopause is partly determined by genes. This study now tells us that there are likely hundreds of genes involved, each altering menopause age by anything from a few weeks to a year. It is striking that genes involved in DNA repair have such an important influence on the age of menopause, which we think is due to their effect on how quickly a woman’s eggs are lost throughout her lifetime.”

The researchers also used these genetic findings to examine links between menopause and other health conditions. They estimate that each additional year before menopause occurs increases the risk of developing breast cancer by 6%.

Dr Deborah Thompson from the Department of Public Health and Primary Care at the University of Cambridge also co-led this large international collaboration and said: “One particularly convincing finding was that going through menopause earlier reduces your chances of developing breast cancer and we think this is because these women have less exposure to the hormone oestrogen over their lifetime.”

The next step is to understand in more detail how the genetic variations found in this study alter the timing of menopause. Uncovering these mechanisms should lead to better treatments for conditions linked to menopause, such as infertility, as well as an improved understanding of the health impacts of menopause, such as the risk of osteoporosis and heart disease.

Menopause usually occurs between the ages of 40 and 60, marked by the end of natural menstrual cycles and, in many women, by physical symptoms such as hot flushes, disrupted sleep and reduced energy levels. Natural menopause before the age of 40 is often called “primary ovarian insufficiency” and occurs in 1% of women.

Adapted from a University of Exeter press release.

Reference:
Felix R Day et al. ‘Large-scale genomic analyses link reproductive aging to hypothalamic signaling, breast cancer susceptibility and BRCA1-mediated DNA repair.’ Nature Genetics (2015). DOI: 10.1038/ng.3412


New Research Leaves Tumours With Nowhere to Hide


source: www.cam.ac.uk
Hidden tumours that cause potentially fatal high blood pressure but lurk undetected in the body until pregnancy have been discovered by a Cambridge medical team.

Conditions are often around for 60 years which we have had no explanation for, and now we can get to the heart of what has gone wrong

Morris Brown

The small tumours concealed in the adrenal gland are “unmasked” in early pregnancy, when a sudden surge of hormones fires them into life, leading to raised blood pressure and causing risk to patients.

New research published today in the New England Journal of Medicine conducted by a team led by Professor Morris Brown, professor of clinical pharmacology at Cambridge University and a Fellow of Gonville & Caius College, identifies this small group of lurking tumours for the first time, and explains why they behave as they do.

The study means that, when patients are found to have high blood pressure early in pregnancy, doctors will now be encouraged to consider that the cause could be the tumours, which can be easily treated. Currently, adrenal tumours are not usually suspected as the cause of high blood pressure in pregnancy, and so go undiagnosed.

Brown and an international group of PhD students including first-author Ada Teo of Newnham College used a combination of state-of-the-art gene “fingerprinting” technology and old-fashioned deduction from patient case histories to work out that the otherwise benign tumours harbour genetic mutations that affect cells in the adrenal gland.

The mutation means the adrenal cells are given false information and their clock is effectively turned back to “childhood”, returning them to their original state as ovary cells. They then respond to hormones released in pregnancy, producing increased levels of the salt-regulating hormone aldosterone.

Aldosterone in turn regulates the kidneys to retain more salt and hence water, pushing up blood pressure. High blood pressure – also known as hypertension – can be fatal, since it greatly increases the risk of stroke and heart attack.

The new findings build on a growing body of research focusing on the adrenal gland and blood pressure. Sixty years ago, the American endocrinologist Dr Jerome Conn first observed that large benign tumours in the adrenal gland can release aldosterone and increase blood pressure (now known as Conn’s Syndrome).

Brown and his team have previously found a group of much smaller tumours, arising from the outer part of the gland, that have the same effect. The latest discovery drills down still further, revealing that roughly one in ten of this group has a mutation that makes the cells receptive to pregnancy hormones.

Brown said: “This is an example of what modern scientific techniques, and collaborations among doctors and scientists, allow you to do [through a form of genetic fingerprinting]. Conditions are often around for 60 years which we have had no explanation for, and now we can get to the heart of what has gone wrong.”

But the discovery also relied on what doctors call “clinical pattern recognition” – using experience to spot similarities. Brown was able to link together the cases of two pregnant women almost ten years apart and a woman in early menopause. All suffered high blood pressure, leading him to screen their adrenal tumours and identify a matching genetic mutation.

Pregnant women found to have the newly identified subset of tumours can now be identified more readily, and the tumours either treated with drugs or potentially even removed.

The research was funded by the Wellcome Trust, National Institute for Health Research, British Heart Foundation and A* Singapore.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.