
First Evidence of ‘Ghost Particles’

source: www.cam.ac.uk

A major international collaboration has seen its first neutrinos – so-called ‘ghost particles’ – in the experiment’s newly built detector.

This is an important step towards the much larger Deep Underground Neutrino Experiment (DUNE)

Mark Thomson

An international team of scientists at the MicroBooNE physics experiment in the US, including researchers from the University of Cambridge, has detected its first candidate neutrinos, also known as ‘ghost particles’. The detection represents a milestone for the project, involving years of hard work and a 40-foot-long particle detector filled with 170 tons of liquid argon.

Neutrinos are subatomic, almost weightless particles that interact only via gravity and the weak nuclear force. Because they don’t interact with light, they can’t be seen. Neutrinos carry no electric charge and travel through the universe almost entirely unaffected by natural forces. They are considered a fundamental building block of matter. The 2015 Nobel Prize in Physics was awarded for the discovery of neutrino oscillations, a phenomenon that is of great importance to the field of elementary particle physics.

“It’s nine years since we proposed, designed, built, assembled and commissioned this experiment,” said Bonnie Fleming, MicroBooNE co-spokesperson and a professor of physics at Yale University. “That kind of investment makes seeing first neutrinos incredible.”

Following a 13-week shutdown for maintenance, Fermilab’s accelerator complex near Chicago delivered a proton beam, which is used to make the neutrinos, to the laboratory’s experiments on Thursday. After the beam was turned on, scientists analysed the data recorded by MicroBooNE’s particle detector to find evidence of its first neutrino interactions.

Scientists at the University of Cambridge have been working on advanced image reconstruction techniques that contributed to the ability to identify the rare neutrino interactions in the MicroBooNE data.

The MicroBooNE experiment aims to study how neutrinos interact and change within a distance of 500 meters. The detector will help scientists reconstruct the results of neutrino collisions as finely detailed, three-dimensional images. MicroBooNE findings also will be relevant for the forthcoming Deep Underground Neutrino Experiment (DUNE), which will examine neutrino transitions over longer distances.
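The way neutrinos ‘change’ over distance is usually described by the standard two-flavour vacuum oscillation formula. As a minimal sketch only: the formula below is textbook physics, not taken from this article, and the parameter values (a ~500 m baseline, a ~0.8 GeV beam energy, and the mixing values) are illustrative assumptions rather than MicroBooNE’s actual measurements.

```python
import math

def oscillation_probability(sin2_2theta, delta_m2_ev2, length_km, energy_gev):
    """Standard two-flavour vacuum oscillation probability:
    P = sin^2(2*theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    phase = 1.27 * delta_m2_ev2 * length_km / energy_gev
    return sin2_2theta * math.sin(phase) ** 2

# Illustrative (assumed, not measured) parameters: ~500 m baseline, ~0.8 GeV beam.
p = oscillation_probability(sin2_2theta=0.1, delta_m2_ev2=1.0,
                            length_km=0.5, energy_gev=0.8)
print(f"P(appearance) ~ {p:.4f}")
```

The key point the formula captures is that the oscillation probability depends on the ratio of distance travelled to neutrino energy, which is why short-baseline experiments like MicroBooNE and long-baseline experiments like DUNE probe different regimes.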

“Future neutrino experiments will use this technology,” said Sam Zeller, Fermilab physicist and MicroBooNE co-spokesperson. “We’re learning a lot from this detector. It’s important not just for us, but for the whole physics community.”

“This is an important step towards the much larger Deep Underground Neutrino Experiment (DUNE)”, said Professor Mark Thomson of Cambridge’s Cavendish Laboratory, co-spokesperson of the DUNE collaboration and member of MicroBooNE. “It is the first time that fully automated pattern recognition software has been used to identify neutrino interactions from the complex images in a detector such as MicroBooNE and the proposed DUNE detector.”

Adapted from a Fermilab press release.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Cambridge Chemists Make Breakthrough With “Ultimate” Battery Which Can Power a Car From London to Edinburgh

source: http://www.independent.co.uk/

Scientists at Cambridge University have made a breakthrough by solving issues related to a battery that, in theory, could enable a car to drive from London to Edinburgh on a single charge.

A research paper published in the journal Science details how the team at Cambridge University overcame obstacles in the development of lithium-air batteries. The batteries, touted as the “ultimate battery”, theoretically have the ability to store ten times more energy than lithium-ion batteries.

But until now, unwanted chemical reactions and problems with efficiency associated with lithium-air batteries have plagued efforts by scientists to develop them.

The researchers at Cambridge claim to have solved a number of the issues, and if the team’s laboratory experiment can be turned into a commercial product, it will enable a car to drive from London to Edinburgh on a single charge.

A driver demonstrates a miniature electric car in 1985

Professor Clare Grey, one of the paper’s senior authors, said: “What we’ve achieved is a significant advance for this technology and suggests whole new areas for research – we haven’t solved all the problems inherent to this chemistry, but our results do show routes forward towards a practical device.”

But the report’s authors do warn that a practical lithium-air battery still remains at least a decade away – there are several practical challenges that need to be addressed before the batteries become a viable alternative to gasoline.

Prof Grey added: “While there are still plenty of fundamental studies that remain to be done, to iron out some of the mechanistic details, the current results are extremely exciting – we are still very much at the development stage, but we’ve shown that there are solutions to some of the tough problems associated with this technology.”

Breaking the Mould: Untangling the Jelly-Like Properties of Diseased Proteins

source: www.cam.ac.uk

Scientists at the University of Cambridge have identified a new property of essential proteins which, when it malfunctions, can cause the build-up, or ‘aggregation’, of misshapen proteins and lead to serious diseases.

Our approach shows the importance of considering the mechanisms of diseases as not just biological, but also physical processes

Peter St George-Hyslop

A common characteristic of neurodegenerative diseases – such as Alzheimer’s, Parkinson’s and Huntington’s disease – is the build-up of ‘misfolded’ proteins, which cause irreversible damage to the brain. For example, Alzheimer’s disease sees the build-up of beta-amyloid ‘plaques’ and tau ‘tangles’.

In the case of some forms of motor neurone disease (also known as amyotrophic lateral sclerosis, or ALS) and frontotemporal dementia, it is the build-up of ‘assemblies’ of misshapen FUS protein and several other RNA-binding proteins that is associated with disease. However, the assembly of these RNA-binding proteins differs in several ways from the conventional protein aggregates seen in Alzheimer’s disease and Parkinson’s disease, and as a result, the significance of the build-up of these proteins and how it occurs has until now been unclear.

FUS is an RNA-binding protein, which has a number of important functions in regulating RNA transcription (the first step in DNA expression) and splicing in the nucleus of cells. FUS also has functions in the cytoplasm of cells involved in regulating the translation of RNA into proteins. There are several other similar RNA binding proteins: a common feature of all of them is that in addition to having domains to bind RNA they also have domains where the protein appears to be unfolded or unstructured.

In a study published today in the journal Neuron, scientists at the University of Cambridge examined FUS’s physical properties to demonstrate how the protein’s unfolded domain enables it to undergo reversible ‘phase transitions’. In other words, it can change back and forth from a fully soluble ‘monomer’ form into distinct localised accumulations that resemble liquid droplets and then further condense into jelly-like structures that are known as hydrogels. During these changes, the protein ‘assemblies’ capture and release RNA and other proteins. In essence this process allows cellular machinery for RNA transcription and translation to be condensed in high concentrations within restricted three-dimensional space without requiring a limiting membrane, thereby helping to easily regulate these vital cellular processes.

Using the nematode worm C. elegans as a model of ALS and frontotemporal dementia, the team was then able to also show that this process can become irreversible. Mutated FUS proteins cause the condensation process to go too far, forming thick gels that are unable to return to their soluble state. As a result, these irreversible gel-like assemblies trap other important proteins, preventing them carrying out their usual functions. One consequence is that it affects the synthesis of new proteins in nerve cell axons (the trunk of a nerve cell).

Importantly, the researchers also showed that by disrupting the formation of these irreversible assemblies (for example, by targeting with particular small molecules), it is possible to rescue the impaired motility and prolong the worm’s lifespan.

Like jelly on a plate

The behaviour of FUS can be likened to that of a jelly, explains Professor Peter St George-Hyslop from the Cambridge Institute for Medical Research.

When first made, jelly is runny, like a liquid. As it cools in the fridge, it begins to set, initially becoming slightly thicker than water but still runny, as the gelatin molecules form into longer, fibre-like chains known as fibrils. If you dropped a droplet of this nearly-set jelly into water, it would (at least briefly) remain distinct from the surrounding water – a ‘liquid droplet’ within a liquid.

As the jelly cools further in the fridge, the gelatin fibres condense more, and it eventually becomes a firmly set jelly that can be flipped out of the mould onto a plate. This set jelly is a ‘hydrogel’, a loose meshwork of protein (gelatin) fibrils that is dense enough to hold the water inside the spaces between its fibres. The set jelly holds the water in a constrained 3D space – and depending on the recipe, there may be some other ‘cargo’ suspended within the jelly, such as bits of fruit (in the case of FUS this ‘cargo’ might be ribosomes, other proteins, enzymes or RNA, for example).

When the jelly is stored in a cool room, the fruit is retained in the jelly. This means the fruit (or ribosomes, etc) can be moved around the house and eventually put on the dinner table (or in the case of FUS, be transported to parts of a cell with unique protein synthesis requirements).

If the jelly is re-warmed, it melts and releases its fruit, which then floats off. But if the liquid molten jelly is put back in the fridge and re-cooled, it re-makes a firm hydrogel again, and the fruit is once again trapped. In theory, this cycle of gel-melt-gel-melt can be repeated endlessly.

However, if the jelly is left out, the water will slowly evaporate and the jelly condenses down, changing from a soft, easily-melted jelly to a thick, rubbery one. (In fact, jelly is often sold as a dense cube like this.) In this condensed jelly, the meshwork of protein fibrils is much closer together, and it becomes increasingly difficult to get the condensed jelly to melt (you would have to pour boiling water on it to get it to melt). Because the condensed jelly is not easily meltable in this state, any cargo (fruit, ribosomes, etc.) within the jelly essentially becomes irreversibly trapped.

In the case of FUS and other RNA-binding proteins, the ‘healthy’ proteins only very rarely spontaneously over-condense. However, disease-causing mutations make these proteins much more prone to spontaneously condense down into thick fibrous gels, trapping their cargo (in this case the ribosomes, etc.), which then becomes unavailable for use.

So essentially, this new research shows that the ability of some proteins to self-assemble into liquid droplets and (slightly more viscous) jellies/hydrogel is a useful property that allows cells to transiently concentrate cellular machinery into a constrained 3D space in order to perform key tasks, and then disassemble and disperse the machinery when not needed. It is probably faster and less energy-costly than doing the same thing inside intracellular membrane-bound vesicles – but that same property can go too far, leading to disease.

Professor St George-Hyslop says: “We’ve shown that a particular group of proteins can regulate vital cellular processes by their distinct ability to transition between different states. But this essential property also makes them vulnerable to forming more fixed structures if mutated, disrupting their normal function and causing disease.

“The same principles are likely to be at play in other more common forms of these diseases due to mutation in other related binding proteins. Understanding what is in these assemblies should provide further targets for disease treatments.

“Our approach shows the importance of considering the mechanisms of diseases as not just biological, but also physical processes. By bringing together people from the biological and physical sciences, we’ve been able to better understand how misshapen proteins build up and cause disease.”

The research was funded in the UK by the Wellcome Trust, the Medical Research Council and the National Institute for Health Research; in Canada by the Canadian Institutes of Health Research; and in the US by the National Institutes of Health.

Reference
Murakami, T et al. ALS/FTD mutation-induced phase transition of FUS liquid droplets and reversible hydrogels into irreversible hydrogels impairs RNP granule function. Neuron; 29 Oct 2015



New Design Points a Path To The ‘Ultimate’ Battery

source: www.cam.ac.uk

Researchers have successfully demonstrated how several of the problems impeding the practical development of the so-called ‘ultimate’ battery could be overcome.

What we’ve achieved is a significant advance for this technology and suggests whole new areas for research

Clare Grey

Scientists have developed a working laboratory demonstrator of a lithium-oxygen battery which has very high energy density, is more than 90% efficient, and, to date, can be recharged more than 2000 times, showing how several of the problems holding back the development of these devices could be solved.

Lithium-oxygen, or lithium-air, batteries have been touted as the ‘ultimate’ battery due to their theoretical energy density, which is ten times that of a lithium-ion battery. Such a high energy density would be comparable to that of gasoline – and would enable an electric car with a battery that is a fifth the cost and a fifth the weight of those currently on the market to drive from London to Edinburgh on a single charge.
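The range claim can be made concrete with a back-of-envelope estimate. Every number below is an assumed round figure for illustration – the article gives only the “ten times” energy density and “a fifth the weight” comparisons, not these specific values.

```python
# Back-of-envelope range estimate; all figures are illustrative assumptions.
LI_ION_WH_PER_KG = 250                      # assumed practical Li-ion specific energy
LI_AIR_WH_PER_KG = 10 * LI_ION_WH_PER_KG    # the article's "ten times" theoretical claim
PACK_MASS_KG = 60                           # assumed pack at a fifth of a ~300 kg Li-ion pack
CONSUMPTION_WH_PER_KM = 180                 # assumed typical electric-car consumption

pack_energy_wh = PACK_MASS_KG * LI_AIR_WH_PER_KG
range_km = pack_energy_wh / CONSUMPTION_WH_PER_KM
print(f"Estimated range: {range_km:.0f} km")  # comfortably over the ~650 km London-Edinburgh drive
```

Even with these deliberately rough inputs, a much lighter pack at ten times the specific energy clears the London–Edinburgh distance with margin, which is the point of the comparison.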

However, as is the case with other next-generation batteries, there are several practical challenges that need to be addressed before lithium-air batteries become a viable alternative to gasoline.

Now, researchers from the University of Cambridge have demonstrated how some of these obstacles may be overcome, and developed a lab-based demonstrator of a lithium-oxygen battery which has higher capacity, increased energy efficiency and improved stability over previous attempts.

Their demonstrator relies on a highly porous, ‘fluffy’ carbon electrode made from graphene (comprising one-atom-thick sheets of carbon atoms), and additives that alter the chemical reactions at work in the battery, making it more stable and more efficient. While the results, reported in the journal Science, are promising, the researchers caution that a practical lithium-air battery still remains at least a decade away.

“What we’ve achieved is a significant advance for this technology and suggests whole new areas for research – we haven’t solved all the problems inherent to this chemistry, but our results do show routes forward towards a practical device,” said Professor Clare Grey of Cambridge’s Department of Chemistry, the paper’s senior author.

Many of the technologies we use every day have been getting smaller, faster and cheaper each year – with the notable exception of batteries. Beyond the promise of a smartphone that lasts for days without needing to be charged, the challenges associated with making a better battery are holding back the widespread adoption of two major clean technologies: electric cars and grid-scale storage for solar power.

“In their simplest form, batteries are made of three components: a positive electrode, a negative electrode and an electrolyte,” said Dr Tao Liu, also from the Department of Chemistry, and the paper’s first author.

In the lithium-ion (Li-ion) batteries we use in our laptops and smartphones, the negative electrode is made of graphite (a form of carbon), the positive electrode is made of a metal oxide, such as lithium cobalt oxide, and the electrolyte is a lithium salt dissolved in an organic solvent. The action of the battery depends on the movement of lithium ions between the electrodes. Li-ion batteries are light, but their capacity deteriorates with age, and their relatively low energy densities mean that they need to be recharged frequently.

Over the past decade, researchers have been developing various alternatives to Li-ion batteries, and lithium-air batteries are considered the ultimate in next-generation energy storage because of their extremely high energy density. However, previous attempts at working demonstrators have had low efficiency, poor rate performance and unwanted chemical reactions, and could only be cycled in pure oxygen.

What Liu, Grey and their colleagues have developed uses a very different chemistry than earlier attempts at a non-aqueous lithium-air battery, relying on lithium hydroxide (LiOH) instead of lithium peroxide (Li2O2). With the addition of water and the use of lithium iodide as a ‘mediator’, their battery showed far less of the chemical reactions which can cause cells to die, making it far more stable after multiple charge and discharge cycles.

By precisely engineering the structure of the electrode, changing it to a highly porous form of graphene, adding lithium iodide, and changing the chemical makeup of the electrolyte, the researchers were able to reduce the ‘voltage gap’ between charge and discharge to 0.2 volts. A small voltage gap equals a more efficient battery – previous versions of a lithium-air battery have only managed to get the gap down to 0.5–1.0 volts, whereas 0.2 volts is closer to that of a Li-ion battery, and equates to an energy efficiency of 93%.
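The link between voltage gap and efficiency is simple arithmetic: round-trip voltage efficiency is the discharge voltage divided by the charge voltage. As a minimal sketch, the average discharge voltage of 2.66 V below is an assumed figure, chosen so that a 0.2 V gap reproduces the quoted 93%; the article itself does not state the cell voltages, and full energy efficiency also depends on factors this sketch ignores.

```python
def voltage_efficiency(v_discharge, gap):
    """Round-trip voltage efficiency = V_discharge / V_charge,
    where V_charge = V_discharge + gap (the charge/discharge voltage gap)."""
    return v_discharge / (v_discharge + gap)

V_DIS = 2.66  # assumed average discharge voltage in volts, for illustration only

print(f"0.2 V gap: {voltage_efficiency(V_DIS, 0.2):.0%}")  # ~93%, matching the article
print(f"0.8 V gap: {voltage_efficiency(V_DIS, 0.8):.0%}")  # mid-range of earlier 0.5-1.0 V cells
```

The comparison shows why shrinking the gap matters: at the same discharge voltage, an 0.8 V gap wastes roughly a quarter of the energy put in on each cycle, while a 0.2 V gap wastes only about 7%.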

The highly porous graphene electrode also greatly increases the capacity of the demonstrator, although only at certain rates of charge and discharge. Other issues that still have to be addressed include finding a way to protect the metal electrode so that it doesn’t form spindly lithium metal fibres known as dendrites, which can cause batteries to explode if they grow too much and short-circuit the battery.

Additionally, the demonstrator can only be cycled in pure oxygen, while the air around us also contains carbon dioxide, nitrogen and moisture, all of which are generally harmful to the metal electrode.

“There’s still a lot of work to do,” said Liu. “But what we’ve seen here suggests that there are ways to solve these problems – maybe we’ve just got to look at things a little differently.”

“While there are still plenty of fundamental studies that remain to be done, to iron out some of the mechanistic details, the current results are extremely exciting – we are still very much at the development stage, but we’ve shown that there are solutions to some of the tough problems associated with this technology,” said Grey.

The authors acknowledge support from the US Department of Energy, the Engineering and Physical Sciences Research Council (EPSRC), Johnson Matthey and the European Union via Marie Curie Actions and the Graphene Flagship. The technology has been patented and is being commercialised through Cambridge Enterprise, the University’s commercialisation arm.

Reference:
Liu, T et al. ‘Cycling Li-O2 Batteries via LiOH Formation and Decomposition.’ Science (2015). DOI: 10.1126/science.aac7730




Entanglement at Heart of ‘Two-For-One’ Fission in Next-Generation Solar Cells

source: www.cam.ac.uk

The mechanism behind a process known as singlet fission, which could drive the development of highly efficient solar cells, has been directly observed by researchers for the first time.

Harnessing the process of singlet fission into new solar cell technologies could allow tremendous increases in energy conversion efficiencies in solar cells

Alex Chin

An international team of scientists have observed how a mysterious quantum phenomenon in organic molecules takes place in real time, which could aid in the development of highly efficient solar cells.

The researchers, led by the University of Cambridge, used ultrafast laser pulses to observe how a single particle of light, or photon, can be converted into two energetically excited particles, known as spin-triplet excitons, through a process called singlet fission. If singlet fission can be controlled, it could enable solar cells to double the amount of electrical current that can be extracted.

In conventional semiconductors such as silicon, when one photon is absorbed it leads to the formation of one free electron that can be harvested as electrical current. However, certain materials undergo singlet fission instead, where the absorption of a photon leads to the formation of two spin-triplet excitons.

Working with researchers from the Netherlands, Germany and Sweden, the Cambridge team confirmed that this ‘two-for-one’ transformation involves an elusive intermediate state in which the two triplet excitons are ‘entangled’, a feature of quantum theory that causes the properties of each exciton to be intrinsically linked to that of its partner.

By shining ultrafast laser pulses – just a few quadrillionths of a second long – on a sample of pentacene, an organic material which undergoes singlet fission, the researchers were able to directly observe this entangled state for the first time, and showed how molecular vibrations both make it detectable and drive its creation through quantum dynamics. The results are reported today (26 October) in the journal Nature Chemistry.

“Harnessing the process of singlet fission into new solar cell technologies could allow tremendous increases in energy conversion efficiencies in solar cells,” said Dr Alex Chin from the University’s Cavendish Laboratory, one of the study’s co-authors. “But before we can do that, we need to understand how exciton fission happens at the microscopic level. This is the basic requirement for controlling this fascinating process.”

The key challenge for observing real-time singlet fission is that the entangled spin-triplet excitons are essentially ‘dark’ to almost all optical probes, meaning they cannot be directly created or destroyed by light. In materials like pentacene, the first stage – which can be seen – is the absorption of light that creates a single, high-energy exciton, known as a spin-singlet exciton. The subsequent fission of the singlet exciton into two less energetic triplet excitons gives the process its name, but the ability to see what is going on vanishes as the process takes place.

To get around this, the team employed a powerful technique known as two-dimensional spectroscopy, which involves hitting the material with a co-ordinated sequence of ultrashort laser pulses and then measuring the light emitted by the excited sample. By varying the time between the pulses in the sequence, it is possible to follow in real time how energy absorbed by previous pulses is transferred and transformed into different states.

Using this approach, the team found that when the pentacene molecules were vibrated by the laser pulses, certain changes in the molecular shapes cause the triplet pair to become briefly light-absorbing, and therefore detectable by later pulses. By carefully filtering out all but these frequencies, a weak but unmistakable signal from the triplet pair state became apparent.

The authors then developed a model which showed that when the molecules are vibrating, they possess new quantum states that simultaneously have the properties of both the light-absorbing singlet exciton and the dark triplet pairs. These quantum ‘superpositions’, which are the basis of Schrödinger’s famous thought experiment in which a cat is – according to quantum theory – in a state of being both alive and dead at the same time, not only make the triplet pairs visible, they also allow fission to occur directly from the moment light is absorbed.

“This work shows that optimised fission in real materials requires us to consider more than just the static arrangements and energies of molecules; their motion and quantum dynamics are just as important,” said Dr Akshay Rao, from the University’s Cavendish Laboratory. “It is a crucial step towards opening up new routes to highly efficient solar cells.”

The research was supported by the European LaserLab Consortium, Royal Society, and the Netherlands Organization for Scientific Research. The work at Cambridge forms part of a broader initiative to harness high tech knowledge in the physical sciences to tackle global challenges such as climate change and renewable energy. This initiative is backed by the UK Engineering and Physical Sciences Research Council (EPSRC) and the Winton Programme for the Physics of Sustainability.

Reference:
Bakulin, A et al. ‘Real-time observation of multiexcitonic states in ultrafast singlet fission using coherent 2D electronic spectroscopy.’ Nature Chemistry (2015). DOI: 10.1038/nchem.2371



 

Social Yeast Cells Prefer to Work With Close Relatives to Make Our Beer, Bread & Wine

source: www.cam.ac.uk

Baker’s yeast cells living together in communities help feed each other, but leave incomers from the same species to die from starvation, according to new research from the University of Cambridge.

The cell-cell cooperation we uncovered plays a significant role in allowing yeast to help us to produce our food, beer and wine

Kate Campbell

The findings, published today in the open access journal eLife, could lead to new biotechnological production systems based on metabolic cooperation. They could also be used to inhibit cell growth by blocking the exchange of metabolites between cells. This could be a new strategy to combat fungal pathogens or tumour cells.

“The cell-cell cooperation we uncovered plays a significant role in allowing yeast to help us to produce our food, beer and wine,” says first author Kate Campbell.

“It may also be crucial for all eukaryotic life, including animals, plants and fungi.”

Yeast metabolism has been exploited by mankind for thousands of years for brewing and baking. Yeast metabolize sugar and secrete a wide array of small molecules during their life cycle, from alcohols and carbon dioxide to antioxidants and amino acids. Although much research has shown yeast to be a robust metabolic workhorse, only more recently has it become clear that these single-celled organisms assemble in communities, in which individual cells may play a specialised function.

For the new study funded by the Wellcome Trust and European Research Council, researchers at the University of Cambridge and the Francis Crick Institute found cells to be highly efficient at exchanging some of their essential building blocks (amino acids and nucleobases, such as the A, T, G and C constituents of DNA) in what they call metabolic cooperation. However, they do not do so with every kind of yeast cell: they share nutrients with cells descendant from the same ancestor, but not with other cells from the same species when they originate from another community.

Using a synthetic biology approach, the team led by Dr Markus Ralser at the Department of Biochemistry started with a metabolically competent yeast mother cell, genetically manipulated so that its daughters progressively lose essential metabolic genes. They used it to grow a heterogeneous, multi-generation population of yeast in which individual cells are deficient for various nutrients.

Campbell then tested whether cells lacking a metabolic gene can survive by sharing nutrients with their family members. When living within their community setting, these cells could continue to grow and survive. This meant that cells were being kept alive by neighbouring cells, which still had their metabolic activity intact, providing them with a much needed nutrient supply. Eventually, the colony established a composition where the majority of cells did help each other out. When cells of the same species but derived from another community were introduced, social interactions did not establish and the foreign cells died from starvation.

When the successful community was compared to other yeast strains with no metabolic deficiencies, the researchers found no pronounced differences in how the two communities grew and produced biomass. This implies that sharing was so efficient that any disadvantage was cancelled out.

The implications of these results may therefore be substantial for industries in which yeast are used to produce biomolecules of interest. This includes biofuels, vaccines and food supplements. The research might also help to develop therapeutic strategies against pathogenic fungi, such as the yeast Candida albicans, which form cooperative communities to overcome our immune system.

Reference

Kate Campbell, Jakob Vowinckel, Michael Muelleder, Silke Malmsheimer, Nicola Lawrence, Enrica Calvani, Leonor Miller-Fleming, Mohammad T. Alam, Stefan Christen, Markus A. Keller, and Markus Ralser. ‘Self-establishing communities enable cooperative metabolite exchange in a eukaryote.’ eLife (2015). DOI: 10.7554/eLife.09943



 

Plague in Humans ‘Twice as Old’ But Didn’t Begin as Flea-Borne, Ancient DNA Reveals

source: www.cam.ac.uk

New research dates plague back to the early Bronze Age, showing it had been endemic in humans across Eurasia for millennia prior to the first recorded global outbreak, and that ancestral plague mutated into its bubonic, flea-borne form between the 2nd and 1st millennium BC.

These results show that the ancient DNA has the potential not only to map our history and prehistory, but also discover how disease may have shaped it

Eske Willerslev

New research using ancient DNA has revealed that plague has been endemic in human populations for more than twice as long as previously thought, and that the ancestral plague would have been predominantly spread by human-to-human contact – until genetic mutations allowed Yersinia pestis (Y. pestis), the bacteria that causes plague, to survive in the gut of fleas.

These mutations, which may have occurred near the turn of the 1st millennium BC, gave rise to the bubonic form of plague that spreads at terrifying speed through flea – and consequently rat – carriers. The bubonic plague caused the pandemics that decimated global populations, including the Black Death, which wiped out half the population of Europe in the 14th century.

Before its flea-borne evolution, however, researchers say that plague was in fact endemic in the human populations of Eurasia at least 3,000 years before the first plague pandemic in historical records (the Plague of Justinian in 541 AD).

They say the new evidence that Y. pestis bacterial infection in humans actually emerged around the beginning of the Bronze Age suggests that plague may have been responsible for major population declines believed to have occurred in the late 4th and early 3rd millennium BC.

The work was conducted by an international team including researchers from the universities of Copenhagen, Denmark, and Cambridge, UK, and the findings are published today in the journal Cell.

“We found that the Y. pestis lineage originated and was widespread much earlier than previously thought, and we narrowed the time window as to when and how it developed,” said senior author Professor Eske Willerslev, who recently joined Cambridge University’s Department of Zoology from the University of Copenhagen.

“The underlying mechanisms that facilitated the evolution of Y. pestis are present even today. Learning from the past may help us understand how future pathogens may arise and evolve,” he said.

Researchers analysed ancient genomes extracted from the teeth of 101 adults dating from the Bronze Age and found across the Eurasian landmass.

They found Y. pestis bacteria in the DNA of seven of the adults, the oldest of whom died 5,783 years ago – the earliest evidence of plague. Previously, direct molecular evidence for Y. pestis had not been obtained from skeletal material older than 1,500 years.

However, six of the seven plague samples were missing two key genetic components found in most modern strains of plague: a “virulence gene” called ymt, and a mutation in an “activator gene” called pla.

The ymt gene protects the bacteria from being destroyed by the toxins in flea guts, so that it multiplies, choking the flea’s digestive tract. This causes the starving flea to frantically bite anything it can, and, in doing so, spread the plague.

The mutation in the pla gene allows Y. pestis bacteria to spread across different tissues, turning the localised lung infection of pneumonic plague into one of the blood and lymph nodes.

Researchers concluded these early strains of plague could not have been carried by fleas without ymt. Nor could they cause bubonic plague – which affects the lymphatic immune system, and inflicts the infamous swollen buboes of the Black Death – without the pla mutation.

Consequently, the plague that stalked populations for much of the Bronze Age must have been pneumonic, which directly affects the respiratory system and causes desperate, hacking coughing fits just before death. Breathing around infected people leads to inhalation of the bacteria, the crux of its human-to-human transmission.

Study co-author Dr Marta Mirazón-Lahr, from Cambridge’s Leverhulme Centre for Human Evolutionary Studies (LCHES), points out that a study earlier this year from Willerslev’s Copenhagen group showed the Bronze Age to be a highly active migratory period, which could have led to the spread of pneumonic plague.

“The Bronze Age was a period of major metal weapon production and, it is thought, increased warfare, which is compatible with emerging evidence of large population movements at the time. If pneumonic plague was carried as part of these migrations, it would have had devastating effects on small groups they encountered,” she said.

“Well-documented cases have shown the pneumonic plague’s chain of infection can go from a single hunter or herder to ravaging an entire community in two to three days.”

The most recent of the seven ancient genomes to reveal Y. pestis in the new study has both of the key genetic mutations, indicating an approximate timeline for the evolution that spawned flea-borne bubonic plague.

“Among our samples, the mutated plague strain is first observed in Armenia in 951 BC, yet is absent in the next most recent sample from 1686 BC – suggesting bubonic strains evolve and become fixed in the late 2nd and very early 1st millennium BC,” said Mirazón-Lahr.

“However, the 1686 BC sample is from the Altai mountains near Mongolia. Given the distance between Armenia and Altai, it’s also possible that the Armenian strain of bubonic plague has a longer history in the Middle East, and that historical movements during the 1st millennium BC exported it elsewhere.”

The Books of Samuel in the Bible describe an outbreak of plague among the Philistines in 1320 BC, complete with swellings in the groin, which the World Health Organization has argued fits the description of bubonic plague. Mirazón-Lahr suggests this may support the idea of a Middle Eastern origin for the plague’s highly lethal genetic evolution.

Co-author Professor Robert Foley, also from Cambridge’s LCHES, suggests that the lethality of bubonic plague may have required the right population demography before it could thrive.

“Every pathogen has a balance to maintain. If it kills a host before it can spread, it too reaches a ‘dead end’. Highly lethal diseases require certain demographic intensity to sustain them.

“The endemic nature of pneumonic plague was perhaps more adapted for an earlier Bronze Age population. Then, as Eurasian societies grew in complexity and trading routes continued to open up, maybe the conditions started to favour the more lethal form of plague,” Foley said.

“The Bronze Age is the edge of history, and ancient DNA is making what happened at this critical time more visible,” he said.

Willerslev added: “These results show that the ancient DNA has the potential not only to map our history and prehistory, but also discover how disease may have shaped it.”

Inset image: Map showing where the remains of the Bronze Age plague victims were found.


New Microscopic Imaging Technology Reveals Origins of Leukaemia

New microscopic imaging technology reveals origins of leukaemia

source: www.cam.ac.uk

Scientists at the Cambridge Institute for Medical Research at the University of Cambridge and the Medical Research Council Laboratory of Molecular Biology have taken advantage of revolutionary developments in microscopic imaging to reveal the origins of leukaemia.

Many forms of blood cancer can be traced back to defects in the basic housekeeping processes in our cells’ maturation

Alan Warren

The researchers studied tiny protein-producing factories, called ribosomes, isolated from cells. They capitalised on improvements made at the LMB to a high-powered imaging technique known as single particle cryo-electron microscopy.

The microscopes, capable of achieving detail near to the atomic level, enabled the team to link the molecular origins of a rare inherited leukaemia predisposition disorder, ‘Shwachman-Diamond Syndrome’, and a more common form of acute leukaemia to a common pathway involved in the construction of ribosomes.

Cryo-EM map showing the large ribosomal subunit (cyan), eIF6 (yellow) and the SBDS protein (magenta) that is deficient in the inherited leukaemia predisposition disorder Shwachman-Diamond syndrome. Credit: Alan Warren, University of Cambridge

The research, funded by the blood cancer charity Bloodwise and the Medical Research Council (MRC), is published online in the journal Nature Structural and Molecular Biology.

Ribosomes are the molecular machinery in cells that produce proteins by ‘translating’ the instructions contained in DNA via an intermediary messenger molecule. Errors in this process are known to play a part in the development of some bone marrow disorders and leukaemias. Until now scientists have been unable to study ribosomes at a high enough resolution to understand exactly what goes wrong.

Ribosomes are constructed in a series of discrete steps, like an assembly line. One of the final assembly steps involves the release of a key building block that allows the ribosome to become fully functional. The research team showed that a corrupted mechanism underlying this fundamental late step prevents proper assembly of the ribosome.

This provides an explanation for how cellular processes go awry in both Shwachman-Diamond syndrome and one in 10 cases of T-cell acute lymphoblastic leukaemia. This form of leukaemia, which affects around 60 children and young teenagers a year in the UK, is harder to treat than the more common B-cell form.

The findings from the Cambridge scientists, who worked in collaboration with scientists at the University of Rennes in France, open up the possibility that a single drug designed to target this molecular fault could be developed to treat both diseases.

Professor Alan Warren, from the Cambridge Institute of Medical Research at the University of Cambridge, said: “We are starting to find that many forms of blood cancer can be traced back to defects in the basic housekeeping processes in our cells’ maturation. Pioneering improvements to electron microscopes pave the way for the creation of a detailed map of how these diseases develop, in a way that was never possible before.”

Single particle cryo-electron microscopy preserves the ribosomes at sub-zero temperatures to allow the collection and amalgamation of multiple images of maturing ribosomes in different orientations to ultimately provide more detail.

The technique has been refined in the MRC Laboratory of Molecular Biology by the development of new ‘direct electron detectors’ to better sense the electrons, yielding images of unprecedented quality. Methods to correct for beam-induced sample movements and new classification methods that can separate out several different structures within a single sample have also been developed.

Dr Matt Kaiser, Head of Research at Bloodwise, said: “New insights into the biology of blood cancers and disorders that originate in the bone marrow have only been made possible by the latest advances in technology. While survival rates for childhood leukaemia have improved dramatically over the years, this particular form of leukaemia is harder to treat and still relies on toxic chemotherapy. These findings will offer hope that new, more targeted, treatments can be developed.”

The research received additional funding from a Federation of European Biochemical Societies (FEBS) Long-Term Fellowship, the SDS patient charity Ted’s Gang and the Cambridge NIHR Biomedical Research Centre.

Adapted from a press release by Bloodwise

Reference
Weis, F. et al. Mechanism of eIF6 release from the nascent 60S ribosomal subunit. Nature Structural and Molecular Biology; 19 Oct 2015



New Graphene-Based Inks for High-Speed Manufacturing of Printed Electronics

New graphene-based inks for high-speed manufacturing of printed electronics

source: www.cam.ac.uk

A low-cost, high-speed method for printing electronics using graphene and other conductive materials could open up a wide range of commercial applications.

Being able to produce conductive inks that could effortlessly be used for printing at a commercial scale at a very high speed will open up all kinds of different applications for graphene and other similar materials

Tawfique Hasan

A low-cost, high-speed method for printing graphene inks using a conventional roll-to-roll printing process, like that used to print newspapers and crisp packets, could open up a wide range of practical applications, including inexpensive printed electronics, intelligent packaging and disposable sensors.

Developed by researchers at the University of Cambridge in collaboration with Cambridge-based technology company Novalia, the method allows graphene and other electrically conducting materials to be added to conventional water-based inks and printed using typical commercial equipment. It is the first time that graphene has been printed on a large-scale commercial printing press at high speed.

Graphene is a two-dimensional sheet of carbon atoms, just one atom thick. Its flexibility, optical transparency and electrical conductivity make it suitable for a wide range of applications, including printed electronics. Although numerous laboratory prototypes have been demonstrated around the world, widespread commercial use of graphene is yet to be realised.

“We are pleased to be the first to bring graphene inks close to real-world manufacturing. There are lots of companies that have produced graphene inks, but none of them has done it on a scale close to this,” said Dr Tawfique Hasan of the Cambridge Graphene Centre (CGC), who developed the method. “Being able to produce conductive inks that could effortlessly be used for printing at a commercial scale at a very high speed will open up all kinds of different applications for graphene and other similar materials.”

“This method will allow us to put electronic systems into entirely unexpected shapes,” said Chris Jones of Novalia. “It’s an incredibly flexible enabling technology.”

Hasan’s method, developed at the University’s Nanoscience Centre, works by suspending tiny particles of graphene in a ‘carrier’ solvent mixture, which is added to conductive water-based ink formulations. The ratio of the ingredients can be adjusted to control the liquid’s properties, allowing the carrier solvent to be easily mixed into a conventional conductive water-based ink to significantly reduce the resistance. The same method works for materials other than graphene, including metallic, semiconducting and insulating nanoparticles.

Currently, printed conductive patterns use a combination of poorly conducting carbon with other materials, most commonly silver, which is expensive. Silver-based inks cost £1000 or more per kilogram, whereas this new graphene ink formulation would be 25 times cheaper. Additionally, silver is not recyclable, while graphene and other carbon materials can easily be recycled. The new method uses cheap, non-toxic and environmentally friendly solvents that can be dried quickly at room temperature, reducing energy costs for ink curing. Once dry, the ‘electric ink’ is also waterproof and adheres to its substrate extremely well.

The graphene-based inks have been printed at a rate of more than 100 metres per minute, which is in line with commercial production rates for graphics printing, and far faster than earlier prototypes. Two years ago, Hasan and his colleagues produced a prototype of a transparent and flexible piano using graphene-based inks, which took between six and eight hours to make. Through the use of this new ink, more versatile devices on paper or plastic can be made at a rate of 300 per minute, at a very low cost. Novalia has also produced a printed DJ deck and an interactive poster, which functions as a drum kit using the same method.

Hasan and PhD students Guohua Hu, Richard Howe and Zongyin Yang of the Hybrid Nanomaterials Engineering group at CGC, in collaboration with Novalia, tested the method on a typical commercial printing press, which required no modifications in order to print with the graphene ink. In addition to the new applications the method will open up for graphene, it could also initiate entirely new business opportunities for commercial graphics printers, who could diversify into the electronics sector.

“The UK, and the Cambridge area in particular, has always been strong in the printing sector, but mostly for graphics printing and packaging,” said Hasan, a Royal Academy of Engineering Research Fellow and a University Lecturer in the Engineering Department. “We hope to use this strong local expertise to expand our functional ink platform. In addition to cheaper printable electronics, this technology opens up potential application areas such as smart packaging and disposable sensors, which to date have largely been inaccessible due to cost.”

In the short to medium term, the researchers hope to use their method to make printed, disposable biosensors, energy harvesters and RFID tags.

The research was supported by grants from the Engineering and Physical Sciences Research Council’s Impact Acceleration Account and a Royal Academy of Engineering Research Fellowship. The technology is being commercialised by Cambridge Enterprise, the University’s commercialisation arm.



Using Experts ‘Inexpertly’ Leads to Policy Failure, Warn Researchers

Using experts ‘inexpertly’ leads to policy failure, warn researchers

source: www.cam.ac.uk

Evidence shows that experts are frequently fallible, say leading risk researchers, and policy makers should not act on expert advice without using rigorous methods that balance subjective distortions inherent in expert estimates.

The cost of ignoring these techniques – of using experts inexpertly – is less accurate information and so more frequent, and more serious, policy failures

William Sutherland and Mark Burgman

The accuracy and reliability of expert advice is often compromised by “cognitive frailties”, and needs to be interrogated with the same tenacity as research data to avoid weak and ill-informed policy, warn two leading risk analysis and conservation researchers in the journal Nature today.

While many governments aspire to evidence-based policy, the researchers say the evidence on experts themselves actually shows that they are highly susceptible to “subjective influences” – from individual values and mood, to whether they stand to gain or lose from a decision – and, while highly credible, experts often vastly overestimate their objectivity and the reliability of peers.

The researchers caution that conventional approaches to informing policy, such as seeking advice from well-regarded individuals or assembling expert panels, need to be balanced with methods that alleviate the effects of psychological and motivational bias.

They offer a straightforward framework for improving expert advice, and say that experts should provide and assess evidence on which decisions are made – but not advise decision makers directly, which can skew impartiality.

“We are not advocating replacing evidence with expert judgements, rather we suggest integrating and improving them,” write professors William Sutherland and Mark Burgman from the universities of Cambridge and Melbourne respectively.

“Policy makers use expert evidence as though it were data. So they should treat expert estimates with the same critical rigour that must be applied to data,” they write.

“Experts must be tested, their biases minimised, their accuracy improved, and their estimates validated with independent evidence. Put simply, experts should be held accountable for their opinions.”

Sutherland and Burgman point out that highly regarded experts are routinely shown to be no better than novices at making judgements.

However, several processes have been shown to improve performances across the spectrum, they say, such as ‘horizon scanning’ – identifying all possible changes and threats – and ‘solution scanning’ – listing all possible options, using both experts and evidence, to reduce the risk of overlooking valuable alternatives.

To get better answers from experts, they need better, more structured questions, say the authors. “A seemingly straightforward question, ‘How many diseased animals are there in the area?’ for example, could be interpreted very differently by different people. Does it include those that are infectious and those that have recovered? What about those yet to be identified?” said Sutherland, from Cambridge’s Department of Zoology.

“Structured question formats that extract upper and lower boundaries, degrees of confidence and force consideration of alternative theories are important for shoring against slides into group-think, or individuals getting ascribed greater credibility based on appearance or background,” he said.

When seeking expert advice, all parties must be clear about what they expect of each other, says Burgman, Director of the Centre of Excellence for Biosecurity Risk Analysis. “Are policy makers expecting estimates of facts, predictions of the outcome of events, or advice on the best course of action?”

“Properly managed, experts can help with estimates and predictions, but providing advice assumes the expert shares the same values and objectives as the decision makers. Experts need to stick to helping provide and assess evidence on which such decisions are made,” he said.

Sutherland and Burgman have created a framework of eight key ways to improve the advice of experts. These include using groups – not individuals – with diverse, carefully selected members well within their expertise areas.

They also caution against being bullied or “starstruck” by the over-assertive or heavyweight. “People who are less self-assured will seek information from a more diverse range of sources, and age, number of qualifications and years of experience do not explain an expert’s ability to predict future events – a finding that applies in studies from geopolitics to ecology,” said Sutherland.

Added Burgman: “Some experts are much better than others at estimation and prediction. However, the only way to tell a good expert from a poor one is to test them. Qualifications and experience don’t help to tell them apart.”

“The cost of ignoring these techniques – of using experts inexpertly – is less accurate information and so more frequent, and more serious, policy failures,” write the researchers.



New Insights into the Dynamics of Past Climate Change

New insights into the dynamics of past climate change

source: www.cam.ac.uk

A new study finds that changing climate in the polar regions can affect conditions in the rest of the world far quicker than previously thought.

Other studies have shown that the overturning circulation in the Atlantic has faced a slowdown during the last few decades. The scientific community is only beginning to understand what it would mean for global climate should this trend continue, as predicted by some climate models

Julia Gottschalk

A new study of the relationship between ocean currents and climate change has found that they are tightly linked, and that changes in the polar regions can affect the ocean and climate on the opposite side of the world within one to two hundred years, far quicker than previously thought.

The study, by an international team of scientists led by the University of Cambridge, examined how changes in ocean currents in the Atlantic Ocean were related to climate conditions in the northern hemisphere during the last ice age, by examining data from ice cores and fossilised plankton shells. It found that variations in ocean currents and abrupt climate events in the North Atlantic region were tightly linked in the past, and that changes in the polar regions affected the ocean circulation and climate on the opposite side of the world.

The researchers determined that as large amounts of fresh water were emptied into the North Atlantic as icebergs broke off the North American and Eurasian ice sheets, the deep and shallow currents in the North Atlantic rapidly slowed down, which led to the formation of sea ice around Greenland and the subsequent cooling of the Northern Hemisphere. It also strongly affected conditions in the South Atlantic within a matter of one to two hundred years. The results, published in the journal Nature Geoscience, show how climate events in the Northern Hemisphere were tightly coupled with changes in the strength of deep ocean currents in the Atlantic Ocean, and how that may have affected conditions across the globe.

During the last ice age, which took place from 70,000 to 19,000 years ago, the climate in the Northern Hemisphere toggled back and forth between warm and cold states roughly every 1000 to 6000 years. These events, known as Dansgaard-Oeschger events, were first identified in data from Greenland ice cores in the early 1990s, and had far-reaching impacts on the global climate.

The ocean, which covers 70% of the planet, is a huge reservoir of carbon dioxide and heat. It stores about 60 times more carbon than the atmosphere, and can release or take up carbon on both short and long timescales. As changes happen in the polar regions, they are carried around the world by ocean currents, both at the surface and in the deep ocean. These currents are driven by winds, ocean temperature and salinity differences, and are efficient at distributing heat and carbon around the globe. Ocean currents therefore have a strong influence on whether regions of the world are relatively warm (such as Europe) or cold (such as Antarctica), as they modulate the effects of solar radiation. They also influence whether CO2 is stored in the ocean or the atmosphere, which is very important for global climate variability.

“Other studies have shown that the overturning circulation in the Atlantic has faced a slowdown during the last few decades,” said Dr Julia Gottschalk of Cambridge’s Department of Earth Sciences, the paper’s lead author. “The scientific community is only beginning to understand what it would mean for global climate should this trend continue, as predicted by some climate models.”

Analysing new data from marine sediment cores taken from the deep South Atlantic, between the southern tip of South America and the southern tip of Africa, the researchers discovered that during the last ice age, deep ocean currents in the South Atlantic varied essentially in unison with Greenland ice-core temperatures. “This implies that a very rapid transmission process must have operated, that linked rapid climate change around Greenland with the otherwise sluggish deep Atlantic Ocean circulation,” said Gottschalk, who is a Gates Cambridge Scholar. Best estimates of the delay between these two records suggest that the transmission happened within about 100 to 200 years.

Digging through metres of ocean mud from depths of 3,800 metres, the team studied the dissolution of fossil plankton shells, which is closely linked to the chemical signature of different water masses. Water masses originating in the North Atlantic are less corrosive than water masses from the South Atlantic.

“Periods of very intense North Atlantic circulation and higher Northern Hemisphere temperatures increased the preservation of microfossils in the sediment cores, whereas those with slower circulation, when the study site was primarily influenced from the south, were linked with decreased carbonate ion concentrations at our core site which led to partial dissolution,” said co-author Dr Luke Skinner, also from Cambridge’s Department of Earth Sciences.

To better understand the physical mechanisms of rapid ocean adjustment, the data was compared with a climate model simulation covering the same period. “The data from the model simulation were so close to the deep ocean sediment data that we knew immediately we were on the right track,” said co-author Dr Laurie Menviel from the University of New South Wales, Australia, who conducted the model simulation.

The timescales of these large-scale adjustments found in the palaeoceanographic data agree extremely well with those predicted by the model. “Waves between layers of different density in the deep ocean are responsible for quickly transmitting signals from North to South. This is a paradigm shift in our understanding of how the ocean works,” said Axel Timmermann, Professor of Oceanography at the University of Hawaii.

Although conditions at the end of the last ice age were very different to those of today, the findings could shed light on how changing conditions in the polar regions may affect ocean currents. However, much more research is needed in this area. The study’s findings could help test and improve climate models that are run for both past and future conditions.

The sediment cores were recovered by Dr Claire Waelbroeck and colleagues aboard the French research vessel Marion Dufresne.

The research was supported by the Gates Cambridge Trust, the Natural Environmental Research Council of the UK, the Royal Society, the European Research Council, the Australian Research Council and the National Science Foundation of the United States of America.

Reference:
Gottschalk, J. et al. Abrupt changes in the southern extent of North Atlantic Deep Water during Dansgaard-Oeschger events. Nature Geoscience (2015). DOI: 10.1038/ngeo2558

 



How Hallucinations Emerge From Trying to Make Sense of an Ambiguous World

How hallucinations emerge from trying to make sense of an ambiguous world

source: www.cam.ac.uk

Why are some people prone to hallucinations? According to new research from the University of Cambridge and Cardiff University, hallucinations may come from our attempts to make sense of the ambiguous and complex world around us.

Take a look at the black and white image. It probably looks like a meaningless pattern of black and white blotches. But now take a look at the image at the bottom of this article and then return to the black and white picture: it’s likely that you can now make sense of the black and white image. It is this ability that scientists at Cardiff University and the University of Cambridge believe could help explain why some people are prone to hallucinations.

A bewildering and often very frightening experience in some mental illnesses is psychosis – a loss of contact with external reality. This often results in a difficulty in making sense of the world, which can appear threatening, intrusive and confusing. Psychosis is sometimes accompanied by drastic changes in perception, to the extent that people may see, feel, smell and taste things that are not actually there – so-called hallucinations. These hallucinations may be accompanied by beliefs that others find irrational and impossible to comprehend.

In research published today in the journal Proceedings of the National Academy of Sciences (PNAS), a team of researchers based at Cardiff University and the University of Cambridge explore the idea that hallucinations arise due to an enhancement of our normal tendency to interpret the world around us by making use of prior knowledge and predictions.

In order to make sense of and interact with our physical and social environment, we need appropriate information about the world around us, for example the size or location of a nearby object. However, we have no direct access to this information and are forced to interpret potentially ambiguous and incomplete information from our senses. This challenge is overcome in the brain – for example in our visual system – by combining ambiguous sensory information with our prior knowledge of the environment to generate a robust and unambiguous representation of the world around us. For example, when we enter our living room, we may have little difficulty discerning a fast-moving black shape as the cat, even though the visual input was little more than a blur that rapidly disappeared behind the sofa: the actual sensory input was minimal and our prior knowledge did all the creative work.

“Vision is a constructive process – in other words, our brain makes up the world that we ‘see’,” explains first author Dr Christoph Teufel from the School of Psychology at Cardiff University. “It fills in the blanks, ignoring the things that don’t quite fit, and presents to us an image of the world that has been edited and made to fit with what we expect.”

“Having a predictive brain is very useful – it makes us efficient and adept at creating a coherent picture of an ambiguous and complex world,” adds senior author Professor Paul Fletcher from the Department of Psychiatry at the University of Cambridge. “But it also means that we are not very far away from perceiving things that aren’t actually there, which is the definition of a hallucination.

“In fact, in recent years we’ve come to realise that such altered perceptual experiences are by no means restricted to people with mental illness. They are relatively common, in a milder form, across the entire population. Many of us will have heard or seen things that aren’t there.”

In order to address the question of whether such predictive processes contribute to the emergence of psychosis, the researchers worked with 18 individuals showing very early signs of psychosis, who had been referred to a mental health service run by the Cambridgeshire and Peterborough NHS Foundation Trust and led by Dr Jesus Perez, one of the co-authors on the study. They examined how these individuals, as well as a group of 16 healthy volunteers, were able to use predictions to make sense of ambiguous, incomplete black and white images, similar to the one shown above.

The volunteers were asked to look at a series of these black and white images, some of which contained a person, and then to say for a given image whether or not it contained a person. Because of the ambiguous nature of the images, the task was very difficult at first. Participants were then shown a series of full colour original images, including those from which the black and white images had been derived: this information could be used to improve the brain’s ability to make sense of the ambiguous image. The researchers reasoned that, since hallucinations may come from a greater tendency to superimpose one’s predictions on the world, people who were prone to hallucinations would be better at using this information because, in this task, such a strategy would be an advantage.

The researchers found a larger performance improvement in people with very early signs of psychosis in comparison to the healthy control group. This suggested that people from the clinical group were indeed relying more strongly on the information that they had been given to make sense of the ambiguous pictures.

When the researchers presented the same task to a larger group of 40 healthy people, they found a continuum in task performance that correlated with the participants’ scores on tests of psychosis-proneness. In other words, the shift in information processing that favours prior knowledge over sensory input during perception can be detected even before the onset of early psychotic symptoms.

“These findings are important because they tell us that the emergence of key symptoms of mental illness can be understood in terms of an altered balance in normal brain functions,” says Naresh Subramaniam from the Department of Psychiatry at the University of Cambridge. “Importantly, they also suggest that these symptoms and experiences do not reflect a ‘broken’ brain but rather one that is striving – in a very natural way – to make sense of incoming data that are ambiguous.”

The study was carried out in collaboration with Dr Veronika Dobler and Professor Ian Goodyer from the Department of Child and Adolescent Psychiatry at the University of Cambridge. The research was funded by the Wellcome Trust and the Bernard Wolfe Health Neuroscience Fund. It was carried out within the Cambridgeshire and Peterborough NHS Foundation Trust. Additional support for the Behavioural and Clinical Neuroscience Institute at the University of Cambridge came from the Wellcome Trust and the Medical Research Council.

Reference
Teufel, C et al. Shift towards prior knowledge confers a perceptual advantage in early psychosis and psychosis-prone healthy individuals. PNAS; 12 Oct 2015


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Ancient Genome From Africa Sequenced for the First Time

Ancient genome from Africa sequenced for the first time

source: www.cam.ac.uk

DNA from a 4,500-year-old Ethiopian skull reveals that a huge migratory wave of West Eurasians into the Horn of Africa around 3,000 years ago had a genetic impact on modern populations right across the African continent.

The sequencing of ancient genomes is still so new, and it’s changing the way we reconstruct human origins

Andrea Manica

The first ancient human genome from Africa to be sequenced has revealed that a wave of migration back into Africa from Western Eurasia around 3,000 years ago was up to twice as significant as previously thought, and affected the genetic make-up of populations across the entire African continent.

The genome was taken from the skull of a man buried face-down 4,500 years ago in a cave called Mota in the highlands of Ethiopia – a cave cool and dry enough to preserve his DNA for thousands of years. Previously, ancient genome analysis had been limited to samples from northern and arctic regions.

The latest study is the first time an ancient human genome has been recovered and sequenced from Africa, the source of all human genetic diversity. The findings are published today in the journal Science.

The ancient genome predates a mysterious migratory event which occurred roughly 3,000 years ago, known as the ‘Eurasian backflow’, when people from regions of Western Eurasia such as the Near East and Anatolia suddenly flooded back into the Horn of Africa.

The genome enabled researchers to run a millennia-spanning genetic comparison and determine that these Western Eurasians were closely related to the Early Neolithic farmers who had brought agriculture to Europe 4,000 years earlier.

By comparing the ancient genome to DNA from modern Africans, the team have been able to show that not only do East African populations today have as much as 25% Eurasian ancestry from this event, but that African populations in all corners of the continent – from the far West to the South – have at least 5% of their genome traceable to the Eurasian migration.

Researchers describe the findings as evidence that the ‘backflow’ event was of far greater size and influence than previously thought. The massive wave of migration, perhaps equivalent to over a quarter of the then population of the Horn of Africa, hit the area and then dispersed genetically across the whole continent.

“Roughly speaking, the wave of West Eurasian migration back into the Horn of Africa could have been as much as 30% of the population that already lived there – and that, to me, is mind-blowing. The question is: what got them moving all of a sudden?” said Dr Andrea Manica, senior author of the study from the University of Cambridge’s Department of Zoology.
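The step from a 25% ancestry fraction to a migrant wave of roughly 30% of the resident population is simple arithmetic to sketch. Under the deliberately simplified assumption of a single pulse of migrants mixing completely into the resident population (real admixture estimation is far more involved), an observed ancestry fraction a implies a wave of size a/(1-a) relative to the residents:

```python
# Back-of-the-envelope: if a single pulse of migrants mixes completely
# into a resident population, the resulting admixture fraction a relates
# to the migrant wave size m (as a fraction of the residents) by
#   a = m / (1 + m)   =>   m = a / (1 - a)
# Assumptions: one pulse, complete mixing, no later drift or selection.

def wave_size(admixture):
    """Migrant wave as a fraction of the pre-existing population."""
    return admixture / (1 - admixture)

for a in (0.20, 0.25):
    print(f"{a:.0%} Eurasian ancestry -> wave ~ {wave_size(a):.0%} of residents")
```

A 25% ancestry fraction gives a wave of about one third of the pre-existing population, consistent with the "as much as 30%" figure quoted above.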

Previous work on ancient genetics in Africa had involved trying to work back through the genomes of current populations, attempting to eliminate modern influences. “With an ancient genome, we have a direct window into the distant past. One genome from one individual can provide a picture of an entire population,” said Manica.

The cause of the West Eurasian migration back into Africa is currently a mystery, with no obvious climatic reasons. Archaeological evidence does, however, show the migration coincided with the arrival of Near Eastern crops into East Africa such as wheat and barley, suggesting the migrants helped develop new forms of agriculture in the region.

The researchers say it’s clear that the Eurasian migrants were direct descendants of, or a very close population to, the Neolithic farmers that had brought agriculture from the Near East into West Eurasia around 7,000 years ago, and then migrated into the Horn of Africa some 4,000 years later. “It’s quite remarkable that genetically-speaking this is the same population that left the Near East several millennia previously,” said Eppie Jones, a geneticist at Trinity College Dublin who led the laboratory work to sequence the genome.

While the genetic make-up of the Near East has changed completely over the last few thousand years, the closest modern equivalents to these Neolithic migrants are Sardinians, probably because Sardinia is an isolated island, says Jones. “The farmers found their way to Sardinia and created a bit of a time capsule. Sardinian ancestry is closest to the ancient Near East.”


View looking out from the Mota cave in the Ethiopian highlands

“Genomes from this migration seeped right across the continent, way beyond East Africa, from the Yoruba on the western coast to the Mbuti in the heart of the Congo – who show as much as 7% and 6% of their genomes respectively to be West Eurasian,” said Marcos Gallego Llorente, first author of the study, also from Cambridge’s Zoology Department.

“Africa is a total melting pot. We know that the last 3,000 years saw a complete scrambling of population genetics in Africa. So being able to get a snapshot from before these migration events occurred is a big step,” Gallego Llorente said.

The ancient Mota genome allows researchers to jump to before another major African migration: the Bantu expansion, when speakers of an early Bantu language flowed out of West Africa and into central and southern areas around 3,000 years ago. Manica says the Bantu expansion may well have helped carry the Eurasian genomes to the continent’s furthest corners.

The researchers also identified genetic adaptations for living at altitude, and a lack of genes for lactose tolerance – all genetic traits shared by the current populations of the Ethiopian highlands. In fact, the researchers found that modern inhabitants of the highlands are direct descendants of the Mota man.

Finding high-quality ancient DNA involves a lot of luck, says Dr Ron Pinhasi, co-senior author from University College Dublin. “It’s hard to get your hands on remains that have been suitably preserved. The denser the bone, the more likely you are to find DNA that’s been protected from degradation, so teeth are often used, but we found an even better bone – the petrous.” The petrous bone is a thick part of the temporal bone at the base of the skull, just behind the ear.

“The sequencing of ancient genomes is still so new, and it’s changing the way we reconstruct human origins,” added Manica. “These new techniques will keep evolving, enabling us to gain an ever-clearer understanding of who our earliest ancestors were.”

The study was conducted by an international team of researchers, with permission from Ethiopia’s Ministry of Culture and Authority for Research and Conservation of Cultural Heritage.



Calling For Help: Damaged Nerve Cells Communicate With Stem Cells

Calling for help: damaged nerve cells communicate with stem cells

Source: www.cam.ac.uk

Nerve cells damaged in diseases such as multiple sclerosis (MS), ‘talk’ to stem cells in the same way that they communicate with other nerve cells, calling out for ‘first aid’, according to new research from the University of Cambridge.

This is the first time that we’ve been able to show that damaged nerve fibres communicate with stem cells using synaptic connections – the same connections they use to ‘talk to’ other nerve cells

Thora Karadottir

The study, published today in the journal Nature Communications, may have significant implications for the development of future medicines for disorders that affect the myelin sheath, the insulating layer that protects our nerve cells.

For our brain and central nervous system to work, electrical signals must travel quickly along nerve fibres. This is achieved by insulating the nerve fibres with a fatty substance called myelin. In diseases such as MS, the myelin sheath around nerve fibres is lost or damaged, causing physical and mental disability.

Stem cells – the body’s master cells, which can develop into almost any type of cell – can act as ‘first aid kits’, repairing damage to the body. In our nervous system, these stem cells are capable of producing new myelin, which, in the case of MS, for example, can help recover lost function. However, myelin repair often fails, leading to sustained disability. To understand why repair fails in disease, and to design novel ways of promoting myelin repair, researchers at the Wellcome Trust-Medical Research Council Stem Cell Institute at the University of Cambridge studied how this repair process works.

When nerve fibres lose myelin, they stay active but conduct signals at much lower speeds than healthy fibres. Using electrical recording techniques, a team of researchers led by Dr Thora Karadottir discovered that the damaged nerve fibres then form connections with stem cells – the same kind of synaptic connections that nerve cells use to communicate with one another. These new synaptic connections enable the damaged fibres to communicate directly with the stem cells by releasing glutamate, a chemical that the stem cells can sense via receptors. This communication is critical for directing the stem cells to produce new myelin: when the researchers inhibited the nerve fibres’ activity, their ability to communicate, or the stem cells’ ability to sense the communication, the repair process failed.

“This is the first time that we’ve been able to show that damaged nerve fibres communicate with stem cells using synaptic connections – the same connections they use to ‘talk to’ other nerve cells,” says Dr Karadottir. “Armed with this new knowledge, we can start looking into ways to enhance this communication to promote myelin repair in disease.”

Dr Helene Gautier from the Department of Physiology, Development and Neuroscience, adds: “So far, the majority of the available treatments only slow down the damage. Our research opens up the possibility of enhancing repair and potentially treating the most devastating forms of MS and other demyelinating diseases.”

Reference
Gautier, HOB et al. Neuronal activity regulates remyelination via glutamate signalling to oligodendrocyte progenitors. Nature Communications; 6 Oct 2015



Bacteria in the World’s Oceans Produce Millions of Tonnes of Hydrocarbons Each Year

Bacteria in the world’s oceans produce millions of tonnes of hydrocarbons each year

Source: www.cam.ac.uk

Scientists have calculated that millions of tonnes of hydrocarbons are produced annually by photosynthetic bacteria in the world’s oceans.

This cycle is like an insurance policy – the hydrocarbon-producing and hydrocarbon-degrading bacteria exist in equilibrium with each other

David Lea-Smith

An international team of researchers, led by the University of Cambridge, has estimated the amount of hydrocarbons – the primary ingredient in crude oil – that are produced by a massive population of photosynthetic marine microbes, called cyanobacteria. These organisms in turn support another population of bacteria that ‘feed’ on these compounds.

In the study, conducted in collaboration with researchers from the University of Warwick and MIT, and published today (5 October) in the journal Proceedings of the National Academy of Sciences of the USA, the scientists measured the amount of hydrocarbons in a range of laboratory-grown cyanobacteria and used the data to estimate the amount produced in the oceans.

Although each individual cell contains minuscule quantities of hydrocarbons, the researchers estimated that the amount produced by two of the most abundant cyanobacteria in the world – Prochlorococcus and Synechococcus – is more than two million tonnes in the ocean at any one time. This indicates that these two groups alone produce between 300 and 800 million tonnes of hydrocarbons per year, yet the concentration at any time in unpolluted areas of the oceans is tiny, thanks to other bacteria that break down the hydrocarbons as they are produced.
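Taken together, those two figures imply a very rapid turnover. A rough calculation (illustrative arithmetic only, using the standing stock and the production range quoted above) suggests each tonne of hydrocarbon persists on the order of a day or two before being degraded:

```python
# Mean residence time = standing stock / production rate.
# Standing stock: ~2 million tonnes in the ocean at any one time.
# Annual production: 300-800 million tonnes per year.

STANDING_STOCK_MT = 2.0  # million tonnes present at any one time

def residence_days(annual_production_mt):
    """Mean residence time in days implied by a given production rate."""
    return STANDING_STOCK_MT / annual_production_mt * 365

for production in (300.0, 800.0):
    print(f"{production:.0f} Mt/yr -> ~{residence_days(production):.1f} days")
```

The implied residence time of roughly one to two and a half days is what keeps the standing concentration tiny despite enormous annual production – the degrading bacteria consume the hydrocarbons almost as fast as they are made.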

“Hydrocarbons are ubiquitous in the oceans, even in areas with minimal crude oil pollution, but what hadn’t been recognised until now is the likely quantity produced continually by living oceanic organisms,” said Professor Christopher Howe from Cambridge’s Department of Biochemistry, the paper’s senior author. “Based on our laboratory studies, we believe that at least two groups of cyanobacteria are responsible for the production of massive amounts of hydrocarbons, and this supports other bacteria that break down the hydrocarbons as they are produced.”

The scientists argue that the cyanobacteria are key players in an important biogeochemical cycle, which they refer to as the short-term hydrocarbon cycle. The study suggests that the amount of hydrocarbons produced by cyanobacteria dwarfs the amount of crude oil released into the seas by natural seepage or accidental oil spills.

However, the hydrocarbons produced by cyanobacteria are continually broken down by other bacteria, keeping the overall concentrations low. When an event such as an oil spill occurs, hydrocarbon-degrading bacteria are known to spring into action, with their numbers rapidly expanding, fuelled by the sudden local increase in their primary source of energy.

The researchers caution that their results do not in any way diminish the enormous harm caused by oil spills. Although some microorganisms are known to break down hydrocarbons in oil spills, they cannot repair the damage done to marine life, seabirds and coastal ecosystems.

“Oil spills cause widespread damage, but some parts of the marine environment recover faster than others,” said Dr David Lea-Smith, a postdoctoral researcher in the Department of Biochemistry, and the paper’s lead author. “This cycle is like an insurance policy – the hydrocarbon-producing and hydrocarbon-degrading bacteria exist in equilibrium with each other, and the latter multiply if and when an oil spill happens. However, these bacteria cannot reverse the damage to ecosystems which oil spills cause.”

The researchers stress the need to test if their findings are supported by direct measurements on cyanobacteria growing in the oceans. They are also interested in the possibility of harnessing the hydrocarbon production potential of cyanobacteria industrially as a possible source of fuel in the future, although such work is at a very early stage.

Reference
Lea-Smith, D et al. Contribution of cyanobacterial alkane production to the ocean hydrocarbon cycle. PNAS; 5 Oct 2015. DOI: 10.1073/pnas.1507274112



Doctors Liken Keeping Patients Alive Unnecessarily to Torture

Doctors liken keeping patients alive unnecessarily to torture

Source: www.cam.ac.uk

Elizabeth Dzeng’s research shows doctors’ moral distress surrounding futile end of life treatments.

Doctors’ words – ‘torture’, ‘gruesome’, ‘abuse’, ‘mutilate’ and ‘cruel’ – evoke images more fitting of penal regimes than hospitals. The moral toll exacted upon these physicians is evident in descriptions such as feeling ‘violated’ and ‘traumatised’.

Elizabeth Dzeng

Trainee doctors think they are being asked to prolong some patients’ lives unnecessarily, and describe such cases as tantamount to torture and abuse, according to a new study.

The research, led by Elizabeth Dzeng, a Gates Cambridge Scholar at the University of Cambridge, is the first to focus on US doctors’ moral distress surrounding resuscitation and treatments that they believe may be futile at the end of life. It has implications for the UK, where more power over end of life issues is being handed from doctors to patients’ representatives.

The qualitative study, titled “Moral Distress Amongst American Physician Trainees Regarding Futile Treatments at the End of Life: A Qualitative Inquiry”, is published in the Journal of General Internal Medicine.

Elizabeth conducted in-depth interviews with 22 physician trainees in internal medicine at three accredited medical centres in the US, asking how they reacted and responded to ethical challenges arising in the context of perceived futile treatments at the end of life, and how they felt about their ethical implications.

It found that doctors who are required to perform procedures, such as resuscitation, which they feel are futile or harmful have significant moral qualms that they are prolonging suffering rather than providing care. Some cope with this distress by developing detached and dehumanising attitudes towards patients.

One said of a patient: “It felt horrible, like I was torturing him. He was telling us we were torturing him. I did not think we were doing the right things.”

Another said: “I agree with giving the patient the choice, but oftentimes it’s the family member. If the patient says, ‘Torture me, I want everything done,’ fine. The family member is doing it for other reasons. Like guilt; they can’t let go.”

Ways of coping ranged from formal and informal conversations with colleagues and superiors about the emotional and ethical challenges of providing care at the end of life to a tendency among some to dehumanise the patient.

One trainee said: “We’re abusing a body and I get that, but as long as I remember I’m only abusing a body and not a person, it’s okay. Frequently when it’s an inappropriate code, that’s what’s happening.”

Trainees also said that the hierarchical nature of their relationship with other doctors meant they felt powerless to have any influence on decisions.

Dzeng [2011] is completing a PhD on medical sociology and ethics and is also a fellow in General Internal Medicine at the Johns Hopkins School of Medicine. She said: “Our study sheds light on a significant cause of moral distress amongst physician trainees when they feel obligated to provide treatments at the end of life that they believe to be futile or harmful. Their words – ‘torture’, ‘gruesome’, ‘abuse’, ‘mutilate’ and ‘cruel’ – evoke images more fitting of penal regimes than hospitals. The moral toll exacted upon these physicians is evident in descriptions such as feeling ‘violated’ and ‘traumatised’. Previous research shows that moral distress can have significant negative effects on job satisfaction, psychological and physical well-being and self-image, resulting in burnout and thoughts of quitting.”

The paper is part of a larger study investigating the influence of institutional cultures and policies on physicians’ and trainees’ views on resuscitation orders. A previous paper found that in hospitals with cultures and policies which prioritised patient autonomy over the patients’ best interest, physicians tended to give patients a menu of choices without guidance or recommendations over whether resuscitation would be beneficial or merely prolong suffering. It explored the effect of moves towards greater patient autonomy over end of life decisions. Although this was an understandable reaction to the paternalistic approach often adopted by doctors in the past, the paper voiced fears that the pendulum may have swung too far, to the detriment of patients themselves.

 



Exploiting the Government’s Education Data Could Help to Bridge the UK Skills Gap

Exploiting the Government’s education data could help to bridge the UK skills gap

Source: www.cam.ac.uk

Analysing graduate earnings using anonymous administrative data can show how earnings vary for graduates and indicate which skills are in short supply, says Cambridge education professor Anna Vignoles.

Providing information is not enough to change policy, but without good data any policy development is likely to be ineffective

Anna Vignoles

Fully exploiting the Government’s education data could help to bridge the skills gap that is holding back UK businesses, Cambridge expert Professor Anna Vignoles has said at a Rustat Conference session on the application of Big Data, held at Jesus College.

The UK’s skills gap has been highlighted by both the Confederation of British Industry (CBI) and the Chartered Institute of Management Accountants (CIMA) this year. The CBI reported that over half of employers (55%) are not confident there will be enough people available in the future with the necessary skills to fill their high-skilled jobs.

In the last ten years, the Government has allowed researchers to access some of its educational data under secure conditions. Academics including Vignoles have recently mapped the journeys taken by students from the age of four right through to employment.

During a session on the application of Big Data and data driven business models, Vignoles argued that researchers could now use earnings data to determine which skills are in greater demand in the labour market, and feed this back into policy to ensure that the education system teaches the skills needed by UK companies.

“Many firms have difficulties recruiting people with the right skills, and are having to pay a big premium for some skills. Although we can survey firms about their needs, the results can be misleading, not least because only a select group of companies may respond,” said Vignoles.

“I have been working with colleagues to accurately analyse graduate earnings, using anonymous Government administrative data. This type of analysis can show how earnings vary for different types of graduates, and so indicate which skills are in short supply.

“For example, let’s say that the next stage of our research reveals that graduates with strong analytical skills are in demand. This data could inform students, universities and policy makers, and may result in courses offering more training in analytical skills. More graduates will then have the analytical skills needed by businesses, and the skills gap should start to close.

“Of course, providing information is not enough to change policy, but without good data any policy development is likely to be ineffective. The UK is world-leading when it comes to education data, but it is only recently that a Big Data approach has been used to look at graduate earnings. Fully exploiting the Government’s education data could help to bridge the UK skills gap.

“However there should always be strict limitations on the way data is used to ensure that people’s privacy is protected. We need to have an informed debate about the extent to which members of the public are happy for data collected by the state to be used in this way.”

Vignoles sits on the steering group of the University of Cambridge’s Big Data Research Initiative. This brings together researchers to address challenges presented by access to unprecedented volumes of data, as well as important issues around law, ethics and economics, in order to apply Big Data to solve challenging problems for society.

Rustat Conferences are held three times a year at Jesus College, Cambridge, with this conference focusing on Big Data. Other sessions explored the Internet of Things, sharing data and respecting individual rights without disrupting new business models, and the legal aspects of Big Data. Rustat Conferences offer an opportunity for decision-makers from the frontlines of politics, business, finance, the media and education to discuss vital issues with leading academic experts.

 

Mindfulness Study to Look at Benefits in Helping Build Resilience To Stress Among University Students

Mindfulness study to look at benefits in helping build resilience to stress among university students

Source: www.cam.ac.uk

Students at the University of Cambridge are to be offered free, eight-week mindfulness training to help build resilience against stress as part of a new research project launched to coincide with the start of term.

University life can be stressful at times for students, as they develop the skills to live and study independently

Géraldine Dufour

The study, which could see over 500 students receive mindfulness training, aims to measure its effectiveness in managing stress amongst students, particularly at exam time, and whether it improves other outcomes such as sleep and wellbeing. It will also explore whether the training affects students’ use of mental health treatment and support services.

Mindfulness involves the use of meditation techniques and self-awareness. Originally developed to help patients with chronic pain cope with their condition, it is now a recognised – and clinically-proven – way of helping individuals cope with depression, anxiety and stress.

Géraldine Dufour, Head of Counselling at the University, says: “University life can be stressful at times for students, as they develop the skills to live and study independently. Developing resilience and the skills to cope with stress is key so that students can make the most of life in the collegiate university and when they leave. The university counselling service offers many opportunities for students to develop their skills through an extensive programme of workshops, groups and individual counselling. We believe mindfulness could be a powerful tool to help them, in addition to the other counselling services we offer. This research project will help us determine if mindfulness is a good use of resources.”

From October, undergraduates and postgraduates at the University of Cambridge will be invited to register for a free, eight-week mindfulness training course called Mindfulness Skills for Students, which will be led by Dr Elizabeth English, the University’s Mindfulness Practitioner. The course is a group-based training programme based on the course book Mindfulness: A Practical Guide to Finding Peace in a Frantic World, by Mark Williams and Danny Penman, and adapted for Cambridge students. It consists of one 90-minute session and seven 75-minute sessions. Participants are also requested to do some home practice and reading every week.

Students will be allocated at random to two groups – one to receive training immediately, the second to be deferred twelve months. All students – both those who take the course and those whose training is deferred – will record their stress levels using a smartphone app during the exam period, while activity monitors will record their physical activity and sleep patterns.

“The academic year provides a very real ‘natural experiment’,” says Dr Julieta Galante from the Department of Psychiatry, who will carry out the research together with Professor Peter Jones. “Students receive training, practice at home, then face a ‘pressure point’ – their exams. We hope that our study will help us answer the question of whether the provision of mindfulness training, which we know to be effective in other settings, can help students throughout the year and particularly at exam time.”

The level of support available to students at Cambridge is unparalleled in most other universities. The University Counselling Service, one of the best funded in the country, includes counsellors as well as mental health advisors and supplements the support available to students from specialist staff in the colleges such as college nurses and chaplains. In the previous academic year, over 1,500 people were seen for counselling – this represents around one in 12 of the student population. Its Mindfulness Skills for Students programme is believed to be the largest such programme in any university.

Students wishing to register for the evaluation study of the Mindfulness Training Programme should visit the mindfulness website or email mindfulstudentstudy@medschl.cam.ac.uk.

 

Maintaining Healthy DNA Delays Menopause

Maintaining healthy DNA delays menopause

Source: www.cam.ac.uk

An international study of nearly 70,000 women has identified more than forty regions of the human genome that are involved in governing at what age a woman goes through menopause. The study, led by scientists at the Universities of Cambridge and Exeter, found that two thirds of those regions contain genes that act to keep DNA healthy, by repairing the small damages that can accumulate with age.

We have known for some time that the age at which women go through menopause is partly determined by genes. This study now tells us that there are likely hundreds of genes involved

John Perry

The findings, published today (September 28) in the journal Nature Genetics, suggest that the reproductive cells or ‘eggs’ in a woman’s ovaries (known as oocytes) that repair damaged DNA more efficiently survive longer. This results in a later age at menopause, which marks the end of a woman’s reproductive lifetime. Previous research has shown that DNA is regularly damaged by age and by toxic substances such as cigarette smoke – hence women who smoke go through menopause 1-2 years earlier on average than non-smokers.

Our cells have many mechanisms to detect and repair such damage, but cells die when too much damage accumulates. DNA is also damaged and repaired during the production of eggs – therefore these genes might also act to enhance a woman’s pool of eggs which is set in early life.

In a collaboration involving scientists from 177 institutions worldwide, the authors undertook a genome-wide association study of almost 70,000 women of European ancestry.

“Many women today are choosing to have babies later in life, but they may find it difficult to conceive naturally because fertility starts to diminish at least 10 years before menopause,” said Dr Anna Murray from the University of Exeter, and the paper’s senior author. “Our research has substantially increased our understanding of how reproductive ageing in women happens, which we hope will lead to the development of new treatments to avoid early menopause.”

Dr John Perry from the Medical Research Council (MRC) Epidemiology Unit at the University of Cambridge co-led the study, and said: “We have known for some time that the age at which women go through menopause is partly determined by genes. This study now tells us that there are likely hundreds of genes involved, each altering menopause age by anything from a few weeks to a year. It is striking that genes involved in DNA repair have such an important influence on the age of menopause, which we think is due to their effect on how quickly a woman’s eggs are lost throughout her lifetime.”

The researchers also used these genetic findings to examine links between menopause and other health conditions. They predict that each one-year delay in the onset of menopause increases the risk of developing breast cancer by 6%.
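To make the reported figure concrete, the toy calculation below treats the 6% increase per additional year as a multiplicative relative risk. Note that compounding the figure this way is an assumption made here purely for illustration; the study itself only reports the per-year association.

```python
# Toy illustration of the reported association: each one-year delay in
# menopause is linked to a ~6% higher breast cancer risk.
# ASSUMPTION: the 6% is treated as compounding multiplicatively per year.
def relative_risk(years_later, per_year=0.06):
    """Relative risk after `years_later` additional years before menopause."""
    return (1 + per_year) ** years_later

print(f"{relative_risk(1):.2f}")   # 1.06 — one year later
print(f"{relative_risk(5):.2f}")   # 1.34 — five years later
```

Under this reading, a woman reaching menopause five years later than average would carry roughly a third more risk, which is consistent with the direction of the oestrogen-exposure explanation quoted below.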

Dr Deborah Thompson from the Department of Public Health and Primary Care at the University of Cambridge also co-led this large international collaboration and said: “One particularly convincing finding was that going through menopause earlier reduces your chances of developing breast cancer and we think this is because these women have less exposure to the hormone oestrogen over their lifetime.”

The next step is to understand in more detail how the genetic variations found in this study alter the timing of menopause. Uncovering these mechanisms will hopefully lead to better treatments for conditions linked to menopause, such as infertility, as well as an improved understanding of the health impacts of menopause, such as the risk of osteoporosis and heart disease.

Menopause usually occurs between the ages of 40 and 60, indicated by the end of natural menstrual cycles and in many women by physical symptoms, such as hot flushes, disrupted sleep, and reduced energy levels. Natural menopause before the age of 40 is often called “primary ovarian insufficiency” and occurs in 1% of women.

Adapted from a University of Exeter press release.

Reference:
Felix R Day et al. ‘Large-scale genomic analyses link reproductive aging to hypothalamic signaling, breast cancer susceptibility and BRCA1-mediated DNA repair.’ Nature Genetics (2015). DOI: 10.1038/ng.3412

 

New Research Leaves Tumours With Nowhere to Hide

New research leaves tumours with nowhere to hide

 

source: www.cam.ac.uk
Hidden tumours that cause potentially fatal high blood pressure but lurk undetected in the body until pregnancy have been discovered by a Cambridge medical team.

Conditions are often around for 60 years which we have had no explanation for, and now we can get to the heart of what has gone wrong

Morris Brown

The small tumours concealed in the adrenal gland are “unmasked” in early pregnancy, when a sudden surge of hormones fires them into life, leading to raised blood pressure and causing risk to patients.

New research published today in the New England Journal of Medicine conducted by a team led by Professor Morris Brown, professor of clinical pharmacology at Cambridge University and a Fellow of Gonville & Caius College, identifies this small group of lurking tumours for the first time, and explains why they behave as they do.

The study means that, when patients are found to have high blood pressure early in pregnancy, doctors will now be encouraged to consider that the cause could be the tumours, which can be easily treated. Currently, adrenal tumours are not usually suspected as the cause of high blood pressure in pregnancy, and so go undiagnosed.

Brown and an international group of PhD students including first-author Ada Teo of Newnham College used a combination of state-of-the-art gene “fingerprinting” technology and old-fashioned deduction from patient case histories to work out that the otherwise benign tumours harbour genetic mutations that affect cells in the adrenal gland.

The mutation means the adrenal cells are given false information and their clock is effectively turned back to “childhood”, returning them to their original state as ovary cells. They then respond to hormones released in pregnancy, producing increased levels of the salt-regulating hormone aldosterone.

Aldosterone in turn regulates the kidneys to retain more salt and hence water, pushing up blood pressure. High blood pressure – also known as hypertension – can be fatal, since it greatly increases the risk of stroke and heart attack.

The new findings build on a growing body of research focusing on the adrenal gland and blood pressure. Sixty years ago, the American endocrinologist Dr Jerome Conn first observed that large benign tumours in the adrenal gland can release aldosterone and increase blood pressure (now known as Conn’s Syndrome).

Brown and his team have previously found a group of much smaller tumours, arising from the outer part of the gland, that have the same effect. The latest discovery drills down still further, revealing that roughly one in ten of this group has a mutation that makes the cells receptive to pregnancy hormones.

Brown said: “This is an example of what modern scientific techniques, and collaborations among doctors and scientists, allow you to do [through a form of genetic fingerprinting]. Conditions are often around for 60 years which we have had no explanation for, and now we can get to the heart of what has gone wrong.”

But the discovery also relied on what doctors call “clinical pattern recognition” – using experience to spot similarities. Brown was able to link together the cases of two pregnant women almost ten years apart and a woman in early menopause. All suffered high blood pressure, leading him to screen their adrenal tumours and identify a matching genetic mutation.

Pregnant women found to have the newly identified subset of tumours can now be identified more readily, and the tumours either treated with drugs or potentially even removed.

The research was funded by the Wellcome Trust, National Institute for Health Research, British Heart Foundation and A* Singapore.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Love’s Labours: Study Shows Male Lizards Risk Becoming Lunch For a Bird in Order to Attract a Mate

Love’s Labours: study shows male lizards risk becoming lunch for a bird in order to attract a mate

source: www.cam.ac.uk

New research shows male lizards are more likely than females to be attacked by predators because the bright colours they need to attract a mate also make them more conspicuous to birds.

The models that had been attacked showed signs of beak marks, particularly around the head, and some had been decapitated

Kate Marshall

In the animal kingdom, the flashiest males often have more luck attracting a mate. But when your predators hunt by sight, this can pose an interesting problem.

Like many species, lizards use bright colours for sexual signalling to attract females and intimidate rival males. A new study published in Ecology and Evolution by Kate Marshall from the University of Cambridge’s Department of Zoology and Martin Stevens from the University of Exeter’s Centre for Ecology and Conservation has provided evidence that this signalling comes at a cost.

Using models that replicated the colouration of male and female wall lizards found on the Greek islands of Skopelos and Syros, they found that the male lizard models were less well camouflaged against their habitat and more likely to fall prey to bird attacks.

Marshall, lead author of the study, explains: “We wanted to get to the origins of colour evolution; to find out what is causing colour variation between these lizards. We wanted to know whether natural selection favours camouflage, and whether the conflicting need to have bright sexual signals might impair its effectiveness.

“It has previously been assumed that conspicuous male colours are costly to survival, but this hasn’t been tested before among these specific lizards living on different islands, and in general rarely in a way that takes into account the particular sensitivities of avian vision.”

Birds see the world differently from you or me: they are able to see ultraviolet (UV) light whereas we cannot, which means they perceive colour (and camouflage) in a very different way. To test whether the males really are more visible to feathered predators, the researchers had to develop clay models that accurately replicated the lizards’ colour to a bird’s eye.

Using visual modelling, Marshall and her colleagues painstakingly tested around 300 colour variations to find ones that matched the male and female colours in order to make the 600 clay lizards used in the study.

Marshall comments: “It was important to get a clay colour that would be indistinguishable from a real lizard to a bird’s eyes: we even tried using a paint colour chart, but they all reflected too much UV. To us the models may not look like very good likenesses, but to a bird the models should have looked the same colour as the real lizards.”

Marshall and her field assistant, Kate Philpot, placed the male and female lizard models in ten sites on each of the two islands and checked them every 24 hours over five days to see which had been attacked by birds.

“The models that had been attacked showed signs of beak marks, particularly around the head, and some had been decapitated,” explains Marshall. “We even found a few heads in different fields to the bodies.”

“The fact that the birds focused their attacks on the heads of the models also shows us that they perceived them as real lizards because that is how they would attack real prey,” she adds.

At the end of the study, the researchers found that the models with male colouration had been attacked more than the models with female colouration.

Marshall and the team also tested how conspicuous the models were against their real backgrounds using further modelling of avian vision, and found that the male models were less camouflaged than the females.

“In females, selection seems to have favoured better camouflage to avoid attack from avian predators. But in males, being bright and conspicuous also appears to be important even though this heightens the risk of being spotted by birds,” says Marshall.

However, it is not entirely a tale of woe for the male Aegean wall lizard. Despite being attacked more than the females by predatory birds, 83% of the male lizard models survived over the course of the five-day experiment. Marshall explains that this may indicate that males have colour adaptations that balance the contradictory needs to attract a mate and to avoid becoming lunch.

“In past work we’ve found these lizards have evolved bright colours on their sides, which are more visible to other lizards on the ground than to birds hunting from above,” explains Marshall. “The visual system of lizards is different again from birds, such as through increased sensitivity to UV, so the colour on their backs is more obvious to other lizards than to birds. Such selective ‘tuning’ of colours to the eyes of different observers might provide at least some camouflage against dangerous predators that sneakily eavesdrop on the bright signals of their prey.”

“With these models we were only able to replicate the overall colour of the lizards rather than their patterns, so it would be interesting to investigate further whether these patterns affect the survival rates of lizard models,” she adds. “It would also be great to apply this type of experiment to other questions, such as how different environments affect the amount of predation that prey animals experience.”

Reference: Marshall, K et al. ‘Conspicuous male coloration impairs survival against avian predators in Aegean wall lizards, Podarcis erhardii.’ Ecology and Evolution (September 2015). DOI: 10.1002/ece3.1650

The research was enabled by funding from the Biotechnology and Biological Sciences Research Council, the British Herpetological Society, the Cambridge Philosophical Society, and Magdalene College, Cambridge.

Inset images: Tetrahedral plot of avian vision (Kate Marshall et al); Models showing signs of bird attack (Kate Marshall et al); Males, females and their corresponding models (Kate Marshall et al).


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Emissions From Melting Permafrost Could Cost $43 trillion

Emissions from melting permafrost could cost $43 trillion

source: www.cam.ac.uk

New analysis of the effects of melting permafrost in the Arctic points to $43 trillion in extra economic damage by the end of the next century, on top of the more than $300 trillion in economic damage already predicted.

These results show just how much we need urgent action to slow the melting of the permafrost in order to minimise the scale of the release of greenhouse gases

Chris Hope

Increased greenhouse gas emissions from the release of carbon dioxide and methane contained in the Arctic permafrost could result in $43 trillion in additional economic damage by the end of the next century, according to researchers from the University of Cambridge and the University of Colorado.

In a letter published today (21 September) in the journal Nature Climate Change, the researchers have for the first time modelled the economic impact caused by melting permafrost in the Arctic to the end of the twenty-second century, on top of the damage already predicted by climate and economic models.

The Arctic is warming at a rate which is twice the global average, due to anthropogenic, or human-caused, greenhouse gas emissions. If emissions continue to rise at their current rates, Arctic warming will lead to the widespread thawing of permafrost and the release of hundreds of billions of tonnes of methane and CO2 – about 1,700 gigatonnes of carbon are held in permafrost soils in the form of frozen organic matter.

Rising emissions will result in both economic and non-economic impacts, as well as a higher chance of catastrophic events, such as the melting of the Greenland and West Antarctic ice sheets, increased flooding and extreme weather. Economic impacts directly affect a country’s gross domestic product (GDP), such as the loss of agricultural output and the additional cost of air conditioning, while non-economic impacts include effects on human health and ecosystems.

The researchers’ models predict $43 trillion in economic damage could be caused by the release of these greenhouse gases, an amount equivalent to more than half the current annual output of the global economy. This brings the total predicted impact of climate change by 2200 to $369 trillion, up from $326 trillion – an increase of 13 percent.
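The totals quoted above can be checked with simple arithmetic. The sketch below reproduces the reported figures; the inputs are the two damage estimates from the study, and the rounding to a whole percentage is mine.

```python
# Illustrative check of the figures quoted above (all values in trillions of USD).
baseline_damage = 326      # predicted climate-change impact by 2200 without the permafrost feedback
permafrost_damage = 43     # additional damage from permafrost CO2 and methane release

total = baseline_damage + permafrost_damage
increase_pct = permafrost_damage / baseline_damage * 100

print(f"Total predicted impact by 2200: ${total} trillion")  # $369 trillion
print(f"Increase over baseline: {increase_pct:.0f}%")        # 13%
```

The "13 percent" in the text is thus the permafrost contribution expressed as a fraction of the baseline estimate, not of the new total.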

“These results show just how much we need urgent action to slow the melting of the permafrost in order to minimise the scale of the release of greenhouse gases,” said co-author Dr Chris Hope from the Cambridge Judge Business School.

Hope’s calculations were conducted in collaboration with Kevin Schaefer of the National Snow and Ice Data Center at the University of Colorado.

Hope and Schaefer used the PAGE09 (Policy Analysis of the Greenhouse Effect) integrated assessment model to measure the economic impact of permafrost thawing on top of previous calculations of the climate change costs of business-as-usual greenhouse gas emissions from the Intergovernmental Panel on Climate Change (IPCC).

“We want to use these models to help us make better decisions – linking scientific and economic models together is a way to help us do that,” said Hope. “We need to estimate how much it will cost if we do nothing, how much it will cost if we do something, and how much we need to spend to cut back greenhouse gases.”

The researchers say that if an aggressive strategy to reduce emissions from thawing permafrost is adopted, it could reduce the impact by as much as $37 trillion.

Reference:
Hope, C. and Schaefer, K. ‘Economic impacts of carbon dioxide and methane released from thawing permafrost’. Nature Climate Change (2015). DOI: 10.1038/nclimate2807


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Microfluidics Consortium Visiting Cambridge UK With European Open Day on Sept 22nd

Microfluidics Consortium visiting Cambridge UK with European Open Day on Sept 22nd

Now in its 7th year, the Microfluidics Consortium continues to grow, with members from all over the world supporting its mission “To Grow the Market for Microfluidics Enabled Products and Services”. In the recent past it has visited Dalian (China), Paris, Carolina, Copenhagen and Boston, working with players from all along the value chain as well as regulators, funders and academics.

On Sept 21, hosted by the Royal Society of Chemistry, the consortium will be in closed session in Cambridge, working on business models for POC and testing its new ‘hot seat’ pitching format for startups.

On Sept 22nd we are holding our annual ‘Open Day’ on the Cambridge Science Park, where non-members are invited to come and meet us, share in our vision and processes, meet members, see table-top demos and debate with us how global collaboration can help organisations achieve their microfluidics-enabled goals faster. Details and registration here: http://www.cfbi.com/mf54landingpage.htm (note this content is updated regularly as the agenda and delegate list are refined, so come back soon and remember to refresh your browser!)

Access To Startup Skills Threatened By U.K. Visa Review

Access To Startup Skills Threatened By U.K. Visa Review


Source: TechCrunch

The U.K.-based startup founders and investors who penned an open letter backing the Conservatives at the General Election in May are now confronted with the prospect of a Tory government looking for ways to make it harder for businesses to recruit and bring non-EU workers to the U.K. — owing to a political push to reduce net migration.

Soon after winning the election this May, Prime Minister David Cameron made a speech on migration, outlining the new government’s forthcoming Immigration Bill — which he said would include reforms to domestic immigration and labour market rules aimed at reducing the demand for skilled migrant labour.

Given that the U.K. is a member of the European Union, the U.K. government can’t pull any policy levers to reduce migration from the EU (although Cameron is also hoping to renegotiate migration rules with the EU). Which leaves non-EU migration bearing the brunt of the government’s planned migration squeeze. Startups of course rely on filling vacancies by bringing skills from abroad — given it may be the only way to obtain relevant expertise when you’re working in such nascent areas.

The Home Office has commissioned the Migration Advisory Committee (MAC) to undertake two reviews of the U.K.’s current tier 2 skilled visa requirements (a main route used by startups to fill vacancies; there is also the tier 1 entrepreneur visa for founders). The MAC conducted a review of tier 2 visa salary thresholds last month but has yet to report its findings to the Home Office — although a spokesman for the department told TechCrunch the government would look to implement any changes it deems necessary based on that review this autumn.

The larger scale tier 2 visa review is still taking evidence, with a closing date for submissions of the end of September. The MAC is then likely to report in full by the year’s end — so further changes to the tier 2 visa, based on the government’s assessment of the review, may be incoming by the start of next year.

According to the MAC’s call for evidence, the proposed changes being considered by the government include some potentially radical measures — such as: significantly raising the salary threshold; restricting which job roles are eligible; and removing the right for dependents of visa holders to be able to work in the U.K.

Other measures apparently on the table include putting time-limits on shortage occupations in order to encourage domestic businesses to invest in upskilling; and imposing a skills levy on employers who hire from outside the European Economic Area in order to fund domestic apprenticeships.

The pro-startup organisation Coadec, which advocates for policies that support the U.K.’s digital economy, kicked off a campaign last week to raise awareness of the MAC’s review and feed its call for submissions with startups’ views. It’s asking U.K. startups to complete a survey on the tier 2 visa process and the changes the government is considering. Coadec will then compile the responses into its own report to submit to the MAC.

“The current situation is that the only way non-EU workers can get into the UK really is through the tier 2 visa, because the post-study work visa has been scrapped, the high skilled migrant program has been scrapped,” Coadec executive director Guy Levin tells TechCrunch. “You can still come in as an entrepreneur through tier 1 or an investor, or if you’re an exceptional talent through tier 1, but tier 2’s the main visa route for non-EU workers to come into the country.

“You have to have a definite job offer, you need to have a degree level qualification, and the role needs to be advertised in the U.K. for 28 days first before it can be offered internationally. There has to be a minimum salary threshold, which is set for new entrants at the 10th percentile and for experienced hires at the 25th percentile… so for developers the 25th percentile is about £31,000. And the company itself needs to go through a process of getting accredited by the Home Office as a sponsor.”

Levin notes there were some 15,500 people entering the U.K. last year via the tier 2 general route — so excluding intracompany transfers (a route which does not apply to startups). A further breakdown by job type puts “Programmers and software development professionals” as the third most popular occupation under the ‘resident labour market test route’ (i.e. rather than the shortage occupation route) — with 2,618 devs entering the U.K. via that route in the year ending March 2015.

“It’s not enormous numbers but it’s significant. And that’s just for that particular job title. There may be others under other job titles, like data scientist or product manager,” says Levin.

“The system is fairly sensible, as it stands,” he adds. “Some bits of it are annoying, like the 28 day test. And thankfully that’s waived for shortage occupations… Which means you get to fast-track some bits of that… And at the start of the year some digital roles were put on that, so that’s fantastic and a good win for the sector.”

But Levin says the “worry” now for U.K. startups is that the Conservatives’ political imperative to find ways to reduce migration to the U.K. could result in policies that are actively harmful to the digital economy — given the options currently being considered by the government would limit founders’ ability to hire the skilled talent they need.

Levin says Coadec’s own migration survey has garnered around 100 responses thus far, with around 40 per cent saying they currently employ people hired via the tier 2 visa route. “The majority don’t… and several of the respondents said it’s already too complicated and expensive for us to go through that process,” he notes.

Speaking to TechCrunch about the government’s migration consultation, DueDil CEO and founder Damian Kimmelman, himself an American who relocated to the U.K. to build a data startup (one which has attracted some $22 million in funding thus far, according to CrunchBase), argues that “populist politics” could pose a threat to the U.K.’s digital economy if the government ends up scrapping what he says has been a relatively liberal migration policy thus far. Approximately 10 per cent of DueDil’s staff are employed on tier 2 visas.

One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere.

“One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere. One of the problems building a company that’s scaling and building it in the U.K. is there are not a lot of people that have scaled businesses, and have the experience of scaling large tech businesses. You can only find that outside of the U.K. All of the large companies that scaled got bought out. And this is an unfortunate fact about the talent pool — but one of the ways the U.K. has effectively been able to solve this is by really having quite liberal immigration policies,” he tells TechCrunch.

Broadly speaking, Kimmelman said any of the proposed changes being consulted on by the MAC could have “a serious impact” on DueDil’s ability to grow.

“Restricting what roles are eligible seems ludicrous. We are working in a very transformative economy. All of the types of roles are new types of roles every six months… Government can’t really police that. That’s sort of self defeating,” he adds. “If you restrict the rights of dependents you pretty much nullify the ability to bring in great talent. I don’t know anybody who’s going to move their family [if they can’t work here]… It’s already quite difficult hiring from the U.S. because the quality of life in the U.S. in a lot of cities is much greater than it is in London.”

He’s less concerned about the prospect of being required to increase the salary requirement for individuals hired under the tier 2 visa — although Coadec’s Levin points out that some startups, likely earlier stage, might choose to compensate a hire with equity rather than a large salary to reduce their burn rate. So a higher salary requirement could make life harder for other types of U.K. startups.

Kimmelman was actually one of the signatories of the aforementioned open letter backing the Conservative Party at the General Election. Signatories of that letter asserted the Tory-led government —

…has enthusiastically supported startups, job-makers and innovators and the need to build a British culture of entrepreneurialism to rival America’s. Elsewhere in the world people are envious at how much support startups get in the UK. This Conservative-led government has given us wholehearted support and we are confident that it would continue to do so. It would be bad for jobs, bad for growth, and bad for innovation to change course.

So is he disappointed that the new Conservative government is consulting on measures that, if implemented, could limit U.K.-based startup businesses’ access to digital skills? “I wouldn’t read too much into this just yet because they haven’t made any decisions,” says Kimmelman. “But if they do enact any of these policies I think it would be really harmful to the community.”

“They have a lot of constituents other than just the tech community that they’re working for. So I hope that they don’t do anything that’s rash. But I’ve been very impressed by the way that they’ve handled things thus far and so I think I need to give them the benefit of the doubt,” he adds.

Levin says responses to Coadec’s survey so far suggest U.K. startups’ main priority is that the government keeps the overseas talent pipeline flowing — with less concern over cost increases, such as if the government applies a skills levy to fund apprenticeship programs.

But how the government squares the circle of an ideological commitment to reducing net migration with keeping skills-hungry digital businesses on side remains to be seen.

“The radical option of really restricting [migration] to genuine shortages is scary — because we just don’t know what that would look like,” adds Levin. “It could be that that would be the best answer for the tech sector because we might be able to make a case that there are genuine shortages and so we’d be fine. But there’s an uncertainty about what the future would look like — so at the moment we’re going to focus on making a positive case on why skilled migration is vital for the digital economy.”

The prior Tory-led U.K. coalition government introduced a cap on tier 2 visas back in 2011 — of just over 20,000 per year — which is applied as a monthly limit. That monthly cap was exceeded for the first time in June, with a swathe of visa applications turned down as a result. That’s something Levin says shows the current visa system is “creaking at the seams” — even before any further restrictions are factored in.

“Thirteen hundred applicants in June were turned down because they’d hit the cap,” he says, noting that when the cap is hit the Home Office uses salary level to choose between applicants. “So the salary thresholds jump up from the 25th percentile… which means the lower paid end of people lose out, which would probably disproportionately affect startups.”
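The rationing mechanism Levin describes — ranking applicants by salary once the monthly cap is hit — can be sketched in a few lines. All salaries and the cap size below are invented for illustration, not figures from the Home Office.

```python
# Hypothetical sketch of how a visa cap filled by salary ranking squeezes
# out lower-paid applicants. Figures are invented for illustration only.
applicants = [28_000, 31_000, 45_000, 60_000, 33_000, 29_500]  # offered salaries (GBP)
cap = 3  # hypothetical number of visas available this month

# When applications exceed the cap, the highest salaries are granted first...
granted = sorted(applicants, reverse=True)[:cap]
# ...so the lowest-paid applicants — often startup hires — are refused.
refused = sorted(applicants)[:len(applicants) - cap]

print(granted)  # [60000, 45000, 33000]
print(refused)  # [28000, 29500, 31000]
```

This is why the effective salary threshold "jumps up" above the published 25th-percentile minimum whenever the cap binds: the cut-off becomes whatever salary the last granted applicant happened to offer.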

London Tube Strike Produced Net Economic Benefit

London Tube strike produced net economic benefit

Source: www.cam.ac.uk

New analysis of the London Tube strike in February 2014 finds that it enabled a sizeable fraction of commuters to find better routes to work, and actually produced a net economic benefit.

For the small fraction of commuters who found a better route, when multiplied over a longer period of time, the benefit to them actually outweighs the inconvenience suffered by many more

Shaun Larcom

Analysis of the London Tube strike in February 2014 has found that despite the inconvenience to tens of thousands of people, the strike actually produced a net economic benefit, due to the number of people who found more efficient ways to get to work.

The researchers, from the University of Cambridge and the University of Oxford, examined 20 days’ worth of anonymised Oyster card data, containing more than 200 million data points, in order to see how individual Tube journeys changed during the strike. Since this particular strike resulted in only a partial closure of the Tube network, not all commuters were affected, which made a direct comparison possible. The data enabled the researchers to see whether people chose to go back to their normal commute once the strike was over, or if they found a more efficient route and decided to switch.

The researchers found that of the regular commuters affected by the strike, either because certain stations were closed or because travel times were considerably different, a significant fraction – about one in 20 – decided to stick with their new route once the strike was over.

While the proportion of individuals who ended up changing their routes may sound small, the researchers found that the strike actually ended up producing a net economic benefit. By performing a cost-benefit analysis of the amount of time saved by those who changed their daily commute, the researchers found that the amount of time saved in the longer term actually outweighed the time lost by commuters during the strike. An Oxford working paper of their findings is published online today.
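The arithmetic behind that finding can be sketched as follows. This is a minimal illustration of the kind of cost-benefit comparison described above; all figures (number of affected commuters, time lost, daily saving, time horizon) are hypothetical placeholders, not numbers from the study.

```python
# Illustrative cost-benefit sketch: does long-run time saved by route
# switchers outweigh the time lost by everyone during the strike?
# All inputs are hypothetical, not figures from the paper.

def net_benefit(affected_commuters,
                strike_days,
                minutes_lost_per_day,    # extra travel time during the strike
                switch_fraction,         # share who keep the better route (~1 in 20)
                minutes_saved_per_day,   # daily saving on the new route
                horizon_days):           # period over which savings accrue
    """Return net time gain (in minutes) across all affected commuters."""
    cost = affected_commuters * strike_days * minutes_lost_per_day
    switchers = affected_commuters * switch_fraction
    benefit = switchers * minutes_saved_per_day * horizon_days
    return benefit - cost

# Example: 100,000 affected commuters, a 2-day strike costing 30 min/day,
# 5% switch routes and each saves 6 min/day over ~250 working days.
gain = net_benefit(100_000, 2, 30, 0.05, 6, 250)
print(gain)  # positive => the long-run savings outweigh the disruption
```

Even with only one commuter in twenty saving a few minutes a day, the savings compound over every subsequent working day, while the strike's cost is incurred only once, which is why the net figure can come out positive.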

The London Tube map itself may have been a reason why many commuters did not find their optimal journey before the strike. In many parts of London, the actual distances between stations are distorted on the iconic map. By digitising the Tube map and comparing it to the actual distances between stations, the researchers found that those commuters living in, or travelling to, parts of London where distortion is greatest were more likely to have learned from the strike and found a more efficient route.
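The map-distortion comparison above can be expressed as a simple ratio. The sketch below, with entirely made-up station coordinates, shows one plausible way to quantify how much the schematic map exaggerates or compresses the distance between a pair of stations relative to geography; the station names and positions are hypothetical.

```python
# Sketch of a map-distortion metric: distance implied by the schematic
# Tube map divided by true geographic distance. Coordinates are made up.
import math

def euclidean(a, b):
    """Straight-line distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Hypothetical positions: schematic map units vs. real-world kilometres.
map_pos = {"StationA": (0.0, 0.0), "StationB": (4.0, 0.0)}  # map units
geo_pos = {"StationA": (0.0, 0.0), "StationB": (1.0, 0.0)}  # kilometres

def distortion(s1, s2):
    """Ratio > 1 means the map shows the stations further apart than they are."""
    return euclidean(map_pos[s1], map_pos[s2]) / euclidean(geo_pos[s1], geo_pos[s2])

print(distortion("StationA", "StationB"))  # 4.0: map distance is 4x geography
```

A commuter relying on a high-distortion region of the map might believe a route is longer (or shorter) than it really is, which is consistent with the finding that such commuters were more likely to discover a better route during the strike.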

Additionally, since different Tube lines travel at different speeds, those commuters who had been travelling on one of the slower lines were also more likely to switch routes once the strike was over.

“One of the things we’re looking at is whether consumers usually make the best decision, but it’s never been empirically tested using a large consumer dataset such as this one,” said co-author Dr Ferdinand Rauch from Oxford’s Department of Economics. “Our findings illustrate that people might get stuck with suboptimal decisions because they don’t experiment enough.”

According to the authors, being forced to alter a routine, whether that’s due to a Tube strike or government regulation, can often lead to net benefits, as people or corporations are forced to innovate. In economics, this is known as the Porter hypothesis.

“For the small fraction of commuters who found a better route, when multiplied over a longer period of time, the benefit to them actually outweighs the inconvenience suffered by many more,” said co-author Dr Shaun Larcom of Cambridge’s Department of Land Economy. “The net gains came from the disruption itself.”

“Given that a significant fraction of commuters on the London underground failed to find their optimal route until they were forced to experiment, perhaps we should not be too frustrated that we can’t always get what we want, or that others sometimes take decisions for us,” said co-author Dr Tim Willems, also from Oxford’s Department of Economics. “If we behave anything like London commuters and experiment too little, hitting such constraints may very well be to our long-term advantage.”

Reference:
Larcom, Shaun, Ferdinand Rauch and Tim Willems (2015), “The Benefits of Forced Experimentation: Striking Evidence from the London Underground Network”, University of Oxford Working Paper. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.
