
Advances In Brain Imaging Settle Debate Over Spread of Key Protein In Alzheimer’s


source: www.cam.ac.uk

Recent advances in brain imaging have enabled scientists to show for the first time that a key protein which causes nerve cell death spreads throughout the brain in Alzheimer’s disease – and hence that blocking its spread may prevent the disease from taking hold.

An estimated 44 million people worldwide are living with Alzheimer’s disease, a disease whose symptoms include memory problems, changes in behaviour and progressive loss of independence. These symptoms are caused by the build-up in the brain of two abnormal proteins: amyloid beta and tau. It is thought that amyloid beta occurs first, encouraging the appearance and spread of tau – and it is this latter protein that destroys the nerve cells, eating away at our memories and cognitive functions.

Until a few years ago, it was only possible to look at the build-up of these proteins by examining the brains of Alzheimer’s patients who had died, post mortem. However, recent developments in positron emission tomography (PET) scanning have enabled scientists to begin imaging their build-up in patients who are still alive: a patient is injected with a radioactive ligand, a tracer molecule that binds to the target (tau) and can be detected using a PET scanner.

In a study published today in the journal Brain, a team led by scientists at the University of Cambridge describe using a combination of imaging techniques to examine how patterns of tau relate to the wiring of the brain in 17 patients with Alzheimer’s disease, compared to controls.

Quite how tau appears throughout the brain has been the subject of speculation among scientists. One hypothesis is that harmful tau starts in one place and then spreads to other regions, setting off a chain reaction. This idea – known as ‘transneuronal spread’ – is supported by studies in mice. When a mouse is injected with abnormal human tau, the protein spreads rapidly throughout the brain; however, this evidence is controversial: the amount of tau injected is far higher, relative to brain size, than the levels observed in human brains, and the injected protein spreads rapidly through a mouse’s brain whereas tau spreads only slowly through a human brain.

There are also two other competing hypotheses. The ‘metabolic vulnerability’ hypothesis says that tau is made locally in nerve cells, but that some regions have higher metabolic demands and hence are more vulnerable to the protein. In these cases tau is a marker of distress in cells.

The third hypothesis, ‘trophic support’, also suggests that some brain regions are more vulnerable than others, but that this is less to do with metabolic demand and more to do with a lack of nutrition to the region or with gene expression patterns.

Thanks to the developments in PET scanning, it is now possible to compare these hypotheses.

“Five years ago, this type of study would not have been possible, but thanks to recent advances in imaging, we can test which of these hypotheses best agrees with what we observe,” says Dr Thomas Cope from the Department of Clinical Neurosciences at the University of Cambridge, the study’s first author.

Dr Cope and colleagues looked at the functional connections within the brains of the Alzheimer’s patients – in other words, how their brains were wired up – and compared this against levels of tau. Their findings supported the idea of transneuronal spread, that tau starts in one place and spreads, but were counter to predictions from the other two hypotheses.

“If the idea of transneuronal spread is correct, then the areas of the brain that are most highly connected should have the largest build-up of tau and will pass it on to their connections. It’s the same as we might see in a flu epidemic, for example – the people with the largest networks are most likely to catch flu and then to pass it on to others. And this is exactly what we saw.”
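This prediction lends itself to a simple illustration. The toy model below (our sketch, not the study’s actual analysis) lets a small amount of a misfolded protein hop along the connections of a made-up network; the regions that end up holding the most of it are the most strongly connected ones, just as in the epidemic analogy.

```python
# Toy sketch (not the study's analysis): if a misfolded protein hops along the
# connections of a network, the most strongly connected regions end up holding
# the most of it, just as well-connected people are most exposed in an epidemic.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                      # hypothetical brain regions

# random sparse symmetric 'connectome', plus a weak ring so no region is isolated
W = rng.random((n, n)) * (rng.random((n, n)) < 0.1)
W = np.triu(W, 1)
W = W + W.T
idx = np.arange(n)
W[idx, (idx + 1) % n] += 0.1
W[(idx + 1) % n, idx] += 0.1

strength = W.sum(axis=0)                    # 'hubbiness' of each region
P = W / strength                            # column-stochastic: spread along edges
tau = np.zeros(n)
tau[0] = 1.0                                # seed region (cf. the entorhinal cortex)

for _ in range(500):                        # a little tau moves along edges each step
    tau = 0.95 * tau + 0.05 * (P @ tau)

r = np.corrcoef(strength, tau)[0, 1]
print(f"correlation between regional connectivity and simulated tau load: r = {r:.2f}")
```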

Professor James Rowe, senior author on the study, adds: “In Alzheimer’s disease, the most common brain region for tau to first appear is the entorhinal cortex area, which is next to the hippocampus, the ‘memory region’. This is why the earliest symptoms in Alzheimer’s tend to be memory problems. But our study suggests that tau then spreads across the brain, infecting and destroying nerve cells as it goes, causing the patient’s symptoms to get progressively worse.”

Confirmation of the transneuronal spread hypothesis is important because it suggests that we might slow down or halt the progression of Alzheimer’s disease by developing drugs to stop tau from moving along neurons.

Image: Artist’s illustration of the spread of tau filaments (red) throughout the brain. Credit: Thomas Cope

The same team also looked at 17 patients affected by another form of dementia, known as progressive supranuclear palsy (PSP), a rare condition that affects balance, vision and speech, but not memory. In PSP patients, tau tends to be found at the base of the brain rather than throughout. The researchers found that the pattern of tau build-up in these patients supported the latter two hypotheses, metabolic vulnerability and trophic support, but not the idea that tau spreads across the brain.

The researchers also took patients at different stages of disease and looked at how tau build-up affected the connections in their brains.

In Alzheimer’s patients, they showed that as tau builds up and damages networks, the connections become more random, possibly explaining the confusion and muddled memories typical of such patients.

In PSP, the ‘highways’ that carry most information in healthy individuals receive the most damage, meaning that information needs to travel around the brain along a more indirect route. This may explain why, when asked a question, PSP patients may be slow to respond but will eventually arrive at the correct answer.

The study was funded by the NIHR Cambridge Biomedical Research Centre, the PSP Association, Wellcome, the Medical Research Council, the Patrick Berthoud Charitable Trust and the Association of British Neurologists.

Reference
Cope, TE et al. Tau Burden and the Functional Connectome in Alzheimer’s Disease and Progressive Supranuclear Palsy. Brain; 5 Jan 2018; DOI: 10.1093/brain/awx347



Direct Genetic Evidence of Founding Population Reveals Story of First Native Americans


source: www.cam.ac.uk

Direct genetic traces of the earliest Native Americans have been identified for the first time in a new study. The genetic evidence suggests that people may have entered the continent in a single migratory wave, perhaps arriving more than 20,000 years ago.

It’s the first time that we have had direct genomic evidence that all Native Americans can be traced back to one source population, via a single, founding migration event

Eske Willerslev

The data, which came from archaeological finds in Alaska, also points to the existence of a previously unknown Native American population, whom academics have named “Ancient Beringians”.

The findings are being published in the journal Nature and present possible answers to a series of long-standing questions about how the Americas were first populated.

It is widely accepted that the earliest settlers crossed from what is now Russia into Alaska via an ancient land bridge spanning the Bering Strait which was submerged at the end of the last Ice Age. Issues such as whether there was one founding group or several, when they arrived, and what happened next, are the subject of extensive debate, however.

In the new study, an international team of researchers led by academics from the Universities of Cambridge and Copenhagen sequenced the full genome of an infant – a girl named Xach’itee’aanenh t’eede gay, or Sunrise Child-girl, by the local Native community – whose remains were found at the Upward Sun River archaeological site in Alaska in 2013.

To their surprise, they found that although the child had lived around 11,500 years ago, long after people first arrived in the region, her genetic information did not match either of the two recognised branches of early Native Americans, which are referred to as Northern and Southern. Instead, she appeared to have belonged to an entirely distinct Native American population, which they called Ancient Beringians.

Further analyses then revealed that the Ancient Beringians were an offshoot of the same ancestor population as the Northern and Southern Native American groups, but that they separated from that population earlier in its history. This timeline allowed the researchers to construct a picture of how and when the continent might have been settled by a common, founding population of ancestral Native Americans, that gradually divided into these different sub-groupings.

The study was led by Professor Eske Willerslev, who holds positions both at St John’s College, University of Cambridge, and the University of Copenhagen in Denmark.

“The Ancient Beringians diversified from other Native Americans before any ancient or living Native American population sequenced to date. It’s basically a relict population of an ancestral group which was common to all Native Americans, so the sequenced genetic data gave us enormous potential in terms of answering questions relating to the early peopling of the Americas,” he said.

“We were able to show that people probably entered Alaska before 20,000 years ago. It’s the first time that we have had direct genomic evidence that all Native Americans can be traced back to one source population, via a single, founding migration event.”

The study compared data from the Upward Sun River remains with both ancient genomes, and those of numerous present-day populations. This allowed the researchers first to establish that the Ancient Beringian group was more closely related to early Native Americans than their Asian and Eurasian ancestors, and then to determine the precise nature of that relationship and how, over time, they split into distinct populations.

Until now, the existence of two separate Northern and Southern branches of early Native Americans has divided academic opinion regarding how the continent was populated. Researchers have disagreed over whether these two branches split after humans entered Alaska, or whether they represent separate migrations.

The Upward Sun River genome shows that Ancient Beringians were isolated from the common, ancestral Native American population, both before the Northern and Southern divide, and after the ancestral source population was itself isolated from other groups in Asia. The researchers say that this means it is likely there was one wave of migration into the Americas, with all subdivisions taking place thereafter.

According to the researchers’ timeline, the ancestral population first emerged as a separate group around 36,000 years ago, probably somewhere in northeast Asia. Constant contact with Asian populations continued until around 25,000 years ago, when the gene flow between the two groups ceased. This cessation was probably caused by brutal changes in the climate, which isolated the Native American ancestors. “It therefore probably indicates the point when people first started moving into Alaska,” Willerslev said.

Around the same time, there was a level of genetic exchange with an ancient North Eurasian population. Previous research by Willerslev has shown that a relatively specific, localised level of contact between this group, and East Asians, led to the emergence of a distinctive ancestral Native American population.

Ancient Beringians themselves then separated from the ancestral group earlier than either the Northern or Southern branches around 20,000 years ago. Genetic contact continued with their Native American cousins, however, at least until the Upward Sun River girl was born in Alaska around 8,500 years later.

The geographical proximity required for ongoing contact of this sort led the researchers to conclude that the initial migration into the Americas had probably already taken place when the Ancient Beringians broke away from the main ancestral line. José Víctor Moreno-Mayar, from the University of Copenhagen, said: “It looks as though this Ancient Beringian population was up there, in Alaska, from 20,000 years ago until 11,500 years ago, but they were already distinct from the wider Native American group.”

Finally, the researchers established that the Northern and Southern Native American branches only split between 17,000 and 14,000 years ago which, based on the wider evidence, indicates that they must have already been on the American continent south of the glacial ice.

The divide probably occurred after their ancestors had passed through, or around, the Laurentide and Cordilleran ice sheets – two vast glaciers which covered what is now Canada and parts of the northern United States, but began to thaw at around this time.

The continued existence of this ice sheet across much of the north of the continent would have isolated the southbound travellers from the Ancient Beringians in Alaska, who were eventually replaced or absorbed by other Native American populations. Although modern populations in both Alaska and northern Canada belong to the Northern Native American branch, the analysis shows that these derive from a later “back” migration north, long after the initial migration events.

“One significant aspect of this research is that some people have claimed the presence of humans in the Americas dates back earlier – to 30,000 years, 40,000 years, or even more,” Willerslev added. “We cannot prove that those claims are not true, but what we are saying, is that if they are correct, they could not possibly have been the direct ancestors to contemporary Native Americans.”

Reference:

Willerslev, E et al. Terminal Pleistocene Alaskan genome reveals first founding population of Native Americans. Nature; 3 Jan 2018; DOI: 10.1038/nature25173



New Brain Mapping Technique Highlights Relationship Between Connectivity and IQ


source: www.cam.ac.uk

A new and relatively simple technique for mapping the wiring of the brain has shown a correlation between how well connected an individual’s brain regions are and their intelligence, say researchers at the University of Cambridge.

This could take us closer to being able to get an idea of intelligence from brain scans, rather than having to rely on IQ tests

Ed Bullmore

In recent years, there has been a concerted effort among scientists to map the connections in the brain – the so-called ‘connectome’ – and to understand how this relates to human behaviours, such as intelligence and mental health disorders.

Now, in research published in the journal Neuron, an international team led by scientists at the University of Cambridge and the National Institutes of Health (NIH), USA, has shown that it is possible to build up a map of the connectome by analysing conventional brain scans taken using a magnetic resonance imaging (MRI) scanner.

The team compared the brains of 296 typically-developing adolescent volunteers. Their results were then validated in a cohort of a further 124 volunteers. The team used a conventional 3T MRI scanner, where 3T represents the strength of the magnetic field; however, Cambridge has recently installed a much more powerful Siemens 7T Terra MRI scanner, which should allow this technique to give an even more precise mapping of the human brain.

A typical MRI scan provides a single image of the brain, from which it is possible to calculate multiple structural features, meaning that every brain region can be described using as many as ten different characteristics. If two regions have similar feature profiles, they are said to show ‘morphometric similarity’, and the researchers assume that such regions form part of a connected network. They verified this assumption using publicly available MRI data from a cohort of 31 juvenile rhesus macaque monkeys, comparing the resulting networks to ‘gold-standard’ connectivity estimates in that species.
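A minimal sketch of how such a morphometric similarity network (MSN) might be assembled is shown below. This is an illustration using random data and assumed dimensions, not the published pipeline: z-score each structural feature across regions, correlate the feature profiles of every pair of regions, and read a region’s ‘hubbiness’ off the resulting similarity matrix.

```python
# Sketch of a morphometric similarity network (MSN); illustrative only, not the
# published pipeline. Rows = brain regions, columns = structural MRI features
# (e.g. cortical thickness, surface area, grey-matter volume, curvature, ...).
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_features = 308, 10            # hypothetical parcellation and feature count
features = rng.normal(size=(n_regions, n_features))

# z-score each feature across regions so that no single feature dominates
z = (features - features.mean(axis=0)) / features.std(axis=0)

# morphometric similarity = Pearson correlation between regional feature profiles
msn = np.corrcoef(z)                       # (n_regions x n_regions) similarity matrix
np.fill_diagonal(msn, 0.0)

# 'hubbiness' of a region = its summed positive similarity to all other regions
hub_strength = np.clip(msn, 0, None).sum(axis=1)
print("most 'hub-like' regions:", np.argsort(hub_strength)[-5:])
```

In the study, it is the relationship between this kind of hub measure in higher-order brain regions and individual IQ that is examined.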

Using these morphometric similarity networks (MSNs), the researchers were able to build up a map showing how well connected the ‘hubs’ – the major connection points between different regions of the brain network – were. They found a link between the connectivity in the MSNs in brain regions linked to higher order functions – such as problem solving and language – and intelligence.

“We saw a clear link between the ‘hubbiness’ of higher-order brain regions – in other words, how densely connected they were to the rest of the network – and an individual’s IQ,” explains PhD candidate Jakob Seidlitz at the University of Cambridge and NIH. “This makes sense if you think of the hubs as enabling the flow of information around the brain – the stronger the connections, the better the brain is at processing information.”

While IQ varied across the participants, the MSNs accounted for around 40% of this variation – it is possible that higher-resolution multi-modal data provided by a 7T scanner may be able to account for an even greater proportion of the individual variation, say the researchers.

“What this doesn’t tell us, though, is where exactly this variation comes from,” adds Seidlitz. “What makes some brains more connected than others – is it down to their genetics or their educational upbringing, for example? And how do these connections strengthen or weaken across development?”

“This could take us closer to being able to get an idea of intelligence from brain scans, rather than having to rely on IQ tests,” says Professor Ed Bullmore, Head of Psychiatry at Cambridge. “Our new mapping technique could also help us understand how the symptoms of mental health disorders such as anxiety and depression or even schizophrenia arise from differences in connectivity within the brain.”

The research was funded by the Wellcome Trust and the National Institutes of Health.

Reference
Seidlitz, J et al. Morphometric Similarity Networks Detect Microscale Cortical Organisation and Predict Inter-Individual Cognitive Variation. Neuron; 21 Dec 2017; DOI: 10.1016/j.neuron.2017.11.039


Researcher profile: Jakob Seidlitz

Jakob Seidlitz is a PhD student on the NIH Oxford-Cambridge Scholars Programme. A graduate of the University of Rochester, USA, he spends half of his time in Cambridge and half at the National Institutes of Health in the USA.

Jakob’s research aims to better understand the origins of psychiatric disease, using techniques such as MRI to study child and adolescent brain development and map patterns of brain connectivity.

A typical day consists of performing MRI data analysis, statistical testing, reading scientific literature, and preparing and editing manuscripts. “It’s great being able to work on such amazing large-scale neuroimaging datasets that allow for answering longstanding questions in psychiatry,” he says.

“Cambridge is a great place for my work. Ed [Bullmore], my supervisor, is extremely inclusive and collaborative, which meant developing relationships within and outside the department. Socially, the college post-grad community is amazingly diverse and welcoming, and the collegiate atmosphere of Cambridge can be truly inspiring.”

Jakob is a member of Wolfson College. Outside of his research, he plays football for the ‘Blues’ (the Cambridge University Association Football Club).



Researchers Chart The ‘Secret’ Movement of Quantum Particles


Researchers from the University of Cambridge have taken a peek into the secretive domain of quantum mechanics. In a theoretical paper published in the journal Physical Review A, they have shown that the way that particles interact with their environment can be used to track quantum particles when they’re not being observed, which had been thought to be impossible.

We can verify old predictions of quantum mechanics, for example that particles can exist in different locations at the same time.

David Arvidsson-Shukur

One of the fundamental ideas of quantum theory is that quantum objects can exist both as a wave and as a particle, and that they don’t exist as one or the other until they are measured. This is the premise that Erwin Schrödinger was illustrating with his famous thought experiment involving a dead-or-maybe-not-dead cat in a box.

“This premise, commonly referred to as the wave function, has been used more as a mathematical tool than a representation of actual quantum particles,” said David Arvidsson-Shukur, a PhD student at Cambridge’s Cavendish Laboratory, and the paper’s first author. “That’s why we took on the challenge of creating a way to track the secret movements of quantum particles.”

Any particle will always interact with its environment, ‘tagging’ it along the way. Arvidsson-Shukur, working with his co-authors Professor Crispin Barnes from the Cavendish Laboratory and Axel Gottfries, a PhD student from the Faculty of Economics, outlined a way for scientists to map these ‘tagging’ interactions without looking at them. The technique would be useful to scientists who make measurements at the end of an experiment but want to follow the movements of particles during the full experiment.

Some quantum scientists have suggested that information can be transmitted between two people – usually referred to as Alice and Bob – without any particles travelling between them. In a sense, Alice gets the message telepathically. This has been termed counterfactual communication because it goes against the accepted ‘fact’ that for information to be carried between sources, particles must move between them.

“To measure this phenomenon of counterfactual communication, we need a way to pin down where the particles between Alice and Bob are when we’re not looking,” said Arvidsson-Shukur. “Our ‘tagging’ method can do just that. Additionally, we can verify old predictions of quantum mechanics, for example that particles can exist in different locations at the same time.”

The founders of modern physics devised formulas to calculate the probabilities of different results from quantum experiments. However, they did not provide any explanations of what a quantum particle is doing when it’s not being observed. Earlier experiments have suggested that the particles might do non-classical things when not observed, like existing in two places at the same time. In their paper, the Cambridge researchers considered the fact that any particle travelling through space will interact with its surroundings. These interactions are what they call the ‘tagging’ of the particle. The interactions encode information in the particles that can then be decoded at the end of an experiment, when the particles are measured.

The researchers found that this information encoded in the particles is directly related to the wave function that Schrödinger postulated a century ago. Previously the wave function was thought of as an abstract computational tool to predict the outcomes of quantum experiments. “Our result suggests that the wave function is closely related to the actual state of particles,” said Arvidsson-Shukur. “So, we have been able to explore the ‘forbidden domain’ of quantum mechanics: pinning down the path of quantum particles when no one is observing them.”

Reference
D. R. M. Arvidsson-Shukur, C. H. W. Barnes, and A. N. O. Gottfries. ‘Evaluation of counterfactuality in counterfactual communication protocols’. Physical Review A (2017). DOI: 10.1103/PhysRevA.96.062316



Political Instability and Weak Governance Lead to Loss of Species, Study Finds


source: www.cam.ac.uk

Big data study of global biodiversity shows ineffective national governance is a better indicator of species decline than any other measure of “anthropogenic impact”. Even protected conservation areas make little difference in countries that struggle with socio-political stability.

We now know that governance and political stability is a vital consideration when developing future environmental policies and practices

Tatsuya Amano

A vast new study of changes in global wildlife over almost three decades has found that low levels of effective national governance are the strongest predictor of declining species numbers – more so than economic growth, climate change or even surges in human population.

The findings, published in the journal Nature, also show that protected conservation areas do maintain wildlife diversity, but only when situated in countries that are reasonably stable politically with sturdy legal and social structures.

The research used the fate of waterbird species since 1990 as a bellwether for broad biodiversity trends, as their wetland habitats are among the most diverse as well as the most endangered on Earth.

An international team of scientists and conservation experts led by the University of Cambridge analysed over 2.4 million annual count records of 461 waterbird species across almost 26,000 different survey sites around the world.

The researchers used this giant dataset to model localised species changes in nations and regions.  Results were compared to the Worldwide Governance Indicators, which measure everything from violence rates and rule of law to political corruption, as well as data such as gross domestic product (GDP) and conservation performance.
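Conceptually, the comparison amounts to regressing local population trends on governance scores and economic covariates. The sketch below uses made-up data and a plain least-squares fit purely to show the shape of such an analysis; the actual study fits a far more sophisticated model to the 2.4 million count records.

```python
# Illustrative only: regress annual waterbird population trends on a governance
# score and GDP growth using ordinary least squares, with fabricated data.
import numpy as np

rng = np.random.default_rng(2)
n_sites = 500

governance = rng.normal(size=n_sites)            # e.g. a standardised governance indicator
gdp_growth = rng.normal(size=n_sites)            # standardised GDP-per-capita growth
# fabricate trends matching the reported pattern: better governance -> more positive
# trend; faster GDP growth -> steeper decline (more negative annual % change)
trend = 0.8 * governance - 0.4 * gdp_growth + rng.normal(scale=1.0, size=n_sites)

X = np.column_stack([np.ones(n_sites), governance, gdp_growth])
coef, *_ = np.linalg.lstsq(X, trend, rcond=None)
print(f"intercept={coef[0]:.2f}, governance effect={coef[1]:.2f}, "
      f"GDP-growth effect={coef[2]:.2f}")
```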

The team discovered that waterbird decline was greater in regions of the world where governance is, on average, less effective: such as Western and Central Asia, South America and sub-Saharan Africa.

The healthiest overall species quotas were seen in continental Europe, although even here the levels of key species were found to have nosedived.

This is the first time that effectiveness of national governance and levels of socio-political stability have been identified as the most significant global indicator of biodiversity and species loss.

“Although the global coverage of protected areas continues to increase, our findings suggest that ineffective governance could undermine the benefits of these biodiversity conservation efforts,” says Cambridge’s Dr Tatsuya Amano, who led the study at the University’s Department of Zoology and Centre for the Study of Existential Risk.

“We now know that governance and political stability is a vital consideration when developing future environmental policies and practices.”

For the latest study, Amano worked with Cambridge colleagues as well as researchers from the universities of Bath, UK, and Santa Clara, US, and conservation organisations Wetlands International and the National Audubon Society.

The lack of global-level data on changes to the natural world limits our understanding of the “biodiversity crisis”, say the study’s authors. However, they say there are advantages to focusing on waterbirds when trying to gauge these patterns.

Waterbirds are a diverse group of animals, from ducks and herons to flamingos and pelicans. Their wetland habitats cover some 1.3 billion hectares of the planet – from coast to freshwater and even highland – and provide crucial “ecosystem services”. Wetlands have also been degraded more than any other form of ecosystem.

In addition, waterbirds have a long history of population monitoring. The annual global census run by Wetlands International has involved more than 15,000 volunteers over the last 50 years, and the National Audubon Society’s annual Christmas bird count dates back to 1900.

“Our study shows that waterbird monitoring can provide useful lessons about what we need to do to halt the loss of biodiversity,” said co-author Szabolcs Nagy, Coordinator of the African-Eurasian Waterbird Census at Wetlands International.

Of all the “anthropogenic impacts” tested by the researchers, national governance was the most significant. “Ineffective governance is often associated with lack of environmental enforcement and investment, leading to habitat loss,” says Amano.

The study also uncovered a relationship between the speed of GDP growth and biodiversity: the faster GDP per capita was growing, the greater the decline in waterbird species.

Diversity on a localised level was worst affected on average in South America, with a 0.95% annual loss equating to a 21% decline across the region over 25 years. Amano was also surprised to find severe species loss across inland areas of western and central Asia.

The researchers point out that poor water management and dam construction in parts of Asia and South America have caused wetlands to permanently dry out in countries such as Iran and Argentina – even in areas designated as protected.

Impotent hunting regulations can also explain species loss under ineffective governance. “Political instability can weaken legal enforcement, and consequently promote unsuitable, often illegal, killing even in protected areas,” says Amano.

In fact, the researchers found that protected conservation areas simply did not benefit biodiversity if they were located in nations with weak governance.

Recent Cambridge research involving Amano suggests that grassroots initiatives led by local and indigenous groups can be more effective than governments at protecting ecosystems – one possible conservation approach for regions suffering from political instability.

Reference
Amano, T et al. Successful conservation of global waterbird populations depends on effective governance. Nature; 20 December 2017; DOI: 10.1038/nature25139



Habitable Planets Could Exist Around Pulsars


source: www.cam.ac.uk

It is theoretically possible that habitable planets exist around pulsars – spinning neutron stars that emit short, quick pulses of radiation. According to new research, such planets must have an enormous atmosphere that converts the deadly x-rays and high energy particles of the pulsar into heat. The results, from astronomers at the University of Cambridge and Leiden University, are reported in the journal Astronomy & Astrophysics.

Pulsars are known for their extreme conditions. Each is a fast-spinning neutron star – the collapsed core of a massive star that has gone supernova at the end of its life. Only 10 to 30 kilometres across, a pulsar possesses enormous magnetic fields, accretes matter, and regularly gives out large bursts of X-rays and highly energetic particles.

Surprisingly, despite this hostile environment, neutron stars are known to host exoplanets. The first exoplanets ever discovered were around the pulsar PSR B1257+12 – but whether these planets were originally in orbit around the precursor massive star and survived the supernova explosion, or formed in the system later remains an open question. Such planets would receive little visible light but would be continually blasted by the energetic radiation and stellar wind from the host. Could such planets ever host life?

For the first time, astronomers have tried to calculate the ‘habitable’ zones near neutron stars – the range of orbits around a star where a planetary surface could possibly support water in a liquid form. Their calculations show that the habitable zone around a neutron star can be as large as the distance from our Earth to our Sun. An important premise is that the planet must be a super-Earth, with a mass between one and ten times our Earth. A smaller planet will lose its atmosphere within a few thousand years under the onslaught of the pulsar winds. To survive this barrage, a planet’s atmosphere must be a million times thicker than ours – the conditions on a pulsar planet surface might resemble those of the deep ocean floor on Earth.
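The underlying idea can be approximated with a simple flux balance (our illustration, not the paper’s detailed atmosphere model): a planet’s equilibrium temperature follows from the heating power it absorbs, T = (L / (16πσd²))^(1/4), and the habitable zone is the range of distances d where that temperature permits liquid water. The heating luminosity used below is an assumed, illustrative value chosen to be of the order of the Sun’s output.

```python
# Illustrative flux-balance estimate of a pulsar 'habitable zone'; not the paper's
# model, which treats X-ray and particle heating of a very thick atmosphere.
import numpy as np

SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11             # astronomical unit, metres

L_HEAT = 4e26             # ASSUMED absorbed heating power, W (roughly solar-scale)

def t_eq(d_m, luminosity=L_HEAT):
    """Equilibrium temperature of a rapidly rotating blackbody at distance d_m."""
    return (luminosity / (16.0 * np.pi * SIGMA * d_m**2)) ** 0.25

d = np.linspace(0.05, 5.0, 2000) * AU
habitable = (t_eq(d) > 273.0) & (t_eq(d) < 373.0)   # liquid-water temperature range
print(f"habitable zone: {d[habitable][0]/AU:.2f} - {d[habitable][-1]/AU:.2f} AU")
```

With a heating power comparable to the Sun’s luminosity, this simple balance places the habitable zone at roughly the Earth–Sun distance, consistent with the scale quoted above.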

The astronomers studied the pulsar PSR B1257+12, about 2300 light-years away, as a test case, using the Chandra X-ray space telescope. Of the three planets in orbit around the pulsar, two are super-Earths with a mass of four to five times our Earth, and orbit close enough to the pulsar to warm up. According to co-author Alessandro Patruno from Leiden University, “The temperature of the planets might be suitable for the presence of liquid water on their surface. Though, we don’t know yet if the two super-Earths have the right, extremely dense atmosphere.”

In the future, Patruno and his co-author Mihkel Kama from Cambridge’s Institute of Astronomy would like to observe the pulsar in more detail and compare it with other pulsars. The European Southern Observatory’s ALMA Telescope would be able to show dust discs around neutron stars, which are good predictors of planets. The Milky Way contains about one billion neutron stars, of which about 200,000 are pulsars. So far, 3000 pulsars have been studied and only five pulsar planets have been found.

Reference:
A. Patruno & M. Kama. ‘Neutron Star Planets: Atmospheric processes and habitability.’ Accepted for publication in Astronomy & Astrophysics.

Adapted from a NOVA press release



Mindfulness Training Reduces Stress During Exam Time


source: www.cam.ac.uk

Mindfulness training can help support students at risk of mental health problems, concludes a randomised controlled trial carried out by researchers at the University of Cambridge.

This is, to the best of our knowledge, the most robust study to date to assess mindfulness training for students, and backs up previous studies that suggest it can improve mental health and wellbeing during stressful periods

Julieta Galante

While the prevalence of anxiety and depression among first-year undergraduates is lower than in the general population, by their second year it has risen above the general-population level. The number of students accessing counselling services in the UK grew by 50% from 2010 to 2015, surpassing the growth in the number of students during the same period. There is little consensus as to whether students are suffering more mental disorders, are less resilient than in the past, or whether there is less stigma attached to accessing support. Regardless, mental health support services for students are becoming stretched.

Recent years have seen increasing interest in mindfulness, a means of training attention for the purpose of mental wellbeing based on the practice of meditation. There is evidence that mindfulness training can improve symptoms of common mental health issues such as anxiety and depression. However, there is little robust evidence on the effectiveness of mindfulness training in preventing such problems in university students.

“Given the increasing demands on student mental health services, we wanted to see whether mindfulness could help students develop preventative coping strategies,” says Géraldine Dufour, Head of the University of Cambridge’s Counselling Service. Dufour is one of the authors of a study that set out to test the effectiveness of mindfulness – the results are published today in The Lancet Public Health.

In total, 616 students took part in the study and were randomised across two groups. Both groups were offered access to comprehensive centralised support at the University of Cambridge Counselling Service in addition to support available from the university and its colleges, and from health services including the National Health Service.

Half of the cohort (309 students) were also offered the Mindfulness Skills for Students course. This consisted of eight weekly, face-to-face, group-based sessions based on the course book Mindfulness: A Practical Guide to Finding Peace in a Frantic World, adapted for university students. Students were also encouraged to practise at home, starting with eight-minute meditations and increasing to about 15–25 minutes per day, as well as other mindfulness practices such as mindful walking and mindful eating. Students in the other half of the cohort were offered their mindfulness training the following year.

The researchers assessed the impact of the mindfulness training on stress (‘psychological distress’) during the main, annual examination period in May and June 2016, the most stressful weeks for most students. They measured this using the CORE-OM, a generic assessment used in many counselling services.

The mindfulness course led to lower distress scores after the course and during the exam term compared with students who only received the usual support. Mindfulness participants were a third less likely than other participants to have scores above a threshold commonly seen as meriting mental health support. Distress scores for the mindfulness group during exam time fell below their baseline levels (as measured at the start of the study, before exam time), whereas the students who received the standard support became increasingly stressed as the academic year progressed.
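The “a third less likely” figure is a relative risk. The toy calculation below uses made-up counts of students above the distress threshold (only the arm sizes come from the study) to show how such a figure is derived.

```python
# Hypothetical counts, for illustration only: how 'a third less likely' is computed.
mindfulness_above = 57      # ASSUMED number of mindfulness-arm students above the threshold
mindfulness_total = 309     # arm size reported in the article
usual_above = 87            # ASSUMED number for the usual-support arm
usual_total = 307           # remaining participants of the 616 randomised

relative_risk = (mindfulness_above / mindfulness_total) / (usual_above / usual_total)
print(f"relative risk = {relative_risk:.2f}  ->  about {100 * (1 - relative_risk):.0f}% less likely")
```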

The researchers also looked at other measures, such as self-reported wellbeing. They found that mindfulness training improved wellbeing during the exam period when compared with the usual support.

“This is, to the best of our knowledge, the most robust study to date to assess mindfulness training for students, and backs up previous studies that suggest it can improve mental health and wellbeing during stressful periods,” says Dr Julieta Galante from the Department of Psychiatry at Cambridge, who led the study.

“Students who had been practising mindfulness had distress scores lower than their baseline levels even during exam time, which suggests that mindfulness helps build resilience against stress.”

Professor Peter Jones, also from the Department of Psychiatry, adds: “The evidence is mounting that mindfulness training can help people cope with accumulative stress. While these benefits may be similar to some other preventative methods, mindfulness could be a useful addition to the interventions already delivered by university counselling services. It appears to be popular, feasible, acceptable and without stigma.”

The team also looked at whether mindfulness had any effect on examination results; however, their findings proved inconclusive.

The research was supported by the University of Cambridge and the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care East of England, hosted by Cambridgeshire and Peterborough NHS Foundation Trust.

Reference
Galante, J et al. Effectiveness of providing university students with a mindfulness-based intervention to increase resilience to stress: a pragmatic randomised controlled trial. Lancet Public Health; 19 December 2017; DOI: 10.1016/S2468-2667(17)30231-1


Researcher profile: Dr Julieta Galante

Dr Julieta Galante is a research associate in the Department of Psychiatry. Her interests lie in mental health promotion, particularly the effects of meditation on mental health. She hopes to contribute to the growing number of approaches to preventing mental health problems that do not rely on medication.

“What fascinates me is the idea that you could potentially train your mind to improve your wellbeing and develop yourself as a person,” she says. “It’s not the academic type of mind-training – meditation training is more like embarking on a deep inner-exploration.”

Galante’s research involves studying large numbers of people in real-world settings, such as busy students revising for their exams. It’s a very complex research field, she says: there are many factors, social, psychological and biological, that contribute to an individual’s mental health.

“Our projects are most successful (and enjoyable) when we collaborate with people outside the academic sphere, in this particular project with the Student Counselling Service, University authorities, and the students themselves.”

The mindfulness trial was ‘blinded’, meaning that the researchers did not know which students (and hence which data) belonged to which group. The ‘unblinding’ of the results – when they found out whether their trial was successful – was nerve-wracking, she says. “The team statistician didn’t know which group had received mindfulness training and which group was the control. He showed his results to the rest of the team and we could all see that there was a clear difference between the groups, but we didn’t know whether this meant really good or really bad news for mindfulness training. When the results were then unveiled, we all laughed with relief!”



Birds Learn From Each Other’s ‘Disgust’, Enabling Insects To Evolve Bright Colours


source: www.cam.ac.uk

A new study of TV-watching great tits reveals how they learn through observation. Social interactions within a predator species can have “evolutionary consequences” for potential prey – such as the conspicuous warning colours of insects like ladybirds.

We suspect our findings apply over a wide range of predators and prey. Social information may have evolutionary consequences right across ecological communities

Rose Thorogood

Many animals have evolved to stand out. Bright colours may be easy to spot, but they warn predators off by signalling toxicity or foul taste.

Yet if every individual predator has to eat colourful prey to learn this unappetising lesson, it’s a puzzle how conspicuous colours had the chance to evolve as a defensive strategy.

Now, a new study using the great tit species as a “model predator” has shown that if one bird observes another being repulsed by a new type of prey, then both birds learn the lesson to stay away.

By filming a great tit having a terrible dining experience with conspicuous prey, then showing it on a television to other tits before tracking their meal selection, researchers found that birds acquired a better idea of which prey to avoid: those that stand out.

The team behind the study, published in the journal Nature Ecology & Evolution, say the ability of great tits to learn what to avoid through observing others is an example of “social transmission” of information.

The scientists scaled up data from their experiments through mathematical modelling to reveal a tipping point: where social transmission has occurred sufficiently in a predator species for its potential prey to stand a better chance with bright colours over camouflage.
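As a rough illustration of that logic (a toy simulation of our own, not the authors’ model), one can count how many conspicuous prey get eaten before a predator population learns to avoid them: without social transmission every predator must sample one itself, whereas with it most predators learn by watching, so far fewer conspicuous prey are lost before avoidance takes hold.

```python
# Toy simulation (ours, not the authors' model): count how many conspicuous prey
# are eaten before every predator in a flock learns to avoid them, for different
# amounts of social transmission between predators.
import numpy as np

rng = np.random.default_rng(3)

def conspicuous_prey_eaten(n_predators=100, n_steps=500, p_social=0.0,
                           p_find_prey=0.1):
    """Prey items eaten before all predators are 'informed' (know to avoid them)."""
    informed = np.zeros(n_predators, dtype=bool)
    eaten = 0
    for _ in range(n_steps):
        for i in range(n_predators):
            if informed[i]:
                continue
            # a naive predator may learn by watching a randomly met flockmate...
            if rng.random() < p_social and informed[rng.integers(n_predators)]:
                informed[i] = True
            # ...otherwise it may find and taste a conspicuous prey item itself
            elif rng.random() < p_find_prey:
                eaten += 1
                informed[i] = True
        if informed.all():
            break
    return eaten

for p in (0.0, 0.2, 0.5, 0.8):
    print(f"social transmission p={p:.1f}: "
          f"conspicuous prey eaten = {conspicuous_prey_eaten(p_social=p)}")
```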

“Our study demonstrates that the social behaviour of predators needs to be considered to understand the evolution of their prey,” said lead author Dr Rose Thorogood, from the University of Cambridge’s Department of Zoology.

“Without social transmission taking place in predator species such as great tits, it becomes extremely difficult for conspicuously coloured prey to outlast and outcompete alternative prey, even if they are distasteful or toxic.

“There is mounting evidence that learning by observing others occurs throughout the animal kingdom. Species ranging from fruit flies to trout can learn about food using social transmission.

“We suspect our findings apply over a wide range of predators and prey. Social information may have evolutionary consequences right across ecological communities.”

Thorogood (also based at the Helsinki Institute of Life Science) and colleagues from the University of Jyväskylä and University of Zürich captured wild great tits in the Finnish winter. At Konnevesi Research Station, they trained the birds to open white paper packages with pieces of almond inside as artificial prey.

The birds were given access to aviaries covered in white paper dotted with small black crosses. These crosses were also marked on some of the paper packages: the camouflaged prey.

One bird was filmed unwrapping a package stamped with a square instead of a cross: the conspicuous prey. As such, its contents were unpalatable – an almond soaked with bitter-tasting fluid.

The bird’s reaction was played on a TV in front of some great tits but not others (a control group). When foraging in the cross-covered aviaries containing both cross and square packages, the birds exposed to the video were quicker to select their first item, and 32% less likely to choose the ‘conspicuous’ square prey.

“Just as we might learn to avoid certain foods by seeing a facial expression of disgust, observing another individual shake its head and wipe its beak encouraged the great tits to avoid that type of prey,” said Thorogood.

“By modelling the social spread of information from our experimental data, we worked out that predator avoidance of more vividly conspicuous species would become enough for them to survive, spread, and evolve.”

Great tits – a close relation of North America’s chickadee – make a good study species as they are “generalist insectivores” that forage in flocks, and are known to spread other forms of information through observation.

Famously, species of tit learned how to pierce milk bottle lids and siphon the cream during the middle of last century – a phenomenon that spread rapidly through flocks across the UK.

Something great tits don’t eat, however, is a seven-spotted ladybird. “One of the most common ladybird species is bright red, and goes untouched by great tits. Other insects that are camouflaged, such as the brown larch ladybird or green winter moth caterpillar, are fed on by great tits and their young,” said Thorogood.

“The seven-spotted ladybird is so easy to see that if every predator had to eat one before they discovered its foul taste, it would have struggled to survive and reproduce.

“We think it may be the social information of their unpalatable nature spreading through predator species such as great tits that makes the paradox of conspicuous insects such as seven-spotted ladybirds possible.”



Calf’s Foot Jelly and a Tankard of Ale? Welcome to the 18th Century Starbucks


source: www.cam.ac.uk

Researchers have published details of the largest collection of artefacts from an early English coffeehouse ever discovered. Described as an 18th century equivalent of Starbucks, the finds nonetheless suggest that it may have been less like a café, and more like an inn.

Coffee houses were important social centres during the 18th century. This is the first time that we have been able to study one in such depth

Craig Cessford

Customers today may settle for a flat white and a cinnamon swirl, but at coffee shops 250 years ago, many also expected ale, wine, and possibly a spot of calf’s foot jelly, a new study has shown.

Following its identification during an archaeological survey, researchers are publishing complete details of the most significant collection of artefacts from an early coffee shop ever recovered in the UK. The establishment, called Clapham’s, was on a site now owned by St John’s College, Cambridge, but in the mid-to-late 1700s it was a bustling coffeehouse – the contemporary equivalent, academics say, of a branch of Starbucks.

Researchers from the Cambridge Archaeological Unit – part of the Department of Archaeology at the University of Cambridge – uncovered a disused cellar which had been backfilled with unwanted items, possibly at some point during the 1770s. Inside, they found more than 500 objects, many in a very good state of preservation. These included drinking vessels for tea, coffee and chocolate, serving dishes, clay pipes, animal and fish bones, and an impressive haul of 38 teapots.

The assemblage has now been used to reconstruct what a visit to Clapham’s might have been like, and in particular what its clientele ate and drank. The report suggests that the standard view of early English coffeehouses, as civilised establishments where people engaged in sober, reasoned debate, may need some reworking.

Customers at Clapham’s, while they no doubt drank coffee, also enjoyed plenty of ale and wine, and tucked into dishes ranging from pastry-based snacks to substantial meals involving meat and seafood. The discovery of 18 jelly glasses, alongside a quantity of feet bones from immature cattle, led the researchers to conclude that calf’s foot jelly, a popular dish of that era, might well have been a house speciality.

Craig Cessford, from the Cambridge Archaeological Unit, said that by modern standards, Clapham’s was perhaps less like a coffee shop, and more like an inn.

“Coffee houses were important social centres during the 18th century, but relatively few assemblages of archaeological evidence have been recovered and this is the first time that we have been able to study one in such depth,” he said.

“In many respects, the activities at Clapham’s barely differed from contemporary inns. It seems that coffeehouses weren’t completely different establishments as they are now – they were perhaps at the genteel end of a spectrum that ran from alehouse to coffeehouse.”

Although the saturation of British high streets with coffee shops is sometimes considered a recent phenomenon, they were in fact also extremely common centuries ago. Coffee-drinking first came to Britain in the 16th century and increased in popularity thereafter. By the mid-18th century there were thousands of coffeehouses, which acted as important gathering places and social hubs. Only towards the end of the 1700s did these start to disappear, as tea eclipsed coffee as the national drink.

Clapham’s was owned by a couple, William and Jane Clapham, who ran it from the 1740s until the 1770s. It was popular with students and townspeople alike, and a surviving verse from a student publication of 1751 even attests to its importance as a social centre: “Dinner over, to Tom’s or Clapham’s I go; the news of the town so impatient to know.”

The researchers think that the cellar was perhaps backfilled towards the end of the 1770s, when Jane, by then a widow, retired and her business changed hands. It then lay forgotten until St John’s commissioned and paid for a series of archaeological surveys on and around the site of its Old Divinity School, which were completed in 2012.

Some of the items found were still clearly marked with William and Jane’s initials. They included tea bowls (the standard vessel for drinking tea at the time), saucers, coffee cans and cups, and chocolate cups – which the researchers were able to distinguish because they were taller, since “chocolate was served with a frothy, foamy head”. They also found sugar bowls, milk and cream jugs, mixing bowls, storage jars, plates, bowls, serving dishes, sauceboats, and many other objects.

Even though Clapham’s was a coffeehouse, the finds suggest that tea was fast winning greater affection among drinkers; tea bowls were almost three times as common as coffee cans or cups.

Perhaps more striking, however, was the substantial collection of tankards, wine bottles and glasses, indicating that alcohol consumption was normal. Some drinkers appear to have had favourite tankards reserved for their personal use, while the team also found two-handled cups, possibly for drinking “possets” – milk curdled with wine or ale, and often spiced.

Compared with the sandwiches and muffins on offer in coffee shops today, dining was a much bigger part of life at Clapham’s. Utensils and crockery were found for making patties, pastries, tarts, jellies, syllabubs and other desserts. Animal bones revealed that patrons enjoyed shoulders and legs of mutton, beef, pork, hare, rabbit, chicken and goose. The researchers also found oyster shells, and bones from fish such as eel, herring and mackerel.

Although coffeehouses have traditionally been associated with the increasing popularity of smoking in Britain, there was little evidence of much at Clapham’s. Just five clay pipes were found, including one particularly impressive specimen which carries the slogan “PARKER for ever, Huzzah” – possibly referring to the naval Captain Peter Parker, who was celebrated for his actions during the American War of Independence. The lack of pipes may be because, at the time, tobacco was considered less fashionable than snuff.

Together, the assemblage adds up to a picture in which, rather than making short visits to catch up on the news and engage in polite conversation, customers often settled in for the evening at an establishment that offered them not just hot beverages, but beer, wine, punch and liqueurs, as well as extensive meals. Some even seem to have “ordered out” from nearby inns if their favourite food was not on the menu.

There was little evidence, too, that they read newspapers and pamphlets, the rise of which historians also link to coffeehouses. Newspapers were perishable and therefore unlikely to survive in the archaeological record, but the researchers also point out that other evidence of reading – such as book clasps – has been found on the site of inns nearby, while it is absent here.

“We need to remember this was just one of thousands of coffeehouses and Clapham’s may have been atypical in some ways,” Cessford added. “Despite this it does give us a clearer sense than we’ve ever had before of what these places were like, and a tentative blueprint for spotting the traces of other coffeehouse sites in archaeological assemblages in the future.”



Ancient Faeces Reveal Parasites Described In Earliest Greek Medical Texts


source: www.cam.ac.uk

Earliest archaeological evidence of intestinal parasitic worms infecting the ancient inhabitants of Greece confirms descriptions found in writings associated with Hippocrates, the early physician and ‘father of Western medicine’.

This research shows how we can bring together archaeology and history to help us better understand the discoveries of key early medical practitioners and scientists

Piers Mitchell

Ancient faeces from prehistoric burials on the Greek island of Kea have provided the first archaeological evidence for the parasitic worms described 2,500 years ago in the writings of Hippocrates – the most influential works of classical medicine.

University of Cambridge researchers Evilena Anastasiou and Piers Mitchell used microscopy to study soil formed from decomposed faeces recovered from the surface of pelvic bones of skeletons buried in the Neolithic (4th millennium BC), Bronze Age (2nd millennium BC) and Roman periods (146 BC – 330 AD).

The Cambridge team worked on this project with Anastasia Papathanasiou and Lynne Schepartz, who are experts in the archaeology and anthropology of ancient Greece, and were based in Athens.

They found that eggs from two species of parasitic worm (helminths) were present: whipworm (Trichuris trichiura), and roundworm (Ascaris lumbricoides). Whipworm was present from the Neolithic, and roundworm from the Bronze Age.

Hippocrates was a medical practitioner from the Greek island of Cos, who lived in the 5th and 4th centuries BC. He became famous for developing the concept of humoural theory to explain why people became ill.

This theory – in which a healthy body has a balance of four ‘humours’: black bile, yellow bile, blood and phlegm – remained the accepted explanation for disease followed by doctors in Europe until the 17th century, over 2,000 years later.

Hippocrates and his students described many diseases in their medical texts, and historians have been trying to work out which diseases they were. Until now, they had to rely on the original written descriptions of intestinal worms to estimate which parasites may have infected the ancient Greeks. The Hippocratic texts called these intestinal worms Helmins strongyle, Ascaris, and Helmins plateia.

The researchers say that this new archaeological evidence identifies beyond doubt some of the species of parasites that infected people in the region. The findings are published today in the Journal of Archaeological Science: Reports.

“The Helmins strongyle worm in the ancient Greek texts is likely to have referred to roundworm, as found at Kea. The Ascaris worm described in the ancient medical texts may well have referred to two parasites, pinworm and whipworm, with the latter being found at Kea,” said study leader Piers Mitchell, from Cambridge’s Department of Archaeology.

“Until now we only had estimates from historians as to what kinds of parasites were described in the ancient Greek medical texts. Our research confirms some aspects of what the historians thought, but also adds new information that the historians did not expect, such as that whipworm was present”.

The mention of infections by these parasites in the Hippocratic Corpus includes symptoms of vomiting up worms, diarrhoea, fevers and shivers, heartburn, weakness, and swelling of the abdomen.

Treatments for intestinal worms described in the Corpus were mainly medicines, such as the crushed root of the wild herb seseli mixed with water and honey and taken as a drink.

“Finding the eggs of intestinal parasites as early as the Neolithic period in Greece is a key advance in our field,” said Evilena Anastasiou, one of the study’s authors. “This provides the earliest evidence for parasitic worms in ancient Greece.”

“This research shows how we can bring together archaeology and history to help us better understand the discoveries of key early medical practitioners and scientists,” added Mitchell.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Mistletoe and (A Large) Wine: Seven-Fold Increase In Wine Glass Size Over 300 Years

Mistletoe and (a large) wine: seven-fold increase in wine glass size over 300 years

source: www.cam.ac.uk

Our Georgian and Victorian ancestors probably celebrated Christmas with more modest wine consumption than we do today – if the size of their wine glasses is anything to go by. Researchers at the University of Cambridge have found that the capacity of wine glasses has increased seven-fold over the past 300 years, and most steeply in the last two decades as wine consumption rose.

Wine will no doubt be a feature of some merry Christmas nights, but when it comes to how much we drink, wine glass size probably does matter

Theresa Marteau

Both the types of alcoholic drink and the amounts consumed in England have fluctuated over the last 300 years, largely in response to economic, legislative and social factors. Until the second half of the 20th century, beer and spirits were the most common forms of alcohol consumed, with wine most commonly drunk by the upper classes.

Wine consumption increased almost four-fold between 1960 and 1980, and almost doubled again between 1980 and 2004. Increased alcohol consumption since the mid-20th century reflects greater affordability, availability and marketing of alcoholic products, as well as licensing liberalisations leading to supermarkets competing in the drinks retail business.

In 2016, Professor Theresa Marteau and colleagues carried out an experiment at the Pint Shop in Cambridge, increasing the size of the glasses in which wine was served while keeping the serving sizes the same. They found that the larger glasses led to an almost 10% increase in sales.

“Wine will no doubt be a feature of some merry Christmas nights, but when it comes to how much we drink, wine glass size probably does matter,” says Professor Theresa Marteau, Director of the Behaviour and Health Research Unit at the University of Cambridge.

In a study published today in The BMJ, Professor Marteau and colleagues looked at wine glass capacity over time to help understand whether changes in glass size might have contributed to the steep rise in wine consumption over the past few decades.

“Wine glasses became a common receptacle from which wine was drunk around 1700,” says first author Dr Zorana Zupan. “This followed the development of lead crystal glassware by George Ravenscroft in the late 17th century, which led to the manufacture of less fragile and larger glasses than was previously possible.”

Through a combination of online searches and discussions with experts in antique glassware, including museum curators, the researchers obtained measurements of 411 glasses dating from 1700 to the present day. They found that wine glass capacity increased from 66ml in the 1700s to 417ml in the 2000s, with the mean wine glass size in 2016-17 being 449ml.

“Our findings suggest that the capacity of wine glasses in England increased significantly over the past 300 years,” adds Dr Zupan. “For the most part, this was gradual, but since the 1990s, the size has increased rapidly. Whether this led to the rise in wine consumption in England, we can’t say for certain, but a wine glass 300 years ago would only have held about a half of today’s small measure. On top of this, we also have some evidence that suggests wine glass size itself influences consumption.”
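
As a quick sanity check on the headline figure, the ratios implied by the capacities quoted above can be worked out directly. The short Python snippet below uses only the reported means; nothing else is assumed.

# Back-of-the-envelope check of the reported mean capacities.
capacity_1700s_ml = 66       # mean capacity in the 1700s
capacity_2000s_ml = 417      # mean capacity in the 2000s
capacity_2016_17_ml = 449    # mean capacity in 2016-17

print(round(capacity_2000s_ml / capacity_1700s_ml, 1))    # 6.3-fold increase by the 2000s
print(round(capacity_2016_17_ml / capacity_1700s_ml, 1))  # 6.8-fold by 2016-17, i.e. roughly seven-fold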

Increases in the size of wine glasses over time likely reflect changes in a number of factors including price, technology, societal wealth and wine appreciation. The ‘Glass Excise’ tax, levied in the mid-18th century, led to the manufacture of smaller glass products. This tax was abolished in 1845, and in the late Victorian era glass production began to shift from more traditional mouth-blowing techniques to more automated processes. These changes in production are reflected in the data, which show the smallest wine glasses during the 1700s with no increases in glass size during that time-period – the increase in size beginning in the 19th century.

Two changes in the 20th century likely contributed further to increased glass sizes. Wine glasses started to be tailored in both shape and size for different wine varieties, both reflecting and contributing to a burgeoning market for wine appreciation, with larger glasses considered important in such appreciation. From 1990 onwards, demand for larger wine glasses by the US market was met by an increase in the size of glasses manufactured in England, where a ready market was also found.

A further influence on wine glass size may have come both from those running bars and restaurants and from their customers. If sales of wine increased when sold in larger glasses, this may have incentivised vendors to use larger glasses. Larger wine glasses can also increase the pleasure of drinking wine, which may in turn increase the desire to drink more.

In England, wine is increasingly served in 250ml servings with smaller sizes of 125ml often absent from wine lists or menus, despite a regulatory requirement introduced in 2010 that licensees make customers aware that these smaller measures are available. A serving size of 250ml – one third of a standard 75cl bottle of wine and one fifth of the weekly recommended intake for low risk drinking – is larger than the mean capacity of a wine glass available in the 1980s.

Alongside increased wine glass capacity, the strength of wine sold in the UK since the 1990s has also increased, thereby likely further increasing any impact of larger wine glasses on the amount of pure alcohol being consumed by wine drinkers.

The researchers argue that if the impact of larger wine glasses upon consumption can be proven to be a reliable effect, then local licensing regulations limiting the size of glasses would expand the number of policy options for reducing alcohol consumption outside the home. Reducing the size of wine glasses in licensed premises might also shift the social norm of what a wine glass should look like, with the potential to influence the size of the glasses people use at home, where most alcohol, including wine, is consumed.

In the final line of their report, the researchers acknowledge the seasonal sensitivity to these suggestions: “We predict – with moderate confidence – that, while there will be some resistance to these suggestions, their palatability will be greater in the month of January than that of December.”

The research was funded by a Senior Investigator Award to Theresa Marteau from the National Institute for Health Research.

Reference
Zupan, Z et al. Wine glass size in England from 1700 to 2017: A measure of our time. BMJ; 14 Dec 2017; DOI: 10.1136/bmj.j5623

Image

WA1957.24.2.380 Enamelled Jacobite portrait glass. © Ashmolean Museum, University of Oxford


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Dolby Estate Gives Cambridge University Cavendish Lab £85m

Dolby estate gives Cambridge University Cavendish lab £85m

6 December 2017

source: www.bbc.co.uk

Image: Ray Dolby, who died in 2013 at the age of 80 (image copyright: Cambridge University)

The family of sound pioneer Ray Dolby has donated £85m from his estate to Cambridge University.

The US-born engineer was best known for his work in developing noise reduction and surround-sound technology. He died in 2013 aged 80.

The donation will go to the Department of Physics’ Cavendish Laboratory where Mr Dolby worked on his PhD in 1961.

It is the second-largest gift to the university in its 808-year history, after Bill Gates donated $210m in 2000.

Image: Ray Dolby gained a PhD in physics from Pembroke College, Cambridge (image copyright: Cambridge University)

Mr Dolby came to Cambridge as a Marshall Scholar in 1957 and studied physics at Pembroke College, which itself received £35m from his estate in 2015.

The bequest will complete the redevelopment of the Cavendish Laboratory, known as Cav III, with the Ray Dolby Centre set to open in 2021/22.

A research group and a professorship in physics will also be named in honour of the inventor.

Professor Andy Parker, head of the Cavendish, said: “In addition to serving as a home for physics research at Cambridge, it will be a top-class facility for the nation.

“This gift is the most significant investment in physics research in generations.”

Image: The money will go towards development of the new Cavendish Laboratory, which specialises in physics (image copyright: Jestico and Whiles)

Mr Dolby’s widow, Dagmar, said the university played a pivotal role in his life “both professionally and personally”.

Cambridge Vice-Chancellor Professor Stephen Toope also described the donation as “a fitting tribute to Ray Dolby’s legacy”.

“His research paved the way for an entire industry,” he said.

“A century from now, we can only speculate on which discoveries will alter the way we live our lives and which new industries will have been born in the Cavendish Laboratory, in large part thanks to this extraordinarily generous gift.”

The multimillion-pound donation also pushes Cambridge University’s £2bn fundraising campaign – which was launched in 2015 – over the halfway mark.

The campaign will support students and university facilities, as well as boost its international reputation.

Presenting Facts As ‘Consensus’ Bridges Conservative-Liberal Divide Over Climate Change

Presenting facts as ‘consensus’ bridges conservative-liberal divide over climate change

source: www.cam.ac.uk

New evidence shows that a ‘social fact’ highlighting expert consensus shifts perceptions across the US political spectrum – particularly among highly educated conservatives. Facts that encourage agreement are a promising way of cutting through today’s ‘post-truth’ bluster, say psychologists.

Even in our so-called post-truth environment, hope is not lost for the fact

Sander van der Linden

In the murk of post-truth public debate, facts can polarise. Scientific evidence triggers reaction and spin that ends up entrenching the attitudes of opposing political tribes.

Recent research suggests this phenomenon is actually stronger among the more educated, through what psychologists call ‘motivated reasoning’: where data is rejected or twisted – consciously or otherwise – to prop up a particular worldview.

However, a new study in the journal Nature Human Behaviour finds that one type of fact can bridge the chasm between conservative and liberal, and pull people’s opinions closer to the truth on one of the most polarising issues in US politics: climate change.

Previous research has broadly found US conservatives to be most sceptical of climate change. Yet by presenting a fact in the form of a consensus – “97% of climate scientists have concluded that human-caused global warming is happening” – researchers have now discovered that conservatives shift their perceptions significantly towards the scientific ‘norm’.

In an experiment involving over 6,000 US citizens, psychologists found that introducing people to this consensus fact reduced polarisation between higher educated liberals and conservatives by roughly 50%, and increased conservative belief in a scientific accord on climate change by 20 percentage points.

Moreover, the latest research confirms the prior finding that climate change scepticism is indeed more deeply rooted among highly educated conservatives. Yet exposure to the simple fact of a scientific consensus neutralises the “negative interaction” between higher education and conservatism that strongly embeds these beliefs.

“The vast majority of people want to conform to societal standards, it’s innate in us as a highly social species,” says Dr Sander van der Linden, study lead author from the University of Cambridge’s Department of Psychology.

“People often misperceive social norms, and seek to adjust once they are exposed to evidence of a group consensus,” he says, pointing to the example that college students always think their friends drink more than they actually do.

“Our findings suggest that presenting people with a social fact, a consensus of opinion among experts, rather than challenging them with blunt scientific data, encourages a shift towards mainstream scientific belief – particularly among conservatives.”

For van der Linden and his co-authors Drs Anthony Leiserowitz and Edward Maibach from Yale and George Mason universities in the US, social facts such as demonstrating a consensus can act as a “gateway belief”: allowing a gradual recalibration of private attitudes.

“Information that directly threatens people’s worldview can cause them to react negatively and become further entrenched in their beliefs. This ‘backfire effect’ appears to be particularly strong among highly educated US conservatives when it comes to contested issues such as manmade climate change,” says van der Linden.

“It is more acceptable for people to change their perceptions of what is normative in science and society. Previous research has shown that people will then adjust their core beliefs over time to match. This is a less threatening way to change attitudes, avoiding the ‘backfire effect’ that can occur when someone’s worldview is directly challenged.”

For the study, researchers conducted online surveys of 6,301 US citizens that adhered to nationally representative quotas of gender, age, education, ethnicity, region and political ideology.

The purpose of the study was masked by presenting it as a test of random media messages, with the climate change perception tests sandwiched between questions on consumer technology and popular culture messaging.

Half the sample were randomly assigned to receive the ‘treatment’ of exposure to the fact of scientific consensus, while the other half, the control group, were not.

Researchers found that perceptions of the scientific consensus on climate change among self-declared conservatives were, on average, 35 percentage points lower (64%) than the actual scientific consensus of 97%. Among liberals, perceptions were 20 percentage points lower.

They also found a small additional negative effect: when someone is highly educated and conservative they judge scientific agreement to be even lower.

However, once the treatment group were exposed to the ‘social fact’ of overwhelming scientific agreement, higher-educated conservatives shifted their perception of the scientific norm by 20 percentage points to 83% – almost in line with post-treatment liberals.

The added negative effect of conservatism plus high education was completely neutralised through exposure to the truth on scientific agreement around manmade climate change.

“Scientists as a group are still viewed as trustworthy and non-partisan across the political spectrum in the US, despite frequent attempts to discredit their work through ‘fake news’ denunciations and underhand lobbying techniques deployed by some on the right,” says van der Linden.

“Our study suggests that even in our so-called post-truth environment, hope is not lost for the fact. By presenting scientific facts in a socialised form, such as highlighting consensus, we can still shift opinion across political divides on some of the most pressing issues of our time.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Industrial Revolution Left A Damaging Psychological ‘Imprint’ On Today’s Populations

Industrial Revolution left a damaging psychological ‘imprint’ on today’s populations

source: www.cam.ac.uk

Study finds people in areas historically reliant on coal-based industries have more ‘negative’ personality traits. Psychologists suggest this cognitive die may well have been cast at the dawn of the industrial age.

The Industrial Revolution has a hidden psychological heritage, one that is imprinted on today’s psychological make-up of the regions of England and Wales

Jason Rentfrow

People living in the former industrial heartlands of England and Wales are more disposed to negative emotions such as anxiety and depressive moods, more impulsive and more likely to struggle with planning and self-motivation, according to a new study of almost 400,000 personality tests.

The findings show that, generations after the white heat of the Industrial Revolution and decades on from the decline of deep coal mining, the populations of areas where coal-based industries dominated in the 19th century retain a “psychological adversity”.

Researchers suggest this is the inherited product of selective migrations during mass industrialisation compounded by the social effects of severe work and living conditions.

They argue that the damaging cognitive legacy of coal is “reinforced and amplified” by the more obvious economic consequences of high unemployment we see today. The study also found significantly lower life satisfaction in these areas.

The UK findings, published in the Journal of Personality and Social Psychology, are supported by a North American “robustness check”, with less detailed data from US demographics suggesting the same patterns of post-industrial personality traits.

“Regional patterns of personality and well-being may have their roots in major societal changes underway decades or centuries earlier, and the Industrial Revolution is arguably one of the most influential and formative epochs in modern history,” says co-author Dr Jason Rentfrow, from Cambridge’s Department of Psychology.

“Those who live in a post-industrial landscape still do so in the shadow of coal, internally as well as externally. This study is one of the first to show that the Industrial Revolution has a hidden psychological heritage, one that is imprinted on today’s psychological make-up of the regions of England and Wales.”

An international team of psychologists, including researchers from the Queensland University of Technology, University of Texas, University of Cambridge and the Baden-Wuerttemberg Cooperative State University, used data collected from 381,916 people across England and Wales during 2009-2011 as part of the BBC Lab’s online Big Personality Test.

The team analysed test scores by looking at the “big five” personality traits: extraversion, agreeableness, conscientiousness, neuroticism and openness. The results were further dissected by characteristics such as altruism, self-discipline and anxiety.

The data was also broken down by region and county, and compared with several other large-scale datasets including coalfield maps and a male occupation census of the early 19th century (collated through parish baptism records, where the father listed his job).

The team controlled for an extensive range of other possible influences – from competing economic factors in the 19th century and earlier, through to modern considerations of education, wealth and even climate.

However, they still found significant personality differences for those currently occupying areas where large numbers of men had been employed in coal-based industries from 1813 to 1820 – as the Industrial Revolution was peaking.
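
To illustrate the shape of this kind of analysis – a minimal sketch only, not the study’s actual statistical model – the Python snippet below regresses a present-day regional personality score on a historical coal-employment measure while holding a few modern controls fixed. All variable names and numbers are hypothetical and invented purely for the example.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic regional data standing in for the kind of variables described above.
rng = np.random.default_rng(0)
n_regions = 300
df = pd.DataFrame({
    "coal_share_1813_20": rng.uniform(0, 0.6, n_regions),  # share of men in coal-based industries (hypothetical)
    "education": rng.normal(0, 1, n_regions),               # modern controls, standardised (hypothetical)
    "income": rng.normal(0, 1, n_regions),
    "climate": rng.normal(0, 1, n_regions),
})
# Outcome constructed so that neuroticism rises with historical coal share, plus noise.
df["neuroticism"] = 0.5 * df["coal_share_1813_20"] - 0.2 * df["education"] + rng.normal(0, 0.3, n_regions)

# Regress the personality score on historical coal employment, controlling for modern factors.
model = smf.ols("neuroticism ~ coal_share_1813_20 + education + income + climate", data=df).fit()
print(model.params["coal_share_1813_20"])  # association that survives after the controls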

Neuroticism was, on average, 33% higher in these areas compared with the rest of the country. In the ‘big five’ model of personality, this translates as greater emotional instability and a proneness to feelings of worry or anger, as well as a higher risk of common mental disorders such as depression and substance abuse.

In fact, in the further “sub-facet” analyses, these post-industrial areas scored 31% higher for tendencies toward both anxiety and depression.

Areas that ranked highest for neuroticism include Blaenau Gwent and Ceredigion in Wales, and Hartlepool in England.

Conscientiousness was, on average, 26% lower in former industrial areas. In the ‘big five’ model, this manifests as more disorderly and less goal-oriented behaviours – difficulty with planning and saving money. The underlying sub-facet of ‘order’ itself was 35% lower in these areas.

The lowest three areas for conscientiousness were all in Wales (Merthyr Tydfil, Ceredigion and Gwynedd), with English areas including Nottingham and Leicester.

The BBC Lab questionnaire also included an assessment of life satisfaction, which was on average 29% lower in former industrial centres.

While researchers say there will be many factors behind the correlation between personality traits and historic industrialisation, they offer two likely ones: migration and socialisation (learned behaviour).

The people migrating into industrial areas were often doing so to find employment in the hope of escaping poverty and distressing situations of rural depression – those experiencing high levels of ‘psychological adversity’.

However, people that left these areas, often later on, were likely those with higher degrees of optimism and psychological resilience, say researchers.

This “selective influx and outflow” may have concentrated so-called ‘negative’ personality traits in industrial areas – traits that can be passed down generations through combinations of experience and genetics.

Migratory effects would have been exacerbated by the ‘socialisation’ of repetitive, dangerous and exhausting labour from childhood – reducing well-being and elevating stress – combined with harsh conditions of overcrowding and atrocious sanitation during the age of steam.

The study’s authors argue their findings have important implications for today’s policymakers looking at public health interventions.

“The decline of coal in areas dependent on such industries has caused persistent economic hardship – most prominently high unemployment. This is only likely to have contributed to the baseline of psychological adversity the Industrial Revolution imprinted on some populations,” says co-author Michael Stuetzer from Baden-Württemberg Cooperative State University, Germany.

“These regional personality levels may have a long history, reaching back to the foundations of our industrial world, so it seems safe to assume they will continue to shape the well-being, health, and economic trajectories of these regions.”

The team note that, while they focused on the negative psychological imprint of coal, future research could examine possible long-term positive effects in these regions born of the same adversity – such as the solidarity and civic engagement witnessed in the labour movement.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Genetics Study Adds Further Evidence That Education Reduces Risk of Alzheimer’s Disease

Genetics study adds further evidence that education reduces risk of Alzheimer’s disease

source: www.cam.ac.uk

The theory that education protects against Alzheimer’s disease has been given further weight by new research from the University of Cambridge, funded by the European Union. The study is published today in The BMJ.

Many studies have shown that certain risk factors are more common in people with Alzheimer’s disease, but determining whether these factors actually cause Alzheimer’s is more difficult

Hugh Markus

Alzheimer’s disease is the leading cause of dementia. Its chief hallmark is the build-up of ‘plaques’ and ‘tangles’ of misshapen proteins, which lead to the gradual death of brain cells. People affected by Alzheimer’s experience memory and communication problems, disorientation, changes in behaviour and progressive loss of independence.

The causes of Alzheimer’s are largely unknown, and attempts to develop drug treatments to halt or reverse its effects have been disappointing. This has led to increasing interest in whether it is possible to reduce the number of cases of Alzheimer’s disease by tackling common risk factors that can be modified. In fact, research from the Cambridge Institute of Public Health has shown that the incidence of Alzheimer’s is falling in the UK, probably due to improvements in education, reductions in smoking, and better diet and exercise.

“Many studies have shown that certain risk factors are more common in people with Alzheimer’s disease, but determining whether these factors actually cause Alzheimer’s is more difficult,” says Professor Hugh Markus from the Department of Clinical Neurosciences at the University of Cambridge.

“For example, many studies have shown that the more years spent in full time education, the lower the risk of Alzheimer’s. But it is difficult to unravel whether this is an effect of education improving brain function, or whether it’s the case that people who are more educated tend to come from more wealthy backgrounds and therefore have a reduction in other risk factors that cause Alzheimer’s disease.”

Professor Markus led a study to unpick these factors using a technique known as ‘Mendelian randomisation’. This involves looking at an individual’s DNA and comparing genes associated with environmental risk factors – for example, genes linked to educational attainment or to smoking – and seeing which of these genes are also associated with Alzheimer’s disease. If a gene is associated with both, then it provides strong evidence that this risk factor really does cause the disease.
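
As a rough illustration of this logic – a minimal sketch, not the paper’s actual analysis pipeline – the snippet below applies the simplest form of Mendelian randomisation to made-up summary statistics: for each genetic variant, the ratio of its effect on the outcome (Alzheimer’s risk) to its effect on the exposure (years of education) estimates the causal effect, and the per-variant ratios are combined with inverse-variance weights. Every number here is invented.

import numpy as np

# Hypothetical per-variant summary statistics (all values invented for illustration).
beta_exposure = np.array([0.04, 0.06, 0.03, 0.05])      # variant effects on years of education
beta_outcome  = np.array([-0.02, -0.03, -0.01, -0.03])  # variant effects on Alzheimer's log-odds
se_outcome    = np.array([0.010, 0.012, 0.009, 0.011])  # standard errors of the outcome effects

wald_ratios = beta_outcome / beta_exposure        # per-variant causal estimates
weights = (beta_exposure / se_outcome) ** 2       # inverse-variance weights for the ratios
ivw_estimate = np.sum(weights * wald_ratios) / np.sum(weights)

print(round(ivw_estimate, 2))  # a negative value means more education is linked to lower risk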

As part of a project known as CoSTREAM, researchers studied genetic variants known to predispose people to a variety of different environmental risk factors, to see whether these variants were more common in 17,000 patients with Alzheimer’s disease. They found the strongest association with genetic variants that predict higher educational attainment.

“This provides further strong evidence that education is associated with a reduced risk of Alzheimer’s disease,” says first author Dr Susanna Larsson, now based at the Karolinska Institute, Sweden. “It suggests that improving education could have a significant effect on reducing the number of people who suffer from this devastating disease.”

Exactly how education might reduce the risk of Alzheimer’s is uncertain. Previous studies have shown that the same amount of damage in the brain is associated with less severe and less frequent Alzheimer’s in people who have received more education. One possible explanation is the idea of ‘cognitive reserve’ – the ability to recruit alternative brain networks or to use brain structures or networks not normally used to compensate for brain ageing. Evidence suggests that education helps improve brain wiring and networks and hence could increase this reserve.

The researchers also looked at other environmental risk factors, including smoking, vitamin D, and alcohol and coffee consumption. However, their results proved inconclusive. This may be because genes that predispose to smoking, for example, have only a very small effect on behaviour, they say.

The study was supported by the European Union’s Horizon 2020 Research and Innovation Programme.

Reference
Larsson, SC et al. Modifiable pathways in Alzheimer’s disease: Mendelian randomisation analysis. BMJ; 7 Dec 2017; DOI: 10.1136/bmj.j5375 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Clean Energy: Experts Outline How Governments Can Successfully Invest Before It’s Too Late

Clean energy: experts outline how governments can successfully invest before it’s too late

source: www.cam.ac.uk

Researchers distil twenty years of lessons from clean energy funding into six ‘guiding principles’. They argue that governments must eschew constant reinventions and grant scientists greater influence before our “window of opportunity” to avert climate change closes.

We urgently need to take stock of policy initiatives around the world that aim to accelerate new energy technologies

Laura Diaz Anadon

Governments need to give technical experts more autonomy and hold their nerve to provide more long-term stability when investing in clean energy, argue researchers in climate change and innovation policy in a new paper published today.

Writing in the journal Nature, the authors from UK and US institutions have set out guidelines for investment in world-changing energy innovation based on an analysis of the last twenty years of “what works” in clean energy research programs.

Their six simple “guiding principles” also include the need to channel innovation into the private sector through formal tech transfer programmes, and to think in terms of lasting knowledge creation rather than ‘quick win’ potential when funding new projects.

The authors offer a stark warning to governments and policymakers: learn from and build on experience before time runs out, rather than constantly reinventing aims and processes for the sake of political vanity.

“As the window of opportunity to avert dangerous climate change narrows, we urgently need to take stock of policy initiatives around the world that aim to accelerate new energy technologies and stem greenhouse gas emissions,” said Laura Diaz Anadon, Professor of Climate Change Policy at the University of Cambridge.

“If we don’t build on the lessons from previous policy successes and failures to understand what works and why, we risk wasting time and money in a way that we simply can’t afford,” said Anadon, who authored the new paper with colleagues from the Harvard Kennedy School, as well as the University of Minnesota’s Prof Gabriel Chan.

Public investments in energy research have risen since the lows of the mid-1990s and early 2000s. OECD members spent US$16.6 billion on new energy research and development (R&D) in 2016, compared to $10 billion in 2010. The EU and other nations pledged to double clean energy investment as part of 2015’s Paris Climate Change Agreement.

Recently, the UK government set out its own Clean Growth Strategy, committing £2.5 billion between 2015 and 2021, with hundreds of millions to be invested in new generations of small nuclear power stations and offshore wind turbines.

However, Anadon and colleagues point out that government funding for energy innovation has, in many cases, been highly volatile in the recent past: with political shifts resulting in huge budget fluctuations and process reinventions in the UK and US.

For example, the research team found that every single year between 1990 and 2017, one in five technology areas funded by the US Department of Energy (DoE) saw a budget shift of more than 30%. The Trump administration’s current plan is to slash 2018’s energy R&D budget by 35% across the board.
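
The kind of calculation behind that volatility figure is simple to sketch. The snippet below computes, for a made-up table of budgets (not actual DoE figures), the share of technology areas whose funding shifted by more than 30% from one year to the next.

import pandas as pd

# Hypothetical budgets by technology area (arbitrary units, invented for illustration).
budgets = pd.DataFrame(
    {"2015": [100, 50, 200, 80, 40], "2016": [115, 48, 270, 85, 44]},
    index=["solar", "wind", "nuclear", "grid", "storage"],
)

# Absolute year-over-year change as a fraction of the earlier year's budget.
change = (budgets["2016"] - budgets["2015"]).abs() / budgets["2015"]
share_volatile = (change > 0.30).mean()  # fraction of areas shifting by more than 30%
print(f"{share_volatile:.0%} of technology areas shifted by more than 30%")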

In the UK, every Prime Minister since 2000 has created new institutions to manage energy innovation and bridge the public and private sectors: Blair’s UK Carbon Trust; Brown’s Energy Technologies Institute; Cameron’s Catapults; and May’s Faraday Challenge, part of the latest Industrial Strategy.

“Experimentation has benefits, but also costs,” said Anadon. “Researchers are having to relearn new processes, people and programmes with every political transition – wasting time and effort for scientists, companies and policymakers.”

“Rather than repeated overhauls, existing programs should be continuously evaluated and updated. New programs should only be set up if they fill needs not currently met.”

More autonomy for project selection should be passed to active scientists, who are “best placed to spot bold but risky opportunities that managers miss,” say the authors of the new paper.

They point to projects instigated by the US National Labs producing more commercially-viable technologies than those dictated by DoE headquarters – despite the Labs holding a mere 4% of the DoE’s overall budget.

The six evidence-based guiding principles for clean energy investment are:

  • Give researchers and technical experts more autonomy and influence over funding decisions.
  • Build technology transfer into research organisations.
  • Focus demonstration projects on learning.
  • Incentivise international collaboration.
  • Adopt an adaptive learning strategy.
  • Keep funding stable and predictable.

From US researchers using the pace of Chinese construction markets to test energy reduction technologies, to the UK government harnessing behavioural psychology to promote energy efficiency, the authors highlight just a few examples of government investment that helped create or improve clean energy initiatives across the world.

“Let’s learn from experience on how to accelerate the transition to a cleaner, safer and more affordable energy system,” they write.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Pop-Up Mints and Coins Made From Prayers

Pop-up mints and coins made from prayers

source: www.cam.ac.uk

In the tumultuous upheaval of the English Civil War, Royalist castles under siege used ‘pop-up’ mints to make coins to pay their soldiers. A unique display at the Fitzwilliam Museum tells the centuries-old story of emergency currency made from gold, silver and compressed prayer books.

Emergency coins show how a micro-economy developed during times of siege.

Richard Kelleher

We’re used to the kind of circular coins that jangle in your pocket. But this one is lozenge-shaped and features a crude impression of a castle on its face. Its edges are sharp.

A silver shilling piece, it was made in 1648 during the bloody siege of Pontefract Castle. Today it’s one of 80 examples of currency on display at the Fitzwilliam Museum. The temporary exhibition – Currencies of Conflict – is thought to be the first dedicated exclusively to emergency money.

The focus is on coinage that reflects the turmoil of the English Civil War. But the exhibition also sets these coins within a wider context of 2,500 years of history and features some rarely shown items from the Fitzwilliam’s outstanding collection.

Between 1644 and 1649, the Royalist stronghold of Pontefract Castle was besieged three times by the Parliamentary forces led by Oliver Cromwell. Royalists loyal to King Charles I also held out at Carlisle, Newark and Scarborough Castles. All eventually fell to the Parliamentarians.

Examples of siege coinage from all four castles appear in the display. These coins were made by craftsmen working within the fortress walls, using metal obtained from melting down objects requisitioned from the occupants of the castle and town.

People, and especially soldiers, had to be paid to ensure their continued loyalty. “We don’t know how many emergency coins were made during these sieges but a contemporary journal entry from Carlisle suggests that £323 of shilling pieces were struck from requisitioned plate. They show how a micro-economy developed during times of siege,” said curator Richard Kelleher.

Although the quality and weight of the silver, and (rarely) gold, was generally good, the manufacture was often much less sophisticated. In temporary mints, pieces of metal were stamped with ‘dies’ of varied workmanship, from the crude designs at Carlisle to the accomplished work of the Newark engraver.

“In the emergency conditions of a siege, coins were sometimes diamond-shaped or hexagonal as these shapes were easier to cut to specific weights than conventionally minted coins which required the specialist machinery of the mint,” said Kelleher.

In the medieval period, numerous mints operated across England but by 1558 there was only one royal mint and it was in the Tower of London. During the Civil War, Charles I moved his court to Oxford, establishing a mint in the city. A stunning gold ‘triple unite’ (a coin worth £3 – one of the largest value coins ever minted) is an example of the fine workmanship of the Oxford mint.

On its face it shows a finely executed bust of the king holding a sword and olive branch, while the reverse carries the Oxford Declaration: “The Protestant religion, the laws of England, and the liberty of Parliament.” Another rare coin from Oxford is a silver pound coin weighing more than 120g showing the king riding a horse over the arms of his defeated enemies.

Also displayed is a silver medal, made during the short Protectorate headed by Oliver Cromwell. It commemorates the Battle of Dunbar of 1650 when Cromwell’s forces defeated an army loyal to Charles II. Its face shows the bust of Cromwell with battle scenes in the background, while the reverse shows the interior view of Parliament with the speaker sitting in the centre.

The earliest piece in the exhibition is an electrum coin dating from the 6th century BC. It originates from the kingdom of Lydia (western Turkey) and depicts a lion and a bull in combat. The earliest reference to coinage in the literature records a payment in coin by the Lydian king for a military purpose.

A Hungarian medal, commemorating the recapture of Budapest, provides a snapshot of a famous siege in progress. The walls are surrounded by cavalry and infantry complete with the machinery of siege warfare – artillery pieces – which have breached the walls.

This medal was also used as a vehicle for propaganda. The reverse carries the image of the Imperial eagle (representing the Habsburg Empire) defending its nest from an attacking dragon which represents the threat of the Ottoman Empire.

Much less elaborate are examples of coins made in circumstances when precious metals were in short supply. A 16th-century Dutch token is made from compressed prayer books and a piece from occupied Ghent in the First World War is made of card.

Extremely vulnerable to damp, these coins’ survival is little short of miraculous. During the siege of Leiden the mayor requisitioned all metal, including coins, for the manufacture of weapons and ammunition. In return, citizens were given token coins made from hymnals, prayer books and bibles.

Bringing the narrative of currency and conflict into the 20th century are paper currencies of the Second World War. Britain and its American allies issued currency for liberated areas of Italy and France, and for occupied Germany.

The temporary exhibition Currencies of Conflict: siege and emergency money from antiquity to WWII continues at the Fitzwilliam Museum until 23 February 2018. Admission is free.

Inset images: England, Charles I (1625-49) silver shilling siege piece, 1645, Carlisle; England, Charles I (1625-49) gold triple unite, 1643, struck at Oxford; Commonwealth (1649-60), silver medal of 1650 commemorating the Battle of Dunbar; Lydia, Croesus (561-546 BC), Gold stater. Foreparts of bull and lion facing each other; Leopold I (1658-1705) silver medal, ‘Budapest defended 1686’ by GF Nurnberger; Netherlands, Leiden, paper siege of 5 stuivers, 1574; Germany, Allied Military Currency, 1 mark, 1944.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Sir Isaac Newton’s Cambridge Papers Added To UNESCO’s Memory Of The World Register

Sir Isaac Newton’s Cambridge papers added to UNESCO’s Memory of the World Register

The Cambridge papers of Sir Isaac Newton, including early drafts and Newton’s annotated copies of Principia Mathematica – a work that changed the history of science – have been added to UNESCO’s International Memory of the World Register.

Newton’s papers are among the world’s most important collections in the western scientific tradition and are one of the Library’s most treasured collections.

Katrina Dean

Held at Cambridge University Library, Newton’s scientific and mathematical papers represent one of the most important archives of scientific and intellectual work on universal phenomena. They document the development of his thought on gravity, calculus and optics, and reveal ideas worked out through painstaking experiments, calculations, correspondence and revisions.

In combination with alchemical papers at King’s College, Cambridge and his notebooks and correspondence at Trinity College, Cambridge and the Fitzwilliam Museum, this represents the largest and most important collection of Newton’s papers worldwide.

Katrina Dean, Curator of Scientific Collections at Cambridge University Library said: “Newton’s papers are among the world’s most important collections in the western scientific tradition and are one of the Library’s most treasured collections. They were the first items to be digitised and added to the Cambridge Digital Library in 2011 and featured in our 600th anniversary exhibition Lines of Thought last year. In 2017, their addition to the UNESCO International Memory of the World Register recognises their unquestionable international importance.”

The Memory of the World Project is an international initiative to safeguard the documentary heritage of humanity against collective amnesia, neglect, the ravages of time and climatic conditions, and wilful and deliberate destruction. It calls for the preservation of valuable archival, library and private collections all over the world, as well as the reconstitution of dispersed or displaced documentary heritage, and the increased accessibility to and dissemination of these items.

Newton’s Cambridge papers, and those at the Royal Society, now join the archive of Winston Churchill, held at Cambridge University’s Churchill Archives Centre, on the UNESCO Register. They also join Newton’s theological and alchemical papers at the National Library of Israel, which were added in 2015.

The chief attractions in the Cambridge collection are Newton’s own copies of the first edition of the Principia (1687), covered with his corrections, revisions and additions for the second edition.

The Cambridge papers also include significant correspondence with natural philosophers and mathematicians including Henry Oldenburg, Secretary of the Royal Society; Edmond Halley, later Astronomer Royal, who persuaded Newton to publish the Principia; Richard Bentley, the Master of Trinity College; and John Collins, mathematician and fellow of the Royal Society who became an important collector of Newton’s works.

Added Dean: “One striking illustration of Newton’s experimental approach is in his ‘Laboratory Notebook’, which includes details of his investigations into light and optics in order to understand the nature of colour. His essay ‘Of Colours’ includes a diagram that illustrates the experiment in which he inserted a bodkin into his eye socket to put pressure on the eyeball to try to replicate the sensation of colour in normal sight.”

Another important item is Newton’s so-called ‘Waste Book’, a large notebook inherited from his stepfather. From 1664, he used the blank pages for optical and mathematical calculations and gradually mastered the analysis of curved lines, surfaces and solids. By 1665, he had invented the method of calculus. Newton later used the dated, documentary evidence provided by the Waste Book to argue his case in the priority dispute with Gottfried Wilhelm Leibniz over the invention of the calculus.

Cambridge University Librarian Jess Gardner said: “Newton’s work and life continue to attract wonder and new perspectives on our place in the Universe. Cambridge University Library will continue to work with scholars and curators worldwide to make Newton’s papers accessible now and for future generations.”

Isaac Newton entered Trinity College as an undergraduate in 1661 and became a Fellow in 1667. In 1669, he became Lucasian Professor of Mathematics at Cambridge University, a position he held until 1701.

Among the more personal items in the Cambridge collections is an undergraduate notebook recording Newton’s daily concerns, including his expenditure on white wine, wafers, shoe-strings and ‘a paire of stockings’, along with a guide to Latin pronunciation.

A notebook of 1662-1669 records Newton’s sins before and after Whitsunday of 1662, written in a coded shorthand and first deciphered between 1872 and 1888. Among them are ‘Eating an apple at Thy house’ and ‘Robbing my mothers box of plums and sugar’, along with the more serious ‘Wishing death and hoping it to some’, before a list of his expenses. These included chemicals, two furnaces and a recent edition of the Theatrum chemicum, edited by the publisher Lazarus Zetzner – one of the most comprehensive compilations of alchemical writings in the western tradition.

Cambridge University Library is also hosting a series of talks open to the public by Sarah Dry and Patricia Fara on Newton’s manuscripts and Newton’s role in Enlightenment culture and polite society on December 7 and December 14 respectively. For details and bookings, see: http://www.lib.cam.ac.uk/using-library/whats


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

£5.4 Million Centre Will Help Transform The UK’s Construction Sector For The Digital Age

£5.4 million centre will help transform the UK’s construction sector for the digital age

source: www.cam.ac.uk

The Government have announced £5.4 million in funding to launch the Centre for Digital Built Britain at the University of Cambridge, which will help people make better use of cities by championing the digital revolution in the built environment. The Centre is part of a landmark government-led investment in growing the UK’s construction sector.

This is a wonderful opportunity to put the breadth of research and industry engagement expertise from Cambridge at the heart of Digital Built Britain.

Jennifer Schooling

The Centre is a partnership between the Department of Business, Energy & Industrial Strategy and the University to support the transformation of the construction sector using digital technologies to better plan, build, maintain and use infrastructure. It will focus on the ongoing transformation of the built environment through the digital tools, standards and processes that are collectively known as Building Information Modelling (BIM). BIM enables the people building and managing our transport networks, cities and major infrastructure projects to take advantage of advances in the digital world to intelligently deliver better services and end products for UK citizens.

Led by Professor Andy Neely, Pro-Vice-Chancellor: Enterprise and Business Relations, the Centre builds on the expertise and experience of faculty from the Cambridge Centre for Smart Infrastructure and Construction (CSIC), Cambridge Big Data, the Distributed Information and Automation Lab (DIAL), the Cambridge Service Alliance (CSA) and the Institute for Manufacturing. The Cambridge researchers work with a team of specialists from the Digital Built Britain Programme and partners from industry and academia to develop and demonstrate policy and practical insights that will enable the exploitation of new and emerging technologies, data and analytics to enhance the natural and built environment, thereby driving up commercial competitiveness and productivity, as well as citizen quality of life and well-being.

“The Centre for Digital Built Britain will work in partnership with Government and industry to improve the performance, productivity and safety of construction through the better use of digital technologies,” said Professor Neely.

“The achievement of the BIM Task Group in delivering the Level 2 BIM programme has provided both the UK and increasingly a worldwide platform for the digitisation of the construction and services sectors.  We welcome the vast experience and capability Cambridge brings to the team and the creation of the Centre for Digital Built Britain,” said Dr Mark Bew MBE, Strategic Advisor to the Centre for Digital Built Britain.

“The construction and infrastructure sector are poised for a digital revolution, and Britain is well placed to lead it. Over the next decade advances in BIM will combine with the Internet of Things (IoT), data analytics, data-driven manufacturing and the digital economy to enable us to plan new buildings and infrastructure more effectively, build them at lower cost, operate and maintain them more efficiently, and deliver better outcomes to the people who use them,” said Dr Jennifer Schooling, Director of the Centre for Smart Infrastructure and Construction. “This is a wonderful opportunity to put the breadth of research and industry engagement expertise from Cambridge at the heart of Digital Built Britain.”

The UK is leading the world in its support of BIM implementation in the construction sector through its commitment to the Digital Built Britain Programme. By embedding Level 2 BIM in government projects such as Crossrail, the programme has contributed significantly to the Government’s £3 billion of efficiency savings between 2011 and 2015. Since 2016, all UK centrally funded projects have required Level 2 BIM, which has delivered considerable cost savings in construction procurement to date. Tasked with supporting innovation in the construction sector, the Construction Leadership Council has also put BIM at the heart of its sector strategy, Construction 2025, which commits to cutting built asset costs by 33 percent, and time and carbon by 50 percent. The Centre will continue and build on this transformative approach.

The Centre for Digital Built Britain will be based in the Maxwell Centre in West Cambridge and will be formally launched in Spring 2018.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Prehistoric Women’s Manual Work Was Tougher Than Rowing In Today’s Elite Boat Crews

Prehistoric women’s manual work was tougher than rowing in today’s elite boat crews

source: www.cam.ac.uk

The first study to compare ancient and living female bones shows that women from early agricultural eras had stronger arms than the rowers of Cambridge University’s famously competitive boat club. Researchers say the findings suggest a “hidden history” of gruelling manual labour performed by women that stretched across millennia.

By interpreting women’s bones in a female-specific context we can start to see how intensive, variable and laborious their behaviours were

Alison Macintosh

A new study comparing the bones of Central European women that lived during the first 6,000 years of farming with those of modern athletes has shown that the average prehistoric agricultural woman had stronger upper arms than living female rowing champions.

Researchers from the University of Cambridge’s Department of Archaeology say this physical prowess was likely obtained through tilling soil and harvesting crops by hand, as well as the grinding of grain for as much as five hours a day to make flour.

Until now, bioarchaeological investigations of past behaviour have interpreted women’s bones solely through direct comparison to those of men. However, male bones respond to strain in a more visibly dramatic way than female bones.

The Cambridge scientists say this has resulted in the systematic underestimation of the nature and scale of the physical demands borne by women in prehistory.

“This is the first study to actually compare prehistoric female bones to those of living women,” said Dr Alison Macintosh, lead author of the study published today in the journal Science Advances.

“By interpreting women’s bones in a female-specific context we can start to see how intensive, variable and laborious their behaviours were, hinting at a hidden history of women’s work over thousands of years.”

The study, part of the European Research Council-funded ADaPt (Adaption, Dispersals and Phenotype) Project, used a small CT scanner in Cambridge’s PAVE laboratory to analyse the arm (humerus) and leg (tibia) bones of living women who engage in a range of physical activity: from runners, rowers and footballers to those with more sedentary lifestyles.

The bone strengths of modern women were compared to those of women from early Neolithic agricultural eras through to farming communities of the Middle Ages.

“It can be easy to forget that bone is a living tissue, one that responds to the rigours we put our bodies through. Physical impact and muscle activity both put strain on bone, called loading. The bone reacts by changing in shape, curvature, thickness and density over time to accommodate repeated strain,” said Macintosh.

“By analysing the bone characteristics of living people whose regular physical exertion is known, and comparing them to the characteristics of ancient bones, we can start to interpret the kinds of labour our ancestors were performing in prehistory.”

Over three weeks during trial season, Macintosh scanned the limb bones of the Open- and Lightweight squads of the Cambridge University Women’s Boat Club, who ended up winning this year’s Boat Race and breaking the course record. These women, most in their early twenties, were training twice a day and rowing an average of 120km a week at the time.

The Neolithic women analysed in the study (from 7400-7000 years ago) had similar leg bone strength to modern rowers, but their arm bones were 11-16% stronger for their size than the rowers, and almost 30% stronger than typical Cambridge students.

The loading of the upper limbs was even more dominant in the study’s Bronze Age women (from 4300-3500 years ago), who had 9-13% stronger arm bones than the rowers but 12% weaker leg bones.

A possible explanation for this fierce arm strength is the grinding of grain. “We can’t say specifically what behaviours were causing the bone loading we found. However, a major activity in early agriculture was converting grain into flour, and this was likely performed by women,” said Macintosh.

“For millennia, grain would have been ground by hand between two large stones called a saddle quern. In the few remaining societies that still use saddle querns, women grind grain for up to five hours a day.

“The repetitive arm action of grinding these stones together for hours may have loaded women’s arm bones in a similar way to the laborious back-and-forth motion of rowing.”

However, Macintosh suspects that women’s labour was hardly likely to have been limited to this one behaviour.

“Prior to the invention of the plough, subsistence farming involved manually planting, tilling and harvesting all crops,” said Macintosh. “Women were also likely to have been fetching food and water for domestic livestock, processing milk and meat, and converting hides and wool into textiles.

“The variation in bone loading found in prehistoric women suggests that a wide range of behaviours were occurring during early agriculture. In fact, we believe it may be the wide variety of women’s work that in part makes it so difficult to identify signatures of any one specific behaviour from their bones.”

Dr Jay Stock, senior study author and head of the ADaPt Project, added: “Our findings suggest that for thousands of years, the rigorous manual labour of women was a crucial driver of early farming economies. The research demonstrates what we can learn about the human past through better understanding of human variation today.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Eye Contact With Your Baby Helps Synchronise Your Brainwaves

Eye contact with your baby helps synchronise your brainwaves

source: www.cam.ac.uk

Making eye contact with an infant makes adults’ and babies’ brainwaves ‘get in sync’ with each other – which is likely to support communication and learning – according to researchers at the University of Cambridge.

When the adult and infant are looking at each other, they are signalling their availability and intention to communicate with each other

Victoria Leong

When a parent and infant interact, various aspects of their behaviour can synchronise, including their gaze, emotions and heart rate, but little is known about whether their brain activity also synchronises – and what the consequences of this might be.

Brainwaves reflect the group-level activity of millions of neurons and are involved in information transfer between brain regions. Previous studies have shown that when two adults are talking to each other, communication is more successful if their brainwaves are in synchrony.

Researchers at the Baby-LINC Lab at the University of Cambridge carried out a study to explore whether infants can synchronise their brainwaves to adults too – and whether eye contact might influence this. Their results are published today in the Proceedings of the National Academy of Sciences (PNAS).

The team examined the brainwave patterns of 36 infants (17 in the first experiment and 19 in the second) using electroencephalography (EEG), which measures patterns of brain electrical activity via electrodes in a skull cap worn by the participants. They compared the infants’ brain activity to that of the adult who was singing nursery rhymes to the infant.
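
The paper’s own connectivity analysis is not described in detail here, but one minimal, widely used way to quantify ‘brainwave synchrony’ between two recorded signals is the phase-locking value: both signals are band-pass filtered, their instantaneous phases are extracted, and the consistency of the phase difference over time is measured. The sketch below is purely illustrative – the channel pairing, the 3-9 Hz band and the sampling rate are assumptions for the example, not details taken from the study:

# Illustrative sketch only: the phase-locking value (PLV) is one common way to
# quantify synchrony between two band-limited signals. The study's own
# connectivity measure may differ; the 3-9 Hz band and sampling rate here are
# assumptions for the example.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(3.0, 9.0)):
    """Estimate phase synchrony between two equal-length signals x and y."""
    # Band-pass filter both signals to the frequency band of interest.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    # Instantaneous phase via the analytic signal (Hilbert transform).
    phase_x = np.angle(hilbert(xf))
    phase_y = np.angle(hilbert(yf))
    # PLV: magnitude of the mean phase-difference vector (0 = no locking, 1 = perfect).
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Toy usage: two noisy signals sharing a 6 Hz component should give a high PLV.
fs = 250
t = np.arange(0, 10, 1 / fs)
adult = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)
infant = np.sin(2 * np.pi * 6 * t + 0.3) + 0.5 * np.random.randn(t.size)
print(round(phase_locking_value(adult, infant, fs), 2))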

In the first of two experiments, the infant watched a video of an adult as she sang nursery rhymes. First, the adult – whose brainwave patterns had already been recorded – was looking directly at the infant. Then, she turned her head to avert her gaze, while still singing nursery rhymes. Finally, she turned her head away, but her eyes looked directly back at the infant.

As anticipated, the researchers found that infants’ brainwaves were more synchronised to the adult’s when the adult’s gaze met the infant’s, as compared to when her gaze was averted. Interestingly, the greatest synchronising effect occurred when the adult’s head was turned away but her eyes still looked directly at the infant. The researchers say this may be because such a gaze appears highly deliberate, and so provides a stronger signal to the infant that the adult intends to communicate with her.

In the second experiment, a real adult replaced the video. She only looked either directly at the infant or averted her gaze while singing nursery rhymes. This time, however, her brainwaves could be monitored live to see whether her brainwave patterns were being influenced by the infant’s as well as the other way round.

This time, both infants and adults became more synchronised to each other’s brain activity when mutual eye contact was established. This occurred even though the adult could see the infant at all times, and infants were equally interested in looking at the adult even when she looked away. The researchers say that this shows that brainwave synchronisation isn’t just due to seeing a face or finding something interesting, but about sharing an intention to communicate.

To measure infants’ intention to communicate, the researchers counted how many ‘vocalisations’ infants made to the experimenter. As predicted, infants made a greater effort to communicate, making more ‘vocalisations’, when the adult made direct eye contact – and individual infants who made longer vocalisations also had higher brainwave synchrony with the adult.

Dr Victoria Leong, lead author on the study, said: “When the adult and infant are looking at each other, they are signalling their availability and intention to communicate with each other. We found that both adult and infant brains respond to a gaze signal by becoming more in sync with their partner. This mechanism could prepare parents and babies to communicate, by synchronising when to speak and when to listen, which would also make learning more effective.”

Dr Sam Wass, last author on the study, said: “We don’t know what it is, yet, that causes this synchronous brain activity. We’re certainly not claiming to have discovered telepathy! In this study, we were looking at whether infants can synchronise their brains to someone else, just as adults can. And we were also trying to figure out what gives rise to the synchrony.

“Our findings suggested eye gaze and vocalisations may both, somehow, play a role. But the brain synchrony we were observing was at such high time-scales – of three to nine oscillations per second – that we still need to figure out how exactly eye gaze and vocalisations create it.”

This research was supported by an ESRC Transformative Research Grant to Dr Leong and Dr Wass.

Reference
Leong, V et al. Speaker gaze increases infant-adult connectivity. PNAS; 28 Nov 2017; DOI: 10.1101/108878


Researcher profile: Dr Victoria Leong

Dr Victoria Leong is an Affiliated Lecturer at Cambridge’s Department of Psychology, and also an Assistant Professor of Psychology at Nanyang Technological University, Singapore. Her research aims to understand how parents and infants communicate and learn from each other, and the brain mechanisms that help them to interact effectively as social partners.

“The Baby-LINC lab is designed to look like a home living room so that mothers and babies feel comfortable,” she says. In the lab, the team uses a wireless EEG system to measure infants’ brain activity, which means that babies don’t have to be tethered to a computer and recordings can run for longer periods of time. “This is invaluable if the baby needs a nap or a nappy change in-between doing our tasks!”

Dr Leong says she is passionate about “real-world neuroscience”. In other words, “understanding and not ignoring the very real – and often very messy – human social contexts that infiltrate brain processes”. This means that in addition to world-class facilities and methods, the ability to collect robust data also depends on keeping the infants relaxed and happy. “Many a tantrum can be averted by the judicious and timely application of large soapy bubbles and rice cakes. The ability to blow large charming bubbles thereafter became a key criterion for recruiting research assistants!”

The research project came about “over a cup of tea [with Sam Wass] and a notepad to scratch out some frankly outlandish ideas about brain-to-brain synchrony”. They received £3,995 with the help of Cambridge Neuroscience and Cambridge Language Sciences for a pilot project and within a year went on to secure an ESRC Transformative Research Grant, which allowed them to significantly scale-up research operations, and to build the first mother-infant EEG hyperscanning facility in the UK (the Baby-LINC Lab).

“Cambridge is one of probably only a handful of highly-creative research environments in the world where young, untested post-doctoral researchers can organically come together, develop ambitious ideas, and have the support to try these out,” she says. “I am very proud of our humble beginnings, because they remind me that even a small handful of resources, wisely invested with hard work, can grow into world-class research.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Going Underground: Cambridge Digs Into The History of Geology With Landmark Exhibition

Going underground: Cambridge digs into the history of geology with landmark exhibition

source: www.cam.ac.uk

A box full of diamonds, volcanic rock from Mount Vesuvius, and the geology guide that Darwin packed for his epic voyage on the Beagle will go on display in Cambridge this week as part of the first major exhibition to celebrate geological map-making.

We show how for the first time people were encouraged to think about the secretive world beneath their feet.

Allison Ksiazkiewicz

Uncovering how the ground beneath our feet was mapped for the first time – and revealing some of the controversies and tragedies geology brought to the surface of intellectual debate, Landscapes Below opens to the public on Friday, November 24, at Cambridge University Library.

Featuring the biggest-ever object (1.9m x 1.6m) to go on display at the Library: George Bellas Greenough’s 1819 A Geological Map of England and Wales (the first map produced by the Geological Society of London), as well as a visually stunning collection of maps from the earliest days of geology – the exhibition explores how these new subterranean visions of the British landscape influenced our understanding of the Earth. All the maps belonging to the Library are going on display for the first time.

“I think the maps are beautiful objects, tell fascinating stories and frame geology in a new light,” said exhibition curator Allison Ksiazkiewicz. “This was a new take on nature and a new way of thinking about the landscape for those interested in nature.

“We show how the early pioneers of this new science wrestled with the ideas of a visual vocabulary – and how for the first time people were encouraged to think about the secretive world beneath their feet.”

As well as maps, Landscapes Below also brings together an extraordinary collection of fossils, artworks and a collection of 154 diamonds, on loan from the Sedgwick Museum of Earth Sciences. Displayed together for the first time, the diamonds were collected, arranged and described by Jacques Louis, Comte de Bournon, who later became the Keeper of the Royal Mineral Collection for King Louis XVIII.

Another important exhibit on display for the first time is the first edition of Georges Cuvier and Alexandre Brongniart’s Researches on the Fossil Bones of Quadrupeds (1811), on loan from Trinity College. It examined the geology of the Paris Basin and revolutionised what was considered ‘young’ in geological terms.

Artists were also keen to accurately depict the geological landscape. After surviving Captain Cook’s ill-fated third voyage of discovery, the artist John Webber returned to England and travelled around the country painting landscapes and geological formations, as seen in Landscape of Rocks in Derbyshire. Christopher Packe’s A New Philosophico-Chorographical Chart of East-Kent (1743), on loan from the Geological Society of London, is a remarkable engraved map that draws on early modern medicine in the interpretation of the surrounding landscape.

“The objects we’re putting on display show the many different applications of geological knowledge,” added Ksiazkiewicz. “Whether it’s a map showing the coal fields of Lancashire in the 1830s – or revealing how this new science was used for economic and military reasons.”

In many ways, the landscapes the earliest geologists worked among became battlegrounds as a scientific old guard – loyal to the established pursuits of mineralogy and chemistry – opposed a new generation of scientists intent on using the fossil record in the study of the Earth’s age and formation.

Exhibitions Officer Chris Burgess said: “Maps were central to the development of geology but disagreement between its leading figures was common. Maps of the period did not just show new knowledge but represented visible arguments about how that knowledge should be recorded.”

The exhibition also includes objects from those with rather tragic histories, including William Smith – whose famous 1815 Geological Map of England has been described as the ‘Magna Carta of geology’. Despite publishing the world’s first geological map (which is still used as the basis of such maps today), Smith was shunned by the scientific community for many years, was declared bankrupt, and ended up in debtors’ prison.

John MacCulloch, who produced the Geological Map of Scotland, did not live to see his work published: his honeymoon carriage overturned, killing him at the age of 61. He spent 15 summers surveying Scotland, after convincing the Board of Ordnance to sponsor the project. There was some dispute about how MacCulloch calculated his mileage and spent the funds, and the Ordnance only paid for six summers’ worth of work. Five summers were paid for by the Treasury and four from his own pocket.

Added Ksiazkiewicz: “Not only do these maps and objects represent years of work by individuals looking to develop a new science of the Earth, they stir the imagination. You can imagine yourself walking across the landscape and absorbing all that comes with it – views, antiquities, fossils, and vegetation. And weather, there’s always weather.”

Landscapes Below runs from November 25, 2017 to March 29, 2018 at Cambridge University Library’s Milstein Exhibition Centre. Admission is free. Opening times are Mon-Fri 9am-6pm and Saturday 9am-4.30pm. Closed Sundays.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

How To Cut Your Lawn For Grasshoppers

How to cut your lawn for grasshoppers

source: www.cam.ac.uk

Picture a grasshopper landing randomly on a lawn of fixed area. If it then jumps a certain distance in a random direction, what shape should the lawn be to maximise the chance that the grasshopper stays on the lawn after jumping?

The grasshopper problem is a rather nice one, as it helps us try out techniques for the physics problems we really want to get to.

Adrian Kent

One could be forgiven for wondering what the point of such a question might be. But the solution, proposed by theoretical physicists in the UK and the US, has some intriguing connections to quantum theory, which describes the behaviour of particles at the atomic and sub-atomic scales. Systems based on the principles of quantum theory could lead to a revolution in computing, financial trading, and many other fields.

The researchers, from the University of Cambridge and the University of Massachusetts Amherst, used computational methods inspired by the way metals are strengthened by heating and cooling to solve the problem and find the ‘optimal’ lawn shape for different grasshopper jump distances. Their results are reported in the journal Proceedings of the Royal Society A.

For the mathematically-inclined gardeners out there, the optimal lawn shape changes depending on the distance of the jump. Counter-intuitively, a circular lawn is never optimal, and instead, more complex shapes, from cogwheels to fans to stripes, are best at retaining hypothetical grasshoppers. Interestingly, the shapes bear a resemblance to shapes seen in nature, including the contours of flowers, the patterns in seashells and the stripes on some animals.
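
To make the quantity being optimised concrete, the sketch below (not the authors’ code) uses a simple Monte Carlo estimate of the grasshopper’s chance of staying on one candidate lawn – a disc of unit area – for a given jump distance; the jump distances shown are arbitrary examples:

# Minimal Monte Carlo sketch of the objective: the probability that a
# grasshopper landing uniformly on a lawn of unit area, then jumping distance
# d in a uniformly random direction, is still on the lawn afterwards.
# The candidate lawn here is a disc of unit area (which the paper shows is
# never the optimal shape).
import numpy as np

def disc_retention_probability(d, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    r = 1.0 / np.sqrt(np.pi)                      # a disc of area 1 has this radius
    # Uniform landing points inside the disc (sample radius via sqrt for uniform area density).
    rho = r * np.sqrt(rng.random(n))
    theta = 2 * np.pi * rng.random(n)
    x, y = rho * np.cos(theta), rho * np.sin(theta)
    # Jump of length d in a uniformly random direction.
    phi = 2 * np.pi * rng.random(n)
    x2, y2 = x + d * np.cos(phi), y + d * np.sin(phi)
    return np.mean(x2**2 + y2**2 <= r**2)         # fraction still inside the disc

for d in (0.1, 0.3, 0.5):
    print(d, round(disc_retention_probability(d), 3))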

“The grasshopper problem is a rather nice one, as it helps us try out techniques for the physics problems we really want to get to,” said paper co-author Professor Adrian Kent, of Cambridge’s Department of Applied Mathematics and Theoretical Physics. Kent’s primary area of research is quantum physics, and his co-author Dr Olga Goulko works in computational physics.

To find the best lawn, Goulko and Kent had to convert the grasshopper problem from a mathematical problem to a physics one, by mapping it to a system of atoms on a grid. They used a technique called simulated annealing, which is inspired by a process of heating and slowly cooling metal to make it less brittle. “The process of annealing essentially forces the metal into a low-energy state, and that’s what makes it less brittle,” said Kent. “The analogue in a theoretical model is you start in a random high-energy state and let the atoms move around until they settle into a low-energy state. We designed a model so that the lower its energy, the greater the chance the grasshopper stays on the lawn. If you get the same answer – in our case, the same shape – consistently, then you’ve probably found the lowest-energy state, which is the optimal lawn shape.”
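
The following is a minimal sketch of that general recipe rather than the authors’ implementation: the lawn is a fixed number of cells on a grid (keeping the area constant), a proposed move swaps one lawn cell for one empty cell, the ‘energy’ is the negative of a Monte Carlo estimate of the retention probability, and moves are accepted with the Metropolis rule while the temperature is slowly lowered. The grid size, jump length, cooling schedule and sample counts are all illustrative assumptions:

# Simulated-annealing sketch (illustrative, not the authors' code): optimise a
# grid lawn of fixed area so that a grasshopper jumping a fixed distance is as
# likely as possible to land back on the lawn.
import numpy as np

rng = np.random.default_rng(1)
N, CELLS, D = 40, 400, 6.0        # grid size, number of lawn cells, jump length (in cell units)

def retention(lawn, samples=2000):
    """Monte Carlo estimate of the probability the grasshopper stays on the lawn."""
    ys, xs = np.nonzero(lawn)
    idx = rng.integers(len(xs), size=samples)
    phi = 2 * np.pi * rng.random(samples)
    x2 = xs[idx] + 0.5 + D * np.cos(phi)          # start at cell centres, then jump
    y2 = ys[idx] + 0.5 + D * np.sin(phi)
    inside = (x2 >= 0) & (x2 < N) & (y2 >= 0) & (y2 < N)
    ok = np.zeros(samples, dtype=bool)
    ok[inside] = lawn[y2[inside].astype(int), x2[inside].astype(int)]
    return ok.mean()

# Start from a random lawn of the required area.
lawn = np.zeros((N, N), dtype=bool)
flat = rng.choice(N * N, size=CELLS, replace=False)
lawn[np.unravel_index(flat, lawn.shape)] = True

score, T = retention(lawn), 0.05
for step in range(5000):
    on, off = np.flatnonzero(lawn), np.flatnonzero(~lawn)
    i, j = rng.choice(on), rng.choice(off)        # propose swapping one lawn cell for one empty cell
    trial = lawn.copy().ravel()
    trial[i], trial[j] = False, True
    trial = trial.reshape(N, N)
    new = retention(trial)
    # Metropolis rule: always accept improvements, sometimes accept worse moves.
    if new >= score or rng.random() < np.exp((new - score) / T):
        lawn, score = trial, new
    T *= 0.999                                    # slow cooling

print("estimated retention probability of final lawn:", round(score, 3))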

For different jump distances, the simulated annealing process turned up a variety of shapes, from cogwheels for short jump distances, through to fan shapes for medium jumps, and stripes for longer jumps. “If you asked a pure mathematician, their first guess might be that the optimal shape for a short jump is a disc, but we’ve shown that’s never the case,” said Kent. “Instead we got some weird and wonderful shapes – our simulations gave us a complicated and rich array of structures.”

Goulko and Kent began studying the grasshopper problem to try to better understand the difference between quantum theory and classical physics. When measuring the spin – the intrinsic angular momentum – of two particles on two random axes for particular states, quantum theory predicts you will get opposite answers more often than any classical model allows, but we don’t yet know how big the gap between classical and quantum is in general. “To understand precisely what classical models do allow, and see how much stronger quantum theory is, you need to solve another version of the grasshopper problem, for lawns on a sphere,” said Kent. Having developed and tested their techniques for grasshoppers on a two-dimensional lawn, the authors plan to look at grasshoppers on a sphere in order to better understand the so-called Bell inequalities, which describe the classical-quantum gap.
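
For context (standard textbook material, not a result from this paper): in the CHSH form of Bell’s inequality, the correlations E between spin measurements along two axis choices per side are combined as S = E(a,b) − E(a,b′) + E(a′,b) + E(a′,b′); any classical (local hidden variable) model must satisfy |S| ≤ 2, whereas quantum theory allows values up to 2√2. The spherical-lawn version of the grasshopper problem is aimed at sharpening bounds of this kind.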

The lawn shapes which Goulko and Kent found also echo some shapes found in nature. The famous mathematician and code-breaker Alan Turing came up with a theory in 1952 on the origin of patterns in nature, such as spots, stripes and spirals, and the researchers say their work may also help explain the origin of some patterns. “Turing’s theory involves the idea that these patterns arise as solutions to reaction-diffusion equations,” said Kent. “Our results suggest that a rich variety of pattern formation can also arise in systems with essentially fixed-range interactions. It may be worth looking for explanations of this type in contexts where highly regular patterns naturally arise and are not otherwise easily explained.”

Reference:
Olga Goulko and Adrian Kent. ‘The grasshopper problem.’ Proceedings of the Royal Society A (2017). DOI: http://dx.doi.org/10.1098/rspa.2017.0494


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Report Highlights Opportunities and Risks Associated With Synthetic Biology and Bioengineering

Report highlights opportunities and risks associated with synthetic biology and bioengineering

source: www.cam.ac.uk
Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing ‘real-world’ problems.

In a feature article published in the open access journal eLife, an international team of experts led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge, capture perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems, potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility to improve human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and this could lead to new socioeconomic classes, as only those who can pay for such care themselves can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic ‘cloud labs’ where digital information is transformed into DNA then expressed in some target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme.

Reference
Wintle, BC, Boehm, CR et al. A transatlantic perspective on 20 emerging issues in biological engineering. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Ancient Fish Scales and Vertebrate Teeth Share An Embryonic Origin

Ancient fish scales and vertebrate teeth share an embryonic origin

source: www.cam.ac.uk

Latest findings support the theory that teeth in the animal kingdom evolved from the jagged scales of ancient fish, the remnants of which can be seen today embedded in the skin of sharks and skate.

This ancient dermal skeleton has undergone considerable reductions and modifications through time

Andrew Gillis

In biology, one long-running debate has teeth: whether ancient fish scales moved into the mouth with the origin of jaws, or if the tooth had its own evolutionary inception.

Recent studies on species such as zebrafish showed scales and teeth developing from distinctly different clusters of cells in fish embryos, pouring cold water on ‘teeth from scales’ theories.

However, while most fish in the sea have bones, one ancient lineage – sharks, skates and rays – possess skeletons made entirely of cartilage.

These cartilaginous fish retain some primitive characteristics that have been lost in their bony counterparts, including small spiky scales embedded in their skin called ‘dermal denticles’ that bear a striking resemblance to jagged teeth.

Now, researchers at the University of Cambridge have used fluorescent markers to track cell development in the embryo of a cartilaginous fish – a little skate in this case – and found that these thorny scales are in fact created from the same type of cells as teeth: neural crest cells.

The findings, published in the journal PNAS, support the theory that, in the depths of early evolution, these ‘denticle’ scales were carried into the emerging mouths of jawed vertebrates to form teeth. Jawed vertebrates now make up 99% of all living vertebrates, from fish to mammals.

“The scales of most fish that live today are very different from the ancient scales of early vertebrates,” says study author Dr Andrew Gillis from Cambridge’s Department of Zoology and the Marine Biological Laboratory in Woods Hole.

“Primitive scales were much more tooth-like in structure, but have been retained in only a few living lineages, including that of cartilaginous fishes such as skates and sharks.

“Stroke a shark and you’ll find it feels rougher than other fish, as shark skin is covered entirely in dermal denticles. There’s evidence that shark skin was actually used as sandpaper as early as the Bronze Age,” says Gillis.

“By labelling the different types of cells in the embryos of skate, we were able to trace their fates. We show that, unlike most fish, the denticle scales of sharks and skate develop from neural crest cells, just like teeth.

“Neural crest cells are central to the process of tooth development in mammals. Our findings suggest a deep evolutionary relationship between these primitive fish scales and the teeth of vertebrates.

“Early jawless vertebrates were filter feeders – sucking in small prey items from the water. It was the advent of both jaws and teeth that allowed vertebrates to begin processing larger and more complex prey.”

The very name of these scales, dermal denticles, alludes to the fact that they are formed of dentine: a hard calcified tissue that makes up the majority of a tooth, sitting underneath the enamel.

The jagged dermal denticles on sharks and skate – and, quite possibly, vertebrate teeth – are remnants of the earliest mineralised skeleton of vertebrates: superficial armour plating.

This armour would have perhaps peaked some 400 million years ago in now-extinct jawless vertebrate species, as protection against predation by ferocious sea scorpions, or even their early jawed kin.

The Cambridge scientists hypothesise that these early armour plates were multi-layered: consisting of a foundation of bone and an outer layer of dentine – with the different layers deriving from different types of cells in unborn embryos.

These layers were then variously retained, reduced or lost in different vertebrate lineages over the course of evolution. “This ancient dermal skeleton has undergone considerable reductions and modifications through time,” says Gillis.

“The sharks and skate have lost the bony under-layer, while most fish have lost the tooth-like dentine outer layer. A few species, such as the bichir, a popular fish in home aquariums, have retained aspects of both layers of this ancient external skeleton.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.