Cambridge Team Receives £5 Million To Help GPs Spot ‘Difficult-To-Diagnose’ Cancers

source: www.cam.ac.uk

Researchers in Cambridge are set to receive a £5 million Cancer Research UK Catalyst Award to improve the early detection of cancers in GP surgeries. The CanTest team, led by Dr Fiona Walter from the University of Cambridge, will work with researchers at three UK sites and across the globe on a five-year project that will help GPs to detect cancers in a primary care setting, enabling patients to benefit from innovative approaches and new technologies, and reducing the burden of referrals.

We know that GPs sometimes have to wait weeks for results before they can make any decisions for their patients. We’re trying to reduce this time by assessing ways that GPs could carry out the tests by themselves, as long as it’s safe and sensible to do so

Fiona Walter

The research will prioritise ‘difficult-to-diagnose’ cancers, which are also associated with poorer survival outcomes, and will look at both existing and novel technologies.

Dr Walter, from the Department of Public Health and Primary Care, says: “We know that GPs sometimes have to wait weeks for results before they can make any decisions for their patients. We’re trying to reduce this time by assessing ways that GPs could carry out the tests by themselves, as long as it’s safe and sensible to do so. We are open to assessing many different tests, and we’re excited to hear from potential collaborators.”

The Award aims to boost progress aligned with Cancer Research UK’s strategic priorities by building new collaborations within and between institutions; the project also involves researchers based at the University of Exeter, UCL (University College London), the University of Leeds and a number of international institutions.

“This is a fantastic opportunity to transform cancer diagnosis and we are delighted that Cancer Research UK is investing so substantially in primary care cancer research,” adds Dr Walter. “This award will enable us to nurture a new generation of researchers from a variety of backgrounds to work in primary care cancer diagnostics, creating an educational ‘melting pot’ to rapidly expand the field internationally.”

The Catalyst Award supports capacity building and collaboration in population health with up to £5 million awarded to enable teams to deliver impact over and above what they could do alone.

Sir Harpal Kumar, Cancer Research UK’s chief executive, said: “This collaboration will help us discover new and more effective ways to diagnose cancer by applying different methods to GP surgeries, and finding out what really works for them on the job.

“By investing in future experts in this field, it will allow us to continue searching for the best way to diagnose cancer patients for many years to come. This has potential not only to save GPs’ and patients’ time, but also to reduce the anxiety patients feel when waiting for their results.”

Adapted from a press release by Cancer Research UK.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Personality Traits Linked To Differences In Brain Structure

source: www.cam.ac.uk

Our personality may be shaped by how our brain works, but in fact the shape of our brain can itself provide surprising clues about how we behave – and our risk of developing mental health disorders – suggests a study published today.

Linking how brain structure is related to basic personality traits is a crucial step to improving our understanding of the link between the brain morphology and particular mood, cognitive, or behavioural disorders

Luca Passamonti

According to psychologists, the extraordinary variety of human personality can be broken down into the so-called ‘Big Five’ personality traits, namely neuroticism (how moody a person is), extraversion (how enthusiastic a person is), openness (how open-minded a person is), agreeableness (a measure of altruism), and conscientiousness (a measure of self-control).

In a study published today in the journal Social Cognitive and Affective Neuroscience, an international team of researchers from the UK, US and Italy has analysed a brain imaging dataset from over 500 individuals that has been made publicly available by the Human Connectome Project, a major US initiative funded by the National Institutes of Health. In particular, the researchers looked at differences in brain cortical anatomy (the structure of the outer layer of the brain) as indexed by three measures – the thickness, area, and amount of folding in the cortex – and how these measures related to the Big Five personality traits.

“Evolution has shaped our brain anatomy in a way that maximizes its area and folding at the expense of reduced thickness of the cortex,” explains Dr Luca Passamonti from the Department of Clinical Neurosciences at the University of Cambridge. “It’s like stretching and folding a rubber sheet – this increases the surface area, but at the same time the sheet itself becomes thinner. We refer to this as the ‘cortical stretching hypothesis’.”

“Cortical stretching is a key evolutionary mechanism that enabled human brains to expand rapidly while still fitting into our skulls, which grew at a slower rate than the brain,” adds Professor Antonio Terracciano from the Department of Geriatrics at the Florida State University. “Interestingly, this same process occurs as we develop and grow in the womb and throughout childhood, adolescence, and into adulthood: the thickness of the cortex tends to decrease while the area and folding increase.”

In addition, as we get older, neuroticism goes down – we become better at handling emotions. At the same time, conscientiousness and agreeableness go up – we become progressively more responsible and less antagonistic.

The researchers found that high levels of neuroticism, which may predispose people to develop neuropsychiatric disorders, were associated with increased thickness as well as reduced area and folding in some regions of the cortex such as the prefrontal-temporal cortices at the front of the brain.

In contrast, openness, a personality trait linked with curiosity, creativity and a preference for variety and novelty, was associated with the opposite pattern: reduced thickness and increased area and folding in some prefrontal cortices.

“Our work supports the notion that personality is, to some degree, associated with brain maturation, a developmental process that is strongly influenced by genetic factors,” says Dr Roberta Riccelli from Italy.

“Of course, we are continually shaped by our experiences and environment, but the fact that we see clear differences in brain structure which are linked with differences in personality traits suggests that there will almost certainly be an element of genetics involved,” says Professor Nicola Toschi from the University ‘Tor Vergata’ in Rome. “This is also in keeping with the notion that differences in personality traits can be detected early on during development, for example in toddlers or infants.”

The volunteers whose brains were imaged as part of the Human Connectome Project were all healthy individuals aged between 22 and 36 years with no history of neuro-psychiatric or other major medical problems. However, the relationship between differences in brain structure and personality traits in these people suggests that the differences may be even more pronounced in people who are more likely to experience neuro-psychiatric illnesses.

“Linking how brain structure is related to basic personality traits is a crucial step to improving our understanding of the link between the brain morphology and particular mood, cognitive, or behavioural disorders,” adds Dr Passamonti. “We also need to have a better understanding of the relation between brain structure and function in healthy people to figure out what is different in people with neuropsychiatric disorders.”

This is not the first time the researchers have found links between our brain structure and behaviour. A study published by the group last year found that the brains of teenagers with serious antisocial behaviour problems differ significantly in structure from those of their peers.

Reference
Riccelli, R et al. Surface-based morphometry reveals the neuroanatomical basis of the five-factor model of personality. Social Cognitive and Affective Neuroscience; 25 Jan 2017; DOI: 10.1093/scan/nsw175


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Psychological ‘Vaccine’ Could Help Immunise Public Against ‘Fake News’ On Climate Change – Study


source: www.cam.ac.uk

New research finds that misinformation on climate change can psychologically cancel out the influence of accurate statements. However, if legitimate facts are delivered with an “inoculation” – a warning dose of misinformation – some of the positive influence is preserved.

There will always be people completely resistant to change, but we tend to find there is room for most people to change their minds, even just a little

Sander van der Linden

In medicine, vaccinating against a virus involves exposing a body to a weakened version of the threat, enough to build a tolerance.

Social psychologists believe that a similar logic can be applied to help “inoculate” the public against misinformation, including the damaging influence of ‘fake news’ websites propagating myths about climate change.

A new study compared reactions to a well-known climate change fact with those to a popular misinformation campaign. When presented consecutively, the false material completely cancelled out the accurate statement in people’s minds – opinions ended up back where they started.

Researchers then added a small dose of misinformation to delivery of the climate change fact, by briefly introducing people to distortion tactics used by certain groups. This “inoculation” helped shift and hold opinions closer to the truth, despite the follow-up exposure to ‘fake news’.

The study on US attitudes found the inoculation technique shifted the climate change opinions of Republicans, Independents and Democrats alike.

Published in the journal Global Challenges, the study was conducted by researchers from the University of Cambridge in the UK and Yale and George Mason universities in the US. It is one of the first studies of ‘inoculation theory’ to try to replicate a ‘real world’ scenario of conflicting information on a highly politicised subject.

“Misinformation can be sticky, spreading and replicating like a virus,” says lead author Dr Sander van der Linden, a social psychologist from the University of Cambridge and Director of the Cambridge Social Decision-Making Lab.

“We wanted to see if we could find a ‘vaccine’ by pre-emptively exposing people to a small amount of the type of misinformation they might experience. A warning that helps preserve the facts.

“The idea is to provide a cognitive repertoire that helps build up resistance to misinformation, so the next time people come across it they are less susceptible.”

Fact vs. Falsehood

To find the most compelling climate change falsehood currently influencing public opinion, van der Linden and colleagues tested popular statements from corners of the internet on a nationally representative sample of US citizens, with each one rated for familiarity and persuasiveness.

The winner: the assertion that there is no consensus among scientists, apparently supported by the Oregon Global Warming Petition Project. This website claims to hold a petition signed by “over 31,000 American scientists” stating there is no evidence that human CO2 release will cause climate change.

The study also used the accurate statement that “97% of scientists agree on manmade climate change”. Prior work by van der Linden has shown this fact about scientific consensus is an effective ‘gateway’ for public acceptance of climate change.

In a disguised experiment, researchers tested the opposing statements on over 2,000 participants across the US spectrum of age, education, gender and politics using the online platform Amazon Mechanical Turk.

In order to gauge shifts in opinion, each participant was asked to estimate current levels of scientific agreement on climate change throughout the study.

Those shown only the fact about climate change consensus (in pie chart form) reported a large increase in perceived scientific agreement – an average of 20 percentage points. Those shown only misinformation (a screenshot of the Oregon petition website) dropped their belief in a scientific consensus by 9 percentage points.

Some participants were shown the accurate pie chart followed by the erroneous Oregon petition. The researchers were surprised to find the two neutralised each other (a tiny difference of 0.5 percentage points).

“It’s uncomfortable to think that misinformation is so potent in our society,” says van der Linden. “A lot of people’s attitudes toward climate change aren’t very firm. They are aware there is a debate going on, but aren’t necessarily sure what to believe. Conflicting messages can leave them feeling back at square one.”

Psychological ‘inoculation’

Alongside the consensus fact, two groups in the study were randomly given ‘vaccines’:

  • A general inoculation, consisting of a warning that “some politically-motivated groups use misleading tactics to try and convince the public that there is a lot of disagreement among scientists”.
  • A detailed inoculation that picks apart the Oregon petition specifically: for example, by highlighting that some of the signatories are fraudulent (Charles Darwin and members of the Spice Girls appear among them) and that fewer than 1% of signatories have backgrounds in climate science.

For those ‘inoculated’ with this extra data, the misinformation that followed did not cancel out the accurate message.

The general inoculation saw an average opinion shift of 6.5 percentage points towards acceptance of the climate science consensus, despite exposure to fake news.

When the detailed inoculation was added to the general, it was almost 13 percentage points – two-thirds of the effect seen when participants were just given the consensus fact.
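To put those shifts in proportion (arithmetic ours, from the figures above): the consensus fact alone moved opinion by 20 percentage points; the fact plus general inoculation, followed by misinformation, preserved 6.5 of those points (about a third of the full effect); the fact plus both inoculations preserved almost 13 points (about two-thirds).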

The research team point out that tobacco and fossil fuel companies have used psychological inoculation in the past to sow seeds of doubt, and to undermine scientific consensus in the public consciousness.

They say the latest study demonstrates that such techniques can be partially “reversed” to promote scientific consensus, and work in favour of the public good.

The researchers also analysed the results in terms of political parties. Before inoculation, the fake negated the factual for both Democrats and Independents. For Republicans, the fake actually overrode the facts by 9 percentage points.

However, following inoculation, the positive effects of the accurate information were preserved across all parties to match the average findings (around a third with just general inoculation; two-thirds with detailed).

“We found that inoculation messages were equally effective in shifting the opinions of Republicans, Independents and Democrats in a direction consistent with the conclusions of climate science,” says van der Linden.

“What’s striking is that, on average, we found no backfire effect to inoculation messages among groups predisposed to reject climate science, they didn’t seem to retreat into conspiracy theories.

“There will always be people completely resistant to change, but we tend to find there is room for most people to change their minds, even just a little.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Solar Storms Could Cost USA Tens of Billions of Dollars

source: www.cam.ac.uk

The daily economic cost to the USA from solar storm-induced electricity blackouts could be in the tens of billions of dollars, with more than half the loss from indirect costs outside the blackout zone, according to a new study led by University of Cambridge researchers.

Previous studies have focused on direct economic costs within the blackout zone, failing to take account of indirect domestic and international supply chain loss from extreme space weather.

According to the study, published in the journal Space Weather, the direct economic cost incurred from disruption to electricity represents, on average, just under half of the total potential macroeconomic cost.

The paper was co-authored by researchers from the Cambridge Centre for Risk Studies at University of Cambridge Judge Business School, British Antarctic Survey, the British Geological Survey and the University of Cape Town.

Under the study’s most extreme blackout scenario, affecting two-thirds of the US population, the daily domestic economic loss could total $41.5 billion plus an additional $7 billion loss through the international supply chain.

Electrical engineering experts are divided on the possible severity of blackouts caused by “Coronal Mass Ejections,” or magnetic solar fields ejected during solar flares and other eruptions. Some believe that outages would last only hours or a few days because electrical collapse of the transmission system would protect electricity generating facilities, while others fear blackouts could last weeks or months because those transmission networks could in fact be knocked out and need replacement.

Extreme space weather events occur often, but only sometimes affect Earth. The best-known geomagnetic storm affected Quebec in 1989, sparking the electrical collapse of the Hydro-Quebec power grid and causing a widespread blackout for about nine hours.

There was a very severe solar storm in 1859 known as the “Carrington event”, named after the British astronomer Richard Carrington. A widely cited 2012 study by Pete Riley of Predictive Sciences Inc. said that the probability of another Carrington event occurring within the next decade is around 12 per cent; a 2013 report by insurer Lloyd’s, produced in collaboration with Atmospheric and Environmental Research, said that while the probability of an extreme solar storm is “relatively low at any given time, it is almost inevitable that one will occur eventually.”

“We felt it was important to look at how extreme space weather may affect domestic US production in various economic sectors, including manufacturing, government and finance, as well as the potential economic loss in other nations owing to supply chain linkages,” says study co-author Dr Edward Oughton of the Cambridge Centre for Risk Studies.

“It was surprising that there had been a lack of transparent research into these direct and indirect costs, given the uncertainty surrounding the vulnerability of electrical infrastructure to solar incidents.”

The study looks at three geographical scenarios for blackouts caused by extreme space weather, depending on the latitudes affected by different types of incidents.

If only extreme northern states are affected, with 8 per cent of the US population, the economic loss per day could reach $6.2 billion supplemented by an international supply chain loss of $0.8 billion. A scenario affecting 23 per cent of the population could have a daily cost of $16.5 billion plus $2.2 billion internationally, while a scenario affecting 44 per cent of the population could have a daily cost of $37.7 billion in the US plus $4.8 billion globally.
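Summing the domestic and international figures in each case (arithmetic ours) gives the combined daily loss implied by each scenario: $7.0 billion when 8 per cent of the population is affected, $18.7 billion at 23 per cent, $42.5 billion at 44 per cent, and $48.5 billion ($41.5 billion plus $7 billion) in the most extreme scenario affecting two-thirds of the population.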

Manufacturing is the US economic sector most affected by those solar-induced blackouts, followed by government, finance and insurance, and property. Outside of the US, China would be most affected by the indirect cost of such US blackouts, followed by Canada and Mexico as these countries provide a greater proportion of raw materials, and intermediate goods and services, used in production by US firms.

Reference
Oughton, EJ et al. Quantifying the daily economic impact of extreme space weather due to failure in electricity transmission infrastructure. Space Weather; 18 Jan 2017; DOI: 10.1002/2016SW001491

Adapted from a press release by the Cambridge Judge Business School.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Graphene’s Sleeping Superconductivity Awakens

source: www.cam.ac.uk

Since graphene’s discovery in 2004, scientists have believed that it may have the innate ability to superconduct. Now Cambridge researchers have found a way to activate that previously dormant potential.

It has long been postulated that graphene should undergo a superconducting transition, but can’t. The idea of this experiment was, if we couple graphene to a superconductor, can we switch that intrinsic superconductivity on?

Jason Robinson

Researchers have found a way to trigger the innate, but previously hidden, ability of graphene to act as a superconductor – meaning that it can be made to carry an electrical current with zero resistance.

The finding, reported in Nature Communications, further enhances the potential of graphene, which is already widely seen as a material that could revolutionise industries such as healthcare and electronics. Graphene is a two-dimensional sheet of carbon atoms and combines several remarkable properties; for example, it is very strong, but also light and flexible, and highly conductive.

Since graphene’s discovery in 2004, scientists have speculated that it may also have the capacity to be a superconductor. Until now, superconductivity in graphene has only been achieved by doping it with, or by placing it on, a superconducting material – a process which can compromise some of its other properties.

But in the new study, researchers at the University of Cambridge managed to activate the dormant potential for graphene to superconduct in its own right. This was achieved by coupling it with a material called praseodymium cerium copper oxide (PCCO).

Superconductors are already used in numerous applications. Because they generate large magnetic fields they are an essential component in MRI scanners and levitating trains. They could also be used to make energy-efficient power lines and devices capable of storing energy for millions of years.

Superconducting graphene opens up yet more possibilities. The researchers suggest, for example, that graphene could now be used to create new types of superconducting quantum devices for high-speed computing. Intriguingly, it might also be used to prove the existence of a mysterious form of superconductivity known as “p-wave” superconductivity, which academics have been struggling to verify for more than 20 years.

The research was led by Dr Angelo Di Bernardo and Dr Jason Robinson, Fellows at St John’s College, University of Cambridge, alongside collaborators Professor Andrea Ferrari, from the Cambridge Graphene Centre; Professor Oded Millo, from the Hebrew University of Jerusalem, and Professor Jacob Linder, at the Norwegian University of Science and Technology in Trondheim.

“It has long been postulated that, under the right conditions, graphene should undergo a superconducting transition, but can’t,” Robinson said. “The idea of this experiment was, if we couple graphene to a superconductor, can we switch that intrinsic superconductivity on? The question then becomes how do you know that the superconductivity you are seeing is coming from within the graphene itself, and not the underlying superconductor?”

Similar approaches have been taken in previous studies using metallic-based superconductors, but with limited success. “Placing graphene on a metal can dramatically alter the properties so it is technically no longer behaving as we would expect,” Di Bernardo said. “What you see is not graphene’s intrinsic superconductivity, but simply that of the underlying superconductor being passed on.”

PCCO is an oxide from a wider class of superconducting materials called “cuprates”. It also has well-understood electronic properties, and using a technique called scanning tunnelling microscopy, the researchers were able to distinguish the superconductivity in PCCO from the superconductivity observed in graphene.

Superconductivity is characterised by the way the electrons interact: within a superconductor electrons form pairs, and the spin alignment between the electrons of a pair may be different depending on the type – or “symmetry” – of superconductivity involved. In PCCO, for example, the pairs’ spin state is misaligned (antiparallel), in what is known as a “d-wave state”.

By contrast, when graphene was coupled to superconducting PCCO in the Cambridge-led experiment, the results suggested that the electron pairs within graphene were in a p-wave state. “What we saw in the graphene was, in other words, a very different type of superconductivity than in PCCO,” Robinson said. “This was a really important step because it meant that we knew the superconductivity was not coming from outside it and that the PCCO was therefore only required to unleash the intrinsic superconductivity of graphene.”

It remains unclear what type of superconductivity the team activated, but their results strongly indicate that it is the elusive “p-wave” form. If so, the study could transform the ongoing debate about whether this mysterious type of superconductivity exists, and – if so – what exactly it is.

In 1994, researchers in Japan fabricated a triplet superconductor that may have a p-wave symmetry using a material called strontium ruthenate (SRO). The p-wave symmetry of SRO has never been fully verified, partly hindered by the fact that SRO is a bulky crystal, which makes it challenging to fabricate into the type of devices necessary to test theoretical predictions.

“If p-wave superconductivity is indeed being created in graphene, graphene could be used as a scaffold for the creation and exploration of a whole new spectrum of superconducting devices for fundamental and applied research areas,” Robinson said. “Such experiments would necessarily lead to new science through a better understanding of p-wave superconductivity, and how it behaves in different devices and settings.”

The study also has further implications. For example, it suggests that graphene could be used to make a transistor-like device in a superconducting circuit, and that its superconductivity could be incorporated into molecular electronics. “In principle, given the variety of chemical molecules that can bind to graphene’s surface, this research can result in the development of molecular electronics devices with novel functionalities based on superconducting graphene,” Di Bernardo added.

The study, p-wave triggered superconductivity in single layer graphene on an electron-doped oxide superconductor, is published in Nature Communications (DOI: 10.1038/ncomms14024).


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Raspberry Pi Makers Release Own-Brand OS

source: www.bbc.co.uk

Image (copyright AFP): The first Raspberry Pi was released in 2012 and more than 10 million have now been sold.

The makers of the Raspberry Pi computer have created a version of its graphical front end that can run on ordinary desktop computers.

The Pixel desktop has been reworked so that it runs on PCs and Apple Mac machines, the Raspberry Pi Foundation said.

People who use both a Raspberry Pi and other machines will now get the same familiar software on each.

The Pi Foundation said the release also aided its plan to produce the “best” desktop computing experience.

Raspberry Pi co-creator Eben Upton said the software should help schoolchildren who use the credit-card sized Pi in class or for their own projects but have to continue their work on PCs or Macs.

The Pi edition of Pixel and the version translated for bigger machines use “exactly the same productivity software and programming tools, in exactly the same desktop environment”, he wrote.

“There is no learning curve, and no need to tweak… schoolwork to run on two subtly different operating systems,” he said.

Image (copyright ESA): UK astronaut Tim Peake took two Raspberry Pis to the International Space Station.

In addition, he said, producing such a version of Pixel kept the Raspberry Pi Foundation “honest” as it would help the organisation’s coders work out which bits of the user interface needed work.

Mr Upton said that because the core software underlying Pixel was based on a relatively old computer architecture, it should run on “vintage” machines.

He warned that the software was still “experimental” so might have bugs or other “minor issues” that might mean it does not run well on some machines.

Pixel was first released in September this year and overhauled the main graphical interface owners see and use when working with their Pi. It is based on a version of the open source Linux software known as Debian.

The desktop version lacks two programs – Minecraft and Mathematica – because the Pi organisation has not licensed those applications on any machines other than its own.

In April last year, the Raspberry Pi officially became the most popular British computer ever made. More than 10 million have now been sold.

The computer was first released in 2012 and is widely used as an educational tool for programming.

IMF Lending Conditions Curb Healthcare Investment In West Africa, Study Finds

source: www.cam.ac.uk

Research shows budget reduction targets and public sector caps, insisted on by the IMF as loan conditions, result in reduced health spending and medical ‘brain drain’ in developing West African nations.

We show that the IMF has undermined health systems – a legacy of neglect that affects West Africa’s progress towards achieving universal health coverage

Thomas Stubbs

A new study suggests that lending conditions imposed by the International Monetary Fund in West Africa squeeze “fiscal space” in nations such as Sierra Leone – preventing government investment in health systems and, in some cases, contributing to an exodus of medical talent from countries that need it most.

Researchers from the Universities of Cambridge, Oxford and the London School of Hygiene & Tropical Medicine analysed the IMF’s own primary documents to evaluate the relationship between IMF-mandated policy reforms – the conditions of loans – and government health spending in West African countries.

The team collected archival material, including IMF staff reports and government policy memoranda, to identify policy reforms in loan agreements between 1995 and 2014, extracting 8,344 reforms across 16 countries.

They found that for every additional IMF condition that is ‘binding’ – i.e. failure to implement means automatic loan suspension – government health expenditure per capita in the region is reduced by around 0.25%.

A typical IMF programme contains 25 such reforms per year, amounting to a 6.2% reduction in health spending for the average West African country annually.
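In other words (arithmetic ours): 25 binding conditions × roughly 0.25% per condition ≈ 6.2% of health spending per year; the 0.25% per-condition figure is itself rounded, which is why the total comes out slightly below 25 × 0.25% = 6.25%.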

The researchers say that this is often the result of a policy focus on budget deficit reduction over healthcare, or the funnelling of finance back into international reserves – all macroeconomic targets set by IMF conditions.

The authors of the new study, published in the journal Social Science & Medicine, say their findings show that the IMF “impedes progress toward the attainment of universal health coverage”, and that – under direct IMF tutelage – West African countries underfunded their health systems.

“The IMF proclaims it strengthens health systems as part of its lending programs,” said lead author Thomas Stubbs, from Cambridge’s Department of Sociology, who conducted the study with Prof Lawrence King. “Yet, inappropriate policy design in IMF programmes has impeded the development of public health systems in the region over the past two decades.”

A growing number of IMF loans to West Africa now include social spending targets to ensure that spending on health, education and other priorities is protected. These targets are not binding, however, and the study found that fewer than half are actually met.

“Stringent IMF-mandated austerity measures explain part of this trend,” said Stubbs. “As countries engage in fiscal belt-tightening to meet the IMF’s macroeconomic targets, few funds are left for maintaining health spending at adequate levels.”

The study also shows that the 16 West African countries experienced a combined total of 211 years with IMF conditions between 1995 and 2014. Some 45% of these included conditions stipulating layoffs or caps on public-sector recruitment and limits to the wage bill.

The researchers uncovered correspondence from national governments to the IMF arguing that imposed conditions are hindering recruitment of healthcare staff, something they found was often borne out by World Health Organisation (WHO) data. For example:

  • In 2004, Cabo Verde told the IMF that meeting their fiscal targets would interrupt recruitment of new doctors. The country later reported to the WHO a 48% reduction in physician numbers between 2004 and 2006.
  • In 2005, a series of IMF conditions aimed to reduce Ghana’s public sector wage bill. The Ghanaian Minister of Finance wrote to the IMF that “at the current level of remuneration, the civil service is losing highly productive employees, particularly in the health sector”. Wage ceilings remained until late 2006, and the number of physicians in Ghana halved.

“IMF-supported reforms have stopped many African countries hiring, retaining or paying healthcare staff properly,” said co-author Alexander Kentikelenis, based at the University of Oxford.

“Macroeconomic targets set by the IMF – for example, on budget deficit reduction – crowd out health concerns, so governments do not adequately invest in health.”

The IMF’s extended presence in West Africa – on average 13 out of 20 years per country – has caused considerable controversy among public health practitioners, say the researchers.

“While critics stress inappropriate or dogmatic policy design that undermines health system development, the IMF has argued its reforms bolster health policy,” said Stubbs.

“We show that the IMF has undermined health systems – a legacy of neglect that affects West Africa’s progress towards achieving universal health coverage, a key objective of the United Nation’s Sustainable Development Goals.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

New Report On Macro-Economic Impact of Brexit Questions Treasury Forecasts

source: www.cam.ac.uk

Economists at the Centre for Business Research (CBR) have challenged the assumptions of the Treasury in their new forecast for the UK economy and the impact of Brexit in 2017.

There is a public demand for independent, non-partisan research, conducted outside government and the political arena. There is an important role for University-based research; this should feed into the process of deliberation as Brexit unfolds

Simon Deakin

The economists have also been working with lawyers at the CBR to explore the possible impact of Brexit. They warn that the UK is in danger of remaining a low wage, low skill country unless it can create the conditions for a reorientation of its economic model post-Brexit.

In a new podcast for the CBR, based at the Cambridge Judge Business School, Graham Gudgin, one of the authors of the new report: The Macro-Economic Impact of Brexit: Using the CBR Macro-Economic Model of the UK Economy (UKMOD), and Simon Deakin, Director of the CBR and Professor of Law at the University of Cambridge, discuss how the UK economy is likely to perform in 2017 and what would be the best model for leaving the EU.

Gudgin says that, according to the Treasury’s forecasts, we should have been in recession by now, but we are not. “Things are maybe a bit delayed but the whole succession of investment announcements we have had from Nissan, Microsoft and others suggests that companies are taking a much more sanguine view of this than the Treasury and others have suggested,” he says.

Possible options: from the EEA to bespoke trade deals

Deakin explains the four possible options the UK government could pursue for leaving the EU: re-joining (or remaining in) the EEA (European Economic Area); becoming a member of the EU’s customs union; undertaking a series of bespoke trade deals, such as Switzerland has; or, if none of the above apply, defaulting to the rules of the World Trade Organization. The first two of these would mean accepting the free movement of persons, which the current policy of the British government appears to rule out.

Gudgin suggests that, since the policy of migration control is likely to be maintained, the UK will try to negotiate a new trade deal with the EU, perhaps along the lines of the recent EU-Canada agreement. Some argue that this could take a decade or more to achieve, but Gudgin takes the view that since we are starting from an existing free-trade situation, the task is much easier.

No quick agreement is likely, however, and whether the EU and the UK can negotiate transitional arrangements to bridge that gap remains to be seen. Any new trade deal will also have to be conducted within the framework of WTO rules, says Gudgin, which will add to the complexity of the negotiations.

Deakin explains: “Nearly every European country is either in the single market or in the customs union. For example, Norway, via the EEA, is in the single market but not the customs union; Turkey is in the customs union but not the single market.  There are a number of other options. Switzerland is not part of the European Economic Area, but has a number of bilateral trade deals with the EU.

“These are conditional on Switzerland allowing free movement of labour (which a recent Swiss referendum vetoed) and capital. Countries in the single market, including those in the EEA, must conform to EU rules and regulations regarding product standards, labour laws and environmental protection, among other things.

“Customs union membership implies internal free trade and a single external tariff, but countries outside the EU which are in that position, such as Turkey, cannot make their own trade deals with third countries. If we went for that option post-Brexit, we would not be bound by all the rules of the single market but we couldn’t do our own trade deals with third countries.

The WTO also have rules on how migrant workers may be treated in host states which are not that dissimilar to those operating in the EU’s single market

Simon Deakin

“If we were in the EEA we couldn’t avoid rules on free movement of labour or capital. You either accept the four freedoms, the movement of goods, services, people and capital over borders, or you don’t; you can’t cherry pick. The UK could try for a Swiss style option where you try to have free movement but then modify it somewhat, but the Swiss have had to sign up to most aspects of free movement in order to get access to the single market. To get around the rules of free movement of labour and capital it is highly likely the UK would have to be outside the EEA.

“We could still sign up to the customs union – Turkey isn’t subject to the rules on free movement of labour, for example, nor is the EU required to accept free movement of persons from Turkey into the EU – but then we wouldn’t have the freedom to do trade deals with third countries, which the UK has said it wants to have; that is why the International Trade Department was brought back.

“WTO rules do not require member states to accept free movement of labour, they do, however, contain some rules on issues like state aids, to prevent distortions of international trade. The WTO also have rules on how migrant workers may be treated in host states which are not that dissimilar to those operating in the EU’s single market, and are highly contentious for the same reasons. WTO rules on these issues are generally not as strict as EU laws and do not form part of UK domestic law. International law obligations cannot be enforced in the same way as EU laws can be. However, the WTO option is not a blank slate for the UK.”

Trade deal within two years – unlikely

Deakin goes on to say that as things stand there is uncertainty over what Brexit might mean, even if it is possible to identify some of the main features of each of the principal options: “Lawyers can say what the general framework is for each of these four options, EEA, customs union, Swiss option, WTO, but until we know more about how the government will wish to conduct its negotiations with the EU and about the EU’s position going forward it is hard to make predictions. There are many issues we don’t have a clear answer to.

“We can sketch out broadly what happens for each of these main options but I think there is a case for more research to be done. It is most unlikely that there will be a trade deal negotiated within two years of triggering Article 50, and as the process of negotiation and deliberation unfolds new issues will arise. These may crop up at sector level, particular industries may have issues that need to be worked through, and individual companies may raise points about their position and if they receive guarantees from government there will be issues of state aids to consider under both EU and WTO law. At the moment we just don’t have a good set of answers to these questions.”

Deakin says a transitional agreement with the EU would need to be a one-off bespoke arrangement as there is no provision for such an agreement within EU treaties: “We are bound by EU law until we leave; we are bound by international law to maintain the treaties that we have signed up to until we withdraw from them. Until the European Communities Act is repealed we must apply EU law domestically and even after the so-called Great Repeal Act, which the government has promised to bring in, is implemented, many of the same provisions will be replicated within UK law.

Transitional agreement?

“There is talk of a so-called transitional agreement and that could involve staying in the EEA, while things are worked out, but there is no obligation on the side of the EU to offer us a transitional deal.  This would have to be a bespoke arrangement as it is not provided for at the moment under the EU treaties. It remains to be seen if that sort of soft landing is possible, let’s see what is put on the table after negotiations between the UK and the EU begin. Whatever happens, we need to understand the institutional impact of Brexit in order to get a better understanding of what its economic effects will be.

“We do need independent research to be carried out on this question because so far most of the research that has been done on this has been by one or other side of the Brexit argument. The government has its own researchers in the civil service and of course this is objective, high quality research; the OBR is doing independent economic forecasting. However, there is a public demand for independent, non-partisan research, conducted outside government and the political arena. Thus there is an important role for University-based research; this should feed into the process of deliberation as Brexit unfolds”.

We have looked very carefully at what the Treasury has said about this and we find its work very flawed and very partisan

Graham Gudgin

Gudgin agrees with Deakin that better research and economic forecasting models are needed. He thinks that in reality the only option for the UK to leave the EU, other than the WTO fall-back, is under the terms of the so-called Canadian model.

“There are probably only two practical options. One is a free trade agreement along the lines of the one Canada has just signed, or else no agreement on trade in which case you fall back on WTO rules. The impact of both of those is pretty uncertain. We have looked very carefully at what the Treasury has said about this and we find its work very flawed and very partisan. It is not objective. I agree with Deakin that we need some more objective economic work on this, the whole debate has been coloured by a lot of hyperbolic discussion.

The Treasury: four quarters of recession?

“The Treasury said there would be four quarters of recession, we have had six months since the Brexit vote, we should have been in recession by now, but we are not. Things are maybe a bit delayed but the whole succession of investment announcements we have had from Nissan, Microsoft and others suggests that companies are taking a much more sanguine view of this than the Treasury and others have suggested.

“We have looked at the Nissan deal in terms of what degree of currency depreciation you would need to offset the 10 per cent tariff that motor manufacturers could face under WTO rules, and the answer to us is that it looks like a 15 per cent depreciation of sterling would offset a 10 per cent tariff. We have already had a 12 per cent depreciation so we are pretty well there. This may have been what the government was relying upon: it is the currency depreciation that bridges that gap.”
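A rough sketch of that arithmetic (ours, on the simplifying assumption that the depreciation passes through fully to export prices, which the interview does not state): a depreciation d cuts the foreign-currency price of a UK export to (1 − d) of its former level, while a tariff t raises it by a factor of (1 + t). The landed price is unchanged when (1 − d)(1 + t) = 1, i.e. when d = t/(1 + t), or about 9.1 per cent for a 10 per cent tariff. On that simple account the 12 per cent depreciation already seen would more than cover the tariff, since (1 − 0.12) × 1.10 ≈ 0.97; the CBR’s higher 15 per cent figure presumably allows for incomplete pass-through.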

Gudgin says that the EU has not been very good at agreeing free trade deals with third countries: “Theresa May has said very clearly that there will be control over migration and she rightly recognises that was the key point in the referendum. The EEA and Swiss bilateral treaties all depend on free movement of labour. It shows just how difficult even the Swiss approach is.

“The Canadian model is a free trade agreement which any country can have with the EU, but historically the EU has not been good at having free trade agreements with others. It doesn’t have a free trade agreement with China or the US, and some people such as the Economists For Brexit see the EU as being a highly protectionist organisation. If the EU has a free trade agreement with Canada, good heavens, they surely can have one with the UK.”

Gudgin explains these predictions in the podcast:

  • “2017 won’t be a great year but growth of GDP will be between 1.0 and 1.5 per cent rather than the 2 per cent it would have been without Brexit. It could even be 2 per cent but we don’t yet really know much about company investment intentions. GDP growth is slowing but will not be too bad.
  • “The sterling depreciation of 10 to 12 per cent will mean inflation will rise to about 3 per cent by the end of 2017. It will be higher than it has been for some years. The big question is will inflation get out of hand and we don’t think it will. Remember most countries have been trying to increase their inflation up to 2 per cent to get their exchange rates down. The UK has done it in one bound.
  • “The UKMOD equations tell us wages will start to rise as prices rise. We are pretty close to full employment, so workers have bargaining power. The Bank of England published its forecast for wages recently and we agree wages will rise to something like 3 per cent by the end of 2017.”

The above text was originally posted as a blog on the Judge Business School website. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Cambridge Study Named As People’s Choice For Science Magazine’s ‘Breakthrough of the Year 2016’

source: www.cam.ac.uk

Cambridge research that will enable scientists to grow and study embryos in the lab for almost two weeks has been named as the People’s Choice for Science magazine’s ‘Breakthrough of the Year 2016’.

It’s a natural human instinct to be curious about where we come from, but until now, technical hurdles have meant there’s been a huge gap in our understanding of how embryos develop

Magdalena Zernicka-Goetz

The work, led by Professor Magdalena Zernicka-Goetz from the Department of Physiology, Development and Neuroscience at the University of Cambridge, was the focus of parallel publications earlier this year in the journals Nature Cell Biology and Nature.

Professor Zernicka-Goetz and colleagues developed a new technique that allows embryos to develop in vitro, in the absence of maternal tissue, beyond the implantation stage (when the embryo would normally implant into the womb). This will allow researchers to analyse for the first time key stages of human embryo development up to 13 days after fertilisation. The technique could open up new avenues of research aimed at helping improve the chances of success of IVF.

“It’s a wonderful honour to have been given such public recognition for our work,” says Professor Zernicka-Goetz, whose work was funded by Wellcome. “It’s a natural human instinct to be curious about where we come from, but until now, technical hurdles have meant there’s been a huge gap in our understanding of how embryos develop. We hope that our technique will crack open this ‘black box’ and allow us to learn more about our development.”

Dr Marta Shahbazi, one of the co-first authors of the Nature Cell Biology paper, also from Cambridge, adds: “In the same year where scientists have found evidence of gravitational waves, it’s amazing that the public has chosen our work as the most important scientific breakthrough. While our study will help satisfy our scientific curiosity, it is likely to help us better understand what happens in miscarriage and why the success rates for IVF are so low.”

The work builds on research pioneered by Professor Sir Robert Edwards, for which he was awarded the Nobel Prize in Physiology or Medicine in 2010. Professor Edwards developed the technique known as in vitro fertilisation (IVF), demonstrating that it was possible to fertilise an egg and culture it in the laboratory for the first six days of development. His work led to the first ever ‘test tube baby’, Louise Brown.

The award has been welcomed by Dr Jim Smith, Director of Science at Wellcome: “I’m really pleased to see Magda’s fantastic work recognised by Science, and we send our warmest congratulations to her and her team. In almost doubling the time we can culture human embryos in the lab, she has created completely new opportunities for developmental biologists to understand how we develop. It’s a great achievement, and Wellcome is proud to have supported her ground-breaking work.”

Science – Breakthrough of the Year 2016


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

‘Glue’ That Makes Plant Cell Walls Strong Could Hold The Key To Wooden Skyscrapers

source: www.cam.ac.uk

Molecules 10,000 times narrower than the width of a human hair could hold the key to making possible wooden skyscrapers and more energy-efficient paper production, according to research published today in the journal Nature Communications. The study, led by a father and son team at the Universities of Warwick and Cambridge, solves a long-standing mystery of how key sugars in cells bind to form strong, indigestible materials.

We knew the answer must be elegant and simple. And in fact, it was

Paul Dupree

The two most common large molecules – or ‘polymers’ – found on Earth are cellulose and xylan, both of which are found in the cell walls of materials such as wood and straw. They play a key role in determining the strength of materials and how easily they can be digested.

For some time, scientists have known that these two polymers must somehow stick together to allow the formation of strong plant walls, but how this occurs has, until now, remained a mystery: xylan is a long, winding polymer with so-called ‘decorations’ of other sugars and molecules attached, so how could this adhere to the thick, rod-like cellulose molecules?

“We knew the answer must be elegant and simple,” explains Professor Paul Dupree from the Department of Biochemistry at the University of Cambridge, who led the research. “And in fact, it was. What we found was that cellulose induces xylan to untwist itself and straighten out, allowing it to attach itself to the cellulose molecule. It then acts as a kind of ‘glue’ that can protect cellulose or bind the molecules together, making very strong structures.”

The finding was made possible due to an unexpected discovery several years ago in Arabidopsis, a small flowering plant related to cabbage and mustard. Professor Dupree and colleagues showed that the decorations on xylan can only occur on alternate sugar molecules within the polymer – in effect meaning that the decorations only appear on one side of xylan. This led the team of researchers to survey other plants in the Cambridge University Botanic Garden and discover that the phenomenon appears to occur in all plants, meaning it must have evolved in ancient times, and must be important.

To explore this in more detail, they turned to an imaging technique known as solid state nuclear magnetic resonance (ssNMR), which is based on the same physics as hospital MRI scanners, but can reveal structure at the nanoscale. However, while ssNMR can image carbon, it requires a particular heavy isotope of carbon, carbon-13. This meant that the team had to grow their plants in an atmosphere enriched with a special form of carbon dioxide – carbon-13 dioxide.

Professor Ray Dupree – Paul Dupree’s father, and a co-author on the paper – supervised the work at the University of Warwick’s ssNMR laboratory. “By studying these molecules, which are over 10,000 times narrower than the width of a human hair, we could see for the first time how cellulose and xylan slot together and why this makes for such strong cell walls.”

Understanding how cellulose and xylan fit together could have a dramatic effect on industries as diverse as biofuels, paper production and agriculture, according to Paul Dupree.

“One of the biggest barriers to ‘digesting’ plants – whether that’s for use as biofuels or as animal feed, for example – has been breaking down the tough cellular walls,” he says. “Take paper production – enormous amounts of energy are required for this process. A better understanding of the relationship between cellulose and xylan could help us vastly reduce the amount of energy required for such processes.”

But just as this could improve how easily materials can be broken down, the discovery may also help them create stronger materials, he says. There are already plans to build houses in the UK more sustainably using wood, and Paul Dupree is involved in the Centre for Natural Material Innovation at the University of Cambridge, which is looking at whether buildings as tall as skyscrapers could be built using modified wood.

The research was funded by the Biotechnology and Biological Sciences Research Council (BBSRC).

Reference
Simmons, TJ et al. Folding of xylan onto cellulose fibrils in plant cell walls revealed by solid-state NMR. Nature Communications; DOI: 10.1038/ncomms13902


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Patients Show Considerable Improvements After Treatment For Newly-Defined Movement Disorder

source: www.cam.ac.uk

DNA sequencing has defined a new genetic disorder that affects movement, enabling patients with dystonia — a disabling condition that affects voluntary movement — to be targeted for treatment that brings remarkable improvements, including restoring independent walking.

We can [now] aim for a more ‘precision medicine approach’ to target treatment with deep brain stimulation to those likely to benefit: most importantly, we would anticipate improvement in many of those treated

Lucy Raymond

A team of researchers from UCL Great Ormond Street Institute of Child Health, University of Cambridge and the NIHR Rare Disease Bioresource have identified mutations in a gene, called KMT2B, in 28 patients with dystonia. In most cases, the patients — many of whom were young children who were thought to have a diagnosis of cerebral palsy — were unable to walk.

Remarkably, for some patients, treatment with deep brain stimulation, in which electrical impulses are delivered to a specific brain region involved in movement, either restored or significantly improved independent walking and improved hand and arm movement. In one patient, improvements have been sustained over six years.

Given these observations, the team now suggest that testing for mutations in the gene should form part of standard testing for patients with dystonia, as it is emerging as one of the commonest genetic causes of childhood-onset dystonia.

The research is published in Nature Genetics.

Dystonia is one of the commonest movement disorders and is thought to affect 70,000 people in the UK alone. It can cause a wide range of disabling symptoms, including painful muscle spasms and abnormal postures, and can affect walking and speech.

Through research testing of patients, the team discovered a region of chromosome 19 that was deleted from the genome of some patients with childhood-onset dystonia. Together with the NIHR Rare Disease Bioresource and international collaborators, the team then identified abnormal genetic changes in one gene, called KMT2B, in a further 18 patients, each of whom carried a mutation in their DNA.

“Through DNA sequencing, we have identified a new genetic movement disorder that can be treated with deep brain stimulation. This can dramatically improve the lives of children with the condition and enable them to have a wider range of movement with long-lasting effects,” says Dr Manju Kurian, paediatric neurologist at Great Ormond Street Hospital and Wellcome Trust-funded researcher at UCL Great Ormond Street Institute of Child Health.

“Our results, though in a relatively small group of patients, show the power of genomic research not only to identify new diseases, but also to reveal possible approaches that will allow other patients to benefit.”

The KMT2B protein is thought to alter the activity of other genes. The team believes that the mutations impair the ability of the KMT2B protein to carry out its normal, crucial role in controlling the expression of genes involved in voluntary movement.

A number of patients were previously thought to have cerebral palsy prior to confirmation of their genetic diagnosis. Such uncertainty could be addressed by looking for KMT2B mutations as part of a diagnostic approach.

Although affected patients have been found to have a mutation in their DNA, this severe condition is rarely inherited from either parent but usually occurs for the first time in the affected child.

“Most patients show a progressive disease course with worsening dystonia over time,” continues Dr Kurian. “Many patients did not show any response to the usual medications that we use for dystonia, so we knew we would have to consider other strategies. We know, from our experience with other patients with dystonia, that deep brain stimulation might improve our patients’ symptoms, so we were keen to see what response patients would have to this type of treatment.”

“Remarkably nearly all patients who had deep brain stimulation showed considerable improvements. One patient was able to walk independently within two weeks; in five patients, the improvement has lasted for more than three years. It is an astounding result.”

Given the dramatic effects seen in their patients with this newly defined genetic condition, the team propose that referral for assessment of deep brain stimulation should be considered for all patients with a mutation in KMT2B. In the future, the team hopes that, by diagnosing additional patients, the full spectrum of this new condition will be more apparent and patients and their families might see real benefit.

“It is only through the amazing generosity and efforts of patients and their families that we can begin to search for better answers and treatments: we admire their contribution,” says Professor Lucy Raymond, Professor of Medical Genetics and Neurodevelopment at the University of Cambridge. “Through participating in our research, they have helped us to identify patients with KMT2B-related dystonia, meaning we can aim for a more ‘precision medicine approach’ to target treatment with deep brain stimulation to those likely to benefit: most importantly, we would anticipate improvement in many of those treated.

“The lesson from our study is simple and clear: because confirming this diagnosis has implications for therapy, we should test all patients with suspected genetic dystonia for mutations in KMT2B.”

Reference
Meyer E, Carss KJ, Rankin J et al. Mutations in the Histone Methyltransferase Gene KMT2B Cause Early Onset Dystonia. Nature Genetics; 19 Dec 2016


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

System is Failing to Prevent Deaths Following Police Custody and Prison, Study Suggests

System is failing to prevent deaths following police custody and prison, study suggests

source: www.cam.ac.uk

Poor access to health care and confusion over post-detention care may have contributed to more than 400 deaths following police custody and prison detention since 2009, a new report has claimed. Here, in an article first published on The Conversation, report authors Loraine Gelsthorpe and Nicola Padfield of Cambridge’s Faculty of Law, along with their colleague Jake Phillips from Sheffield Hallam University, discuss their findings.

Deaths post-detention should also be subject to similar levels of investigation as those that occur in police custody and prison

Getting released from prison or police custody can be a huge shock to those who have been incarcerated. Our new research gives an indication of just how vulnerable these people can be. We found that over a seven-year period, 400 people died of a suspected suicide within 48 hours of leaving police detention.

The number of people dying in prisons and in police custody has been increasing for several years. There is, rightly, a statutory obligation for every death that occurs within a state institution to be investigated by an independent body. So each death in a prison is investigated by the Prisons and Probation Ombudsman (PPO), while deaths in police stations are investigated by the Independent Police Complaints Commission (IPCC).

But for people who die shortly after release from police or prison custody, their deaths are not subject to statutory investigation and are too often invisible.

A dangerous transition

Our research, published by the Equality and Human Rights Commission, looked into non-natural deaths of people who have been released from police detention or prison custody. We found that the data on these deaths is contingent upon the relevant institutions (prisons, police or probation) finding out about the death in the first place – and this can be difficult.

We examined two sets of data: IPCC data on suspected suicides that occurred within 48 hours of release from police detention and data from the National Offender Management Service on deaths of people under probation supervision, which includes those released from prison. We also conducted interviews with 15 custody sergeants – police officers who are responsible for the welfare of a detainee while in a police station – prison officers and others such as representatives of police and crime commissioners (PCCs) and Public Health England.

The IPCC data suggest that 400 people died between 2009 and 2016 of a suspected suicide within 48 hours of release, although the annual number declined between 2014-15 and 2015-16. People who had been detained on suspicion of sex offences accounted for 32% of the 400 total suspected suicides.

We also examined a selection of 41 investigations and summaries of investigations into apparent post-release suicides that were provided to us by the IPCC. Half of these people had pre-existing mental health conditions. These referrals also pointed to inadequate risk assessment, record keeping and onward referral to relevant community-based care providers such as mental health or drug treatment providers.

We then looked at deaths that had occurred within 28 days of release from prison. Despite some issues with the accuracy and completeness of the data, we identified 66 people between 2010 and 2015 who had died from non-natural causes within 28 days of leaving prison. The numbers are small and so it is difficult to draw wider conclusions, but we found that 44 of those 66 died from a drug-related death. Of the 66, 35 had served a sentence for an acquisitive offence such as theft, shoplifting or robbery, offences which are commonly associated with drug use.
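These headline proportions are easy to recompute from the counts given above. As a quick check – purely a restatement of the figures quoted in this article, not new data – a short Python snippet:

```python
# Figures quoted above, restated so the proportions are easy to check.
ipcc_suicides = 400        # suspected suicides within 48 hours of release, 2009-16
sex_offence_share = 0.32   # share who had been detained on suspicion of sex offences
print(f"Detained on suspicion of sex offences: ~{ipcc_suicides * sex_offence_share:.0f} of {ipcc_suicides}")

post_prison_deaths = 66    # non-natural deaths within 28 days of leaving prison, 2010-15
drug_related = 44
acquisitive_offence = 35
print(f"Drug-related deaths: {drug_related}/{post_prison_deaths} = {drug_related / post_prison_deaths:.0%}")
print(f"Acquisitive offences: {acquisitive_offence}/{post_prison_deaths} = {acquisitive_offence / post_prison_deaths:.0%}")
```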

We also analysed investigations conducted between 2010 and 2015 by the PPO into deaths that occurred in approved premises, also known as bail hostels, within 28 days of release from custody. These investigations seek to understand what, if anything, could have been done to prevent the death. This highlighted problems with supporting drug-using offenders, a lack of confidence among staff and a failure to create a smooth transition from prison into the community.

Staff under strain

These analyses only tell part of the story. Our discussions with custody officers painted a complex picture. They argued that they were getting better at identifying people in custody with mental health conditions but that their ability to deal with them effectively was restricted by factors beyond their control such as a lack of appropriate treatment for people after leaving their care and an inadequate number of beds in mental health hospitals. They told us that the risk assessment tool they use for identifying such people was not fit for purpose because it did not go into enough detail and that they would benefit from additional mental health training. They were also strongly in favour of the responsibility for healthcare commissioning in police stations being handed to the NHS, rather than PCCs, a proposal which was dropped in December 2015.

The story from prison staff was similar, but they also talked about the use of new psychoactive substances and the negative effects these substances are having on mental health and safety in prisons.

Problems also exist when it comes to the provision of community-based care after people are released. These include cuts to community mental health services and drug services, as well as recent changes to the probation service, which have seen 70% of the service outsourced to the private sector. Such reforms have made communication between prisons and probation providers more difficult. These budget cuts and public sector reforms are having a serious impact on the ability of criminal justice agencies to deal with these issues and prevent any future deaths.

There needs to be an improvement in the way in which data on non-natural deaths is collected. Deaths post-detention should also be subject to similar levels of investigation as those that occur in police custody and prison. It would be naive to suggest that all deaths of people leaving state detention can be investigated, but there is scope for more oversight from both the IPCC and PPO, at least while they are adjusting to life back in the community. At the same time, the government must maintain investment in mental health and drug services to help prevent those most vulnerable when they are released from detention from taking their own life.

This article was originally published on The Conversation. Read the original article.

Professor Loraine Gelsthorpe is Deputy Director of the Institute of Criminology, University of Cambridge.

Nicola Padfield is Master, Fitzwilliam College, Cambridge, and a Reader in Criminal and Penal Justice, University of Cambridge.

Jake Phillips is a Senior Lecturer in Criminology, Sheffield Hallam University.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Larger Brain Size Linked To Longer Life In Deer

Larger brain size linked to longer life in deer

source: www.cam.ac.uk

The size of a female animal’s brain may determine whether she lives longer and has more healthy offspring, according to new research led by the University of Cambridge.

We found that some of the cross-species predictions about brain size held for female red deer, and that none of the predictions were supported in male red deer. This indicates that each sex likely experiences its own set of trade-offs with regard to brain size.

Corina Logan

The study, published in the Royal Society Open Science journal, shows that female red deer with larger brains live longer and have more surviving offspring than those with smaller brains. Brain size is heritable and is passed down through the generations. This is the first extensive study of individual differences in brain size in wild mammals and draws on data comparing seven generations of deer.

Across species of mammals, brain size varies widely. This is thought to be a consequence of specific differences in the benefits and costs of a larger brain. Mammals with larger brains may, for example, have greater cognitive abilities that enable them to adapt better to environmental changes or they may have longer lifespans. But there may also be disadvantages: for instance, larger brains require more energy, so individuals that possess them may show reduced fertility.

The researchers, based at the University of Cambridge’s Department of Zoology and Edinburgh University’s Institute of Evolutionary Biology, wanted to test whether they could find more direct genetic or non-genetic evidence of the costs and benefits of large brain size by comparing the longevity and survival of individuals of the same species with different-sized brains. Using the skulls of 1,314 wild red deer whose life histories and breeding success had been monitored in the course of a long-term study on the Isle of Rum, they found that females with larger endocranial volumes lived longer and produced more surviving offspring in the course of their lives.
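At its core this is a within-species association test: does endocranial volume predict longevity among individuals of one species? The sketch below is illustrative only – the published analysis is more sophisticated, estimating heritability across the seven-generation Rum pedigree – and the file and column names here are hypothetical:

```python
# Minimal sketch of a within-species test: does endocranial volume predict
# longevity in female red deer? Illustrative only; the actual study fits
# pedigree-based models. The data file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

deer = pd.read_csv("rum_red_deer.csv")      # hypothetical dataset
females = deer[deer["sex"] == "F"]

model = smf.ols("longevity_years ~ endocranial_volume_ml", data=females).fit()
print(model.summary())   # a positive, significant slope would mirror the reported association
```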

Lead author Dr Corina Logan, a Gates Cambridge Scholar and Leverhulme Early Career Research Fellow in Cambridge’s Department of Zoology, says: “The reasons for the association between brain size and longevity are not known, but other studies have suggested that larger brains are a consequence of the longer-lived species having longer developmental periods in which the brain can grow. These hypotheses were generated from cross-species correlations; however, testing such hypotheses requires investigations at the within-species level, which is what we did.”

Dr Logan adds: “We found that some of the cross-species predictions about brain size held for female red deer, and that none of the predictions were supported in male red deer. This indicates that each sex likely experiences its own set of trade-offs with regard to brain size.”

The study also showed that females’ relative endocranial volume is smaller than that of males, despite evidence of selection for larger brains in females.

“We think this is likely due to sex differences in the costs and benefits related to larger brains,” adds Dr Logan. “We don’t know what kinds of trade-offs each sex might encounter, but we assume there must be variables that constrain brain size that are sex specific, which is why we see selection in females, but not males.”

Professor Tim Clutton-Brock, who set up the Rum Red Deer study with Fiona Guinness in 1972 and initiated the work on brain size, points out that this kind of study has not been conducted before because it requires long-term records of a large number of individuals across multiple generations, and data of this kind are still rare in wild animals.

Reference
C.J. Logan, R. Stanley, A.M. Thompson, T.H. Clutton-Brock. Endocranial volume is heritable and is associated with longevity and fitness in a wild mammal. Royal Society Open Science; 14 Dec 2016; DOI: 10.1098/rsos.160622


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Cambridge To Play Major Role In €400m EU Food Innovation Project

Cambridge to play major role in €400m EU food innovation project

source: www.cam.ac.uk

The University of Cambridge is one of a number of British universities and companies that have won access to a £340 million EU Innovation programme to change the way we eat, grow and distribute food.

Our joint goal is in making the entire food system more resilient in the context of a changing climate, and improving health and nutrition for people across the world

Howard Griffiths

The project, called EIT Food, has ambitious aims to halve the amount of food waste in Europe within a decade and to reduce diet-related ill health by 2030. It has received €400 million (£340m) of EU research funding, matched by €1.2 billion (£1 billion) of funding from industry and other sources over seven years.

The project is funded by the European Institute of Innovation and Technology (EIT), and will have a regional headquarters at the University of Reading to co-ordinate innovation, deliver cutting-edge education programmes and support start-ups in the ‘north west’ sector of Europe, covering the UK, Ireland and Iceland.

The Europe-wide scheme was put together by a partnership of 50 food business and research organisations from within Europe’s food sector, which provides jobs for 44 million people. Cambridge is part of one of five regional hubs across Europe. Already confirmed as core partners in the UK-based ‘Co-Location Centre’ (CLC) alongside Cambridge are academic centres Matís, Queen’s University Belfast and the University of Reading, as well as businesses ABP Food Group, PepsiCo and The Nielsen Company. Further partners are expected to be announced in the next year.

Professor Howard Griffiths, co-chair of the Global Food Security Strategic Research Initiative at the University of Cambridge, who will lead Cambridge’s involvement in the EIT, said: “Sustainability is a top-level agenda which is engaging both global multinational food producers and academics. Our joint goal is in making the entire food system more resilient in the context of a changing climate, and improving health and nutrition for people across the world.”

EIT Food will set up four programmes to target broad societal challenges, including:

  • personalised healthy food
  • the digitalisation of the food system
  • consumer-driven supply chain development, customised products and new technology in farming, processing and retail
  • resource-efficient processes, making food more sustainable by eliminating waste and recycling by-products throughout the food chain.

EIT Food will also organise international entrepreneurship programmes for students, and develop a unique interdisciplinary EIT-labelled Food System MSc for graduates. Thousands of students and food professionals will be trained via workshops, summer schools and online educational programmes such as MOOCs (Massive Open Online Courses) and SPOCs (Specialized Private Online Courses).

Peter van Bladeren, Vice President of Nestec, Global Head of Regulatory and Scientific Affairs for Nestlé and Chair of the Interim Supervisory Board of EIT Food, said: “EIT Food is committed to creating the future curriculum for students and food professionals as a driving force for innovation and business creation; it will give the food manufacturing sector, which accounts for 44 million jobs in Europe, a unique competitive edge.”

Adapted from a press release by the University of Reading


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Antarctic Ice Sheet Study Reveals 8,000-Year Record of Climate Change

Antarctic Ice Sheet study reveals 8,000-year record of climate change

source: www.cam.ac.uk

An international team of researchers has found that the Antarctic Ice Sheet plays a major role in regional and global climate variability – a discovery that may also help explain why sea ice in the Southern Hemisphere has been increasing despite the warming of the rest of the Earth.

The Antarctic Ice Sheet has experienced much greater natural variability in the past than previously anticipated.

Michael Weber

Results of the study, co-authored by Michael Weber, a paleoclimatologist and visiting scientist at the University of Cambridge, along with colleagues from the USA, New Zealand and Germany, are published this week in the journal Nature.

Global climate models that look at the last several thousand years have failed to account for the amount of climate variability captured in the paleoclimate record, according to lead author Pepijn Bakker, a climate modeller from the MARUM Center for Marine Environmental Studies at the University of Bremen in Germany.

The researchers first turned their attention to the Scotia Sea. “Most icebergs calving off the Antarctic Ice Sheet travel through this region because of the atmospheric and oceanic circulation,” explained Weber. “The icebergs contain gravel that drops into the sediment on the ocean floor – and analysis and dating of such deposits shows that for the last 8,000 years, there were centuries with more gravel and centuries with less.”

The research team’s hypothesis is that climate modellers have historically overlooked one crucial element in the overall climate system. They discovered that the centuries-long phases of enhanced and reduced Antarctic ice mass loss documented over the past 8,000 years have had a cascading effect on the entire climate system.

Using sophisticated computer modelling, the researchers traced the variability in iceberg calving (ice that breaks away from glaciers) to small changes in ocean temperatures.

“There is a natural variability in the deeper part of the ocean adjacent to the Antarctic Ice Sheet that causes small but significant changes in temperatures,” said co-author Andreas Schmittner, a climate modeller from Oregon State University. “When the ocean temperatures warm, it causes more direct melting of the ice sheet below the surface, and it increases the number of icebergs that calve off the ice sheet.”

Those two factors combine to provide an influx of fresh water into the Southern Ocean during these warm regimes, according to Peter Clark, a paleoclimatologist from Oregon State University, and co-author on the study.

“The introduction of that cold, fresh water lessens the salinity and cools the surface temperatures, at the same time stratifying the layers of water,” he said. “The cold, fresh water freezes more easily, creating additional sea ice despite the warmer temperatures hundreds of metres below the surface.”

The discovery may help explain why sea ice is currently expanding in the Southern Ocean despite global warming, the researchers say.

“This response is well-known, but what is less-known is that the input of fresh water also leads to changes far away in the northern hemisphere, because it disrupts part of the global ocean circulation,” explained Nick Golledge from the University of Wellington, New Zealand, an ice-sheet modeller and study co-author. “Meltwater from the Antarctic won’t just raise global sea level, but might also amplify climate changes around the world. Some parts of the North Atlantic may end up with warmer temperatures as a consequence of part of Antarctica melting.”

Golledge used a computer model to simulate how the Antarctic Ice Sheet changed as it came out of the last ice age and into the present, warm period.

“The integration of data and models provides further evidence that the Antarctic Ice Sheet has experienced much greater natural variability in the past than previously anticipated,” added Weber. “We should therefore be concerned that it will possibly act very dynamically in the future, too, specifically when it comes to projecting future sea-level rise.”

Two years ago Weber led another study, also published in Nature, which found that the Antarctic Ice Sheet collapsed repeatedly and abruptly at the end of the last Ice Age, between 19,000 and 9,000 years ago.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Global Graphics SE: Global Graphics Acquires TTP Meteor Limited

source: http://finance.yahoo.com/

PRESS RELEASE – REGULATED INFORMATION

GLOBAL GRAPHICS ACQUIRES TTP METEOR LIMITED.

Cambridge (UK), 5 December 2016 (18.00 CET): Global Graphics SE (GLOG.BR), the developer of software for digital printing including the Harlequin RIP®, announces today that it has acquired the entire issued share capital of TTP Meteor Limited (“Meteor”), specialists in printhead driver systems, from TTP Group plc (“TTP”), based near Cambridge, UK.

Meteor enables industrial inkjet, graphic arts and commercial printing applications through the provision of world-leading drive electronics and software.  Through strong relationships with industrial inkjet printhead manufacturers including Fujifilm Dimatix, Konica Minolta, Kyocera, Ricoh, SII Printek, Toshiba TEC and Xaar, Meteor supplies production-ready electronics and software to print equipment manufacturers world-wide.

TTP has been involved in developing leading edge printing technologies since 1987.  From 2006, printhead drive electronics have been supplied under the Meteor brand. With this acquisition, Meteor becomes a wholly-owned subsidiary of Global Graphics SE.  Meteor will operate as an independent, standalone entity and will continue to be led by Clive Ayling who has developed Meteor into a successful business.  It is expected that the company name will change from TTP Meteor Limited to Meteor Inkjet Limited.

Gary Fry, Global Graphics’ CEO said, “Meteor has established itself as an influential player in the inkjet market and has a deep understanding of the science and engineering underpinning digital printing. This acquisition is strategically important for Global Graphics because it means we can offer a broader solution to inkjet press manufacturers by combining our software solutions with Meteor’s industrial printhead driver solutions.

“Healthy growth is predicted for the inkjet segment of digital printing, where there continues to be a vast amount of innovation as jetting technology is applied to an increasingly diverse range of applications such as ceramics, textiles or décor. Global Graphics is already emerging as an important partner to the industry’s leading manufacturers and Meteor adds to our capability, making us a very compelling proposition in the market. We already share joint customers and our goal is to substantially grow this base.”

Clive Ayling, Meteor’s managing director said, “Meteor has an established record of success in delivering robust and reliable printhead driving solutions for a myriad of applications. Global Graphics values this success and recognizes the importance of our independence in delivering the diverse range of solutions our customers require. We are looking forward to working with Global Graphics to accelerate the growth of our business whilst we continue to deliver the world-class products and support that our customers have come to expect.”

Rob Day, TTP’s head of print technology said, “Having nurtured Meteor through its initial technology and IP development phase into a mature and profitable business, we are delighted to see it become a subsidiary of Global Graphics. There is immense strategic synergy between the two companies. This will allow the Meteor team to broaden its offering to existing customers, and accelerate the acquisition of new ones. TTP’s Print Technology Division will continue to develop novel printing systems and print technology, and looks forward to working with Meteor’s products in future systems integration projects.”

Consideration for the acquisition is £1.2 million in cash followed by a maximum deferred consideration of £3.6 million. The deferred consideration is payable in cash and is contingent on revenue during the ten-year period from 6 December 2016 until 31 December 2026.
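(Taken together, these terms imply a maximum total consideration of £1.2 million + £3.6 million = £4.8 million.)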

For the year ended 31 March 2016, Meteor generated sales of £2.5 million and a loss before tax of £0.2 million. For the eight months ended 25 November 2016, Meteor’s management accounts showed sales of £2.5 million and a profit before tax of £0.3 million. The acquisition is expected to be earnings-enhancing in the financial year ending 31 December 2017.

Ends

About Global Graphics
Global Graphics (GLOG.BR) http://www.globalgraphics.com is a leading developer of software platforms on which our partners create solutions for digital printing, digital document and PDF applications. Customers include HP, Corel, Quark, Kodak and Agfa. The roots of the company go back to 1986 and to the iconic university town of Cambridge, and today the majority of the R&D team is still based nearby. There are also offices near Boston, Massachusetts, and in Tokyo.

About Meteor
With offices near Cambridge, Meteor (http://www.meteorinkjet.com) is the leading independent supplier of industrial inkjet printhead driving solutions. Working closely with all major industrial inkjet printhead manufacturers, Meteor supplies production-ready electronics and software to printer OEMs and print system integrators world-wide.

About TTP
The Technology Partnership plc (TTP) is a world-leading technology and product development company. TTP works closely with its partners to create new business based on advances in technology and engineering innovation. TTP’s technology lies behind many products and processes in areas as diverse as biotechnology, medical devices, instrumentation, communications, digital printing, consumer & industrial products, cleantech and security systems. Contact: Rob Day, Print Technology Group Manager, rob.day@ttp.com.

Harlequin, the Harlequin logo and the Harlequin RIP are trademarks of Global Graphics Software Limited which may be registered in certain jurisdictions. Global Graphics is a trademark of Global Graphics SE which may be registered in certain jurisdictions. All other brand and product names are the registered trademarks or trademarks of their respective owners.

Contacts

Jill Taylor, Corporate Communications Director
Tel: +44 (0)1223 926489
Email: jill.taylor@globalgraphics.com

Graeme Huttley, Chief Financial Officer
Tel: +44 (0)1223 926472
Email: graeme.huttley@globalgraphics.com

Chris Grayling Unveils Plans For Fully Privatised Rail Line

Chris Grayling unveils plans for fully privatised rail line

source: https://www.theguardian.com

Single company to own track and run trains on Oxford–Cambridge route in first such operation since 1990s privatisation

Network Rail owns Britain’s railway tracks. When they were privately owned by Railtrack, there were several fatal crashes. Photograph: Jonathan Brady/PA

The government has unveiled plans for a fully privatised railway line, with track and trains operated by the same company.

A new line linking Oxford and Cambridge will not be developed by Network Rail, the owner of Britain’s rail infrastructure. Instead, a new entity will be responsible for track and infrastructure, as well as operating train services, under proposals drawn up by the transport secretary, Chris Grayling.

“What we are doing is taking this line out of Network Rail’s control,” Grayling told BBC Radio 4’s Today programme. “Network Rail has got a huge number of projects to deliver at the moment … I want it to happen quicker. This is an essential corridor for this country. On that route we are going to bring in private finance, in a form to be decided.”

In a keynote speech later on Tuesday, Grayling will outline further how the government plans to reunite the operation of tracks and trains, which are currently the respective responsibility of publicly owned Network Rail and private train operating companies (TOCs). He will also outline how future rail franchises will have to create integrated operating teams between TOCs and Network Rail.

The RMT union said Grayling’s rail plans would recreate privatisation chaos that it claims he introduced in the prison system as justice secretary.

On Tuesday morning, Grayling told parliament in a written statement: “I am going to establish East West Rail as a new and separate organisation, to accelerate the permissions needed to reopen the route, and to secure private-sector involvement to design, build and operate the route as an integrated organisation.”

He said he intended to build on two major reports into the rail industry, the 2011 McNulty report and the 2015 Shaw report, that advocated cost-cutting, devolution and bringing in private finance. He added: “But there is much more to do.”

While officials at the Department for Transport have disputed reports that Grayling is seeking more immediate challenges to Network Rail, unions pledged to fight the proposed changes.

The RMT general secretary, Mick Cash, said: “This is a politician who doesn’t believe in the public sector, who spent five years at the justice department and left a prison system in chaos and now wants to do the same on the railways. And we are not going to stand idly by and watch that happen.

“This is a slippery slope to privatisation and the breakup of Network Rail and we are deeply concerned about it.”

Grayling denied he was intent on privatisation. “I don’t intend to sell off the existing rail network. I don’t intend to privatise Network Rail again,” he told Today. He said the Oxford and Cambridge rail link would be developed by a separate company outside Network Rail in the same way that the Crossrail link had been developed in London.

Rail privatisation has consistently polled as deeply unpopular with supporters of all parties, and privately owned Railtrack’s management of the track and infrastructure from 1994 to 2002 remains associated with fatal train crashes including Potters Bar and Hatfield, in Hertfordshire.

Despite this, Grayling hopes the restored Oxford to Cambridge line, axed following the Beeching cuts in the 1960s, will be the first integrated rail operation in Britain since privatisation in the 1990s. Funding towards restoring the rail link between the cities, on a route that will have a branch to Milton Keynes, and eventually extend to Norwich and Ipswich, was announced in the autumn statement last month.

A new organisation, East West Rail, will be created early in 2017 to secure investment and build the line, eventually becoming a private company that will operate train services.

In the meantime, Grayling will demand that Network Rail and TOCs work more closely together in the interests of passengers, with proposals for more “vertical integration” to be built into the upcoming South Eastern and East Midlands franchises.

Grayling will say: “I believe it will mean they run better on a day-to-day basis … Our railway is much better run by one joined-up team of people. They don’t have to work for the same company. They do have to work in the same team.”

A lack of communication and shared incentives between track and train operators has been identified as one factor in the long-running problems at Southern, the commuter network to the south of London, which has been plagued by delays and cancelled services.

In September, Grayling appointed Chris Gibb to run a board with a £20m fund to ensure rail maintenance worked in tandem with Southern’s train operations.

Cash said the government was “dragging the railways back to the failed and lethal Railtrack model”.

“The idea that what Britain’s railways need is more privatisation is ludicrous,” he said. “The introduction of the profit motive into infrastructure raises again the spectre of Hatfield and Potters Bar and the other grotesque failures that led to the creation of Network Rail.”

The general secretary of the train drivers’ union Aslef, Mick Whelan, said: “The failures and tragedies of the Railtrack era remind us that infrastructure should never be run for profit. I’m concerned that in 2016, [Grayling] is seriously considering a return to one of the darkest times in the history of Britain’s railways.”

The idea of alliances between TOCs and Network Rail had been tried recently without success, Whelan said. “What he is proposing is a desperate, half-baked reform that will only add another layer of unnecessary complexity to the rail industry.”

Cutting Welfare To Protect The Economy Ignores Lessons of History, Researchers Claim

Cutting welfare to protect the economy ignores lessons of history, researchers claim

source: www.cam.ac.uk

Amid ongoing welfare cuts, researchers argue that investment in health and social care has been integral to British economic success since 1600.

There needs to be an end to this idea of setting economic growth in opposition to the goal of welfare provision. The suggestion of history is that they seem to feed each other.

Simon Szreter

Cutting welfare and social care budgets during times of economic hardship is an “historically obsolete” strategy that ignores the very roots of British prosperity, a group of Cambridge academics have warned.

Writing in the leading medical journal, The Lancet, a team of researchers argue that squeezing health and welfare spending in order to reduce taxes, and on the basis that these are luxuries that can only be afforded when times are good, overlooks a critical lesson of British history – namely that they are central to the nation’s economic success.

The authors are all part of a group based at St John’s College, University of Cambridge, which is studying the causes of health inequalities and looking at how research in this area can be used to inform policy interventions.

Drawing on recent research, they argue that the concept of a British welfare state, widely thought to have begun after the Second World War, actually dates back to a “precocious welfare system” forged during the reign of Elizabeth I, which was fundamental to England’s emergence as “the most dynamic economy in the world”.

While the Chancellor of the Exchequer has said that there will be no further welfare savings during the present Parliament beyond those already announced, the paper is directly critical of the continuation of those existing policies, which have reduced welfare spending overall in the name of economic austerity.

Referring to the statement made by the former Prime Minister, David Cameron, that “you can only have a strong NHS if you have a strong economy”, the authors argue: “The narrow view that spending on the National Health Service and social care is largely a burden on the economy is blind to the large national return to prosperity that comes from all citizens benefiting from a true sense of social security.”

They continue: “There are signs that Theresa May subscribes to the same historically obsolete view. Despite her inaugural statement as Prime Minister, her Chancellor’s autumn statement signals continuing austerity with further cuts inflicted on the poor and their children, the vulnerable, and infirm older people.”

By contrast, the paper argues that a universalist approach of progressively-funded health and welfare spending is an integral part of economic growth, and something that modern states cannot afford to do without. That conclusion is echoed in a new educational film, developed from work by Simon Szreter, Professor of History & Public Policy at Cambridge and a co-author of the Lancet piece.

“We are arguing from history that there needs to be an end to this idea of setting economic growth in opposition to the goal of welfare provision,” Professor Szreter said. “A healthy society needs both, and the suggestion of history is that they seem to feed each other.”

Perhaps surprisingly, the paper traces that feedback loop to the Tudor era, and specifically to the Elizabethan Poor Laws of 1598 and 1601. These enshrined in law an absolute “right of relief” for every subject of the Crown, funding the policy with a community tax and administering both through the local parish.

The authors say that this not only represented the world’s first social security system, but also made the elderly less reliant on their children for support, increased labour mobility, enabled urban growth and eased Britain’s transition to an industrial economy. The system also maintained a level of demand by supporting the purchasing power of the poor when food prices rose.

Rather than stifling Britain’s economy, the paper argues that the system was therefore essential to helping the country to become the most urbanized society in the world, and the world’s leading economy, between 1600 and 1800. Although the population more than doubled during this time, key indicators of prosperity – such as life expectancy – actually improved.

“Overall, it facilitated the most sustained period of rising economic prosperity in the nation’s history,” the authors observe.

The authors go on to link the economic growth that the nation experienced under the welfare state after 1945 with similar universalist principles of progressively-funded health and welfare provision, arguing that these stimulated a dynamic period of per capita economic growth, and cut the rich-poor divide to an all-time low during the 1970s.

Conversely, they argue that the economy has stagnated when such principles have been abandoned. The Poor Law Amendment Act of 1834 overhauled the earlier Elizabethan Laws in an effort to prevent abuses of the system that were felt to be draining the pockets of honest taxpayers. Infamously, this involved providing relief through workhouses in which the appalling conditions, seared into social consciousness by authors like Charles Dickens, were so bad that only the truly destitute sought their help.

The study suggests that there is no evidence that this approach, which came close to criminalising the poor, actually brought about much economic benefit. In fact, British growth rates gradually fell behind the country’s rivals’ after 1870 – and only recovered after 1950, in the postwar decades of the revived, universalist welfare state.

The authors also point out that to cut welfare budgets because this will relieve taxation on “hard-working families” implies that those who need welfare are somehow unproductive. Just as the Victorian 1834 measures attempted to address a perceived problem with the “idle poor”, current strategies often dub benefits claimants, directly or indirectly, as “scroungers”.

“The interests of the poor and the wealthy are not mutually opposed in a zero-sum game,” the authors conclude. “Investment in policies that develop human and social capital will underpin economic opportunities and security for the whole population.”

The paper, ‘Health and welfare as a burden on the state? The dangers of forgetting history’, is published in The Lancet.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Enhanced CRISPR Lets Scientists Explore All Steps of Health and Disease In Every Cell Type

Enhanced CRISPR lets scientists explore all steps of health and disease in every cell type

source: www.cam.ac.uk

Researchers from the Wellcome Trust Sanger Institute and the University of Cambridge have created sOPTiKO, a more efficient and enhanced inducible CRISPR genome editing platform. Today, in the journal Development, they describe how the freely available single-step system works in every cell in the body and at every stage of development. This new approach will aid researchers in developmental biology, tissue regeneration and cancer.

In the past we have been hampered by the fact we could study a gene’s function only in a specific tissue. Now you can knock out the same gene in parallel in a diversity of cell types with different functions

Alessandro Bertero

Two complementary methods were developed: sOPTiKO is a knock-out system that turns off genes by disrupting the DNA, while sOPTiKD is a knock-down system that silences the action of genes by disrupting the RNA. Using these two methods, scientists can turn off or silence genes in any cell type, at any stage of a cell’s development from stem cell to fully differentiated adult cell. These systems will allow researchers world-wide to rapidly and accurately explore the changing role of genes as the cells develop into tissues such as liver, skin or heart, and discover how this contributes to health and disease.

The body contains approximately 37 trillion cells, yet the human genome only contains around 20,000 genes. So, to produce every tissue and cell type in the body, different combinations of genes must operate at different moments in the development of an organ or tissue. Being able to turn off genes at specific moments in a cell’s development allows their changing roles to be investigated.

Professor Ludovic Vallier, one of the senior authors of the study from the Wellcome Trust–Medical Research Council Cambridge Stem Cell Institute at the University of Cambridge and the Sanger Institute, said: “As a cell develops from being a stem cell to being a fully differentiated adult cell, the genes within it take on different roles. Before, if we knocked out a gene, we could only see what effect this had at the very first step. By allowing the gene to operate during the cell’s development and then knocking it out with sOPTiKO at a later developmental step, we can investigate exactly what it is doing at that stage.”

The sOPTiKO and sOPTiKD methods allow scientists to silence the activity of more than one gene at a time, so researchers are now able to investigate the role of whole families of related genes by knocking down the activity of all of them at once.

Dr Alessandro Bertero, one of the first authors of the study from the Cambridge Stem Cell Institute, said: “In the past we have been hampered by the fact we could study a gene’s function only in a specific tissue. Now you can knock out the same gene in parallel in a diversity of cell types with different functions.”

In addition, the freely available system allows experiments to be carried out far more rapidly and cheaply. sOPTiKO is highly flexible, so it can be used in every tissue in the body without needing to create a new system each time. sOPTiKD brings vast improvements in efficiency: it can be used to knock down more than one gene at a time. Before, to silence the activity of three genes, researchers had to knock down one gene, grow the cell line, and repeat for the next gene, and again for the next. Now it can all be done in one step, cutting a nine-month process down to just one to two months.

Reference
Bertero A et al. (2016) Optimized inducible shRNA and CRISPR/Cas9 platforms for in vitro studies of human development using hPSCs. Development 143: 4405-4418. doi:10.1242/dev.138081

Adapted from a press release by the Wellcome Trust Sanger Institute.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Inspiring Images Invite You Into The World of Engineering

Inspiring images invite you into the world of engineering

source: www.cam.ac.uk

It could be a crystal ball from a mythical age showing the swirling mists of time, but James Macleod’s  image, which has won this year’s Department of Engineering Photography Competition, actually shows graphene being processed in alcohol to produce conductive ink.

These photos show how some scientific applications and processes can convey stark beauty

Philip Guildford

Graphene is a sheet form of carbon that is a single atom thick, which can be produced by successively peeling thin layers off graphite using tape until an individual atomic layer is left. In the ink produced here, powdered graphite is mixed with alcohol then forced at high pressure through micrometre-scale capillaries made of diamond.

This was the first time that Macleod, a 32-year-old technician at the Department, had entered the competition. His is one of more than 140 images that showcase the breadth of research taking place there.

The competition, sponsored by ZEISS, international leaders in the fields of optics and optoelectronics, has been held annually for the last 12 years. The panel of judges included Roberto Cipolla, Professor of Information Engineering, Dr Allan McRobie, Reader in Engineering, Professor David Cardwell, Head of Department, Dr Kenneth Png, Senior Applications Engineer at Carl Zeiss Microscopy and Philip Guildford, Director of Research.

Second prize went to Toby Call for his photo showing bacteria on a graphene-coated carbon foam anodic surface. The bacteria (shown in red) produce conductive nanowires to connect to the surface. Also captured in the top left of the image is a ciliate (a tiny protozoan), which either feeds on the abundant electricity-producing bacteria or competes with them for resources.

Simon Stent was awarded third prize for an image showing a 2-km map of a power tunnel network in London. Due for completion in 2018, the network will channel up to six 400 kV electricity cables underground, doubling power capacity to the city. The image was captured and processed by a low-cost robotic device. Each of the 12 columns in the image spans a distance of 170 metres and shows the full 360 degree tunnel circumference unwrapped.
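The stated dimensions are straightforward to reconcile: twelve columns of 170 metres tile the roughly 2 km mapped length, and unwrapping the 360-degree view turns the tunnel wall into a flat strip whose height equals the circumference. A quick sanity check in Python (the bore diameter below is a hypothetical figure for illustration; the article does not give one):

```python
import math

# Figures stated in the article
columns = 12
span_per_column_m = 170
print(f"Total mapped length: {columns * span_per_column_m} m (~2 km, as stated)")

# Unwrapping a 360-degree view flattens the tunnel wall into a strip whose
# height equals the circumference (pi * diameter). The article gives no bore
# diameter, so 3.0 m here is purely a hypothetical example.
diameter_m = 3.0
print(f"Unwrapped strip height for a {diameter_m} m bore: {math.pi * diameter_m:.1f} m")
```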

Mr Guildford says: “This year’s entries form yet another collection of incredible images that offer us an insight into the varied world of engineering. These photos show how some scientific applications and processes can convey stark beauty.  From tiny particles and microscopic images, to sections of tunnel on the Crossrail project in London, these photos represent the full spectrum of engineering.”

Some of the images submitted to the competition are tiny, and can only be viewed properly through a microscope, while others are on a much grander scale. Behind them all lies a passion for the subject matter being studied by the photographer.

The winning images can be viewed online via the Department’s Flickr pages, where they can be accessed alongside dozens of other entries.

Environmentally-Friendly Graphene Textiles Could Enable Wearable Electronics

Environmentally-friendly graphene textiles could enable wearable electronics

Conductive textile
source: www.cam.ac.uk

A new method for producing conductive cotton fabrics using graphene-based inks opens up new possibilities for flexible and wearable electronics, without the use of expensive and toxic processing steps.

Turning cotton fibres into functional electronic components can open to an entirely new set of applications from healthcare and wellbeing to the Internet of Things

Felice Torrisi

Wearable, textiles-based electronics present new possibilities for flexible circuits, healthcare and environment monitoring, energy conversion, and many others. Now, researchers at the Cambridge Graphene Centre (CGC) at the University of Cambridge, working in collaboration with scientists at Jiangnan University, China, have devised a method for depositing graphene-based inks onto cotton to produce a conductive textile. The work, published in the journal Carbon, demonstrates a wearable motion sensor based on the conductive cotton.

Cotton fabric is among the most widespread for use in clothing and textiles, as it is breathable and comfortable to wear, as well as being durable to washing. These properties also make it an excellent choice for textile electronics. A new process, developed by Dr Felice Torrisi at the CGC and his collaborators, is a low-cost, sustainable and environmentally-friendly method for making conductive cotton textiles by impregnating them with a graphene-based conductive ink.

Based on Dr Torrisi’s work on the formulation of printable graphene inks for flexible electronics, the team created inks of chemically modified graphene flakes that are more adhesive to cotton fibres than unmodified graphene. Heat treatment after depositing the ink on the fabric improves the conductivity of the modified graphene.  The adhesion of the modified graphene to the cotton fibre is similar to the way cotton holds coloured dyes, and allows the fabric to remain conductive after several washes.

Although numerous researchers around the world have developed wearable sensors, most of the current wearable technologies rely on rigid electronic components mounted on flexible materials such as plastic films or textiles. These offer limited compatibility with the skin in many circumstances, are damaged when washed and are uncomfortable to wear because they are not breathable.

“Other conductive inks are made from precious metals such as silver, which makes them very expensive to produce and not sustainable, whereas graphene is cheap, environmentally-friendly, and chemically compatible with cotton,” explains Dr Torrisi.

Co-author Professor Chaoxia Wang of Jiangnan University adds: “This method will allow us to put electronic systems directly into clothes. It’s an incredible enabling technology for smart textiles.”

Electron microscopy image of a conductive graphene/cotton fabric. Credit: Jiesheng Ren

The work done by Dr Torrisi and Prof Wang, together with students Tian Carey and Jiesheng Ren, opens a number of commercial opportunities for graphene-based inks, ranging from personal health technology, high-performance sportswear, military garments, wearable technology/computing and fashion.

“Turning cotton fibres into functional electronic components can open to an entirely new set of applications from healthcare and wellbeing to the Internet of Things,” says Dr Torrisi. “Thanks to nanotechnology, in the future our clothes could incorporate these textile-based electronics and become interactive.”

Graphene is carbon in the form of single-atom-thick membranes, and is highly conductive. The group’s work is based on the dispersion of tiny graphene sheets, each less than one nanometre thick, in water. The individual graphene sheets in suspension are chemically modified to adhere well to the cotton fibres during printing and deposition on the fabric, leading to a thin and uniform conducting network of many graphene sheets. This network of nanometre-scale flakes is the secret to the high sensitivity to strain induced by motion. A simple graphene-coated smart cotton textile used as a wearable strain sensor has been shown to reliably detect up to 500 motion cycles, even after more than 10 washing cycles in a normal washing machine.

The use of inks based on graphene and related 2D materials (GRMs) to create electronic components and devices integrated into fabrics and innovative textiles is at the centre of new technical advances in the smart textiles industry. Dr Torrisi and colleagues at the CGC are also involved in the Graphene Flagship, an EC-funded, pan-European project dedicated to bringing graphene and GRM technologies to commercial applications.

Graphene and GRMs are changing the science and technology landscape with attractive physical properties for electronics, photonics, sensing, catalysis and energy storage. Graphene’s atomic thickness and excellent electrical and mechanical properties give excellent advantages, allowing deposition of extremely thin, flexible and conductive films on surfaces and – with this new method – also on textiles. This combined with the environmental compatibility of graphene and its strong adhesion to cotton make the graphene-cotton strain sensor ideal for wearable applications.

The research was supported by grants from the European Research Council’s Synergy Grant, the International Research Fellowship of the National Natural Science Foundation of China and the Ministry of Science and Technology of China. The technology is being commercialised by Cambridge Enterprise, the University’s commercialisation arm.

Reference
Ren, J. et al. Environmentally-friendly conductive cotton fabric as flexible strain sensor based on hot press reduced graphene oxide. Carbon; 19 Oct 2016; DOI: 10.1016/j.carbon.2016.10.045


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Connected Cambridge Co-Creation (C4) Special Interest Group Launches

Connected Cambridge Co-Creation (C4) Special Interest Group

This group has been inspired by the positive interaction between Connected Cambridge and the ARU Enterprise Society. At our Entrepreneurs on the Move meeting on November 30th we undertook to:

  • form a sub-group of Connected Cambridge which supports very early-stage enterprise projects like those from the ARU Enterprise Society;
  • offer (a modest amount of) advice, mentoring, encouragement, contacts;
  • engage with ARU Enterprise Society events and activities and welcome them into Connected Cambridge Events;
  • sustain and develop the relationship in the long term.

Come and join us!

For further information, contact Mark Layzell at mlayzell@hiteamgroup.com and join our dedicated LinkedIn group.

Opinion: Latest Brexit Legal Challenge Will Not Be ‘Back Door’ To Single Market

Opinion: Latest Brexit legal challenge will not be ‘back door’ to Single Market

source: www.cam.ac.uk

Failure to invoke Article 127 of the EEA Agreement will not keep the UK in the Single Market by the back door after Brexit. The UK is only a contracting party to that agreement for limited purposes, says a Cambridge professor of European Law.

It would be contrary to the purpose of the agreement for it to regulate relations between the UK and the EU27

Kenneth Armstrong

The think-tank British Influence is said to be contemplating a judicial review arguing that the UK remains a contracting party to the European Economic Area (EEA) agreement and so will retain membership of the Single Market even after Brexit.

British Influence suggests that the UK would only ‘leave’ the Single Market if it notified its intention to withdraw from the EEA agreement under Article 127 of that agreement.

“The UK’s obligations under the EEA agreement may not lapse when the UK leaves the EU. But the UK only has limited obligations arising under that agreement. For all aspects relating to customs and compliance with the Single Market rules, it is the EU, not the UK, that exercises rights and duties under the agreement,” says Kenneth Armstrong, Professor of European Law and the Director of the Centre for European Legal Studies at the University of Cambridge.

“Although the UK is a contracting party to the EEA agreement alongside the EU, it is only a party for those aspects of the agreement that fall within the legal powers of the UK. EU membership means that the legal powers of the UK are limited, especially in respect of customs and Single Market rules which have been taken over from the Member States and are exercised on their behalf by the EU.”

If the litigants were nonetheless successful in persuading a court that the UK was entitled to exercise the rights of a contracting party, Professor Armstrong suggests those rights may not be enforceable against the EU27 but only against the three European Free Trade Association (EFTA) states:

“The agreement is between the EU and the Member States on one side, and Norway, Iceland and Liechtenstein on the other side. This means that the UK was a contracting party as a Member State and only in relation to the three EFTA states. It would be contrary to the purpose of the agreement for it to regulate relations between the UK and the EU27. It is for the EU treaties alone to regulate that relationship subject to the supervision of the European Court of Justice.”

The EEA Agreement is an “association agreement” that comprehensively deals with a wide range of issues of cooperation between the EU and EFTA, says Armstrong. It is not limited to the Single Market.

“Because they have such a wide scope, association agreements must be signed not just by the EU as a legal entity but also by its Member States for those areas of the agreement where Member States retain their own sovereign powers. But as regards customs duties and the common rulebook of the Single Market, these powers have been transferred to the EU and are exercised collectively at EU level.

“It is not enough to say that the UK is a ‘contracting party’ and then draw the inference that this gives the UK continuing access to the Single Market. It is only a contracting party for certain purposes and within the legal limits of its powers at the time the agreement was reached. At that time, the UK was an EU Member State and the EU had taken over responsibilities for customs and the Single Market rulebook,” concludes Armstrong.

“I would be very surprised if this litigation changed the political course of Brexit.”

The EEA Agreement was signed by the EU, its Member States and three EFTA states (without Switzerland) on 17 March 1993, and ratified by the UK on 15 November 1993.

Article 127 of the EEA Agreement states:

Each Contracting Party may withdraw from this Agreement provided it gives at least twelve months’ notice in writing to the other Contracting Parties.

Immediately after the notification of the intended withdrawal, the other Contracting Parties shall convene a diplomatic conference in order to envisage the necessary modifications to bring to the Agreement. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

New Imaging Technique Measures Toxicity of Proteins Associated With Alzheimer’s and Parkinson’s Diseases

New imaging technique measures toxicity of proteins associated with Alzheimer’s and Parkinson’s diseases

source: www.cam.ac.uk

A new super-resolution imaging technique allows researchers to track how surface changes in proteins are related to neurodegenerative diseases such as Alzheimer’s and Parkinson’s diseases.

These proteins start out in a relatively harmless form, but when they clump together, something important changes.

Steven Lee

Researchers have developed a new imaging technique that makes it possible to study why proteins associated with Alzheimer’s and Parkinson’s diseases may go from harmless to toxic. The technique uses a technology called multi-dimensional super-resolution imaging that makes it possible to observe changes in the surfaces of individual protein molecules as they clump together. The tool may allow researchers to pinpoint how proteins misfold and eventually become toxic to nerve cells in the brain, which could aid in the development of treatments for these devastating diseases.

The researchers, from the University of Cambridge, have studied how a phenomenon called hydrophobicity (lack of affinity for water) in the proteins amyloid-beta and alpha-synuclein – which are associated with Alzheimer’s and Parkinson’s respectively – changes as the proteins stick together. It had been hypothesised that there was a link between the hydrophobicity and toxicity of these proteins, but this is the first time it has been possible to image hydrophobicity at such high resolution. Details are reported in the journal Nature Communications.

“These proteins start out in a relatively harmless form, but when they clump together, something important changes,” said Dr Steven Lee from Cambridge’s Department of Chemistry, the study’s senior author. “But using conventional imaging techniques, it hasn’t been possible to see what’s going on at the molecular level.”

In neurodegenerative diseases such as Alzheimer’s and Parkinson’s, naturally-occurring proteins fold into the wrong shape and clump together into filament-like structures known as amyloid fibrils, and into smaller, highly toxic clusters known as oligomers, which are thought to damage or kill neurons. However, the exact mechanism remains unknown.

For the past two decades, researchers have been attempting to develop treatments which stop the proliferation of these clusters in the brain, but before any such treatment can be developed, there first needs to be a precise understanding of how oligomers form and why.

“There’s something special about oligomers, and we want to know what it is,” said Lee. “We’ve developed new tools that will help us answer these questions.”

With conventional microscopy techniques, physics makes it impossible to zoom in past a certain point. Essentially, there is an innate blurriness to light: anything below a certain size will appear as a blurry blob when viewed through an optical microscope, simply because light waves spread when they are focused onto such a tiny spot. Amyloid fibrils and oligomers are smaller than this limit, so it is very difficult to visualise directly what is going on.
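
As a rough worked example of this limit (using standard textbook values rather than figures from the study), the Abbe criterion d = wavelength / (2 × NA) gives the smallest feature a conventional optical microscope can resolve:

    # Worked example of the Abbe diffraction limit, d = wavelength / (2 * NA).
    # Textbook values, not figures from the study.
    wavelength_nm = 500       # green light, a typical imaging wavelength (assumed)
    numerical_aperture = 1.4  # a high-end oil-immersion objective (assumed)

    d_nm = wavelength_nm / (2 * numerical_aperture)
    print(f"Abbe limit: about {d_nm:.0f} nm")  # roughly 180 nm

    # Oligomers are typically only a few to tens of nanometres across, far
    # below this limit, so they appear as featureless blurred spots.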

However, new super-resolution techniques, which offer resolution 10 to 20 times better than conventional optical microscopes, have allowed researchers to get around these limitations and view biological and chemical processes at the nanoscale.

Lee and his colleagues have taken super-resolution techniques one step further, and are now able to determine not only the location of a single molecule, but also its environmental properties, simultaneously.

Using their technique, known as sPAINT (spectrally-resolved points accumulation for imaging in nanoscale topography), the researchers used a dye molecule to map the hydrophobicity of amyloid fibrils and oligomers implicated in neurodegenerative diseases. The sPAINT technique is easy to implement, requiring only the addition of a single transmission diffraction grating to a super-resolution microscope. According to the researchers, the ability to map hydrophobicity at the nanoscale could be used to understand other biological processes in the future.
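
The post-processing idea can be sketched in a few lines of Python, under assumptions about the data: each single-molecule localisation is taken to carry a position plus the emission wavelength measured through the grating, and because the dye’s emission shifts with the polarity of its surroundings, the mean wavelength per pixel serves as a relative hydrophobicity map. The function, array names and bin size below are hypothetical, not taken from the authors’ software.

    import numpy as np

    # Hypothetical sketch of sPAINT-style post-processing: average the
    # emission wavelength of all localisations falling within each pixel.
    def hydrophobicity_map(x_nm, y_nm, wavelength_nm, pixel_nm=20.0):
        """Bin localisations onto a grid; the mean wavelength per pixel
        acts as a relative hydrophobicity signal (assumed data format)."""
        ix = (np.asarray(x_nm) / pixel_nm).astype(int)
        iy = (np.asarray(y_nm) / pixel_nm).astype(int)
        total = np.zeros((ix.max() + 1, iy.max() + 1))
        count = np.zeros_like(total)
        np.add.at(total, (ix, iy), wavelength_nm)
        np.add.at(count, (ix, iy), 1)
        with np.errstate(invalid="ignore"):
            return total / count  # NaN where no molecule was localised

    # Purely synthetic localisations, for illustration only:
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 200, 1000)    # positions in nm
    y = rng.uniform(0, 200, 1000)
    wl = rng.normal(620, 15, 1000)   # assumed emission wavelengths in nm
    print(hydrophobicity_map(x, y, wl).shape)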

The research was supported by the Medical Research Council, the Engineering and Physical Sciences Research Council, the Royal Society and the Augustus Newman Foundation.

Reference
Marie N. Bongiovanni et al. ‘Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping.’ Nature Communications (2016). DOI: 10.1038/NCOMMS13544 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.