
The Beauty of Engineering

source: www.cam.ac.uk

Crystal tigers, metal peacock feathers and a ‘nano man’ are just some of the striking images featured in the Department of Engineering’s annual photo competition, the winners of which have been announced today.

The competition, sponsored by ZEISS (Scanning electron microscopy division), international leaders in the fields of optics and optoelectronics, has been held annually for the last 13 years. See more of this year’s winners here.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

‘Mini Liver Tumours’ Created In a Dish For The First Time

source: www.cam.ac.uk

Scientists have created mini biological models of human primary liver cancers, known as organoids, in the lab for the first time. In a paper published in Nature Medicine, the tiny laboratory models of tumours were used to identify a new drug that could potentially treat certain types of liver cancer.

Primary liver cancer is the second most lethal cancer worldwide. To better understand the biology of the disease and develop potential treatments, researchers need models that can grow in the lab and accurately reflect how the tumours behave in patients. Previously, cultures of cells had been used but these are hard to maintain and fail to recreate the 3D structure and tissue architecture of human tumours.

The researchers created the mini tumours (up to 0.5mm) – termed ‘tumouroids’ – to mimic the three most common forms of primary liver cancer. The tumour cells were surgically removed from eight patients and grown in a solution containing specific nutrients and substances which prevent healthy cells from out-competing the tumour cells.

The team, from the Wellcome/Cancer Research UK Gurdon Institute in Cambridge, used the tumouroids to test the efficacy of 29 different drugs, including those currently used in treatment and drugs in development. One compound, a type of protein inhibitor, was found to inhibit the activation of a protein called ERK – a crucial step in the development of liver cancer – in two of the three types of tumouroids.

The researchers then tested this compound in vivo, transplanting two types of tumouroids into mice and treating them with the drug. A marked reduction in tumour growth was seen in mice treated with the drug, identifying a potential novel treatment for some types of primary liver cancer.

The tumouroids were able to preserve tissue structure as well as the gene expression patterns of the original human tumours from which they were derived. The individual subtypes of three different types of liver cancer, as well as the different tumour tissues which they came from, were all still distinguishable even after they had been grown in a dish for a long time. As the tumouroids retain the biological features of their parent tumour, they could play an important role in developing personalised medicine for patients.

The creation of biologically accurate models of tumours will also reduce the number of animals needed in certain experiments. Animal studies will still be required to validate findings, but the tumouroids will allow scientists to explore key questions about the biology of liver cancer in cultures rather than mice.

Lead researcher Dr Meritxell Huch, a Wellcome Sir Henry Dale Fellow from the Gurdon Institute, said: “We had previously created organoids from healthy liver tissue, but the creation of liver tumouroids is a big step forward for cancer research. They will allow us to understand much more about the biology of liver cancer and, with further work, could be used to test drugs for individual patients to create personalised treatment plans.”

Dr Andrew Chisholm, Head of Cellular and Developmental Sciences at Wellcome said: “This work shows the power of organoid cultures to model human cancers. It is impressive to see just how well the organoids are able to mimic the biology of different liver tumour types, giving researchers a new way of investigating this disease. These models are vital for the next generation of cancer research, and should allow scientists to minimise the numbers of animals used in research.”

Dr Vicky Robinson, Chief Executive of the NC3Rs which partially funded the work, said: “We are pleased to see that the funds from our annual 3Rs prize, sponsored by GlaxoSmithKline, have furthered Dr Huch’s research. Each year the prize recognises exceptional science which furthers the 3Rs, and the work being conducted by Meri and her team is continuing to make progress in this area. This new breakthrough involving liver cancer organoids has the potential to reduce the number of animals required in the early stages of liver cancer research, and provide more biologically accurate models of human tumours.”

This work was funded by a National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) research prize, Wellcome and Cancer Research UK Cambridge Centre.

Reference
Broutier, L et al. Human primary liver cancer-derived organoid cultures for disease modelling and drug screening. Nature Medicine; 13 Nov 2017; DOI: 10.1038/nm.4438

Press release from Wellcome.



Keyhole Surgery More Effective Than Open Surgery For Ruptured Aneurysm

The use of keyhole surgery to repair ruptured abdominal aortic aneurysm is both clinically and cost effective and should be adopted more widely, concludes a randomised trial published by The BMJ today.

More than 1000 people a year in the UK require emergency surgery to repair a ruptured abdominal aortic aneurysm. Without repair, ruptured aneurysm is nearly always fatal

Michael Sweeting

This is the first randomised trial comparing the use of keyhole (endovascular) aneurysm repair versus traditional open surgery to repair ruptured aneurysm, with full midterm follow-up.

Abdominal aortic aneurysm is a swelling of the aorta – the main blood vessel that leads away from the heart, down through the abdomen to the rest of the body. If the artery wall ruptures, the risk of death is high, and emergency surgery is needed.

Three recent European randomised trials showed that keyhole repair does not reduce the high death rate up to three months after surgery compared with open repair. However, mid-term outcomes (three months to three years) of keyhole repair are still uncertain.

An international research team set out to assess three-year clinical outcomes and cost effectiveness of a strategy of keyhole repair (whenever the shape of the aorta allows this) versus open repair for patients with suspected ruptured abdominal aortic aneurysm who were part of the IMPROVE trial.

Dr Michael Sweeting from the Department of Public Health and Primary Care at the University of Cambridge, who was involved in the trial, says: “More than 1000 people a year in the UK require emergency surgery to repair a ruptured abdominal aortic aneurysm. Without repair, ruptured aneurysm is nearly always fatal. However, surgery is not without its own significant risks, so we are always looking at ways of reducing the risk to the patient. One option is keyhole surgery, but until now not enough was known about how its outcomes compare to regular, open surgery beyond one year after repair.”

The trial involved 613 patients from 30 vascular centres (29 in the UK – including at Addenbrooke’s Hospital, Cambridge – and one in Canada) with a clinical diagnosis of ruptured aneurysm, of whom 316 were randomised to a strategy of keyhole repair and 297 to open repair.

Deaths were monitored for an average of 4.9 years and were similar in both groups three months after surgery. At three years, there were fewer deaths in the keyhole group than in the open repair group, leading to lower mortality (48% vs 56%). However, after seven years there was no clear difference between the groups.

The need for repeat surgery (‘reinterventions’) related to the aneurysm occurred at a similar rate in both groups, with about 28% of each group needing at least one reintervention after three years.

Average quality of life was higher in the keyhole group in the first year, but by three years was similar across the groups. This early higher average quality of life, coupled with the lower mortality at three years, led to a gain in average quality-adjusted life years or QALYs (a measure of healthy years lived) at three years in the keyhole versus the open repair group.
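The QALY arithmetic behind that comparison can be sketched as follows; the quality weights and durations below are hypothetical, chosen only to illustrate the calculation, and are not taken from the trial:

```python
# A QALY weights each period of survival by a quality-of-life score
# (1.0 = a year in full health, 0.0 = death).
def qalys(periods):
    """Sum duration * quality weight over the periods lived."""
    return sum(years * weight for years, weight in periods)

# Hypothetical three-year profiles: higher quality of life in year one
# for the keyhole group, similar weights thereafter.
keyhole = qalys([(1, 0.80), (1, 0.75), (1, 0.75)])      # 2.30 QALYs
open_repair = qalys([(1, 0.70), (1, 0.75), (1, 0.75)])  # 2.20 QALYs
```

On this toy profile the keyhole strategy gains 0.10 QALYs per patient over three years; the trial derived its reported gain from the measured quality-of-life and survival data.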

The keyhole group also spent fewer days in hospital (14.4 versus 20.5 in the open repair group) and had lower overall costs (£16,900 versus £19,500 in the open repair group).

The researchers point to some study limitations, such as sample size and midterm data focusing on aneurysm-related events, which may have led to some bias. Nevertheless, they say compared with open repair, there are clear benefits associated with keyhole surgery.

“These findings show that, in the first three years after repair, keyhole surgery can improve outcomes and quality of life for patients compared to open surgery, and is more cost effective and requires less time in hospital – important factors to consider for our stretched health services,” adds Dr Sweeting.

Reference
Comparative clinical effectiveness and cost effectiveness of endovascular strategy v open repair for ruptured abdominal aortic aneurysm: three year results of the IMPROVE randomised trial. BMJ; 15 Nov 2017

Adapted from a press release from The BMJ.



Children With Disabilities Are Being Denied Equal Opportunities For a Quality Education Across The World, Including In The UK

source: www.cam.ac.uk

Researchers from the Faculty of Education have produced a new report on the current state of education for children with disabilities in both England and India. Here, Dr Nidhi Singal, one of the report’s authors, outlines some of the key statistics, and argues that teachers need better training and more support “underpinned by principles of inclusion”.

We need to invest in inclusive teaching and learning processes and not just changes to school infrastructure

Nidhi Singal

Countries with both developed and developing economies need to do more to ensure that children with disabilities not only access education, but also benefit from quality education.

In England, while children with special educational needs and disabilities (SEND) access school, multiple concerns have been raised in relation to their learning and quality of life in school. The educational attainments of these children are significantly lower than for those without SEND at every level of the national curriculum.

In 2017 the Department for Education reported that, at Key Stage 2 level, only 14% of children with SEND reached the expected level for reading, writing and maths (in contrast to 62% of children without SEND).

Socially, there has been an increase in incidents of bullying and hate crime against children with SEND, and the National Society for the Prevention of Cruelty to Children highlights that they are significantly more likely to face abuse. Official statistics note that children with social, emotional and mental health needs are nine times more likely to face permanent exclusion from school.

The World Health Organisation, in collaboration with the World Bank, recently emphasised that 15% of the world’s population, approximately one billion people, live with some form of disability. Estimates for the number of children under the age of 14 living with disabilities range between 93m and 150m.

Across the world, people with disabilities have poorer health outcomes, lower educational achievements, less economic participation and higher rates of poverty than people without disabilities. This is partly because people with disabilities experience significant barriers in accessing basic services, including health, education and employment.

Amongst these, education is paramount as it has significant economic, social and individual returns. Education has the potential to lift people out of chronic poverty. Accessing quality education can improve learning outcomes which leads to positive economic growth. The Global Monitoring Report calculates that if all students living in low income countries were to leave school with basic reading skills there would be a 12% reduction in world poverty.

Additionally, education has the potential to create more equitable and healthy societies. For example, evidence shows educating mothers reduces early births, lowers infant mortality rates and improves child nutrition.

Furthermore, inclusive education is integral to creating societies that are interconnected, based on values of social justice, equity of opportunities and freedom. The Sustainable Development Goals have given a considerable boost to this vision of “inclusive and equitable quality education” with significant international proclamations and national legislations being drawn up. Nevertheless, children with disabilities continue to remain the most difficult to reach.

Including children with disabilities in education systems, and ensuring quality education, is a moral and ethical commitment with considerable benefits at both the individual and national level. The International Labour Organisation estimates that the exclusion of persons with disabilities from the work force costs nations up to 7% of national GDP. Other estimates, from China, suggest that every additional year of schooling in rural areas means a 5-8% wage increase for a person with disabilities.

While there is a long way to go, there is little question that educational access is on an upward trajectory in many low and middle income countries. According to official data from India, over the last five years there has been an approximately 16% increase in the number of children with disabilities enrolled in mainstream primary schools.

Nonetheless, even in states with high enrolment rates, the children most likely to be excluded are those with disabilities. They are also most likely to drop out before completing five years of primary schooling and are least likely to transition to secondary school or higher education.

Across the globe, learning for children with disabilities remains a significant challenge. In order to address this, we need to invest in inclusive teaching and learning processes and not just changes to school infrastructure. Teachers need better training and support underpinned by principles of inclusion. Significantly, children with disabilities must be respected as important partners in creating better schools for all.

The report has been produced for the World Innovation Summit for Education and will be presented this week at the summit in Doha.



Archaeologists Uncover Rare 2,000-Year-Old Sundial During Roman Theatre Excavation

source: www.cam.ac.uk

A 2,000-year-old intact and inscribed sundial – one of only a handful known to have survived – has been recovered during the excavation of a roofed theatre in the Roman town of Interamna Lirenas, near Monte Cassino, in Italy.

“Not only have we been able to identify the individual who commissioned the sundial, we have also been able to determine the specific public office he held in relation to the likely date of the inscription”

Alessandro Launaro

Not only has the sundial survived largely undamaged for more than two millennia, but the presence of two Latin texts means researchers from the University of Cambridge have been able to glean precise information about the man who commissioned it.

The sundial was found lying face down by students of the Faculty of Classics as they were excavating the front of one of the theatre’s entrances along a secondary street. It was probably left behind at a time when the theatre and town were being scavenged for building materials during the Medieval to post-Medieval period. In all likelihood it did not belong to the theatre, but was removed from a prominent spot, possibly on top of a pillar in the nearby forum.

“Less than a hundred examples of this specific type of sundial have survived and of those, only a handful bear any kind of inscription at all – so this really is a special find,” said Dr Alessandro Launaro, a lecturer at the Faculty of Classics at Cambridge and a Fellow of Gonville & Caius College.

“Not only have we been able to identify the individual who commissioned the sundial, we have also been able to determine the specific public office he held in relation to the likely date of the inscription.”

The base prominently features the name of M(arcus) NOVIUS M(arci) F(ilius) TUBULA [Marcus Novius Tubula, son of Marcus], whilst the engraving on the curved rim of the dial surface records that he held the office of TR(ibunus) PL(ebis) [Plebeian Tribune] and paid for the sundial D(e) S(ua) PEC(unia) [with his own money].

The nomen Novius was quite common in Central Italy. On the other hand, the cognomen Tubula (literally ‘small trumpet’) is only attested at Interamna Lirenas.

But even more striking is the specific public office Tubula held in relation to the likely date of the inscription. Various considerations about the name of the individual and the lettering style comfortably place the sundial’s inscription at a time (mid 1st c. BC onwards) by which the inhabitants of Interamna had already been granted full Roman citizenship.

“That being the case, Marcus Novius Tubula, hailing from Interamna Lirenas, would be a hitherto unknown Plebeian Tribune of Rome,” added Launaro. “The sundial would have represented his way of celebrating his election in his own hometown.”

Carved out from a limestone block (54 x 35 x 25 cm), the sundial features a concave face, engraved with 11 hour lines (demarcating the twelve horae of daylight) intersecting three day curves (giving an indication of the season with respect to the time of the winter solstice, equinox and summer solstice). Although the iron gnomon (the needle casting the shadow) is essentially lost, part of it is still preserved under the surviving lead fixing. This type of ‘spherical’ sundial was relatively common in the Roman period and was known as hemicyclium.

“Even though the recent archaeological fieldwork has profoundly affected our understanding of Interamna Lirenas, dispelling long-held views about its precocious decline and considerable marginality, this was not a town of remarkable prestige or notable influence,” added Launaro. “It remained an average, middle-sized settlement, and this is exactly what makes it a potentially very informative case-study about conditions in the majority of Roman cities in Italy at the time”.

“In this sense, the discovery of the inscribed sundial not only casts new light on the place Interamna Lirenas occupied within a broader network of political relationships across Roman Italy, but it is also a more general indicator of the level of involvement in Rome’s own affairs that individuals hailing from this and other relatively secondary communities could aspire to.”

The ongoing archaeological project at Interamna Lirenas continues to add new evidence about important aspects of the Roman civilization, stressing the high levels of connectivity and integration (political, social, economic and cultural) which it featured.

The 2017 excavation, directed by Dr Launaro (Gonville & Caius College) and Professor Martin Millett (Fitzwilliam College), both from the Faculty of Classics, in partnership with Dr Giovanna Rita Bellini of the Italian Soprintendenza Archeologia, Belle Arti e Paesaggio per le Province di Frosinone, Latina e Rieti, is part of a long-standing collaboration with the British School at Rome and the Comune of Pignataro Interamna and has benefitted from the generous support of the Isaac Newton Trust and Mr Antonio Silvestro Evangelista.

Inset image: The find spot near the former roofed theatre in Interamna Lirenas



Sheep Are Able To Recognise Human Faces From Photographs

source: www.cam.ac.uk

Sheep can be trained to recognise human faces from photographic portraits – and can even identify the picture of their handler without prior training – according to new research from scientists at the University of Cambridge.

We’ve shown that sheep have advanced face-recognition abilities, comparable with those of humans and monkeys

Jenny Morton

The study, published today in the journal Royal Society Open Science, is part of a series of tests given to the sheep to monitor their cognitive abilities. Because of the relatively large size of their brains and their longevity, sheep are a good animal model for studying neurodegenerative disorders such as Huntington’s disease.

The ability to recognise faces is one of the most important human social skills. We recognise familiar faces easily, and can identify unfamiliar faces from repeatedly presented images. As with some other animals such as dogs and monkeys, sheep are social animals that can recognise other sheep as well as familiar humans. Little is known, however, about their overall ability to process faces.

Researchers from Cambridge’s Department of Physiology, Development and Neuroscience trained eight sheep to recognise the faces of four celebrities (Fiona Bruce, Jake Gyllenhaal, Barack Obama and Emma Watson) from photographic portraits displayed on computer screens.

Training involved the sheep making decisions as they moved around a specially-designed pen. At one end of the pen, they would see two photographs displayed on two computer screens and would receive a reward of food for choosing the photograph of the celebrity (by breaking an infrared beam near the screen); if they chose the wrong photograph, a buzzer would sound and they would receive no reward. Over time, they learned to associate a reward with the celebrity’s photograph.

After training, the sheep were shown two photographs – the celebrity’s face and another face. In this test, sheep correctly chose the learned celebrity face eight times out of ten.

In these initial tests, the sheep were shown the faces from the front, but to test how well they recognised the faces, the researchers next showed them the faces at an angle. As expected, the sheep’s performance dropped, but only by about 15% – a figure comparable to that seen when humans perform the task.

Finally, the researchers looked at whether sheep were able to recognise a handler from a photograph without pre-training. The handlers typically spend two hours a day with the sheep and so the sheep are very familiar with them. When a portrait photograph of the handler was interspersed randomly in place of the celebrity, the sheep chose the handler’s photograph over the unfamiliar face seven out of ten times.

During this final task the researchers observed an interesting behaviour. Upon seeing a photographic image of the handler for the first time – in other words, the sheep had never seen an image of this person before – the sheep did a ‘double take’. The sheep first checked the (unfamiliar) face, then the handler’s image, and then the unfamiliar face again before choosing the familiar face of the handler.

“Anyone who has spent time working with sheep will know that they are intelligent, individual animals who are able to recognise their handlers,” says Professor Jenny Morton, who led the study. “We’ve shown with our study that sheep have advanced face-recognition abilities, comparable with those of humans and monkeys.

“Sheep are long-lived and have brains that are similar in size and complexity to those of some monkeys. That means they can be useful models to help us understand disorders of the brain, such as Huntington’s disease, that develop over a long time and affect cognitive abilities. Our study gives us another way to monitor how these abilities change, particularly in sheep who carry the gene mutation that causes Huntington’s disease.”

Professor Morton’s team recently began studying sheep that have been genetically modified to carry the mutation that causes Huntington’s disease.

Huntington’s disease affects more than 6,700 people in the UK. It is an incurable neurodegenerative disease that typically begins in adulthood. Initially, the disease affects motor coordination, mood, personality and memory, as well as other complex symptoms including impairments in recognising facial emotion. Eventually, patients have difficulty in speech and swallowing, loss of motor function and die at a relatively early age. There is no known cure for the disease, only ways to manage the symptoms.

The research was supported by the CHDI Foundation, Inc., a US-based charitable trust that supports biomedical research related to Huntington’s disease.

Reference
Knolle, F et al. Sheep recognize familiar and unfamiliar human faces from two-dimensional images. Royal Society Open Science; 8 Nov 2017; DOI: 10.1098/rsos.171228



Fully Integrated Circuits Printed Directly Onto Fabric

source: www.cam.ac.uk

Researchers have successfully incorporated washable, stretchable and breathable electronic circuits into fabric, opening up new possibilities for smart textiles and wearable electronics. The circuits were made with cheap, safe and environmentally friendly inks, and printed using conventional inkjet printing techniques.

Turning textile fibres into functional electronic components can open to an entirely new set of applications from healthcare and wellbeing to the Internet of Things.

Felice Torrisi

The researchers, from the University of Cambridge, working with colleagues in Italy and China, have demonstrated how graphene – a two-dimensional form of carbon – can be directly printed onto fabric to produce integrated electronic circuits which are comfortable to wear and can survive up to 20 cycles in a typical washing machine.

The new textile electronic devices are based on low-cost, sustainable and scalable inkjet printing of inks based on graphene and other two-dimensional materials, and are produced by standard processing techniques. The results are published in the journal Nature Communications.

Based on earlier work on the formulation of graphene inks for printed electronics, the team designed low-boiling point inks, which were directly printed onto polyester fabric. Additionally, they found that modifying the roughness of the fabric improved the performance of the printed devices. The versatility of this process allowed the researchers to design not only single transistors but all-printed integrated electronic circuits combining active and passive components.

Most wearable electronic devices that are currently available rely on rigid electronic components mounted on plastic, rubber or textiles. These offer limited compatibility with the skin in many circumstances, are damaged when washed and are uncomfortable to wear because they are not breathable.

“Other inks for printed electronics normally require toxic solvents and are not suitable to be worn, whereas our inks are both cheap, safe and environmentally-friendly, and can be combined to create electronic circuits by simply printing different two-dimensional materials on the fabric,” said Dr Felice Torrisi of the Cambridge Graphene Centre, the paper’s senior author.

“Digital textile printing has been around for decades to print simple colourants on textiles, but our result demonstrates for the first time that such technology can also be used to print the entire electronic integrated circuits on textiles,” said co-author Professor Roman Sordan of Politecnico di Milano. “Although we demonstrated very simple integrated circuits, our process is scalable and there are no fundamental obstacles to the technological development of wearable electronic devices both in terms of their complexity and performance.”

“The printed components are flexible, washable and require low power, essential requirements for applications in wearable electronics,” said PhD student Tian Carey, the paper’s first author.

The work opens up a number of commercial opportunities for two-dimensional material inks, ranging from personal health and well-being technology, to wearable energy harvesting and storage, military garments, wearable computing and fashion.

“Turning textile fibres into functional electronic components can open to an entirely new set of applications from healthcare and wellbeing to the Internet of Things,” said Torrisi. “Thanks to nanotechnology, in the future our clothes could incorporate these textile-based electronics, such as displays or sensors and become interactive.”

The use of graphene and other related 2D material (GRM) inks to create electronic components and devices integrated into fabrics and innovative textiles is at the centre of new technical advances in the smart textiles industry. The teams at the Cambridge Graphene Centre and Politecnico di Milano are also involved in the Graphene Flagship, an EC-funded, pan-European project dedicated to bringing graphene and GRM technologies to commercial applications.

The research was supported by grants from the Graphene Flagship, the European Research Council’s Synergy Grant, The Engineering and Physical Science Research Council, The Newton Trust, the International Research Fellowship of the National Natural Science Foundation of China and the Ministry of Science and Technology of China. The technology is being commercialised by Cambridge Enterprise, the University’s commercialisation arm.

Reference:
Tian Carey et al. ‘Fully inkjet-printed two-dimensional material field-effect heterojunctions for wearable and textile electronics.’ Nature Communications (2017). DOI: 10.1038/s41467-017-01210-2



Cambridge BID Renewal Vote: Businesses Back Vision For City

Cambridge BID’s Ian Sandison
source: http://www.cambridge-news.co.uk/business/business-news/cambridge-bid-renewal-vote-businesses-13856256

Businesses have backed Cambridge BID’s vision for a “world class” city experience by handing the organisation a new five-year mandate.

The business improvement district (BID) organisation, which was first formed in April 2013, will now run until at least March 2023 following a renewal ballot, which saw 80 per cent of voters from the retail, leisure, business and education sectors in the area back the BID.

Cambridge BID is funded by businesses themselves, with some 1,100 firms coming together to pay a proportional levy, thereby creating a £4.5m pot to be invested in Cambridge over the coming years.

In total, 45 per cent of businesses voted in the renewal ballot – a 35 per cent increase in turnout compared with the term-one ballot.

Cambridge BID events include the popular night markets and open air cinema

Ian Sandison, Chairman of Cambridge BID, said: “This is excellent news for Cambridge as it means that £4.5million of investment will be generated for the city over the next five years. We are very pleased with the outcome and look forward to delivering our existing projects as well as the services and initiatives outlined in the exciting new business plan. Our vision is to create a world-class experience for all who visit, live and work in Cambridge, a global city.”

“I would like to take this opportunity to thank the business community for showing faith in the work of Cambridge BID, as well as our partners such as Visit Cambridge & Beyond, Cambridge City Council and others with whom we have worked closely and will continue to do so.”

The vote took place last month, and as part of its renewed mandate Cambridge BID has set out a new business plan, which includes extending the BID area to incorporate the CB1 area and Station Square. It hopes this will help connect what it sees as a “key gateway of the city” to its historic centre, while also enabling projects that will attract new talent and new businesses to Cambridge.

Three new key work streams – welcome, experience and support – are identified in the plan, while existing initiatives, including the annual Mystery Shop, the business cost-saving scheme, the seven-day-a-week rapid response cleaning service, the provision of Christmas lights, the payment of CAMBAC membership (for BID levy payers who join the CAMBAC radio scheme), the monthly Cambridge Performance reports and continued support of Street Aid, the taxi marshals and the Street Pastors are all set to continue.

Cllr Lewis Herbert, leader of Cambridge City Council, added: “This result secures the future of a number of vital projects for city businesses and Cambridge, and will generate major additional investment and community benefits for our great city.

“The future Cambridge BID plans have been overwhelmingly endorsed by local businesses, which is just reward for what the BID partnership has already achieved and will now achieve.

“We look forward to continuing our joint work with the BID on exciting new projects over the next five years, creating a city centre everyone enjoys each time they are here and keeps them coming back for more.”

Périgord Black Truffle Cultivated In The UK For The First Time

Périgord black truffle cultivated in the UK for the first time

source: www.cam.ac.uk

The Mediterranean black truffle, one of the world’s most expensive ingredients, has been successfully cultivated in the UK, as climate change threatens its native habitat.

Even though humans have been eating truffles for centuries, we know remarkably little about how they grow and how they interact with their host trees.

Ulf Büntgen

Researchers from the University of Cambridge and Mycorrhizal Systems Ltd (MSL) have confirmed that a black truffle has been successfully cultivated in the UK for the first time: the farthest north that the species has ever been found. It was grown as part of a programme in Monmouthshire, South Wales, run by MSL in collaboration with local farmers. The results of the programme, reported in the journal Climate Research, suggest that truffle cultivation may be possible in many parts of the UK.

After nine years of waiting, the truffle was harvested in March 2017 by a trained dog named Bella. The aromatic fungus was growing within the root system of a Mediterranean oak tree that had been treated to encourage truffle production. Further microscopic and genetic analysis confirmed that Bella’s find was indeed a Périgord black truffle (Tuber melanosporum).

The black truffle is one of the most expensive delicacies in the world, worth as much as £1,700 per kilogram. Black truffles are prized for their intense flavour and aroma, but they are difficult and time-consuming to grow and harvest, and are normally confined to regions with a Mediterranean climate. In addition, their Mediterranean habitat has been affected by drought due to long-term climate change, and yields are falling while the global demand continues to rise. The truffle industry is projected to be worth £4.5 billion annually in the next 10-20 years.

Black truffles grow below ground in a symbiotic relationship with the root system of trees in soils with high limestone content. They are found mostly in northern Spain, southern France and northern Italy, where they are sniffed out by trained dogs or pigs. While they can form naturally, many truffles are cultivated by inoculating oak or hazelnut seedlings with spores and planting them in chalky soils. Even through cultivation, there is no guarantee that truffles will grow.

“It’s a risky investment for farmers – even though humans have been eating truffles for centuries, we know remarkably little about how they grow and how they interact with their host trees,” said paper co-author Professor Ulf Büntgen of Cambridge’s Department of Geography. “Since the system is underground, we can’t see how truffles are affected by different environmental conditions, or even when the best time to water them is. There’s been no science behind it until now, so progress is slow.”

In partnership with local farmers, Büntgen’s co-author Dr Paul Thomas from MSL and the University of Stirling has been cultivating truffles in the UK for the past decade. In 2015, MSL successfully cultivated a UK native Burgundy truffle, but this is the first time the more valuable black Périgord truffle has been cultivated in such a northern and maritime climate. Its host tree is a Mediterranean oak that was planted in 2008. Before planting, the tree was inoculated with truffle spores, and the surrounding soil was made less acidic by treating it with lime.

“This is one of the best-flavoured truffle species in the world and the potential for industry is huge,” said Thomas. “We planted the trees just to monitor their survival, but we never thought this Mediterranean species could actually grow in the UK – it’s an incredibly exciting development.”

The researchers have attributed the fact that black truffles are able to grow so far outside their native Mediterranean habitat to climate change. “Different species respond to climate change on different scales and at different rates, and you often get an ecological mismatch,” said Büntgen. “For instance, insects can move quickly, while the vegetation they depend on may not. It’s possible that truffles are one of these fast-shifting species.”

“This cultivation has shown that the climatic tolerance of truffles is much broader than previously thought, but it’s likely that it’s only possible because of climate change, and some areas of the UK – including the area around Cambridge – are now suitable for the cultivation of this species,” said Thomas. “While truffles are a very valuable crop, together with their host trees, they are also a beneficial component for conservation and biodiversity.”

The first harvested truffle, which weighed 16 grams, has been preserved for posterity, but in future, the truffles will be distributed to restaurants in the UK.

Reference: 
Paul Thomas and Ulf Büntgen. ‘New UK truffle find as a result of climate change.’ Climate Research (2017). DOI: 10.3354/cr01494. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Scientists Identify Mechanism That Helps Us Inhibit Unwanted Thoughts

Scientists identify mechanism that helps us inhibit unwanted thoughts

source: www.cam.ac.uk

Scientists have identified a key chemical within the ‘memory’ region of the brain that allows us to suppress unwanted thoughts, helping explain why people who suffer from disorders such as anxiety, post-traumatic stress disorder (PTSD), depression, and schizophrenia often experience persistent intrusive thoughts when these circuits go awry.

Our ability to control our thoughts is fundamental to our wellbeing. When this capacity breaks down, it causes some of the most debilitating symptoms of psychiatric diseases

Michael Anderson

We are sometimes confronted with reminders of unwanted thoughts – thoughts about unpleasant memories, images or worries. When this happens, the thought may be retrieved, making us think about it again even though we would prefer not to. While being reminded in this way may not be a problem when our thoughts are positive, if the topic is unpleasant or traumatic, our thoughts may turn very negative – worrying or ruminating about what happened and taking us back to the event.

“Our ability to control our thoughts is fundamental to our wellbeing,” explains Professor Michael Anderson from the Medical Research Council Cognition and Brain Sciences Unit, which recently transferred to the University of Cambridge. “When this capacity breaks down, it causes some of the most debilitating symptoms of psychiatric diseases: intrusive memories, images, hallucinations, ruminations, and pathological and persistent worries. These are all key symptoms of mental illnesses such as PTSD, schizophrenia, depression, and anxiety.”

Professor Anderson likens our ability to intervene and stop ourselves retrieving particular memories and thoughts to stopping a physical action. “We wouldn’t be able to survive without controlling our actions,” he says. “We have lots of quick reflexes that are often useful, but we sometimes need to control these actions and stop them from happening. There must be a similar mechanism for helping us stop unwanted thoughts from occurring.”

A region at the front of the brain known as the prefrontal cortex is known to play a key role in controlling our actions and has more recently been shown to play a similarly important role in stopping our thoughts. The prefrontal cortex acts as a master regulator, controlling other brain regions – the motor cortex for actions and the hippocampus for memories.

In research published today in the journal Nature Communications, a team of scientists led by Dr Taylor Schmitz and Professor Anderson used a task known as the ‘Think/No-Think’ procedure to identify a significant new brain process that enables the prefrontal cortex to successfully inhibit our thoughts.

In the task, participants learn to associate a series of words with a paired, but otherwise unconnected, word, for example ordeal/roach and moss/north. In the next stage, participants are asked to recall the associated word if the cue is green or to suppress it if the cue is red; in other words, when shown ‘ordeal’ in red, they are asked to stare at the word but to stop themselves thinking about the associated thought ‘roach’.
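Schematically, the trial logic described above can be sketched in a few lines. The word pairs and the green/red cue colours come from the article; the function name and data layout are purely illustrative, not taken from the study itself.

```python
# Illustrative sketch of the Think/No-Think trial logic (not the study's code).

pairs = {"ordeal": "roach", "moss": "north"}  # cue word -> associated word

def run_trial(cue_word, cue_colour):
    """Return the instruction given to the participant for one trial."""
    if cue_colour == "green":
        # 'Think' trial: retrieve and silently recall the paired word
        return f"recall '{pairs[cue_word]}'"
    elif cue_colour == "red":
        # 'No-Think' trial: look at the cue but suppress the paired word
        return "suppress retrieval"
    raise ValueError("cue colour must be 'green' or 'red'")

print(run_trial("ordeal", "red"))   # suppress retrieval
print(run_trial("moss", "green"))   # recall 'north'
```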

Using a combination of functional magnetic resonance imaging (fMRI) and magnetic resonance spectroscopy, the researchers were able to observe what was happening within key regions of the brain as the participants tried to inhibit their thoughts. Spectroscopy enabled the researchers to measure brain chemistry, and not just brain activity, as is usually done in imaging studies.

Professor Anderson, Dr Schmitz and colleagues showed that the ability to inhibit unwanted thoughts relies on a neurotransmitter – a chemical within the brain that allows messages to pass between nerve cells – known as GABA. GABA is the main ‘inhibitory’ neurotransmitter in the brain, and its release by one nerve cell can suppress activity in other cells to which it is connected. Anderson and colleagues discovered that GABA concentrations within the hippocampus – a key area of the brain involved in memory – predict people’s ability to block the retrieval process and prevent thoughts and memories from returning.

“What’s exciting about this is that now we’re getting very specific,” he explains. “Before, we could only say ‘this part of the brain acts on that part’, but now we can say which neurotransmitters are likely important – and as a result, infer the role of inhibitory neurons – in enabling us to stop unwanted thoughts.”

“Where previous research has focused on the prefrontal cortex – the command centre – we’ve shown that this is an incomplete picture. Inhibiting unwanted thoughts is as much about the cells within the hippocampus – the ‘boots on the ground’ that receive commands from the prefrontal cortex. If an army’s foot-soldiers are poorly equipped, then its commanders’ orders cannot be implemented well.”

The researchers found that, even within the sample of healthy young adults, people with less hippocampal GABA (less effective ‘foot-soldiers’) showed weaker suppression of hippocampal activity by the prefrontal cortex – and, as a result, were much worse at inhibiting unwanted thoughts.

The discovery may answer one of the long-standing questions about schizophrenia. Research has shown that people affected by schizophrenia have ‘hyperactive’ hippocampi, which correlates with intrusive symptoms such as hallucinations. Post-mortem studies have revealed that the inhibitory neurons (which use GABA) in the hippocampi of these individuals are compromised, possibly making it harder for the prefrontal cortex to regulate activity in this structure. This suggests that the hippocampus is failing to inhibit errant thoughts and memories, which may be manifest as hallucinations.

According to Dr Schmitz: “The environmental and genetic influences that give rise to hyperactivity in the hippocampus might underlie a range of disorders with intrusive thoughts as a common symptom.”

In fact, studies have shown that elevated activity in the hippocampus is seen in a broad range of conditions such as PTSD, anxiety and chronic depression, all of which include a pathological inability to control thoughts – such as excessive worrying or rumination.

While the study does not examine any immediate treatments, Professor Anderson believes it could offer a new approach to tackling intrusive thoughts in these disorders. “Most of the focus has been on improving functioning of the prefrontal cortex,” he says, “but our study suggests that if you could improve GABA activity within the hippocampus, this may help people to stop unwanted and intrusive thoughts.”

The research was funded by the Medical Research Council.

Reference
Schmitz, TW et al. Hippocampal GABA enables inhibitory control over unwanted thoughts. Nature Communications; 3 Nov 2017; DOI: 10.1038/s41467-017-00956-z


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Oldest Recorded Solar Eclipse Helps Date The Egyptian Pharaohs

Oldest recorded solar eclipse helps date the Egyptian pharaohs

source: www.cam.ac.uk

Researchers have pinpointed the date of what could be the oldest solar eclipse yet recorded. The event, which occurred on 30 October 1207 BC, is mentioned in the Bible and could have consequences for the chronology of the ancient world.

If these words are describing a real observation, then a major astronomical event was taking place – the question for us to figure out is what the text actually means.

Colin Humphreys

Using a combination of the biblical text and an ancient Egyptian text, the researchers were then able to refine the dates of the Egyptian pharaohs, in particular the dates of the reign of Ramesses the Great. The results are published in the Royal Astronomical Society journal Astronomy & Geophysics.

The biblical text in question comes from the Old Testament book of Joshua and has puzzled biblical scholars for centuries. It records that after Joshua led the people of Israel into Canaan – a region of the ancient Near East that covered modern-day Israel and Palestine – he prayed: “Sun, stand still at Gibeon, and Moon, in the Valley of Aijalon. And the Sun stood still, and the Moon stopped, until the nation took vengeance on their enemies.”

“If these words are describing a real observation, then a major astronomical event was taking place – the question for us to figure out is what the text actually means,” said paper co-author Professor Sir Colin Humphreys from the University of Cambridge’s Department of Materials Science & Metallurgy, who is also interested in relating scientific knowledge to the Bible.

“Modern English translations, which follow the King James translation of 1611, usually interpret this text to mean that the sun and moon stopped moving,” said Humphreys, who is also a Fellow of Selwyn College. “But going back to the original Hebrew text, we determined that an alternative meaning could be that the sun and moon just stopped doing what they normally do: they stopped shining. In this context, the Hebrew words could be referring to a solar eclipse, when the moon passes between the earth and the sun, and the sun appears to stop shining. This interpretation is supported by the fact that the Hebrew word translated ‘stand still’ has the same root as a Babylonian word used in ancient astronomical texts to describe eclipses.”

Humphreys and his co-author, Graeme Waddington, are not the first to suggest that the biblical text may refer to an eclipse; however, earlier historians claimed that it was not possible to investigate this possibility further because of the laborious calculations that would have been required.

Independent evidence that the Israelites were in Canaan between 1500 and 1050 BC can be found in the Merneptah Stele, an Egyptian text dating from the reign of the Pharaoh Merneptah, son of the well-known Ramesses the Great. The large granite block, held in the Egyptian Museum in Cairo, says that it was carved in the fifth year of Merneptah’s reign and mentions a campaign in Canaan in which he defeated the people of Israel.

Earlier historians have used these two texts to try to date the possible eclipse, but were not successful as they were only looking at total eclipses, in which the disc of the sun appears to be completely covered by the moon as the moon passes directly between the earth and the sun. What the earlier historians failed to consider was that it was instead an annular eclipse, in which the moon passes directly in front of the sun, but is too far away to cover the disc completely, leading to the characteristic ‘ring of fire’ appearance. In the ancient world, the same word was used for both total and annular eclipses.

The researchers developed a new eclipse code, which takes into account variations in the Earth’s rotation over time. From their calculations, they determined that the only annular eclipse visible from Canaan between 1500 and 1050 BC was on 30 October 1207 BC, in the afternoon. If their arguments are accepted, it would not only be the oldest solar eclipse yet recorded, it would also enable researchers to date the reigns of Ramesses the Great and his son Merneptah to within a year.

“Solar eclipses are often used as a fixed point to date events in the ancient world,” said Humphreys. Using these new calculations, the researchers determined that the reign of Merneptah began in 1210 or 1209 BC. As Egyptian texts record how long he and his father reigned, this would mean that Ramesses the Great reigned from 1276 to 1210 BC, with a precision of plus or minus one year – the most accurate dates available. The precise dates of the pharaohs have long been subject to uncertainty among Egyptologists, but this new calculation, if accepted, could lead to an adjustment in the dates of several of their reigns and enable us to date them precisely.
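The chronology arithmetic above can be checked in a few lines. The eclipse date and Merneptah’s accession dates come from the article; the 66-year reign length for Ramesses the Great is the conventional Egyptological figure, assumed here for illustration rather than stated in the article.

```python
# Back-of-the-envelope check of the reign dates quoted above.
# BC years are handled as positive numbers that count down towards year 0.

ECLIPSE_YEAR_BC = 1207                  # annular eclipse visible from Canaan

MERNEPTAH_ACCESSIONS_BC = (1210, 1209)  # from the article
RAMESSES_REIGN_YEARS = 66               # assumed conventional reign length

results = []
for merneptah_accession_bc in MERNEPTAH_ACCESSIONS_BC:
    # Ramesses the Great's accession precedes his son's by his reign length.
    ramesses_accession_bc = merneptah_accession_bc + RAMESSES_REIGN_YEARS
    results.append((ramesses_accession_bc, merneptah_accession_bc))
    print(f"Ramesses the Great c. {ramesses_accession_bc}-"
          f"{merneptah_accession_bc} BC")
```

With an accession of 1210 BC this gives 1276-1210 BC, matching the article’s figure; the 1209 BC case supplies the quoted plus-or-minus one year.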

Reference
Colin Humphreys and Graeme Waddington. ‘Solar eclipse of 1207 BC helps to date pharaohs.’ Astronomy & Geophysics (2017). DOI: 10.1093/astrogeo/atx178.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Skin Found To Play a Role In Controlling Blood Pressure

Skin found to play a role in controlling blood pressure

source: www.cam.ac.uk

Skin plays a surprising role in helping to regulate blood pressure and heart rate, according to scientists at the University of Cambridge and the Karolinska Institute, Sweden. While this discovery was made in mice, the researchers believe it is likely to hold true in humans as well.

Nine of ten cases of high blood pressure appear to occur spontaneously, with no known cause

Randall Johnson

In a study published in the open access journal eLife, the researchers show that skin – our largest organ, typically covering two square metres in humans – helps regulate blood pressure and heart rate in response to changes in the amount of oxygen available in the environment.

High blood pressure is associated with cardiovascular disease, such as heart attack and stroke. For the vast majority of cases of high blood pressure, there is no known cause. The condition is often associated with reduced flow of blood through small blood vessels in the skin and other parts of the body, a symptom which can get progressively worse if the hypertension is not treated.

Previous research has shown that when a tissue is starved of oxygen – as can happen in areas of high altitude, or in response to pollution, smoking or obesity, for example – blood flow to that tissue will increase. In such situations, this increase in blood flow is controlled in part by the ‘HIF’ family of proteins.

To investigate what role the skin plays in the flow of blood through small vessels, a team of researchers from Cambridge and Sweden exposed mice to low-oxygen conditions. These mice had been genetically modified so that they are unable to produce certain HIF proteins in the skin.

“Nine of ten cases of high blood pressure appear to occur spontaneously, with no known cause,” says Professor Randall Johnson from the Department of Physiology, Development and Neuroscience at the University of Cambridge. “Most research in this area tends to look at the role played by organs such as the brain, heart and kidneys, and so we know very little about what role other tissues and organs play.

“Our study was set up to understand the feedback loop between skin and the cardiovascular system. By working with mice, we were able to manipulate key genes involved in this loop.”

The researchers found that in mice lacking one of two proteins in the skin (HIF-1α or HIF-2α), the response to low levels of oxygen changed compared to normal mice and that this affected their heart rate, blood pressure, skin temperature and general levels of activity. Mice lacking specific proteins controlled by the HIFs also responded in a similar way.

In addition, the researchers showed that even the response of normal, healthy mice to oxygen starvation was more complex than previously thought. In the first ten minutes, blood pressure and heart rate rise; this is followed by a period of up to 36 hours in which both fall below normal levels. By around 48 hours after exposure to low oxygen levels, blood pressure and heart rate have returned to normal.

Loss of the HIF proteins, or of other proteins involved in the skin’s response to oxygen starvation, was found to dramatically change when this process starts and how long it takes.

“These findings suggest that our skin’s response to low levels of oxygen may have substantial effects on how the heart pumps blood around the body,” adds first author Dr Andrew Cowburn, also from Cambridge. “Low oxygen levels – whether temporary or sustained – are common and can be related to our natural environment or to factors such as smoking and obesity. We hope that our study will help us better understand how the body’s response to such conditions may increase our risk of – or even cause – hypertension.”

Professor Johnson adds: “Given that skin is the largest organ in our body, it perhaps shouldn’t be too surprising that it plays a role in regulating such a fundamental mechanism as blood pressure. But this suggests to us that we may need to take a look at other organs and tissues in the body and see how they, too, are implicated.”

The study was funded by Wellcome.

Reference
Cowburn, AS et al. Cardiovascular adaptation to hypoxia and the role of peripheral resistance. eLife; 19 Oct 2017; DOI: 10.7554/eLife.28755


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

‘Scars’ Left By Icebergs Record West Antarctic Ice Retreat

‘Scars’ left by icebergs record West Antarctic ice retreat

Thousands of marks on the Antarctic seafloor, caused by icebergs which broke free from glaciers more than ten thousand years ago, show how part of the Antarctic Ice Sheet retreated rapidly at the end of the last ice age as it balanced precariously on sloping ground and became unstable. Today, as the global climate continues to warm, rapid and sustained retreat may be close to happening again and could trigger runaway ice retreat into the interior of the continent, which in turn would cause sea levels to rise even faster than currently projected.

Today, the Pine Island and Thwaites glaciers are grounded in a very precarious position, and major retreat may already be happening.

Matthew Wise

Researchers from the University of Cambridge, the British Antarctic Survey and Stockholm University imaged the seafloor of Pine Island Bay, in West Antarctica. They found that, as seas warmed at the end of the last ice age, Pine Island Glacier retreated to a point where its grounding line – the point where it enters the ocean and starts to float – was perched precariously at the end of a slope.

Breakup of the floating ‘ice shelf’ in front of the glacier left tall ice ‘cliffs’ at its edge. The height of these cliffs made them unstable, triggering the release of thousands of icebergs into Pine Island Bay, and causing the glacier to retreat rapidly until its grounding line reached a restabilising point in shallower water.

Today, as warming waters caused by climate change flow underneath the floating ice shelves in Pine Island Bay, the Antarctic Ice Sheet is once again at risk of losing mass from rapidly retreating glaciers. Significantly, if ice retreat is triggered, there are no relatively shallow points in the ice sheet bed along the course of Pine Island and Thwaites glaciers to prevent possible runaway ice retreat into the interior of West Antarctica. The results are published in the journal Nature.

“Today, the Pine Island and Thwaites glaciers are grounded in a very precarious position, and major retreat may already be happening, caused primarily by warm waters melting from below the ice shelves that jut out from each glacier into the sea,” said Matthew Wise of Cambridge’s Scott Polar Research Institute, and the study’s first author. “If we remove these buttressing ice shelves, unstable ice thicknesses would cause the grounded West Antarctic Ice Sheet to retreat rapidly again in the future. Since there are no potential restabilising points further upstream to stop any retreat from extending deep into the West Antarctic hinterland, this could cause sea-levels to rise faster than previously projected.”

Pine Island Glacier and the neighbouring Thwaites Glacier are responsible for nearly a third of total ice loss from the West Antarctic Ice Sheet, and this contribution has increased greatly over the past 25 years. In addition to basal melt, the two glaciers also lose ice by breaking off, or calving, icebergs into Pine Island Bay.

Today, the icebergs that break off from Pine Island and Thwaites glaciers are mostly large table-like blocks, which cause characteristic ‘comb-like’ ploughmarks as these large multi-keeled icebergs grind along the sea floor. By contrast, during the last ice age, hundreds of comparatively smaller icebergs broke free of the Antarctic Ice Sheet and drifted into Pine Island Bay. These smaller icebergs had a v-shaped structure like the keel of a ship and left long and deep single scars in the sea floor.

High-resolution imaging techniques, used to investigate the shape and distribution of ploughmarks on the sea floor in Pine Island Bay, allowed the researchers to determine the relative size and drift direction of icebergs in the past. Their analysis showed that these smaller icebergs were released due to a process called marine ice-cliff instability (MICI). More than 12,000 years ago, Pine Island and Thwaites glaciers were grounded on top of a large wedge of sediment and were buttressed by a floating ice shelf, making them relatively stable even though they rested below sea level.

Eventually, the floating ice shelf in front of the glaciers ‘broke up’, which caused them to retreat onto land sloping downward from the grounding lines to the interior of the ice sheet. This exposed tall ice ‘cliffs’ at their margin with an unstable height, and resulted in the rapid retreat of the glaciers from marine ice cliff instability between 12,000 and 11,000 years ago. This occurred under climate conditions that were relatively similar to those of today.

“Ice-cliff collapse has been debated as a theoretical process that might cause West Antarctic Ice Sheet retreat to accelerate in the future,” said co-author Dr Robert Larter, from the British Antarctic Survey. “Our observations confirm that this process is real and that it occurred about 12,000 years ago, resulting in rapid retreat of the ice sheet into Pine Island Bay.”

Today, the two glaciers are getting ever closer to the point where they may become unstable, resulting once again in rapid ice retreat.

The research has been funded in part by the UK Natural Environment Research Council (NERC).

Reference: 
Matthew G. Wise et al. ‘Evidence of marine ice-cliff instability in Pine Island Bay from iceberg-keel plough marks.’ Nature (2017). DOI: 10.1038/nature24458


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Machine Learning Used To Predict Earthquakes In a Lab Setting

Machine learning used to predict earthquakes in a lab setting

source: www.cam.ac.uk

A group of researchers from the UK and the US have used machine learning techniques to successfully predict earthquakes. Although their work was performed in a laboratory setting, the experiment closely mimics real-life conditions, and the results could be used to predict the timing of a real earthquake.

This is the first time that machine learning has been used to analyse acoustic data to predict when an earthquake will occur.

Colin Humphreys

The team, from the University of Cambridge, Los Alamos National Laboratory and Boston University, identified a hidden signal leading up to earthquakes and used this ‘fingerprint’ to train a machine learning algorithm to predict future earthquakes. Their results, which could also be applied to avalanches, landslides and more, are reported in the journal Geophysical Research Letters.

For geoscientists, predicting the timing and magnitude of an earthquake is a fundamental goal. Generally speaking, pinpointing where an earthquake will occur is fairly straightforward: if an earthquake has struck a particular place before, the chances are it will strike there again. The questions that have challenged scientists for decades are how to pinpoint when an earthquake will occur, and how severe it will be. Over the past 15 years, advances in instrument precision have been made, but a reliable earthquake prediction technique has not yet been developed.

As part of a project searching for ways to use machine learning techniques to make gallium nitride (GaN) LEDs more efficient, the study’s first author, Bertrand Rouet-Leduc, who was then a PhD student at Cambridge, moved to Los Alamos National Laboratory in New Mexico to start a collaboration on machine learning in materials science between Cambridge University and Los Alamos. From there the team started helping the Los Alamos Geophysics group on machine learning questions.

The team at Los Alamos, led by Paul Johnson, studies the interactions among earthquakes, precursor quakes (often very small earth movements) and faults, with the hope of developing a method to predict earthquakes. Using a lab-based system that mimics real earthquakes, the researchers used machine learning techniques to analyse the acoustic signals coming from the ‘fault’ as it moved and search for patterns.

The laboratory apparatus uses steel blocks to closely mimic the physical forces at work in a real earthquake, and also records the seismic signals and sounds that are emitted. Machine learning is then used to find the relationship between the acoustic signal coming from the fault and how close it is to failing.

The machine learning algorithm was able to identify a particular pattern in the sound, previously thought to be nothing more than noise, which occurs long before an earthquake. The characteristics of this sound pattern can be used to give a precise estimate (within a few percent) of the stress on the fault (that is, how much force it is under) and to estimate the time remaining before failure, which gets more and more precise as failure approaches. The team now thinks that this sound pattern is a direct measure of the elastic energy that is in the system at a given time.
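
The approach described above can be sketched in miniature: a regression model is trained on statistical summaries of short acoustic windows and learns to predict the time remaining before failure. This is an illustrative toy, not the authors' code; the synthetic signal (noise whose amplitude grows as failure approaches) and the specific features are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for the lab's continuous acoustic record: windows of
# noise whose amplitude grows as the simulated fault approaches failure.
n_windows = 400
time_to_failure = np.linspace(10.0, 0.0, n_windows)  # seconds remaining
windows = [rng.normal(0.0, 1.0 + 3.0 / (t + 0.1), 1024) for t in time_to_failure]

def features(w):
    # Simple statistical summaries of one short acoustic window
    return [w.var(), np.abs(w).mean(), np.percentile(np.abs(w), 95)]

X = np.array([features(w) for w in windows])
y = time_to_failure

# Train on alternate windows, predict the held-out ones
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[::2], y[::2])
error = np.abs(model.predict(X[1::2]) - y[1::2]).mean()
print(round(float(error), 2))
```

The key point is that the model sees only instantaneous statistics of the "noise", yet can read off how close the system is to failure.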

“This is the first time that machine learning has been used to analyse acoustic data to predict when an earthquake will occur, long before it does, so that plenty of warning time can be given – it’s incredible what machine learning can do,” said co-author Professor Sir Colin Humphreys of Cambridge’s Department of Materials Science & Metallurgy, whose main area of research is energy-efficient and cost-effective LEDs. Humphreys was Rouet-Leduc’s supervisor when he was a PhD student at Cambridge.

“Machine learning enables the analysis of datasets too large to handle manually and looks at data in an unbiased way that enables discoveries to be made,” said Rouet-Leduc.

Although the researchers caution that there are multiple differences between a lab-based experiment and a real earthquake, they hope to progressively scale up their approach by applying it to real systems which most resemble their lab system. One such site is in California along the San Andreas Fault, where characteristic small repeating earthquakes are similar to those in the lab-based earthquake simulator. Progress is also being made on the Cascadia fault in the Pacific Northwest of the United States and British Columbia, Canada, where repeating slow earthquakes that occur over weeks or months are also very similar to laboratory earthquakes.

“We’re at a point where huge advances in instrumentation, machine learning, faster computers and our ability to handle massive data sets could bring about huge advances in earthquake science,” said Rouet-Leduc.

Reference: 
Bertrand Rouet-Leduc et al. ‘Machine Learning Predicts Laboratory Earthquakes.’ Geophysical Research Letters (2017). DOI: 10.1002/2017GL074677

 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

‘Handful of Changes’ Make Cancer

‘Handful of changes’ make cancer

source: http://www.bbc.co.uk/news/health-41644020


British scientists have worked out how many changes it takes to transform a healthy cell into a cancerous one.

The team, at the Wellcome Trust Sanger Institute, showed the answer was a tiny handful, between one and 10 mutations depending on the type of tumour.

It has been one of the most hotly debated issues in cancer science for decades.

The findings, published in the journal Cell, could improve treatment for patients.

If you played spot the difference between a cancer and healthy tissue, you could find tens of thousands of differences – or mutations – in the DNA.

Some are driving the cancer’s growth, while others are just along for the ride. So which ones are important?

Root cause

The researchers analysed the DNA from 7,664 tumours to find “driver mutations” that allow a cell to be more selfish, aggressive and cancerous.

They showed it could take:

  • just one mutation to drive thyroid and testicular cancers
  • four mutations to make a breast or liver cancer
  • 10 mutations to create a colorectal cancer.

Dr Peter Campbell, one of the researchers, told the BBC News website: “We’ve known about the genetic basis of cancer for many decades now, but how many mutations are responsible has been incredibly hotly debated.

“What we’ve been able to do in this study is really provide the first unbiased numbers.

“And it seems that of the thousands of mutations in a cancer genome, only a small handful are responsible for dictating the way the cell behaves, what makes it cancerous.”

Half the mutations identified were in sets of genetic instructions – or genes – that had never been implicated in cancer before.

Therapy

The long-term goal is to advance precision cancer treatment.

If doctors knew which few mutations, out of thousands, were driving a patient’s cancer, they could select drugs that specifically target those mutations.

Drugs such as Herceptin and BRAF inhibitors are already used to attack specific mutations in tumours.

The researchers were able to pick out the mutations that were driving the growth of cancer by turning to Charles Darwin and evolutionary theory.

In essence, driver mutations should appear more often in tumours than “neutral” mutations that do not make the cell cancerous.

This is because the forces of natural selection give an evolutionary advantage to mutations that help a cell grow and divide more readily.
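The evolutionary logic above can be made concrete with a toy dN/dS-style calculation. The counts and the 3:1 neutral ratio below are hypothetical, chosen only to illustrate the idea; the actual study uses far more sophisticated statistics.

```python
# Hypothetical counts for one gene, pooled across many tumours.
# Synonymous mutations leave the protein unchanged, so they act
# as a neutral baseline for how often mutations strike the gene.
synonymous_observed = 20
protein_altering_observed = 90

# The gene's sequence fixes how many protein-altering mutations chance
# alone would produce per synonymous one (roughly 3:1 is typical).
neutral_ratio = 3.0
expected_altering = synonymous_observed * neutral_ratio  # 60 under neutrality

# A score above 1 means protein-altering mutations are over-represented,
# flagging the gene as a candidate driver under positive selection.
selection_score = protein_altering_observed / expected_altering
print(selection_score)  # 1.5
```

Here altering mutations occur 50% more often than chance predicts, which is the signature of selection the researchers look for.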

Dr Nicholas McGranahan, from Cancer Research UK and the UCL Cancer Institute, said the approach was “elegant”.

He said: “Cancer is a disease that evolves and changes over time, and it makes sense to use ideas like this from species evolution to work out the genetic faults that cause cancer to grow.

“But as this study focuses on one part of cancer evolution, it can only give us insight into part of the puzzle.

“Other components such as how DNA is packaged into chromosomes are also key in how a tumour progresses and will need to be looked at to give us a clearer picture of how cancer evolves.”


Step Inside The Mind Of The Young Stephen Hawking As His PhD Thesis Goes Online For First Time

Step inside the mind of the young Stephen Hawking as his PhD thesis goes online for first time

source: www.cam.ac.uk

Stephen Hawking’s PhD thesis, ‘Properties of expanding universes’, has been made freely available to anyone, anywhere in the world, via the University of Cambridge’s Open Access repository, Apollo.

Anyone, anywhere in the world should have free, unhindered access to not just my research, but to the research of every great and enquiring mind across the spectrum of human understanding.

Stephen Hawking

The 1966 doctoral thesis by the world’s most recognisable scientist is the most requested item in Apollo with the catalogue record alone attracting hundreds of views per month. In just the past few months, the University has received hundreds of requests from readers wishing to download Professor Hawking’s thesis in full.

To celebrate Open Access Week 2017, Cambridge University Library’s Office of Scholarly Communication has today announced Professor Hawking’s permission to make his thesis freely available and Open Access in Apollo. By making his PhD thesis Open Access, anyone can now freely download and read this historic and compelling research by the then little-known 24-year-old Cambridge postgraduate.

Professor Hawking said: “By making my PhD thesis Open Access, I hope to inspire people around the world to look up at the stars and not down at their feet; to wonder about our place in the universe and to try and make sense of the cosmos. Anyone, anywhere in the world should have free, unhindered access to not just my research, but to the research of every great and enquiring mind across the spectrum of human understanding.

“Each generation stands on the shoulders of those who have gone before them, just as I did as a young PhD student in Cambridge, inspired by the work of Isaac Newton, James Clerk Maxwell and Albert Einstein. It’s wonderful to hear how many people have already shown an interest in downloading my thesis – hopefully they won’t be disappointed now that they finally have access to it!”

Dr Arthur Smith, Deputy Head of Scholarly Communication, said: “Open Access enables research. By eliminating the barriers between people and knowledge we can realise new breakthroughs in all areas of science, medicine and technology. It is especially important for disseminating the knowledge acquired during doctoral research studies. PhD theses contain a vast trove of untapped and unique information just waiting to be used, but which is often locked away from view and scrutiny.

“From October 2017 onwards, all PhD students graduating from the University of Cambridge will be required to deposit an electronic copy of their doctoral work for future preservation. And like Professor Hawking, we hope that many students will also take the opportunity to freely distribute their work online by making their thesis Open Access. We would also invite University alumni to consider making their theses Open Access, too.”

While the University is committed to archiving all theses, it is often a struggle to gain permission to open up historic ones. With the online publication of Professor Hawking’s thesis, Cambridge now hopes to encourage its former academics – who include 98 Nobel affiliates – to make their work freely available to all.

To make more of the University’s theses Open Access in Apollo, the Office of Scholarly Communication and Cambridge University Library will digitise the theses of any alumni who wish to make their dissertation Open Access. Interested alumni should contact thesis@repository.cam.ac.uk

At a recent event to celebrate the 1,000th research dataset in Apollo, Dr Jessica Gardner, Director of Library Services, said: “Cambridge University Library has a 600-year-old history we are very proud of. It is home to the physical papers of such greats as Isaac Newton and Charles Darwin. Their research data was on paper and we have preserved that with great care and share it openly online through our digital library.

“But our responsibility now is to today’s researchers and scientists, working across all disciplines of our great university. Preservation stewardship of their research data, from the digital humanities to the biomedical sciences, is a core part of what we now do.”

Apollo is home to over 200,000 digital objects including 15,000 research articles, 10,000 images, 2,400 theses and 1,000 datasets. The items made available in Apollo have been accessed from nearly every country in the world and in 2017 have collectively received over one million downloads.

Professor Hawking’s 1966 doctoral thesis ‘Properties of expanding universes’ is available in Apollo at https://doi.org/10.17863/CAM.11283 or in high resolution on Cambridge Digital Library at https://cudl.lib.cam.ac.uk/view/MS-PHD-05437/1

For further information about Open Access Week, visit: www.openaccessweek.org


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

First Detection of Gravitational Waves and Light Produced By Colliding Neutron Stars

First detection of gravitational waves and light produced by colliding neutron stars

source: www.cam.ac.uk

In a galaxy far away, two dead stars begin a final spiral into a massive collision. The resulting explosion unleashes a huge burst of energy, sending ripples across the very fabric of space. In the nuclear cauldron of the collision, atoms are ripped apart to form entirely new elements and scattered outward across the Universe.

What I am most excited about, personally, is a completely new way of measuring distances across the universe.

Ulrich Sperhake

It could be a scenario from science fiction, but it really happened 130 million years ago — in the NGC 4993 galaxy in the Hydra constellation, at a time here on Earth when dinosaurs still ruled, and flowering plants were only just evolving.

Today, dozens of UK scientists – including researchers from the University of Cambridge – and their international collaborators representing 70 observatories worldwide announced the detection of this event and the significant scientific firsts it has revealed about our Universe.

Those ripples in space finally reached Earth at 1.41pm UK time, on Thursday 17 August 2017, and were recorded by the twin detectors of the US-based Laser Interferometer Gravitational-wave Observatory (LIGO) and its European counterpart Virgo.

A few seconds later, the gamma-ray burst from the collision was recorded by two specialist space telescopes, and over the following weeks, other space- and ground-based telescopes recorded the afterglow of the massive explosion. UK-developed engineering and technology is at the heart of many of the instruments used for the detection and analysis.

Studying the data confirmed scientists’ initial conclusion that the event was the collision of a pair of neutron stars – the remnants of once gigantic stars, but collapsed down into approximately the size of a city. “These objects are made of matter in its most extreme, dense state, standing on the verge of total gravitational collapse,” said Michalis Agathos, from Cambridge’s Department of Applied Mathematics and Theoretical Physics. “By studying subtle effects of matter on the gravitational wave signal, such as the effects of tides that deform the neutron stars, we can infer the properties of matter in these extreme conditions.”

There are a number of “firsts” associated with this event, including the first detection of both gravitational waves and electromagnetic radiation (EM) – while existing astronomical observatories “see” EM across different frequencies (e.g. optical, infrared, gamma-ray), gravitational waves are not EM but instead ripples in the fabric of space, requiring completely different detection techniques. An analogy is that LIGO and Virgo “hear” the Universe.

The announcement also confirmed the first direct evidence that short gamma ray bursts are linked to colliding neutron stars. The shape of the gravitational waveform also provided a direct measure of the distance to the source, and it was the first confirmation and observation of the previously theoretical cataclysmic aftermaths of this kind of merger – a kilonova.

Additional research papers on the aftermath of the event have also produced a new understanding of how heavy elements such as gold and platinum are created by supernovae and stellar collisions and then spread through the Universe. Further original science results are still being analysed.

By combining gravitational-wave and electromagnetic signals, researchers also used a novel technique for the first time to measure the expansion rate of the Universe.
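
The expansion-rate measurement works because the two signals supply the two sides of Hubble's law independently. A minimal sketch, using approximate values reported for GW170817 and its host galaxy NGC 4993 (the exact figures and error treatment in the published analysis are more involved):

```python
# Standard-siren sketch: the gravitational waveform encodes the distance
# to the source directly, while the host galaxy's spectrum gives its
# recession velocity. Hubble's law v = H0 * d then yields the expansion
# rate. Numbers are approximate values for GW170817 / NGC 4993.
distance_mpc = 43.8            # Mpc, from the gravitational-wave amplitude
recession_velocity = 3017.0    # km/s, from the galaxy's redshift

H0 = recession_velocity / distance_mpc  # km/s per Mpc
print(round(H0, 1))  # prints 68.9
```

This single event already gives a value consistent with existing measurements of roughly 70 km/s per Mpc; the precision improves as more such "sirens" are detected.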

While binary black holes produce “chirps” lasting a fraction of a second in the LIGO detector’s sensitive band, the August 17 chirp lasted approximately 100 seconds and was seen through the entire frequency range of LIGO — about the same range as common musical instruments. Scientists could identify the chirp source as objects that were much less massive than the black holes seen to date. In fact, “these long chirping signals from inspiralling neutron stars are really what many scientists expected LIGO and Virgo to see first,” said Christopher Moore, researcher at CENTRA, IST, Lisbon and member of the DAMTP/Cambridge LIGO group. “The shorter signals produced by the heavier black holes were a spectacular surprise that led to the awarding of the 2017 Nobel prize in physics.”

UK astronomers using the VISTA telescope in Chile were among the first to locate the new source. “We were really excited when we first got notification that a neutron star merger had been detected by LIGO,” said Professor Nial Tanvir from the University of Leicester, who leads a paper in Astrophysical Journal Letters today. “We immediately triggered observations on several telescopes in Chile to search for the explosion that we expected it to produce. In the end, we stayed up all night analysing the images as they came in, and it was remarkable how well the observations matched the theoretical predictions that had been made.”

“It is incredible to think that all the gold in the Earth was probably produced by merging neutron stars, similar to this event that exploded as kilonovae billions of years ago.”

“Not only is this the first time we have seen the light from the aftermath of an event that caused a gravitational wave, but we had never before caught two merging neutron stars in the act, so it will help us to figure out where some of the more exotic chemical elements on Earth come from,” said Dr Carlos Gonzalez-Fernandez of Cambridge’s Institute of Astronomy, who processed the follow-up images taken with the VISTA telescope.

“This is a spectacular discovery, and one of the first of many that we expect to come from combining together information from gravitational wave and electromagnetic observations,” said Nathan Johnson-McDaniel, researcher at DAMTP, who contributed to predictions of the amount of ejected matter using the gravitational wave measurements of the properties of the binary.

Though the LIGO detectors first picked up the gravitational wave in the United States, Virgo, in Italy, played a key role in the story. Due to its orientation with respect to the source at the time of detection, Virgo recovered a small signal; combined with the signal sizes and timing in the LIGO detectors, this allowed scientists to precisely triangulate the position in the sky. After performing a thorough vetting to make sure the signals were not an artefact of instrumentation, scientists concluded that a gravitational wave came from a relatively small patch of the southern sky.
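The timing-based triangulation described above can be illustrated with a small calculation. The numbers here are illustrative, not taken from the event: a wavefront travelling at the speed of light crosses two detectors with a measurable delay, and that delay constrains the direction to the source.

```python
import math

# Time-of-arrival sketch with illustrative numbers. A gravitational
# wavefront crossing two detectors separated by baseline d arrives
# dt seconds apart; the delay fixes the angle between the baseline
# and the source, tracing a ring on the sky. A third detector (Virgo)
# adds a differently oriented ring, and the intersection of the rings
# shrinks the sky patch dramatically.
c = 299_792_458.0   # m/s, speed of light
d = 3.0e6           # m, roughly the Hanford-Livingston baseline
dt = 0.0026         # s, an example delay (|dt| can be at most d/c)

cos_theta = c * dt / d
theta_deg = math.degrees(math.acos(cos_theta))
print(round(theta_deg, 1))
```

With only two detectors the source could lie anywhere on that ring, which is why Virgo's small signal was so valuable despite its size.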

“This event has the most precise sky localisation of all detected gravitational waves so far,” says Jo van den Brand of Nikhef (the Dutch National Institute for Subatomic Physics) and VU University Amsterdam, who is the spokesperson for the Virgo collaboration. “This record precision enabled astronomers to perform follow-up observations that led to a plethora of breath-taking results.”

NASA’s Fermi gamma-ray space telescope was able to provide a localisation that was later confirmed and greatly refined with the coordinates provided by the combined LIGO-Virgo detection. With these coordinates, a handful of observatories around the world were able, hours later, to start searching the region of the sky where the signal was thought to originate. A new point of light, resembling a new star, was first found by optical telescopes. Ultimately, about 70 observatories on the ground and in space observed the event at their representative wavelengths. “What I am most excited about, personally, is a completely new way of measuring distances across the universe through combining the gravitational wave and electromagnetic signals. Obviously, this new cartography of the cosmos has just started with this first event, but I just wonder whether this is where we will see major surprises in the future,” said Ulrich Sperhake, Head of Cambridge’s gravitational wave group in LIGO.

In the weeks and months ahead, telescopes around the world will continue to observe the afterglow of the neutron star merger and gather further evidence about its various stages, its interaction with its surroundings, and the processes that produce the heaviest elements in the universe.

Reference: 
Physical Review Letters
“GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral.”

Science
“A Radio Counterpart to a Neutron Star Merger.”
“Swift and NuSTAR observations of GW170817: detection of a blue kilonova.”
“Illuminating Gravitational Waves: A Concordant Picture of Photons from a Neutron Star Merger.”

Astrophysical Journal Letters
“Gravitational Waves and Gamma-rays from a Binary Neutron Star Merger: GW170817 and GRB 170817A.”
“Multi-Messenger Observations of a Binary Neutron Star Merger.”

Nature
“A gravitational-wave standard siren measurement of the Hubble constant.”

Adapted from STFC and LIGO press releases. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Synthetic Organs, Nanobots and DNA ‘Scissors’: The Future of Medicine

Synthetic organs, nanobots and DNA ‘scissors’: the future of medicine

source: www.cam.ac.uk

Nanobots that patrol our bodies, killer immune cells hunting and destroying cancer cells, biological scissors that cut out defective genes: these are just some of the technologies that Cambridge researchers are developing, which are set to revolutionise medicine in the future.

In a new film to coincide with the recent launch of the Cambridge Academy of Therapeutic Sciences, researchers discuss some of the most exciting developments in medical research and set out their vision for the next 50 years.

Professor Jeremy Baumberg from the NanoPhotonics Centre discusses a future in which diagnoses do not have to rely on asking a patient how they are feeling, but rather are carried out by nanomachines that patrol our bodies, looking for and repairing problems. Professor Michelle Oyen from the Department of Engineering talks about using artificial scaffolds to create ‘off-the-shelf’ replacement organs that could help solve the shortage of donated organs. Dr Sanjay Sinha from the Wellcome Trust-MRC Stem Cell Institute sees us using stem cell ‘patches’ to repair damaged hearts and return their function back to normal.

Dr Alasdair Russell from the Cancer Research UK Cambridge Institute describes how recent breakthroughs in the use of CRISPR-Cas9 – a DNA editing tool – will enable us to snip out and replace defective regions of the genome, curing diseases in individual patients; and lawyer Dr Kathy Liddell, from the Cambridge Centre for Law, Medicine and Life Sciences, highlights how research around law and ethics will help to make gene editing safe.

Professor Gillian Griffiths, Director of the Cambridge Institute for Medical Research, envisages us weaponising ‘killer T cells’ – important immune system warriors – to hunt down and destroy even the most evasive of cancer cells.

All of these developments will help transform the field of medicine, says Professor Chris Lowe, Director of the Cambridge Academy of Therapeutic Sciences, who sees this as an exciting time for medicine. New developments have the potential to transform healthcare “right the way from how you handle the patient to actually delivering the final therapeutic product – and that’s the exciting thing”.

Read more about research on future therapeutics in Research Horizons magazine. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Winton Symposium Tackles The Challenge of Energy Storage and Distribution

Winton Symposium tackles the challenge of energy storage and distribution

source: www.cam.ac.uk

The sixth annual Winton Symposium will be held on 9 November at the University’s Cavendish Laboratory on the theme of Energy Storage and Distribution.

There is an urgent need to store and efficiently distribute energy to ensure the lights stay on.

Nalin Patel

Storage and distribution of energy are seen as the missing link between intermittent renewable energy and reliability of supply, but current technologies have considerable room for improvement in performance. Speakers at the annual symposium, which is free and open to the public, will discuss some of the new technologies in this important area, and how understanding their basic science can accelerate their development.

“As intermittent forms of renewable energies continue to contribute to a larger share of our energy mix, there is an urgent need to store and efficiently distribute energy to ensure the lights stay on,” said Dr Nalin Patel, Winton Programme Manager at the University of Cambridge.

The one-day event is an opportunity for students, researchers and industrialists from a variety of backgrounds to hear a series of talks given by world-leading experts and to join in the debate. Speakers at the event will include Professor Howard Wilson, Programme Director of the UK Atomic Energy Authority; Professor Katsuhiko Hirose, Professional Partner at Toyota Motor Corporation; and Professor David Larbalestier, Director of the Applied Superconductivity Center, National High Magnetic Field Laboratory at Florida State University. The full programme of speakers is available online.

The symposium is organised by Professor Sir Richard Friend, Cavendish Professor of Physics and Director of the Winton Programme for the Physics of Sustainability, and Dr Nalin Patel, the Winton Programme Manager.

There is no registration fee for the symposium, and a complimentary lunch and drinks reception will be provided; however, participants are required to register online. The event is open for all to attend.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Conservationists’ Eco-Footprints Suggest Education Alone Won’t Change Behaviour

Conservationists’ eco-footprints suggest education alone won’t change behaviour

source: www.cam.ac.uk

A new study shows that even those presumably best informed on the environment find it hard to consistently “walk the walk”, prompting scientists to question whether relying solely on information campaigns will ever be enough.

While it may be hard to accept, we have to start acknowledging that increased education alone is perhaps not the panacea we would hope

Andrew Balmford

Conservationists work to save the planet, and few are as knowledgeable when it comes to the environmental pressures of the Anthropocene.

However, the first wide-ranging study to compare the environmental footprint of conservationists to those of others – medics and economists, in this case – has found that, while conservationists behave in a marginally ‘greener’ manner, the differences are surprisingly modest.

Researchers say their findings add to increasing evidence that education and knowledge have little impact on individual behaviour when it comes to major issues such as the environment and personal health.

Conservation scientists from the universities of Cambridge, UK, and Vermont, US, gathered data on a range of lifestyle choices – from bottled water use to air travel, meat consumption and family size – for 734 participants across the three groupings.

They found that conservationists recycled more and ate less meat than either economists or medics, were similar to the other groups in how they travelled to work, but owned more cats and dogs.

The combined footprint score of the conservationists was roughly 16% lower than that of the economists, and 7% lower than that of the medics.

Nevertheless, the average conservationist in the study’s sample took nine flights a year (half for work, half personal), ate meat or fish five times a week, and purchased very few offsets for their personal carbon emissions.

In fact, researchers found little correlation between the extent of environmental knowledge and environmentally friendly behaviour.

Moreover, greener action in one aspect of a person’s life did not predict it in any others – regardless of occupation. So a positive and relatively simple habit such as recycling did not appear to act as a “gateway” to more committed behaviour change.

The team suggest that overall improvements might be most effectively achieved through tailored interventions: targeting higher-impact behaviours such as meat consumption and flying through government regulation and by incentivising alternatives.

“While it may be hard to accept, we have to start acknowledging that increased education alone is perhaps not the panacea we would hope,” said lead author Andrew Balmford, Professor of Conservation Science at the University of Cambridge.

“Structural changes are key. For example, providing more affordable public transport, or removing subsidies for beef and lamb production. Just look at the effect of improved collection schemes on the uptake of recycling.

“The idea of ‘nudging’ – encouraging particular choices through changes in how cafes are laid out or travel tickets are sold, for instance – might have untapped potential to help us lower our footprint,” Balmford said.

“As conservationists we must do a great deal more to lead by example. Obvious starting points include changing the ways we interact, so that attending frequent international meetings is no longer regarded as essential to making scientific progress. For many of us flying is probably the largest contributor to our personal emissions.”

The study’s four authors offer their own mea culpa, pointing out that, between them, they have seven children, took 31 flights in 2016, and ate an average of two meat meals in the week before submitting their study – now published – to the journal Biological Conservation.

“I don’t think conservationists are hypocrites, I think that we are human – meaning that some decisions are rational, and others, we rationalise,” said study co-author Brendan Fisher from the University of Vermont’s Gund Institute for Environment and Rubenstein School of Environment and Natural Resources.

“Our results show that conservationists pick and choose from a buffet of pro-environmental behaviours the same as everyone else. We might eat less meat and compost more, but we fly more – and many of us still commute significant distances in gas cars.”

For the study, researchers distributed surveys on environmental behaviour via conservation, economics and biomedical organisations, through targeted newsletters, mailing lists and social media groups.

Of the self-selecting respondents, there were 300 conservationists, 207 economists and 227 medics from across the UK and US.

The participants were also asked a series of factual questions on environmental issues – from atmospheric change to species extinction – and ways to most effectively lower carbon footprints.

“Interestingly, conservationists scored no better than economists on environmental knowledge and awareness of pro-environmental actions,” said Balmford.

Overall footprint scores were higher for males, US nationals, economists, and people with higher degrees and larger incomes, but were unrelated to environmental knowledge.

Fisher says the study supports the idea that ‘values’ are a key driver of behaviour. Across the professions, attaching a high value to the environment was consistently associated with a lower footprint: fewer personal flights and less food waste, for example.

“It doesn’t matter if you are a medic, economist, or conservationist, our study shows that one of the most significant drivers of your behaviour is how much you value the environment,” Fisher said.

“Economists who care about the environment behave as well as conservationists.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Study Identifies Factors Linked To Dying Comfortably For The Very Old

Study identifies factors linked to dying comfortably for the very old

source: www.cam.ac.uk

Very old people are more likely to die comfortably if they die in a care home or at home, compared with dying in a hospital, suggests a new study from the University of Cambridge. Yet while the overwhelming majority of very old people reported symptoms at the end of life such as distress, pain and depression, the study found that these were not always treated effectively.

How we care for the oldest members of society towards the end of their lives is one of the big issues for societies across the world

Jane Fleming

In a study published in the journal BMC Geriatrics, the researchers argue that their findings highlight the need to improve training in end-of-life care for all staff, in all settings, and in particular to address the current shortage of palliative care doctors in the NHS.

As life expectancy increases, so more and more people are dying at increasingly older ages, often affected by multiple conditions such as dementia, heart disease and cancer, which make their end-of-life care complicated. In the UK, in just a quarter of a century the proportion of deaths occurring at the age of 85 or older has risen steeply from around one in five in 1990 to almost half of all current deaths.

Older people living with dementia commonly report multiple symptoms as they approach the end-of-life, and if these symptoms are not adequately controlled, they may increase distress and worsen an individual’s quality of life.

While some people close to the end-of-life may prefer to die at home, only a minority of the ‘oldest old’ (those aged 85 years and above) actually die in their own homes. In the UK, fewer older people die in hospices or receive specialist palliative care at home than younger age groups, and the trend for older deaths is gradually moving away from death in hospital towards long-term care facilities.

Little is known about symptom control for ‘older old’ people or whether care in different settings enables them to die comfortably. To address this gap in our knowledge, researchers from the Cambridge Institute of Public Health examined the associations between factors potentially related to comfort during very old people’s final illness: physical and cognitive disability, place of care and transitions in their final illness, and place of death. This involved a retrospective analysis of data for 180 study participants aged between 79 and 107 years.

The researchers found that just one in 10 participants died without symptoms of distress, pain, depression, and delirium or confusion, and most people had in fact experienced combinations of two or more of these symptoms. Of the treatable symptoms reported, pain was addressed in the majority, but only effectively for half of these; only a fraction of those with depression received treatment for their symptom.

Compared with people who died in hospital, the odds of being reported as having died comfortably were four times as high for people whose end-of-life care had been in a care home or who died at their usual address, whether that was their own home or a care home.
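The "four times as high" figure is an odds ratio. As a purely illustrative sketch of the arithmetic (the counts below are hypothetical, chosen for round numbers, and are not the study's data):

```python
# Illustrative odds-ratio calculation. All counts here are hypothetical
# examples, not figures from the BMC Geriatrics study.
def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds of the outcome in group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# e.g. 40 of 60 care-home deaths reported as comfortable (odds 40/20 = 2.0)
# versus 20 of 60 hospital deaths (odds 20/40 = 0.5) gives an odds ratio of 4.
print(odds_ratio(40, 60, 20, 60))  # -> 4.0
```

Note that an odds ratio of 4 does not mean four times the *probability* of a comfortable death; it compares odds, which diverge from probabilities when the outcome is common.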

People living in the community who relied on formal services for support more than once a week, and people who were cared for at home during their final illness but then died in hospital, were less likely to have reportedly died comfortably.

“How we care for the oldest members of society towards the end of their lives is one of the big issues for societies across the world,” says Dr Jane Fleming from the Department of Public Health and Primary Care, the study’s first author. “The UK is not the only country where an urgent review of the funding for older people’s long-term care is needed, along with commitments to staff training and development in this often undervalued sector.

“It’s heartening that the majority of very old people in our study, including those with dementia, appear to have been comfortable at the end-of-life, but we need to do more to ensure that everyone is able to die comfortably, wherever they are.”

The authors of the study argue that it highlights the need to improve training in end-of-life care for all staff, at all levels and in all settings.

“Improving access to supportive and palliative care in the community should be a priority, otherwise staying at home may not always be the most comfortable setting for end-of-life care, and inadequacies of care may lead to admission before death in hospital,” adds co-author Dr Morag Farquhar, who is now based at the University of East Anglia.

Contrary to public perceptions, the authors say their study demonstrates that good care homes can provide end-of-life care comparable to hospice care for the very old, enabling continuity of care from familiar staff who know their residents. However, they say, this needs recognising and supporting through valuing staff, providing access to training and improving links with primary and community healthcare providers.

“In the UK, we particularly need to address the current shortage of palliative care doctors in the NHS, where training numbers are not going up to match demand, but the shortage is even greater in developing countries,” says co-author Rowan Calloway.

“In the future, community care will be increasingly reliant on non-specialists, so it will be crucial that all members of the multi-disciplinary teams needed to support very frail older people near the end of their lives have good training in palliative and supportive care skills.”

The study was supported by the Abbeyfield Society, Bupa Foundation, Medical Research Council, and the National Institute for Health Research Collaboration for Leadership in Applied Health and Care Cambridgeshire & Peterborough.

Reference
Fleming, J et al. Dying comfortably in very old age with or without dementia in different care settings – a representative “older old” population study. BMC Geriatrics; 26 Sept 2017; DOI: 10.1186/s12877-017-0605-2

Key findings and policy implications

The Cambridge City over-75s Cohort Study


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

No Evidence To Support Claims That Telephone Consultations Reduce GP Workload or Hospital Referrals

No evidence to support claims that telephone consultations reduce GP workload or hospital referrals

source: www.cam.ac.uk

Telephone consultations to determine whether a patient needs to see their GP face-to-face can deal with many problems, but a study led by researchers at the Cambridge Centre for Health Services Research (University of Cambridge and RAND Europe) found no evidence to support claims – made by companies offering to manage these services and by NHS England – that the approach saves money or reduces the number of hospital referrals.

The NHS must be careful to ensure that it bases its information and recommendation on robust evidence.

Martin Roland

As UK general practices struggle with rising demand from patients, more work being transferred from secondary to primary care, and increasing difficulty in recruiting general practitioners, one proposed potential solution is a ‘telephone first’ approach, in which every patient asking to see a GP is initially phoned back by their doctor on the same day. At the end of this phone call the GP and the patient decide whether the problem needs a face-to-face consultation, or whether it has been satisfactorily resolved on the phone.

Two commercial companies provide similar types of management support for practices adopting the new approach, with claims that the approach dramatically reduces the need for face-to-face consultations, reduces workload stress for GPs and practice staff, increases continuity of care, reduces A&E attendance and emergency hospital admissions, and increases patient satisfaction.

Some of these claims are repeated in NHS England literature, including the assertion based on claims from one of the companies that practices using the approach have a 20% lower A&E usage and that “the model has demonstrated a cost saving of approximately £100k per practice through prevention of avoidable attendance and admissions to hospital”. Several Clinical Commissioning Groups have subsequently paid for the management support required for the approach to be adopted by practices in their area.

The National Institute for Health Research (NIHR) acknowledged the need for robust and independent evaluation of current services and therefore commissioned the team led by Martin Roland, Emeritus Professor of Health Services Research at the University of Cambridge. The results of the evaluation, which looked at data sources including GP and hospital records, patient surveys and economic analyses, are published today in The BMJ.

The study found that adoption of the ‘telephone first’ approach had a major effect on patterns of consultation: the number of telephone consultations increased 12-fold, and the number of face-to-face consultations fell by 38%.

However, the study found that the ‘telephone first’ approach was on average associated with increased overall GP workload; there was an overall increase of 8% in the mean time spent consulting by GPs, but this figure masks a wide variation between practices, with some practices experiencing a substantial reduction in workload and others a large increase.
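A back-of-envelope calculation shows why a large shift from face-to-face to phone consultations can still raise total consulting time. The volumes and durations below are our own assumptions, made up purely to illustrate the arithmetic; only the 38% fall and 12-fold rise come from the study:

```python
# Back-of-envelope sketch: many short phone calls can still increase total
# GP consulting time. Baseline volumes and mean durations are assumptions,
# not figures from the study.
f2f_before, phone_before = 100, 10   # consultations per week (assumed)
f2f_minutes, phone_minutes = 10, 5   # mean durations in minutes (assumed)

before = f2f_before * f2f_minutes + phone_before * phone_minutes
# Apply the study's observed shifts: face-to-face down 38%, phone up 12-fold.
after = (f2f_before * 0.62) * f2f_minutes + (phone_before * 12) * phone_minutes
change = (after - before) / before   # positive means workload rose
```

With these particular assumptions total consulting time rises by about 16%; with shorter calls or lower baseline demand it can just as easily fall, consistent with the wide practice-to-practice variation the study reports.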

Dr Jennifer Newbould from RAND Europe, part of the Cambridge Centre for Health Services Research, the study’s first author, says: “There are some positives to a ‘telephone first approach’; for example, we found clear evidence that a significant part of patient workload can be addressed through phone consultations. But we need to be careful about seeing this as a panacea: while this may increase a GP practice’s control over day-to-day workload, it does not necessarily decrease the amount of time GPs spend consulting and may, in some cases, increase it.”

The researchers found no evidence that the approach substantially reduced overall attendance at A&E departments or emergency hospital admissions: introduction of the ‘telephone first’ approach was followed by a small (2%) increase in hospital admissions, no initial change in A&E attendance, but a small (2% per year) decrease in the subsequent rate of rise of A&E attendance. However, far from reducing secondary care costs, they found that overall secondary care costs increased slightly, by £11,776 per 10,000 patients.

Professor Roland adds: “Importantly, we found no evidence to support claims made by one of the companies that support such services – claims that have been repeated by NHS England – that the approach would be substantially cost-saving or reduce hospital referrals. This has resulted in some Clinical Commissioning Groups across England buying their consultancy services based on unsubstantiated claims. The NHS must be careful to ensure that it bases its information and recommendation on robust evidence.”

The study was funded by the National Institute for Health Research.

Reference
Newbould, J et al. Tele-First. Evaluation of a ‘telephone first’ approach to demand management in English general practice: observational study. BMJ (2017). DOI: 10.1136/bmj.j4197


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

App-Based Citizen Science Experiment Could Help Researchers Predict Future Pandemics

App-based citizen science experiment could help researchers predict future pandemics

source: www.cam.ac.uk

A new app gives UK residents the chance to get involved in an ambitious, ground-breaking science experiment that could save lives.

This could be the best data set we’ve ever had on the movement of people in the UK – for researchers like us, this is incredibly exciting.

Julia Gog

The most likely and immediate threat to our species is a global pandemic of highly infectious flu. Such a pandemic could be so serious that it currently tops the UK Government’s Risk Register.

Scientists from the University of Cambridge and London School of Hygiene and Tropical Medicine are attempting to collect a gold standard data set that can be used to predict how the next pandemic flu would spread through this country – and what can be done to stop it. They need your help.

UK residents can take part in the BBC Pandemic experiment simply by downloading the Pandemic app onto their smartphone via the App Store or Google Play from today.

The app and results will be featured in a documentary on BBC Four in 2018, to be presented by Dr Hannah Fry and Dr Javid Abdelmoneim.

Data gathered via the app could be key in preparing for the next pandemic outbreak. In order to better understand how an infectious disease like flu can spread, researchers need data about how we travel and interact.

Two experiments will be conducted through the app: the National Outbreak, which is open to anyone in the UK from 27th September 2017; and the Haslemere Outbreak, a closed local study that is only open to people in the town of Haslemere, Surrey, and runs for 72 hours starting on Thursday 19th October 2017.

In the National Outbreak, the app will track your approximate movement at regular intervals over a 24-hour period – all data will be anonymised, so the app will not know exactly where or who you are. The app will also ask some questions about your journeys and the people you spent time with during those 24 hours.

All data collected will be grouped to ensure anonymity, and a research team from the University of Cambridge and the London School of Hygiene and Tropical Medicine will use it to predict how a flu pandemic might spread across the country – and determine what can be done to stop it.
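One common way to anonymise movement data of this kind is spatial aggregation: snapping each GPS fix to a coarse grid cell and keeping only per-cell counts. The sketch below is our own illustration of the general technique, not the app's actual pipeline; the cell size and coordinates are invented:

```python
# Sketch of coarse spatial aggregation for anonymity: snap each GPS fix to a
# grid cell and keep only per-cell counts, so individual addresses cannot be
# recovered. Cell size and coordinates are our own illustrative choices; this
# is not the BBC Pandemic app's actual method.
def to_cell(lat, lon, cell_deg=0.01):   # roughly 1 km cells at UK latitudes
    return (round(lat / cell_deg), round(lon / cell_deg))

counts = {}
for lat, lon in [(52.2053, 0.1218),    # two fixes in central Cambridge...
                 (52.2057, 0.1221),    # ...land in the same cell
                 (51.5074, -0.1278)]:  # one fix in central London
    cell = to_cell(lat, lon)
    counts[cell] = counts.get(cell, 0) + 1
```

Using integer cell indices as dictionary keys sidesteps floating-point comparison issues, and the aggregated counts preserve the population-level movement patterns that epidemic models need.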

Professor Julia Gog, who specialises in the mathematics of infectious diseases, and her colleagues from Cambridge’s Department of Applied Mathematics and Theoretical Physics have helped design the experiment and will be processing the data, running statistical analyses, and building and running the pandemic models.

“This will give us a chance to explore a range of different scenarios,” said Professor Gog. “This could be the best data set we’ve ever had on the movement of people in the UK, and could help support future research projects to control infectious diseases – for researchers like us, this is incredibly exciting.”

There are flu outbreaks every year but in the last 100 years, there have been four pandemics of a particularly deadly flu, including the Spanish Influenza outbreak which hit in 1918, killing up to 100 million people worldwide. Nearly a century later, a catastrophic flu pandemic still tops the UK Government’s Risk Register of threats to this country. Key to the Government’s response plan are mathematical models which simulate how a highly contagious disease may spread. These models help to decide how best to direct NHS resources, like vaccines and protective clothing. But the models are only as good as the data that goes into them.
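The standard building block of such epidemic models is the SIR (susceptible–infected–recovered) compartmental model. The minimal sketch below uses illustrative parameter values of our own choosing, not those used by the BBC Pandemic researchers; real pandemic-planning models add age structure, geography and the contact data the app is designed to collect:

```python
# Minimal SIR compartmental model with forward-Euler stepping. Parameter
# values (beta, gamma) are illustrative only, not from the BBC Pandemic study.
def sir(s, i, r, beta=0.3, gamma=0.1, days=160, dt=1.0):
    """Evolve susceptible/infected/recovered fractions; return the trajectory."""
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt   # transmission: S -> I
        new_rec = gamma * i * dt      # recovery:     I -> R
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

traj = sir(0.99, 0.01, 0.0)           # 1% of the population initially infected
peak_infected = max(i for _, i, _ in traj)
```

With these assumed parameters (a basic reproduction number of about 3, broadly in the range estimated for pandemic flu), the outbreak peaks and then burns out as susceptibles are depleted. Movement data like the app's refines the contact structure hidden inside the transmission term.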

The more people of all ages that take part in BBC Pandemic, the better that data will be.

By identifying the human networks and behaviours that spread a deadly flu, the app will help to make these models more accurate and, in turn, help to stem the next pandemic.

This project has been commissioned by the BBC, and is being undertaken in collaboration with researchers at the University of Cambridge and the London School of Hygiene and Tropical Medicine.

More information is available at the BBC website.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

New Type of Supercomputer Could Be Based On ‘Magic Dust’ Combination of Light and Matter

New type of supercomputer could be based on ‘magic dust’ combination of light and matter

source: www.cam.ac.uk

A team of researchers from the UK and Russia have successfully demonstrated that a type of ‘magic dust’ which combines light and matter can be used to solve complex problems and could eventually surpass the capabilities of even the most powerful supercomputers.

One referee said, ‘Who would be crazy enough to try to implement this?!’

Natalia Berloff

The researchers, from Cambridge, Southampton and Cardiff Universities in the UK and the Skolkovo Institute of Science and Technology in Russia, have used quantum particles known as polaritons – which are half light and half matter – to act as a type of ‘beacon’ showing the way to the simplest solution to complex problems. This entirely new design could form the basis of a new type of computer that can solve problems that are currently unsolvable, in diverse fields such as biology, finance or space travel. The results are reported in the journal Nature Materials.

Our technological progress — from modelling protein folding and behaviour of financial markets to devising new materials and sending fully automated missions into deep space — depends on our ability to find the optimal solution of a mathematical formulation of a problem: the absolute minimum number of steps that it takes to solve that problem.

The search for an optimal solution is analogous to looking for the lowest point in a mountainous terrain with many valleys, trenches, and drops. A hiker may go downhill and think that they have reached the lowest point of the entire landscape, but there may be a deeper drop just behind the next mountain. Such a search may seem daunting in natural terrain, but imagine its complexity in high-dimensional space. “This is exactly the problem to tackle when the objective function to minimise represents a real-life problem with many unknowns, parameters, and constraints,” said Professor Natalia Berloff of Cambridge’s Department of Applied Mathematics and Theoretical Physics and the Skolkovo Institute of Science and Technology, and the paper’s first author.
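The hiker analogy can be made concrete with a toy one-dimensional landscape. In the sketch below (the function and step sizes are entirely our own example), plain downhill search settles in whichever valley is nearest to its starting point and misses the global minimum:

```python
import math

# A one-dimensional "mountainous terrain" with several valleys (a toy function
# of our own). Plain downhill search illustrates the hiker problem: where you
# end up depends on where you start, and neither run finds the global minimum.
def f(x):
    return 0.1 * x**2 + math.sin(3 * x)

def descend(x, step=0.01, iters=5000):
    for _ in range(iters):
        grad = 0.2 * x + 3 * math.cos(3 * x)  # derivative of f
        x -= step * grad
    return x

a = descend(2.0)    # settles in the valley near x ~ 1.5
b = descend(-2.0)   # settles in a different valley near x ~ -2.6
# The deepest valley (near x ~ -0.52) is missed from both starting points.
```

In one dimension one could simply scan the whole line, but with thousands of unknowns the number of valleys explodes, which is exactly the regime the polariton approach targets.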

Modern supercomputers can only deal with a small subset of such problems: those where the dimension of the function to be minimised is small, or where the underlying structure of the problem allows them to find the optimal solution quickly even for a function of large dimensionality. Even a hypothetical quantum computer, if realised, would offer at best a quadratic speed-up for the “brute-force” search for the global minimum.

Berloff and her colleagues approached the problem from an unexpected angle: What if instead of moving along the mountainous terrain in search of the lowest point, one fills the landscape with a magical dust that only shines at the deepest level, becoming an easily detectible marker of the solution?

“A few years ago our purely theoretical proposal on how to do this was rejected by three scientific journals,” said Berloff. “One referee said, ‘Who would be crazy enough to try to implement this?!’ So we had to do it ourselves, and now we’ve proved our proposal with experimental data.”

Their ‘magic dust’ polaritons are created by shining a laser at stacked layers of selected atoms such as gallium, arsenic, indium, and aluminium. The electrons in these layers absorb and emit light of a specific colour. Polaritons are ten thousand times lighter than electrons and may achieve sufficient densities to form a new state of matter known as a Bose-Einstein condensate, where the quantum phases of polaritons synchronise and create a single macroscopic quantum object that can be detected through photoluminescence measurements.

The next question the researchers had to address was how to create a potential landscape corresponding to the function to be minimised, and how to force polaritons to condense at its lowest point. To do this, the group focused on a particular type of optimisation problem – one general enough that any other hard problem can be related to it – namely minimisation of the XY model, one of the most fundamental models of statistical mechanics. The authors have shown that they can create polaritons at the vertices of an arbitrary graph: as the polaritons condense, their quantum phases arrange themselves in a configuration that corresponds to the absolute minimum of the objective function.
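The XY objective being minimised has the form H = -Σ J_ij cos(θ_i - θ_j) over the edges of the graph. The sketch below brute-forces a tiny, frustrated three-node example with toy couplings of our own choosing; the point of the polariton simulator is that it reaches the same minimum physically, through condensation, rather than by exhaustive search:

```python
import itertools
import math

# Brute-force minimisation of a tiny XY objective
#   H(theta) = -sum over edges (i, j) of J_ij * cos(theta_i - theta_j)
# on a frustrated 3-node triangle. The couplings are our own toy example.
edges = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): -1.0}  # J_ij couplings

def energy(theta):
    return -sum(J * math.cos(theta[i] - theta[j]) for (i, j), J in edges.items())

grid = [k * 2 * math.pi / 12 for k in range(12)]    # phases in 30-degree steps
best = min(itertools.product(grid, repeat=3), key=energy)
# For this frustrated triangle the minimum has the three phases spaced
# 120 degrees apart, giving H = -1.5.
```

Grid search scales exponentially with the number of vertices, which is why a physical device whose natural ground state encodes the answer is attractive.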

“We are just at the beginning of exploring the potential of polariton graphs for solving complex problems,” said co-author Professor Pavlos Lagoudakis, Head of the Hybrid Photonics Lab at the University of Southampton and the Skolkovo Institute of Science and Technology, where the experiments were performed. “We are currently scaling up our device to hundreds of nodes, while testing its fundamental computational power. The ultimate goal is a microchip quantum simulator operating at ambient conditions.”

Reference:
Natalia G. Berloff et al. ‘Realizing the classical XY Hamiltonian in polariton simulators.’ Nature Materials (2017). DOI: 10.1038/nmat4971



Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Cambridge Scientist Leading UK’s £65m Scientific Collaboration With US

Cambridge scientist leading UK’s £65m scientific collaboration with US

source: www.cam.ac.uk

The UK is investing £65 million in a flagship global science project based in the United States that could change our understanding of the universe, securing the UK’s position as the international research partner of choice. Professor Mark Thomson from the University of Cambridge’s Cavendish Laboratory has been the elected co-leader of the international DUNE collaboration since its inception and is the overall scientific lead of this new UK initiative.

This UK investment in fundamental science will enable us to deliver critical systems to the DUNE experiment and to provide new opportunities for the next generation of scientists to work at the forefront of science and technology.

Mark Thomson

This week, UK Universities and Science Minister Jo Johnson signed the agreement with the US Energy Department to invest the sum in the Long-Baseline Neutrino Facility (LBNF) and the Deep Underground Neutrino Experiment (DUNE). DUNE will study the properties of mysterious particles called neutrinos, which could help explain more about how the universe works and why matter exists at all.

This latest investment is part of a long history of UK research collaboration with the US, and is the first major project of the wider UK-US Science and Technology agreement.

On signing the agreement in Washington DC, UK Science Minister, Jo Johnson said: “Our continued collaboration with the US on science and innovation is beneficial to both of our nations and through this agreement we are sharing expertise to enhance our understanding of many important topics that have the potential to be world changing.

“The UK is known as a nation of science and technical progress, with research and development being at the core of our industrial strategy.  By working with our key allies, we are maintaining our position as a global leader in research for years to come.”

“The international DUNE collaboration came together to realise a dream of a game-changing program of neutrino science; today’s announcement represents a major milestone in turning this dream into reality,” said Professor Thomson. “This UK investment in fundamental science will enable us to deliver critical systems to the DUNE experiment and to provide new opportunities for the next generation of scientists to work at the forefront of science and technology.”

This investment is a significant step which will secure future access for UK scientists to the international DUNE experiment. Investing in the next generation of detectors, like DUNE, helps the UK to maintain its world-leading position in science research and continue to develop skills in new cutting-edge technologies.

The UK’s Science and Technology Facilities Council (STFC) will manage the UK’s investment in the international facility, giving UK scientists and engineers the chance to take a leading role in the management and development of the DUNE far detector and the LBNF beam line and associated PIP-II accelerator development.

Accompanying Jo Johnson on the visit to the US, Chief Executive Designate at UK Research and Innovation, Sir Mark Walport said: “Research and innovation are global endeavours. Agreements like the one signed today by the United Kingdom and the United States set the framework for the great discoveries of the future, whether that be furthering our understanding of neutrinos or improving the accessibility of museum collections.

“Agreements like this also send a clear signal that UK researchers are outward looking and ready to work with the best talent wherever that may be. UK Research and Innovation is looking forward to extending partnerships in science and innovation around the world.”

DUNE will be the first large-scale US-hosted experiment run as a truly international project at the inter-governmental level, with more than 1,000 scientists and engineers from 31 countries building and operating the facility, including many from the UK.  The US is meeting the major civil construction costs for conventional facilities, but is seeking international partners to design and build major elements of the accelerator and detectors.  The total international partner contributions to the entire project are expected to be about $500M.

The UK research community is already a major contributor to the DUNE collaboration, with 14 UK universities and two STFC laboratories providing essential expertise and components to the experiment and facility. These range from the high-power neutrino production target, the readout planes and the data acquisition systems to the reconstruction software.

Dr Brian Bowsher, Chief Executive of STFC, said: “This investment is a significant and exciting step for the UK that builds on UK expertise.

“International partnerships are the key to building these world-leading experiments, and the UK’s continued collaboration with the US, through STFC, demonstrates that we are the science partner of choice in such agreements.

“I am looking forward to seeing our scientists work with our colleagues in the US in developing this experiment and the exciting science which will happen as a result.”

One aspect DUNE scientists will look for is the differences in behaviour between neutrinos and their antimatter counterparts, antineutrinos, which could give us clues as to why we live in a matter-dominated universe – in other words, why we are all here, instead of having been annihilated just after the Big Bang. DUNE will also watch for neutrinos produced when a star explodes, which could reveal the formation of neutron stars and black holes, and will investigate whether protons live forever or eventually decay, bringing us closer to fulfilling Einstein’s dream of a grand unified theory.

The DUNE experiment will attract students and young scientists from around the world, helping to foster the next generation of leaders in the field and to maintain the highly skilled scientific workforce worldwide.

The Cambridge team is playing a leading role in the development of the advanced pattern recognition and computational techniques that will be needed to interpret the data from the vast DUNE detectors.

Other than Cambridge, the UK universities involved in the project are Birmingham, Bristol, Durham, Edinburgh, Imperial, Lancaster, Liverpool, UCL, Manchester, Oxford, Sheffield, Sussex and Warwick.

Adapted from an STFC press release


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.