All posts by Admin

Stopping Tumour Cells Killing Surrounding Tissue May Provide Clue To Fighting Cancer


source: www.cam.ac.uk

Tumours kill off surrounding cells to make room to grow, according to new research from the University of Cambridge. Although the study was carried out using fruit flies, its findings suggest that drugs to prevent, rather than encourage, cell death might be effective at fighting cancer – contrary to how many of the current chemotherapy drugs work.

It sounds counterintuitive not to encourage cell death as this means you’re not attacking the tumour itself

Eugenia Piddini

The idea that different populations of cells compete within the body, with winners and losers, was discovered in the 1970s and is thought to be a ‘quality control’ mechanism to rid the tissue of damaged or poorly-performing cells. With the discovery that genes involved in cancer promote this process, scientists have speculated that so-called ‘cell competition’ might explain how tumours grow within our tissues.

Now, researchers at the Wellcome Trust/Cancer Research UK Gurdon Institute, University of Cambridge, have used fruit flies genetically manipulated to develop intestinal tumours to show for the first time that as the tumour grows and its cells proliferate, it kills off surrounding healthy cells, making space in which to grow. The results of the study, funded by Cancer Research UK, are published in the journal Current Biology.

Image: Tumour cells (green) growing in the intestine of a fruit fly. Credit: Golnar Kolahgar

Dr Eugenia Piddini, who led the research, believes the finding may answer one of the longstanding questions about cancer. “We know that as cancer spreads through the body – or ‘metastasises’ – it can cause organ failure,” she says. “Our finding suggests a possible explanation for this: if the tumour kills surrounding cells, there will come a point where there are no longer enough healthy cells for the organ to continue to function.”

The cancer cells encourage a process known as apoptosis, or ‘cell death’, in the surrounding cells, though the mechanism by which this occurs is currently unclear and will be the subject of further research.

By manipulating genetic variants within the surrounding cells to resist apoptosis, the researchers were able to contain the tumour and prevent its spread. This suggests that drugs performing the same function – inhibiting cell death – may provide an effective way to prevent the spread of some types of cancer. This is counter to the current approach to fighting cancer: most drugs used in chemotherapy encourage cell death as a way of destroying the tumour, though this can cause ‘collateral damage’ to healthy cells, which is why chemotherapy patients often become very sick during treatment.

In fact, some drugs that inhibit cell death are already being tested in clinical trials to treat conditions such as liver damage; if proven to be safe, they may provide options for potential anti-cancer drugs. However, further research is needed to confirm that this approach will be suitable for treating cancer.

“It sounds counterintuitive not to encourage cell death as this means you’re not attacking the tumour itself,” says Dr Eugenia Piddini. “But if we think of it like an army fighting a titan, it makes sense that if you protect your soldiers and stop them dying, you stand a better chance of containing – and even killing – your enemy.”

The work, which was carried out by postdoctoral researcher Saskia Suijkerbuijk and colleagues in the Piddini group, used fruit flies because they are much simpler organisms to study than mammals; however, many of the genes being studied are conserved across species – in other words, the genes, or genes with an identical or very similar function, are found in both the fruit fly and mammals.

Dr Alan Worsley, senior science information officer at Cancer Research UK, said: “Tumours often need to elbow healthy cells out of the way in order to grow. This intriguing study in fruit flies suggests that if researchers can turn off the signals that tell healthy cells to die, they could act as a barrier that boxes cancer cells in and stunts their growth. We don’t yet know if the same thing would work in patients, but it highlights an ingenious new approach that could help to keep early stage cancers in check.”

Reference
Suijkerbuijk, SJE et al. Cell competition drives the growth of intestinal adenomas in Drosophila. Current Biology; 22 Feb 2016. dx.doi.org/10.1016/j.cub.2015.12.043


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

The Fitzwilliam Museum is 200 Today


Today, one of the great collections of art in the UK celebrates its bicentenary. Two hundred years to the day after the death of its mysterious founder, Richard, 7th Viscount Fitzwilliam of Merrion, the Fitzwilliam Museum has revealed previously unknown details of his life.

The gift Viscount Fitzwilliam left to the nation was one of the most important of his age

Tim Knox

Research for a new book has shown how his beloved library may have contributed to his death, and how his passion for music led him to the love of his life: a French dancer with whom he had two surviving children, ‘Fitz’ and ‘Billy’.

The Fitzwilliam Museum: a History is written by Lucilla Burn, Assistant Director for Collections at the Fitzwilliam. She said: “Lord Fitzwilliam’s life has been described as ‘deeply obscure’. Many men of his class and period, who sought neither fame nor notoriety, nor wrote copious letters or diaries, do not leave a conspicuous record. But by going through the archives and letters that relate to him, for the first time we can paint a fuller picture of his history, including aspects of his life that have previously been unknown, even to staff here at the Fitzwilliam.”

Lord Fitzwilliam died on the 4th of February 1816, and founded the Fitzwilliam Museum through the bequest to the University of Cambridge of his splendid collection of art, books and manuscripts, along with £100,000 to build the Museum. This generous gift began the story of one of the finest museums in Britain, which now houses over half a million artworks and antiquities.

Other than his close connection to Cambridge and his love of art and books, a motivation for Fitzwilliam’s bequest may have been his lack of legitimate heirs. The new details of his mistress help to explain why he never married.

In 1761 Richard Fitzwilliam entered Trinity Hall, Cambridge, and in 1763 his Latin ode, Ad Pacem, was published in a volume of loyal addresses to George III printed by the University of Cambridge. He made a strong impression on his tutor, the fiercely ambitious Samuel Hallifax, who commissioned Joseph Wright of Derby to paint a fine portrait of Fitzwilliam on his graduation with an MA degree in 1764.

Fitzwilliam’s studies continued after Cambridge; he travelled widely on the continent, perfecting his harpsichord technique in Paris with Jacques Duphly, an eminent composer, teacher and performer. A number of Fitzwilliam’s own harpsichord compositions have survived, indicating he was a gifted musician.

But from 1784 he was also drawn to Paris by his passionate attachment to Marie Anne Bernard, a dancer at the Opéra whose stage name was Zacharie. With Zacharie, Fitzwilliam fathered three children, two of whom survived infancy – little boys known to their parents as ‘Fitz’ and ‘Billy’. How the love affair ended is unknown, but its fate was clouded, if not doomed, by the French Revolution.

We do not know what happened to Zacharie after her last surviving letter, written to Lord Fitzwilliam late in December 1790. Her health was poor, so it is possible that she died in France. However, the elder son, ‘Fitz’, Henry Fitzwilliam Bernard, his wife Frances and their daughter Catherine were alive and living in Richmond with Lord Fitzwilliam at the time of the latter’s death in 1816. It is not known what happened to ‘Billy’.

At the age of seventy, early in August 1815, Lord Fitzwilliam fell from a ladder in his library and broke his knee. This accident may have contributed to his death six months later; on 18 August that year he drew up his last will and testament. Over the course of his life he had travelled extensively in Europe; by the time of his death he had amassed around 144 paintings, including masterpieces by Titian, Veronese and Palma Vecchio, 300 carefully ordered albums of Old Master prints, and a magnificent library containing illuminated manuscripts, musical autographs by Europe’s greatest composers and 10,000 fine printed books.

His estates were left to his cousin’s son, George Augustus Herbert, eleventh Earl of Pembroke and eighth Earl of Montgomery. But he also carefully provided for his relatives and dearest friends. The family of Fitzwilliam’s illegitimate son, Henry Fitzwilliam Bernard (‘Fitz’), including his wife and daughter, received annuities for life totalling £2,100 a year.

On his motivation for leaving all his works of art to the University, he wrote: “And I do hereby declare that the bequests so by me made to the said Chancellor Masters and Scholars of the said University are so made to them for the purpose of promoting the Increase of Learning and the other great objects of that Noble Foundation.”

Fitzwilliam Museum Director Tim Knox said: “The gift Viscount Fitzwilliam left to the nation was one of the most important of his age. This was the period when public museums were just beginning to emerge. Being a connoisseur of art, books and music, our Founder saw the importance of public collections for the benefit of all. But we are also lucky that his life circumstances enabled him to do so – had there been a legitimate heir, he might not have been able to give with such liberality. From the records we have discovered he appears to have been as generous as he was learned: he arranged music concerts to raise funds for charity, and helped many people escaping the bloodiest moments of the French Revolution. We are delighted to commemorate our Founder in our bicentenary year.”

Exhibitions and events for the Fitzwilliam Museum’s Bicentenary will be taking place throughout 2016. These include two key exhibitions opening in February, a retrospective of its history, Celebrating the First 200 Years: The Fitzwilliam Museum 1816 – 2016, and a major exhibition of Egyptian antiquities, Death on the Nile: Uncovering the afterlife of ancient Egypt. For more information visit www.fitzmuseum.cam.ac.uk.



Syrian Aid: Lack of Evidence For ‘Interventions That Work’, Say Researchers


source: www.cam.ac.uk

The lack of an evidence base in the donor-funded response to the Syrian migrant crisis means funds may be allocated to ineffective interventions, say researchers, who call on funders and policymakers gathering in London for this week’s Syrian Donor Conference to insist on evaluation as a condition of aid.

A focus on health and health services is notably absent in the donor conference agenda yet it is a fundamental determinant on the success of education and livelihoods policies

Adam Coutts

In the fifth year of the Syrian refugee crisis, donors and humanitarian agencies remain unsure about which policies and interventions have been most effective, and continue to rely on a largely reactive response, say a group of researchers, aid workers and Syrian medical professionals.

Response approaches to date have often been short-termist, have sometimes duplicated work, and show very little evidence of effectiveness or impact, they say.

As national leaders and UN delegates gather in London today for the Support Syria Donor Conference, members of the Syrian Public Health Network warn that unless aid is provided on condition of evidence-gathering and transparency so funding can be directed to interventions that work, the health, education and livelihoods of refugees will continue to deteriorate.

They caution that Syrians in neighbouring countries such as Lebanon and Jordan – where services are stretched to breaking point – will suffer the most from ineffective interventions unless the governments and NGOs of wealthy nations do more to link the allocation of donor funds to evidence, something that Network members have highlighted in a briefing for the UK’s Department for International Development.

“A focus on health and health services is notably absent in the donor conference agenda yet it is a fundamental determinant on the success of education and livelihoods policies,” said Dr Adam Coutts, Cambridge University researcher and member of the Syria Public Health Network.

“What funding there is for refugee healthcare risks disappearing unless governments insist on an evidence basis for aid allocation, similar to that expected in domestic policy-making.

“It is estimated that there are now over 4.3 million Syrian refugees in neighbouring frontline countries, and over half these people are under the age of 18. This level of displacement is unprecedented and given how short funds are, we need to be sure that programmes work,” said Coutts, from Cambridge’s Department of Politics and International Studies.

“New ideas and approaches need to be adopted in order to reduce the massive burdens on neighbouring frontline states.”

Researchers say that the health response should do more to address the so-called ‘non-communicable diseases’ which ultimately cause more deaths: slow, silent killers such as diabetes, heart disease and, in particular, mental disorders. This means moving towards the development of universal health care systems in the region and building new public health services.

The calls for more evidence come on the back of an article published last week in the Journal of the Royal Society of Medicine, in which members of the Syria Public Health Network (SPHN) address the response to mental disorders among displaced Syrians.

Clinics in some camps in Turkey and Lebanon report almost half of occupants suffering from high levels of psychological distress. However, many Syrians in neighbouring countries live outside the camps – up to 80% in Jordan, for example – which means cases are unreported.

In Lebanon, despite political commitment to mental health, there are just 71 psychiatrists, mostly in Beirut.

“The implementation of short-term mental health interventions which often lack culturally relevant or practically feasible assessment tools risk diverting funds away from longer term, evidence based solutions,” said Coutts.

Moreover, a shortage of Syrian mental health professionals – fewer than 100 before the conflict, a number that has since fallen below 60 – is worsened by some neighbouring countries preventing Syrian doctors of any specialism from practising. Along with Physicians for Human Rights, SPHN members are calling for restrictions on practising licences for displaced Syrian health professionals to be lifted.

“To date Syrian medical workers in Lebanon and Jordan are a largely untapped workforce who are ready to work and help with the response. However, due to labour laws and the dominance of private health service providers it is very difficult if not impossible for them to work legally,” said SPHN member Dr Aula Abbara.

Emerging evidence from the Syrian crisis, as well as evidence from previous conflicts, is pointing to psychological treatments which show some effectiveness:

Pilot studies with refugees in Turkish camps using ‘telemental’ projects, the delivery of psychiatric care through telecommunications, suggest that such techniques are effective in supporting healthcare professionals on the ground.

The ‘teaching recovery techniques’ method is designed to boost children’s capacity to cope with the psychological aftermath of war. These techniques have been used in communities in the aftermath of major natural disasters and conflicts, and have shown promise.

While SPHN members caution that adequate testing of these interventions is required, they argue that this is precisely the point: more evidence is needed of what works.

Added Coutts: “A more scientific approach is needed so that precious and increasingly scarce financial aid is put to the most effective use possible. At the moment, NGOs and governments are not making sufficient reference to evidence in determining health, education and labour market policies for the largest displacement of people since World War Two.”



Modelling How The Brain Makes Complex Decisions


source: www.cam.ac.uk

Researchers have built the first biologically realistic mathematical model of how the brain plans and learns when faced with a complex decision-making process.

By combining planning and learning into one coherent model, we’ve made what is probably the most comprehensive model of complex decision-making to date

Johannes Friedrich

Researchers have constructed the first comprehensive model of how neurons in the brain behave when faced with a complex decision-making process, and how they adapt and learn from mistakes.

The mathematical model, developed by researchers from the University of Cambridge, is the first biologically realistic account of the process, and is able to predict not only behaviour, but also neural activity. The results, reported in The Journal of Neuroscience, could aid in the understanding of conditions from obsessive compulsive disorder and addiction to Parkinson’s disease.

The model was compared to experimental data for a wide-ranging set of tasks, from simple binary choices to multistep sequential decision making. It accurately captures behavioural choice probabilities and predicts choice reversal in an experiment, a hallmark of complex decision making.

Our decisions may provide immediate gratification, but they can also have far-reaching consequences, which in turn depend on several other actions we have already taken or will take in the future. The difficulty most of us face is how to take the potential long-term effects of a particular decision into account, so that we make the best choice.

There are two main types of decisions: habit-based and goal-based. An example of a habit-based decision would be a daily commute, which is generally the same every day. Just as certain websites are cached on a computer so that they load faster the next time they are visited, habits are formed by ‘caching’ certain behaviours so that they become virtually automatic.

An example of a goal-based decision would be a traffic accident or road closure on that same commute, forcing the adoption of a different route.

“A goal-based decision is much more complicated from a neurobiological point of view, because there are so many more variables – it involves exploring a branching set of possible future situations,” said the paper’s first author Dr Johannes Friedrich of Columbia University, who conducted the work while a postdoctoral researcher in Cambridge’s Department of Engineering. “If you think about a detour on your daily commute, you need to make a separate decision each time you reach an intersection.”
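The distinction can be sketched in code: a habit behaves like a cached lookup table, while a goal-based decision searches a branching tree of possible future situations. The sketch below is purely illustrative; the states, actions and rewards are invented for the example and are not drawn from the study:

```python
# Habit-based: a cached state -> action lookup, fast but inflexible.
habit = {"home": "usual_route"}

def habitual_action(state):
    return habit[state]

# Goal-based: explore the branching set of future situations and pick
# the action whose cumulative reward is highest. All states, actions
# and rewards here are hypothetical.
transitions = {
    ("home", "usual_route"):   [("traffic_jam", -10)],
    ("home", "detour"):        [("intersection", -1)],
    ("intersection", "left"):  [("office", 5)],
    ("intersection", "right"): [("dead_end", -5)],
}

def plan(state, depth=3):
    """Return (best cumulative reward, best action) by look-ahead search."""
    actions = [a for (s, a) in transitions if s == state]
    if depth == 0 or not actions:
        return 0, None
    return max(
        (sum(r + plan(nxt, depth - 1)[0] for nxt, r in transitions[(state, a)]), a)
        for a in actions
    )

print(habitual_action("home"))  # cached answer, no search needed
print(plan("home"))             # the detour wins despite its immediate cost
```

The habit answers instantly from its cache; the planner must re-evaluate every branch at every intersection, which is why goal-based decisions are so much more demanding from a neurobiological point of view.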

Habit-based decisions have been thoroughly studied by neuroscientists and are fairly well understood in terms of how they work at a neural level. The mechanisms behind goal-based decisions, however, remain elusive.

Now, Friedrich and Dr Máté Lengyel, also from Cambridge’s Department of Engineering, have built a biologically realistic solution to this computational problem. The researchers have shown mathematically how a network of neurons, when connected appropriately, can identify the best decision in a given situation and its future cumulative reward.

“Constructing these sorts of models is difficult because the model has to plan for all possible decisions at any given point in the process, and computations have to be performed in a biologically plausible manner,” said Friedrich. “But it’s an important part of figuring out how the brain works, since the ability to make decisions is such a core competence for both humans and animals.”

The researchers also found that for making a goal-based decision, the synapses which connect the neurons together need to ‘embed’ the knowledge of how situations follow on from each other, depending on the actions that are chosen, and how they result in immediate reward.

Crucially, they were also able to show in the same model how synapses can adapt and re-shape themselves depending on what did or didn’t work previously, in the same way as has been observed in human and animal subjects.
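As a loose analogy (not the authors’ spiking-neuron model), this kind of experience-driven re-shaping can be sketched with a simple temporal-difference update, where a table of value estimates stands in for synaptic weights and is nudged by reward prediction errors. The three-state chain and its reward below are invented for the illustration:

```python
# Temporal-difference sketch: a value table (standing in for synaptic
# weights) is nudged by reward prediction errors on each run through
# a hypothetical chain of situations A -> B -> C.
values = {"A": 0.0, "B": 0.0, "C": 0.0}
chain = ["A", "B", "C"]   # a reward of 1.0 arrives at state C
alpha = 0.1               # learning rate

def run_episode():
    for i, state in enumerate(chain):
        reward = 1.0 if state == "C" else 0.0
        next_value = values[chain[i + 1]] if i + 1 < len(chain) else 0.0
        delta = reward + next_value - values[state]   # prediction error
        values[state] += alpha * delta                # "synaptic" update

for _ in range(500):
    run_episode()

# The value of the final reward has propagated backwards along the chain,
# so earlier situations now predict the eventual outcome.
print({s: round(v, 2) for s, v in values.items()})
```

After enough episodes, states far from the reward carry nearly the same value as the rewarded state itself, which is the sense in which knowledge of “how situations follow on from each other” becomes embedded in the weights.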

“By combining planning and learning into one coherent model, we’ve made what is probably the most comprehensive model of complex decision-making to date,” said Friedrich. “What I also find exciting is that figuring out how the brain may be doing it has already suggested new algorithms that could be used in computers to solve similar tasks,” added Lengyel.

The model could be used to aid in the understanding of a range of conditions. For instance, there is evidence for selective impairment in goal-directed behavioural control in patients with obsessive compulsive disorder, which forces them to rely instead on habits. Deep understanding of the underlying neural processes is important as impaired decision making has also been linked to suicide attempts, addiction and Parkinson’s disease.

Reference:
Johannes Friedrich and Máté Lengyel. ‘Goal-Directed Decision Making with Spiking Neurons.’ The Journal of Neuroscience (2016). DOI: 10.1523/JNEUROSCI.2854-15.2016




Graphene Shown To Safely Interact With Neurons In The Brain


source: www.cam.ac.uk

Researchers have shown that graphene can be used to make electrodes that can be implanted in the brain, which could potentially be used to restore sensory functions for amputee or paralysed patients, or for individuals with motor disorders such as Parkinson’s disease.

We are just at the tip of the iceberg when it comes to the potential of graphene and related materials in bio-applications and medicine.

Andrea Ferrari

Researchers have successfully demonstrated how it is possible to interface graphene – a two-dimensional form of carbon – with neurons, or nerve cells, while maintaining the integrity of these vital cells. The work may be used to build graphene-based electrodes that can safely be implanted in the brain, offering promise for the restoration of sensory functions for amputee or paralysed patients, or for individuals with motor disorders such as epilepsy or Parkinson’s disease.

The research, published in the journal ACS Nano, was an interdisciplinary collaboration coordinated by the University of Trieste in Italy and the Cambridge Graphene Centre.

Previously, other groups had shown that it is possible to use treated graphene to interact with neurons. However, the signal-to-noise ratio from this interface was very low. By developing methods of working with untreated graphene, the researchers retained the material’s electrical conductivity, making it a significantly better electrode.

“For the first time we interfaced graphene to neurons directly,” said Professor Laura Ballerini of the University of Trieste in Italy. “We then tested the ability of neurons to generate electrical signals known to represent brain activities, and found that the neurons retained their neuronal signalling properties unaltered. This is the first functional study of neuronal synaptic activity using uncoated graphene based materials.”

Our understanding of the brain has increased to such a degree that by interfacing directly between the brain and the outside world we can now harness and control some of its functions. For instance, by measuring the brain’s electrical impulses, sensory functions can be recovered. This can be used to control robotic arms for amputee patients or any number of basic processes for paralysed patients – from speech to movement of objects in the world around them. Alternatively, by interfering with these electrical impulses, motor disorders (such as epilepsy or Parkinson’s) can start to be controlled.

Scientists have made this possible by developing electrodes that can be placed deep within the brain. These electrodes connect directly to neurons and transmit their electrical signals away from the body, allowing their meaning to be decoded.

However, the interface between neurons and electrodes has often been problematic: not only do the electrodes need to be highly sensitive to electrical impulses, but they need to be stable in the body without altering the tissue they measure.

Too often the modern electrodes used for this interface (based on tungsten or silicon) suffer from partial or complete loss of signal over time. This is often caused by the formation of scar tissue around the inserted electrode which, being rigid, cannot move with the natural movements of the brain.

Graphene has been shown to be a promising material to solve these problems, because of its excellent conductivity, flexibility, biocompatibility and stability within the body.

Based on experiments conducted in rat brain cell cultures, the researchers found that untreated graphene electrodes interfaced well with neurons. By studying the neurons with electron microscopy and immunofluorescence, the researchers found that they remained healthy, transmitting normal electric impulses and, importantly, showing none of the adverse reactions that lead to damaging scar tissue.

According to the researchers, this is the first step towards using pristine graphene-based materials as an electrode for a neuro-interface. In future, the researchers will investigate how different forms of graphene, from multiple layers to monolayers, are able to affect neurons, and whether tuning the material properties of graphene might alter the synapses and neuronal excitability in new and unique ways. “Hopefully this will pave the way for better deep brain implants to both harness and control the brain, with higher sensitivity and fewer unwanted side effects,” said Ballerini.

“We are currently involved in frontline research in graphene technology towards biomedical applications,” said Professor Maurizio Prato from the University of Trieste. “In this scenario, the development and translation in neurology of graphene-based high-performance biodevices requires the exploration of the interactions between graphene nano- and micro-sheets with the sophisticated signalling machinery of nerve cells. Our work is only a first step in that direction.”

“These initial results show how we are just at the tip of the iceberg when it comes to the potential of graphene and related materials in bio-applications and medicine,” said Professor Andrea Ferrari, Director of the Cambridge Graphene Centre. “The expertise developed at the Cambridge Graphene Centre allows us to produce large quantities of pristine material in solution, and this study proves the compatibility of our process with neuro-interfaces.”

The research was funded by the Graphene Flagship, a European initiative which promotes a collaborative approach to research with an aim of helping to translate graphene out of the academic laboratory, through local industry and into society.

Reference:
Fabbro A., et al. ‘Graphene-Based Interfaces do not Alter Target Nerve Cells.’ ACS Nano (2016). DOI: 10.1021/acsnano.5b05647



Changes to NHS Policy Unlikely To Reduce Emergency Hospital Admissions


source: www.cam.ac.uk

Recent changes to UK healthcare policy intended to reduce the number of emergency hospital admissions are unlikely to be effective, according to a study published in the British Medical Journal.

Too often government policy is based on wishful thinking rather than on hard evidence on what is actually likely to work, and new interventions often aren’t given enough time to bed in to know whether they’re really working

Martin Roland

Alternative approaches are therefore needed to tackle the continuing rise of costly emergency admissions, conclude researchers from the Health Research Board Centre for Primary Care Research at the Royal College of Surgeons in Ireland (RCSI) in collaboration with the University of Cambridge.

Recently introduced changes to GPs’ pay mean that they are now incentivised to identify people in their practice thought to be at high risk of future emergency admission and to offer them extra support in the form of ‘case management’, including personalised care plans. However, the researchers show that emergency admission is a difficult outcome to predict reliably. Electronic tools have been developed to identify people at high risk, but these tools will, at best, only identify a minority of the people who will actually be admitted to hospital. In addition, the researchers found little evidence that implementing case management for people identified as high risk actually reduces the risk of future emergency admission.
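The arithmetic behind why such tools flag only a minority of admissions is worth spelling out. With invented but plausible numbers (every rate below is an assumption for illustration, not a figure from the study), even a tool with respectable accuracy captures few of the actual admissions and flags many patients who will never be admitted:

```python
# Toy calculation of screening performance at low prevalence.
# All numbers are assumptions chosen for illustration only.
population = 100_000      # practice population (assumed)
admission_rate = 0.02     # 2% have an emergency admission this year (assumed)
sensitivity = 0.30        # tool spots 30% of future admissions (assumed)
specificity = 0.95        # 5% of non-admitted patients wrongly flagged (assumed)

admitted = population * admission_rate                         # ~2,000 people
true_positives = admitted * sensitivity                        # ~600 caught
false_positives = (population - admitted) * (1 - specificity)  # ~4,900 wrongly flagged
flagged = true_positives + false_positives

print(f"Admissions captured: {true_positives / admitted:.0%}")
print(f"Flagged patients actually admitted (PPV): {true_positives / flagged:.1%}")
```

Because emergency admission is a rare outcome, false positives swamp the true ones: most of the patients offered case management under these assumed figures would never have been admitted anyway.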

The authors suggest alternative options that may have more impact on the use of hospital beds for patients following an emergency admission, based on the research evidence in this area.

The first recommendation is to focus on reducing the length of time that patients are in hospital – though this depends on resources being available in the community to support patients when they are discharged. Second, a significant proportion of all emergency admissions are re-admissions following discharge, and the research evidence supports interventions to reduce some of these admissions, especially when several members of the healthcare team (e.g. doctor, nurse, social worker, case manager) are involved in helping patients manage the transition from hospital to home.

A third option is to focus on certain medical conditions, such as pneumonia, known to cause avoidable emergency admissions and more likely to respond to interventions in primary care. Finally, the authors suggest that policy efforts should be concentrated in more deprived areas where people are more likely to suffer with multiple chronic medical conditions and are more likely to be admitted to hospital.

Lead author and Health Research Board Research Fellow Dr Emma Wallace from the RCSI said: “Reducing emergency admissions is a popular target when trying to curtail spiralling healthcare costs. However, only a proportion of all emergency admissions are actually avoidable and it’s important that policy efforts to reduce emergency admissions are directed where they are most likely to succeed.

“Our analysis indicates that current UK healthcare policy targeting people identified as high risk in primary care for case management is unlikely to be effective and alternative options need to be considered.”

Professor Martin Roland, senior author and Professor of Health Services Research at the University of Cambridge, added: “Too often government policy is based on wishful thinking rather than on hard evidence on what is actually likely to work, and new interventions often aren’t given enough time to bed in to know whether they’re really working.

“Reducing the number of people who are readmitted to hospital, and reducing the length of time that people stay in hospital are both likely to have a bigger effect on hospital bed use than trying to predict admission in the population. Both of these need close working between primary and secondary care and between health and social care.”

Reference
Wallace, E et al. Reducing emergency admissions through community-based interventions: are uncertainties in the evidence reflected in health policy? BMJ; 28 Jan 2016


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/changes-to-nhs-policy-unlikely-to-reduce-emergency-hospital-admissions

Making Operating Systems Safer and Faster With ‘Unikernels’

Making operating systems safer and faster with ‘unikernels’

source: www.cam.ac.uk

Technology to improve the security, speed and scale of data processing in age of the Internet of Things is being developed by a Cambridge spin-out company.

This acquisition shows the power of open source development to have impact and to be commercially successful.

Andy Hopper

Specialised computer software components to improve the security, speed and scale of data processing in cloud computing are being developed by a University of Cambridge spin-out company. The company, Unikernel Systems, which was formed by staff and postdoctoral researchers at the University's Computer Laboratory, has recently been acquired by San Francisco-based software company Docker Inc.

Unikernels are small, potentially transient computer modules specialised to undertake a single task at the point in time when it is needed. Because of their reduced size, they are far more secure than traditional operating systems, and can be started up and shut down quickly and cheaply, providing flexibility and further security.

They are likely to become increasingly used in applications where security and efficiency are vital, such as systems storing personal data and applications for the so-called Internet of Things (IoT) – internet-connected appliances and consumer products.

“Unikernels provide the means to run the same application code on radically different environments from the public cloud to IoT devices,” said Dr Richard Mortier of the Computer Laboratory, one of the company’s advisors. “This allows decisions about where to run things to be revisited in the light of experience – providing greater flexibility and resilience. It also means software on those IoT devices is going to be a lot more reliable.”

Recent years have seen a huge increase in the amount of data that is collected, stored and processed, a trend that will only continue as increasing numbers of devices are connected to the internet. Most commercial data storage and processing now takes place within huge datacentres run by specialist providers, rather than on individual machines and company servers; the individual elements of this system are obscured to end users within the ‘cloud’. One of the technologies that has been instrumental in making this happen is virtual machines.

Normally, a virtual machine (VM) runs just like a real computer, with its own virtual operating system – just as your desktop computer might run Windows. However, a single real machine can run many VMs concurrently. VMs are general purpose, able to handle a wide range of jobs from different types of user, and capable of being moved across real machines within datacentres in response to overall user demand. The University’s Computer Laboratory started research on virtualisation in 1999, and the Xen virtual machine monitor that resulted now provides the basis for much of the present-day cloud.

Although VMs have driven the development of the cloud (and greatly reduced energy consumption), their inherent flexibility can come at a cost if their virtual operating systems are the generic Linux or Windows systems. These operating systems are large and complex, they have significant memory footprints, and they take time to start up each time they are required. Security is also an issue, because of their relatively large ‘attack surface’.

Given that many VMs are actually used to undertake a single function (e.g. acting as a company database), recent research has shifted to minimising complexity and improving security by taking advantage of the narrow functionality. And this is where unikernels come in.

Researchers at the Computer Laboratory started restructuring VMs into flexible modular components in 2009, as part of the RCUK-funded MirageOS project. These specialised modules – or unikernels – are in effect the opposite of generic VMs. Each one is designed to undertake a single task; they are small, simple and quick, using just enough code to enable the relevant application or process to run (about 4% of a traditional operating system according to one estimate).

The small size of unikernels also lends considerable security advantages, as they present a much smaller ‘surface’ to malicious attack, and also enable companies to separate out different data processing tasks in order to limit the effects of any security breach that does occur. Given that resource use within the cloud is metered and charged, they also provide considerable cost savings to end users.

By the end of last year, the unikernel technology arising from MirageOS was sufficiently advanced that the team, led by Dr Anil Madhavapeddy, decided to found a start-up company. The company, Unikernel Systems, was recently acquired by San Francisco-based Docker Inc. to accelerate the development and broad adoption of the technology, now envisaged as a critical element in the future of the Internet of Things.

“This brings together one of the most significant developments in operating systems technology of recent years, with one of the most dynamic startups that has already revolutionised the way we use cloud computing. This link-up will truly allow us all to ‘rethink cloud infrastructure’,” said Balraj Singh, co-founder and CEO of Unikernel Systems.

“This acquisition shows that the Computer Laboratory continues to produce innovations that find their way into mainstream developments. It also shows the power of open source development to have impact and to be commercially successful”, said Professor Andy Hopper, Head of the University of Cambridge Computer Laboratory.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

How Many Ways Can You Arrange 128 Tennis Balls? Researchers Solve an Apparently Impossible Problem

How many ways can you arrange 128 tennis balls? Researchers solve an apparently impossible problem

source: www.cam.ac.uk

A bewildering physics problem has apparently been solved by researchers, in a study which provides a mathematical basis for understanding issues ranging from predicting the formation of deserts, to making artificial intelligence more efficient.

The brute force way of doing this would be to keep changing the system and recording the configurations. Unfortunately, it would take many lifetimes before you could record it all. Also, you couldn’t store them, because there isn’t enough matter in the universe.

Stefano Martiniani

In research carried out at the University of Cambridge, a team developed a computer program that can answer this mind-bending puzzle: Imagine that you have 128 soft spheres, a bit like tennis balls. You can pack them together in any number of ways. How many different arrangements are possible?

The answer, it turns out, is something like 10^250 (1 followed by 250 zeros). The number, also referred to as ten unquadragintilliard, is so huge that it vastly exceeds the total number of particles in the universe.

Far more important than the solution, however, is the fact that the researchers were able to answer the question at all. The method that they came up with can help scientists to calculate something called configurational entropy – a term used to describe how structurally disordered the particles in a physical system are.

Being able to calculate configurational entropy would, in theory, eventually enable us to answer a host of seemingly impossible problems – such as predicting the movement of avalanches, or anticipating how the shifting sand dunes in a desert will reshape themselves over time.

These questions belong to a field called granular physics, which deals with the behaviour of materials such as snow, soil or sand. Different versions of the same problem, however, exist in numerous other fields, such as string theory, cosmology, machine learning, and various branches of mathematics. The research shows how questions across all of those disciplines might one day be addressed.

Stefano Martiniani, a Gates Scholar at St John’s College, University of Cambridge, who carried out the study with colleagues in the Department of Chemistry, explained: “The problem is completely general. Granular materials themselves are the second most processed kind of material in the world after water and even the shape of the surface of the Earth is defined by how they behave.”

“Obviously being able to predict how avalanches move or deserts may change is a long, long way off, but one day we would like to be able to solve such problems. This research performs the sort of calculation we would need in order to be able to do that.”

At the heart of these problems is the idea of entropy – a term which describes how disordered the particles in a system are. In physics, a “system” refers to any collection of particles that we want to study, so for example it could mean all the water in a lake, or all the water molecules in a single ice cube.

When a system changes, for example because of a shift in temperature, the arrangement of these particles also changes. For example, if an ice cube is heated until it becomes a pool of water, its molecules become more disordered. Therefore, the ice cube, which has a tighter structure, is said to have lower entropy than the more disordered pool of water.

At a molecular level, where everything is constantly vibrating, it is often possible to observe and measure this quite clearly. In fact, many molecular processes involve a spontaneous increase in entropy until they reach a steady equilibrium.

In granular physics, however, which tends to involve materials large enough to be seen with the naked eye, change does not happen in the same way. A sand dune in the desert will not spontaneously change the arrangement of its particles (the grains of sand). It needs an external factor, like the wind, for this to happen.

This means that while we can predict what will happen in many molecular processes, we cannot easily make equivalent predictions about how systems will behave in granular physics. Doing so would require us to be able to measure changes in the structural disorder of all of the particles in a system – its configurational entropy.

To do that, however, scientists need to know how many different ways a system can be structured in the first place. The calculations involved in this are so complicated that they have been dismissed as hopeless for any system involving more than about 20 particles. Yet the Cambridge study defied this by carrying out exactly this type of calculation for a system, modelled on a computer, in which the particles were 128 soft spheres, like tennis balls.

“The brute force way of doing this would be to keep changing the system and recording the configurations,” Martiniani said. “Unfortunately, it would take many lifetimes before you could record it all. Also, you couldn’t store the configurations, because there isn’t enough matter in the universe with which to do it.”

Instead, the researchers created a solution which involved taking a small sample of all possible configurations and working out the probability of them occurring, or the number of arrangements that would lead to those particular configurations appearing.

Based on these samples, it was possible to extrapolate not only the number of ways in which the entire system could be arranged, but also how ordered one state was compared with the next – in other words, its overall configurational entropy.
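The counting-by-sampling idea can be illustrated with a toy example. The sketch below is purely hypothetical and is not the authors' actual basin-sampling algorithm: it estimates how many points in a large space satisfy a property by sampling uniformly and scaling the hit fraction up by the size of the space, rather than enumerating every configuration.

```python
import random

# Toy illustration of "turning counting into sampling": estimate how many
# integer triples (x, y, z) with 0 <= x, y, z < 100 satisfy x + y + z < 60,
# without enumerating all 1,000,000 of them.

random.seed(42)

SIDE = 100
TOTAL = SIDE ** 3      # size of the whole configuration space
SAMPLES = 200_000      # small sample, as in the sampling approach

hits = sum(
    1
    for _ in range(SAMPLES)
    if sum(random.randrange(SIDE) for _ in range(3)) < 60
)

# Scale the observed hit fraction up to the whole space.
estimate = TOTAL * hits / SAMPLES

# Exact count for comparison (feasible here, hopeless for 128 soft spheres).
exact = sum(
    1
    for x in range(SIDE)
    for y in range(SIDE)
    for z in range(SIDE)
    if x + y + z < 60
)

print(f"estimate: {estimate:.0f}, exact: {exact}")
```

As with the researchers' method, the answer is only ever approximate, but with a modest sample it lands within a few per cent of the true count.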

Martiniani added that the team’s problem-solving technique could be used to address all sorts of problems in physics and maths. He himself is, for example, currently carrying out research into machine learning, where one of the problems is knowing how many different ways a system can be wired to process information efficiently.

“Because our indirect approach relies on the observation of a small sample of all possible configurations, the answers it finds are only ever approximate, but the estimate is a very good one,” he said. “By answering the problem we are opening up uncharted territory. This methodology could be used anywhere that people are trying to work out how many possible solutions to a problem you can find.”

The paper, Turning intractable counting into sampling: computing the configurational entropy of three-dimensional jammed packings, is published in the journal Physical Review E.

Stefano Martiniani is a St John’s Benefactor Scholar and Gates Scholar at the University of Cambridge.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Cambridge Joins Consortium to Launch £40 Million Apollo Therapeutics Fund

Cambridge joins consortium to launch £40 million Apollo Therapeutics Fund

source: www.cam.ac.uk

Three global pharmaceutical companies and the technology transfer offices of three world-leading universities – Imperial College London, University College London and the University of Cambridge – have joined forces with a combined £40 million to create the Apollo Therapeutics Fund.

This new joint venture will support the translation of ground-breaking academic science from within these universities into innovative new medicines for a broad range of diseases.

Each of the three industry partner companies – AstraZeneca UK Limited, Glaxo Group Limited and Johnson & Johnson Innovation-JJDC, Inc. – will contribute £10 million over 6 years to the venture. The technology transfer offices of the three university partners – Imperial Innovations Group plc, Cambridge Enterprise Ltd and UCL Business plc – will each contribute a further £3.3 million.

The aim of Apollo is to advance academic pre-clinical research from these universities to a stage at which it can either be taken forward by one of the industry partners following an internal bidding process or be out-licensed. The three industry partners will also provide R&D expertise and additional resources to assist with the commercial evaluation and development of projects.

Drug development is extremely complex, costly and lengthy; currently only around 10 percent of therapies entering clinical trials reach patients as medicines. By combining funding for promising early-stage therapeutics from leading UK universities with a breadth of industry expertise, Apollo aims to share the risk and accelerate the development of important new treatments, while also reducing the cost.

Dr Ian Tomlinson, former Senior Vice President, Worldwide Business Development and Biopharmaceuticals R&D, for GSK and founder & Chief Scientific Officer of Domantis Limited, has been appointed Chairman of the Apollo Therapeutics Investment Committee (AIC). Comprising representatives from the six partners, the AIC will make all investment decisions.

The AIC will be advised by an independent Drug Discovery Team of ex-industry scientists who will be employed by Apollo to work with the universities and their technology transfer offices to identify and shape projects to bring forward for development. All therapy areas and modalities, including small molecules, peptides, proteins, antibodies, cell and gene therapies will be considered.

Apollo will be based at Stevenage Bioscience Catalyst. Once funded, projects will be progressed by the Drug Discovery Team alongside the university investigators, with other external resources and also in-kind resources from the industry partners as appropriate. For successful projects, the originating university and technology transfer office will receive a percentage of future commercial revenues or out-licensing fees and the remainder will be divided amongst all the Apollo partners.

Dr Tomlinson said: “This is the first time that three global pharmaceutical companies and the technology transfer offices of three of the world’s top ten universities have come together to form a joint enterprise of this nature, making the Apollo Therapeutics Fund a truly innovative venture.

“Apollo provides an additional source of early stage funding that will allow more therapeutics projects within the three universities to realise their full potential. The active participation of the industry partners will also mean that projects will be shaped at a very early stage to optimise their suitability for further development.

“The Apollo Therapeutics Fund should benefit the UK economy by increasing the potential for academic research to be translated into new medicines for patients the world over.”

Iain Thomas, Head of Life Sciences at Cambridge Enterprise, added: “Efficiently bringing together drug discovery expertise, potential customers, funding and project management, along with rapid decision making and execution through the Apollo Therapeutics Fund is a unique and extraordinarily exciting and valuable proposition for any academic or company that wants to see early stage ground breaking therapeutic technology progress to the clinic for patient benefit and economic return.”

Adapted from a press release by Apollo Therapeutics.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/cambridge-joins-consortium-to-launch-ps40-million-apollo-therapeutics-fund

How To Get Teams To Share Information

How to get teams to share information

source: www.cam.ac.uk

Are you happy to share information with your colleagues? And do they share their valuable information with you? A number of companies have realised that withholding key information within organisational silos might happen more often than we would like to admit. Now a new study suggests how and when companies should restore meaningful communication across the organisation.

The study reveals that teams across the land are not playing nicely after all. In fact, there are many occasions when we choose not to share information with colleagues if we think it can harm our own prospects of success. And when that information determines, say, the level of funding passed down from a CEO, it can have a significant – and counter-productive – effect on the company as a whole.

“Most organisations must make decisions about where best to allocate resources,” said Nektarios (Aris) Oraiopoulos of Cambridge Judge Business School, whose study, published in the journal Management Science, examined how these issues play out in the pharmaceutical industry. “Pharmaceutical companies as a whole need to regularly reassess their research and development portfolios and decide which projects have the greatest potential; for example they might choose to improve an existing drug or develop a new one. Such decisions are often made by executives who rely on information provided by the project managers. But individual project managers do not necessarily give accurate information to the boss if they think it will cost them the resources that fund their projects.”

Oraiopoulos’s study, undertaken with Vincent Mak of Cambridge Judge Business School, and Professor Jochen Schlapp of Mannheim University, revealed that managers’ willingness to share information depended on whether there was an appropriate fit between the type of project (e.g. a new project vs a ‘me-too’ project) and the incentive scheme in place.

“In small companies such as start-ups, there’s often such a strong culture of collective ambition and responsibility – and enhanced risk – that it’s hard to attribute success or failure individually,” said Oraiopoulos. “Therefore the most effective incentive rewards everyone on the basis of the collective success. But as the company grows, people inevitably assume singular responsibilities, the outcomes are less risky and, in the interests of the company, managers start following individual agendas – and management starts rewarding individual performance.”

Which is where the problems start. “If two project managers are offered a group incentive for success, individuals are more willing to be upfront about any failings. But when the two project managers compete for resources and rewards, as often happens in a bigger organisation, project managers are less likely to step aside.”

There are many reasons for this, said Oraiopoulos, not necessarily based in deception. “Pharmaceutical research includes many ‘true believers’ – researchers who have absolute faith in a new product, especially if it could cure an important disease. But that faith skews their judgment. They believe their breakthrough is just around the corner, even if all the existing evidence suggests otherwise.”

This is a difficult moral argument for any CEO to reject – a difficulty compounded by the lack of impartial information in such a knowledge-specific industry. “One project manager’s specialty might be cardiovascular, another’s oncology,” said Oraiopoulos. “No one knows the science and potential of their product better than they do. They can present an accurate case for why their project deserves resources – or, consciously or subconsciously, mask its failings because no one has the expertise to challenge them. So how does the CEO tell the difference?”

The answer is trust and giving teams a compelling motivation to be honest. But a collective incentive has drawbacks. “If you’re leading one of five departments who are rewarded only for collective excellence,” said Oraiopoulos, “where’s your motivation? You might as well let the others carry you.”

And even financial incentive doesn’t necessarily work. “Many researchers’ greatest reward is completing their project,” said Oraiopoulos. “That means being consistently confident their boss supports their work.”

So what’s the solution? “Organisations are tackling it in different ways,” he said. “Some are creating smaller, individual units, for example, centres of excellence or turning departments into small start-ups, with defined budgets. Others are promoting more collaboration between departments.”

Swiss global healthcare company Roche did both. When it bought drug developer Genentech in 2009 it kept the two companies’ research and development sections separate, empowering its “late-stage development group” to pick the strongest project – and motivating the losing group by announcing it would develop its plans later. But while that worked with Alzheimer’s treatments, a more linked approach was required for fast-paced developments in cancer research. “The need to understand the biology and right therapeutic approach requires the best minds,” said Roche’s head of oncology Jason Coloma. “We needed to leverage the knowledge in these divisions and break down some of these firewalls.”

The company formed a cancer immunotherapy committee which, says Roche, “brings the leadership and senior scientific minds together to consider different areas of interest and unmet needs that can be fulfilled by looking at different combinations.”

Roche’s approach confirms Oraiopoulos’s findings that new products require a team strategy, while ‘me-too’ projects benefit from more individual approaches. But how to break down a colossal R&D function into start-up-style divisions?

GlaxoSmithKline replaced its research and development ‘pyramid’ with 12 centres of excellence. “We learned these centres must be built around two things,” its then CEO Jean-Pierre Garnier said later. “A specific mission – the most effective therapies for Alzheimer’s – and the stage of the R&D process required to perform that mission, for example choosing a target for attacking the disease. Anything not critical to the core R&D process must occur outside the centre. All other functions – toxicology, drug metabolism, formulation – had to become service units, delivering at the lowest possible cost.”

Simultaneously, GSK overhauled its incentives. “Pharmaceutical R&D typically pursues two objectives – to be first in class and to offer the best-in-class compound for attacking a disease. For too long the industry has tried to be a ballet dancer and a footballer at the same time.”

But he warned fragmenting a company needs commitment. “To operate in this fashion, companies must strengthen opportunities, negotiate deals and nurture external scientific ‘bets’ (work with outside experts). This means a cultural shift. It’s an enormous but necessary task.”

Oraiopoulos’s research suggests there are so many variables – different products, motivations, branches of medicine, organisational goals – each company must then find its own solution. Pfizer’s recent buy-out of Botox maker Allergan is expected to maintain separate divisions for innovative and established treatments, so how the company allocates its resources remains to be seen.

“You must strike a balance,” said Oraiopoulos, “between rewarding individual and group performance. It’s a spectrum and each company must find its place on it, for patients and for the advancement of treatments. Many companies are encountering this challenge. We’re only scratching the surface.”

Reference:
Schlapp, Oraiopoulos, and Mak: ‘Resource Allocation Under Imperfect Evaluation.’ Management Science (2015). DOI: 10.1287/mnsc.2014.2083

Originally published on the Cambridge Judge Business School website


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/how-to-get-teams-to-share-information

Artificial Intelligence and Rise of the Machines: Cambridge Science Festival 2016

Artificial intelligence and rise of the machines: Cambridge Science Festival 2016

source: www.cam.ac.uk

The annual two-week Festival, which runs from 7 to 20 March and stages more than 300 events, examines the growing interaction between humans and technology.

Full programme now online | Bookings open Monday 8 February

Will artificial intelligence be superior to or as creative as the human brain? Are we letting machines take over and give rise to mass unemployment or worse? Should we be worried about quantum computing and the impact it will have on the way we work, communicate and live in the future? Or should we harness rather than hate the digital deluge?

As we teeter on the brink of total machine dependency and epoch-making technological developments that cross all areas of our lives, Cambridge Science Festival asks these and many other critical questions.

The annual Festival presents an impressive line-up of who’s who from the science world and beyond, including Professors Sir John Gurdon (Nobel Laureate), Sir David Spiegelhalter, Richard Gilbertson, Raymond Laflamme, Didier Queloz, Meg Urry, and Tony Purnell, Head of Technical Development for British Cycling. Other speakers include Dr Ewan Birney, Director of the European Bioinformatics Institute; Angus Thirlwell, CEO and Co-founder of Hotel Chocolat; Dr Hermann Hauser, eminent technology entrepreneur; comedian Robin Ince; Charles Simonyi, high-tech pioneer and space traveller; and writer Simon Guerrier (Dr Who).

At the core of this year’s Festival is a series of events that explores the increasing symbiosis between humans and technology, and the questions this raises for humanity in the coming century. On the first day, a panel of outstanding speakers debate the implications of artificial intelligence. The panel consists of experts from the fields of information technology, robotics and neuroscience, including Dr Hermann Hauser, Dr Mateja Jamnik, Professor Trevor Robbins and Professor Alan Winfield. This event will be moderated by Tom Feilden, Science Correspondent for the Today Programme on BBC Radio 4.

Organiser of the event, Professor Barbara Sahakian, University of Cambridge, said: “Artificial intelligence could be of great benefit to society, producing innovative discoveries and providing humans with more leisure time. However, workers are concerned that, more and more, jobs are being taken over by artificial intelligence. We can see this in the context of the current trend for robots to work in car factories and driverless trains, and also in the future movement towards driverless cars.

“Some people feel this is an inevitable progression into the future due to advances in artificial intelligence, information technology and machine learning. However, others including many neuroscientists are not convinced that computers will ever be able to demonstrate creativity nor fully understand social and emotional interactions.”

Whether machines are taking over every aspect of our lives, and whether this is a positive or negative feature of modern living, is further examined in the event, ‘The rise of the humans: at the intersection of society and technology’. Dave Coplin, author and Chief Envisioning Officer for Microsoft UK, discusses the future of the UK’s IT and digital industries and addresses the convergence of society and technology, focussing on the developments that are creating so many new opportunities.

Coplin, who also has a new book coming out shortly, believes our current relationship with technology is holding us back from using it properly and we should think differently about the potential future uses for technology.

He said: “We should harness, rather than hate, the digital deluge. Individuals and organisations need to rise up and take back control of the potential that technology offers our society. We need to understand and aspire to greater outcomes from our use of technology.”

Building further on these issues in the second week of the Festival, Zoubin Ghahramani, Professor of Information Engineering at the University of Cambridge and the Cambridge Liaison Director of the Alan Turing Institute, explores intelligence and learning in brains and machines. He asks, what is intelligence? What is learning? Can we build computers and robots that learn? How much information does the brain store? How does mathematics help us answer these questions?

Professor Ghahramani highlights some current areas of research at the frontiers of machine learning, including a project to develop an Automatic Statistician, and speculates on some of the future applications of computers that learn.

For many, quantum computing is the answer to machine learning. Influential pioneer in quantum information theory and the co-founder and current director of the Institute for Quantum Computing at the University of Waterloo, Professor Raymond Laflamme presents the annual Andrew Chamblin Memorial Lecture: ‘Harnessing the Quantum World’. During his lecture, Professor Laflamme will share the latest breakthroughs and biggest challenges in the quest to build technologies based on quantum properties that will change the ways we work, communicate and live.

A former PhD student of Professor Stephen Hawking, Professor Laflamme is interested in harnessing the laws of quantum mechanics to develop new technologies that will have extensive societal impact. He believes that the development of quantum computers will allow us to really understand the quantum world and explore it more deeply.

He said: “This exploration will allow us to navigate in the quantum world, to understand chemistry and physics at the fundamental level and bring us new technologies with applications in health, such as the development of drugs, and to develop new materials with a variety of applications.

“In the next half decade, we will produce quantum processors with more than 100 quantum bits (qubits). As we pass the count of about 30 qubits (approximately one gigabyte), classical computers can no longer compete and we fully enter the quantum world. That will be very exciting, from then on we do not have the support of classical computers to tell us if the quantum devices behave as expected so we will need to find new ways to learn the reliability of these devices. Once we have 30-50 qubits (approximately one million gigabytes), I believe that we will get an equivalent of Moore’s law, but for the increased number of qubits.”
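The gigabyte figures quoted above follow from a simple counting argument: describing an n-qubit state classically means tracking 2^n amplitudes. A minimal sketch, assuming (for illustration only) one byte per amplitude; a full simulation would need considerably more, e.g. 16 bytes per complex value:

```python
# Classical memory needed to track the 2**n amplitudes of an n-qubit state,
# assuming (for illustration only) one byte per amplitude.

def gib_per_state(n_qubits: int) -> float:
    """Memory in GiB: 2**n_qubits amplitudes at one byte each, over 2**30."""
    return 2 ** n_qubits / 2 ** 30

print(gib_per_state(30))  # 1.0 -> roughly the 'one gigabyte' quoted
print(gib_per_state(50))  # 1048576.0 -> roughly the 'one million gigabytes'
```

The doubling of memory with every added qubit is exactly why classical simulation stops being feasible somewhere in the tens of qubits.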

New technologies also have a major impact on healthcare, which comes under the spotlight during the final weekend of the Festival as it returns for the third year running to the Cambridge Biomedical Campus. During the event ‘How big data analysis is changing how we understand the living world’, Dr Ewan Birney, Fellow of the Royal Society and Director of the EMBL European Bioinformatics Institute, explores the opportunities and challenges of genomics and big data in healthcare, from molecular data to high-resolution imaging.

These kinds of technological revolutions mean biological data is being collected faster than ever. Dr Shamith Samarajiwa, from the Medical Research Council Cancer Unit, explains how analysing biomedical big data can help us understand different cancers and identify new targets for treatments in ‘Battling cancer with data science’. Meanwhile, Dr Peter Maccallum, from the Cancer Research UK Cambridge Institute, discusses the challenges of storing and processing the terabytes of data produced every week (more than was previously generated in a decade) in ‘Big data from small sources: computing demands in molecular and cell biology’.

Speaking ahead of this year’s Science Festival, Coordinator Dr Lucinda Spokes said, “Using the theme of big data and knowledge, we are addressing important questions about the kinds of technology that affect, or will affect, not only every aspect of science, from astronomy to zoology, but every area of our lives: health, work, relationships and even what we think we know.

“Through a vast range of debates, talks, demonstrations and performances, some of the most crucial issues of our time and uncertainties about our future as a species will be explored during these packed two weeks.”

The full programme also includes events on neuroscience, healthcare, sports science, psychology, zoology and an adults-only hands-on session amongst many others.

Facebook: www.facebook.com/Cambridgesciencefestival

Twitter: @camscience #csf2016


Cambridge Science Festival

Since its launch in 1994, the Cambridge Science Festival has inspired thousands of young researchers, and visitor numbers continue to rise; last year, the Festival attracted well over 45,000 visitors. The Festival, one of the largest and most respected of its kind, brings science, technology, engineering, maths and medicine to an audience of all ages through demonstrations, talks, performances and debates. It draws together a diverse range of independent organisations in addition to many University departments, centres and museums.

This year’s Festival sponsors and partners are Cambridge University Press, AstraZeneca, MedImmune, Illumina, TTP Group, Science AAAS, BlueBridge Education, Siemens, ARM, Microsoft Research, Redgate, Linguamatics, FameLab, Babraham Institute, Wellcome Genome Campus, Napp, The Institute of Engineering and Technology, St Mary’s School, Anglia Ruskin University, Cambridge Junction, Addenbrooke’s Hospital, Addenbrooke’s Charitable Trust, James Dyson Foundation, Naked Scientists, Hills Road Sixth Form College, UTC Cambridge, British Science Week, Alzheimer’s Research UK, Royal Society of Chemistry, Cambridge Science Centre, Cambridge Live, and BBC Cambridgeshire.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/artificial-intelligence-and-rise-of-the-machines-cambridge-science-festival-2016

Fuel Cell Electrolyte Developed To Offer Cleaner, More Efficient Energy

Fuel cell electrolyte developed to offer cleaner, more efficient energy

source: www.cam.ac.uk

A new thin-film electrolyte material that helps solid oxide fuel cells operate more efficiently and cheaply than those composed of conventional materials, and has potential applications for portable power sources, has been developed at the University of Cambridge.

The ability to precisely engineer and tune highly crystalline materials at the nanoscale is absolutely key for next-generation power generation and storage of many different kinds.

Judith Driscoll

These new materials offer the possibility of either significantly improving the efficiency of current high-temperature fuel cell systems, or achieving the same performance at much lower temperatures. Either approach could substantially reduce fuel consumption and wasted energy. The material was co-invented by Professor Judith Driscoll of the Department of Materials Science and Metallurgy and her colleague Dr Shinbuhm Lee, with support from collaborators at Imperial College and at three different labs in the US.

Solid oxide fuel cells consist of a positive electrode (the cathode) and a negative electrode (the anode), with an electrolyte material sandwiched between them. The electrolyte transports oxygen ions from the cathode to the anode, generating an electric current. Unlike conventional batteries, fuel cells can run indefinitely, provided they are supplied with a fuel, such as hydrogen or a hydrocarbon, and a source of oxygen.
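The voltage such a cell produces can be illustrated with the textbook Nernst relation for an oxygen concentration cell; this is a generic sketch, and the temperature and partial pressures below are hypothetical example values, not figures from the Cambridge work:

```python
import math

# Illustrative Nernst calculation for an oxygen-ion-conducting electrolyte.
R = 8.314    # gas constant, J/(mol K)
F = 96485.0  # Faraday constant, C/mol

def nernst_voltage(t_kelvin: float, p_o2_cathode: float, p_o2_anode: float) -> float:
    """Open-circuit voltage E = (RT / 4F) * ln(pO2_cathode / pO2_anode).
    Four electrons are transferred per O2 molecule (O2 + 4e- -> 2 O2-)."""
    return (R * t_kelvin) / (4 * F) * math.log(p_o2_cathode / p_o2_anode)

# Air at the cathode (0.21 atm O2) vs. a very low O2 pressure on the fuel side,
# at 800 C (1073 K) -- a typical operating point for a high-temperature cell.
print(round(nernst_voltage(1073.0, 0.21, 1e-18), 2))  # ~0.92 (volts)
```

Lowering the operating temperature reduces this voltage only linearly, which is one reason a more conductive electrolyte, rather than a hotter cell, is the attractive route to efficiency.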

By using thin-film electrolyte layers, micro solid oxide fuel cells offer a concentrated energy source, with potential applications in portable power for consumer electronic and medical devices, and in uninterruptible power supplies such as those used by the military or in recreational vehicles.

“With low power requirements and low levels of polluting emissions, these fuel cells offer an environmentally attractive solution for many power source applications,” said Dr Charlanne Ward of Cambridge Enterprise, the University’s commercialisation arm, which is managing the patent that was filed in the US. “This opportunity has the potential to revolutionise the power supply problem of portable electronics, by improving both the energy available from the power source and safety, compared with today’s battery solutions.”

In addition to providing significantly improved conductivity, the new electrolyte material offers:

  • minimal heat loss and short circuiting due to low electronic conductivity
  • minimal cracking under heat cycling stress due to small feature size in the construction
  • high density, reducing the risk of fuel leaks
  • simple fabrication using standard epitaxial growth and self-assembly techniques

“The ability to precisely engineer and tune highly crystalline materials at the nanoscale is absolutely key for next-generation power generation and storage of many different kinds,” said Driscoll. “Our new methods and understanding have allowed us to exploit the very special properties of nanomaterials in a practical and stable thin-film configuration, resulting in a much improved oxygen ion conducting material.”

In October, a paper on the enhancement of oxygen ion conductivity in oxides was published in Nature Communications. It is this enhancement that improves efficiency and enables low-temperature operation of fuel cells. As a result of the reported advantages, the novel electrolyte material can also potentially be used in the fabrication of improved electrochemical gas sensors and oxygen separation membranes (to extract oxygen molecules from the air). The inventors have also published two other papers showing the enhanced ionic conduction in different materials systems, one in Nano Letters and one in Advanced Functional Materials.

Cambridge Enterprise is working with Driscoll to take the technology to market, seeking to collaborate with a fuel cell manufacturer with expertise in thin-film techniques to validate the new material.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

J A Kemp Patent and Trade Mark Attorneys to open office in Cambridge

J A Kemp Patent and Trade Mark Attorneys to open office in Cambridge


J A Kemp has announced that it will open an office in Cambridge in February 2016.

Ranked in tier 1 by The Legal 500 and Chambers and Partners, J A Kemp is one of the largest UK and European patent and trade mark attorney firms.  The firm’s strong international presence is complemented by a UK practice that has nearly doubled in volume in the past four years and has long included significant work for several of the country’s leading university-based technology transfer organisations as well as a diverse range of significant clients in other sectors.  J A Kemp also advises start-ups and growing businesses across all industry sectors on patents, trade marks and IP strategy.

The firm’s patent capabilities embrace all technology areas across the spectrum of biotechnology and life sciences, chemistry and pharmaceuticals, electronics, engineering, IT and software.  The fast-growing business, scientific and academic hub of Cambridge is an ideal base to expand the firm’s operations with a view to serving clients there and in the M11 corridor and East of England.

The firm, whose main office is in central London, already has an office in Oxford which has tripled in size over the last four years.  The office in Cambridge will be at 30 Station Road, in the well-connected and growing business district surrounding Cambridge railway station, and will accommodate six attorneys and two support staff.

 

Commenting on the move, partner Andy Bentham, who will head up the new office, said:

“We have seen a significant increase in patent activity from the UK in the last few years. Much of this stems from the entrepreneurial scientific communities in Oxford and Cambridge.  In Cambridge growth is faster now than at any time in the 25 years since I first came here as an undergraduate. 

 “We are looking forward to joining the Cambridge cluster of hi-tech, biomedical, agri-tech and other innovators.  With 150 people based in our main office in London and over 20 based in Oxford, opening an office in Cambridge too means we can offer services from all three points of the so-called ‘golden triangle’.”


 

J A Kemp has the specialist expertise required to secure and protect the most complex patent portfolios and the firm has an internationally renowned team of specialists in trade mark matters.  J A Kemp also offers full-service IP dispute resolution and litigation expertise.

 For further information contact: Claire Wright, Head of Marketing – cwright@jakemp.com – 020 3077 8600.

Brain Waves Could Help Predict How We Respond To General Anaesthetics

Brain waves could help predict how we respond to general anaesthetics

source: www.cam.ac.uk

The complex pattern of ‘chatter’ between different areas of an individual’s brain while they are awake could help doctors better track and even predict their response to general anaesthesia – and better identify the amount of anaesthetic necessary – according to new research from the University of Cambridge.

A very good way of predicting how an individual responds to our anaesthetic was the state of their brain network activity at the start of the procedure

Srivas Chennu

Currently, patients due to undergo surgery are given a dose of anaesthetic based on the so-called ‘Marsh model’, which uses factors such as an individual’s body weight to predict the amount of drug needed. As patients ‘go under’, their levels of awareness are monitored in a relatively crude way. If they are still deemed awake, they are simply given more anaesthetic. However, general anaesthetics can carry risks, particularly if an individual has an underlying health condition such as a heart disorder.

As areas of the brain communicate with each other, they give off tell-tale signals that can give an indication of how conscious an individual is. These ‘networks’ of brain activity can be measured using an EEG (electroencephalogram), which measures electric signals as brain cells talk to each other. Cambridge researchers have previously shown that these network signatures can even be seen in some people in a vegetative state and may help doctors identify patients who are aware despite being unable to communicate. These findings build upon advances in the science of networks to tackle the challenge of understanding and measuring human consciousness.

In a study published today in the open access journal PLOS Computational Biology, funded by the Wellcome Trust, the researchers studied how these signals changed in healthy volunteers as they received an infusion of propofol, a commonly used anaesthetic.

Twenty individuals (9 male, 11 female) received a steadily increasing dose of propofol – all up to the same limit – while undergoing a task that involved pressing one button if they heard a ‘ping’ and a different button if they heard a ‘pong’. At the same time, the researchers tracked their brain network activity using an EEG.

By the time the subjects had reached the maximum dose, some individuals were still awake and able to carry out the task, while others were unconscious. As the researchers analysed the EEG readings, they found clear differences between those who were responding to the anaesthetic and those who remained able to carry on with the task. This ‘brain signature’ was evident in the network of communications between brain areas carried by alpha waves (brain cell oscillations in the frequency range of 7.5–12.5 Hz), the normal range of electrical activity of the brain when conscious and relaxed.
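As a rough illustration of what measuring activity in the alpha band means in practice (a generic sketch, not the study's actual analysis pipeline), alpha power can be estimated from an EEG-like trace with a discrete Fourier transform:

```python
import numpy as np

# Synthetic 'EEG': a 10 Hz alpha oscillation buried in broadband noise.
fs = 250.0                            # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)          # 10 seconds of samples
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power within [lo, hi] Hz, via the real FFT."""
    freqs = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return float(psd[band].mean())

alpha = band_power(signal, fs, 7.5, 12.5)   # alpha band, as defined above
total = band_power(signal, fs, 0.5, 45.0)   # broadband reference
print(alpha > total)  # alpha power dominates this synthetic signal
```

The study's network measures go further, looking at how alpha-band signals at different electrodes relate to one another, but band power of this kind is the underlying raw ingredient.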

In fact, when the researchers looked at the baseline EEG readings before any drug was given, they already saw differences between those who would later succumb to the drug and those who were less responsive to its effects. Dividing the subjects into two groups based on their EEG readings – those with lots of brain network activity at baseline and those with less – the researchers were able to predict who would be more responsive to the drug and who would be less.

The researchers also measured levels of propofol in the blood to see if this could be used as a measure of how conscious an individual was. Although they found little correlation with the alpha wave readings in general, they did find a correlation with a specific form of brain network activity known as delta-alpha coupling. This may be able to provide a useful, non-invasive measure of the level of drug in the blood.

“A very good way of predicting how an individual responds to our anaesthetic was the state of their brain network activity at the start of the procedure,” says Dr Srivas Chennu from the Department of Clinical Neurosciences, University of Cambridge. “The greater the network activity at the start, the more anaesthetic they are likely to need to put them under.”

Dr Tristan Bekinschtein, senior author from the Department of Psychology, adds: “EEG machines are commonplace in hospitals and relatively inexpensive. With some engineering and further testing, we expect they could be adapted to help doctors optimise the amount of drug an individual needs to receive to become unconscious without increasing their risk of complications.”

Srivas Chennu will be speaking at the Cambridge Science Festival on Wednesday 16 March. During the event, ‘Brain, body and mind: new directions in the neuroscience and philosophy of consciousness’, he will be examining what it means to be conscious.

Reference
Chennu, S et al. Brain connectivity dissociates responsiveness from drug exposure during propofol induced transitions of consciousness. PLOS Computational Biology; 14 Jan 2016

Image
Brain networks during the transition to unconsciousness during propofol sedation (drug infusion timeline shown in red). Participants with robust networks at baseline (left panel) remained resistant to the sedative, while others showed characteristically different, weaker networks during unconsciousness (middle). All participants regained similar networks when the sedative wore off (right).


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/brain-waves-could-help-predict-how-we-respond-to-general-anaesthetics

John Maynard Keynes: Great Economist, Poor Currency Trader

John Maynard Keynes: great economist, poor currency trader

Source: www.cam.ac.uk

John Maynard Keynes struggled as a foreign-exchange trader, finds the first detailed study of the famous economist as currency speculator.

If someone as economically literate and well-connected as Keynes found it difficult to time currencies, then the rest of us should think twice before believing we can do any better.

David Chambers

A detailed new study of the chequered currency-trading record of John Maynard Keynes might make today’s overconfident currency speculators think twice.

While Keynes was one of the most famous economists in history, and his stock-picking record as an asset manager was outstanding, a forensic analysis of his personal currency trades found that his record was pedestrian by comparison.

The findings are forthcoming in the Journal of Economic History, in a study co-authored by Olivier Accominotti from the London School of Economics and Political Science, and David Chambers of the University of Cambridge Judge Business School.

“Unlike his stock investing, Keynes found currency investing a lot tougher, despite the fact that he was at the centre of the world of international finance throughout the time he traded currencies,” said Chambers. To be sure, Keynes made money from speculating in currencies in the 1920s and 1930s and his profits arose from more than pure chance. “Directionally, he called currencies more or less correctly but he really struggled with timing his trades. One main message for investors today is that if someone as economically literate and well-connected as Keynes found it difficult to time currencies, then the rest of us should think twice before believing we can do any better.”

In his currency trading, Keynes relied heavily on his own analysis of fundamental economic factors such as inflation, trade balance, capital flows and political developments.

Such a ‘fundamentals-based’ strategy differs from ‘technical’ strategies, which follow simple mechanical trading rules and seek profits by identifying market anomalies – typically through the carry trade (betting on high-interest currencies against low-interest currencies) and momentum (betting on currencies that have recently appreciated against those that have depreciated). Both fundamentals-based and technical trading styles are observed among modern-day currency managers.
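As a deliberately simplified illustration of the two technical rules described here, on invented numbers (real implementations add position sizing, transaction costs and rebalancing):

```python
# Toy versions of the two mechanical trading rules: momentum goes long a
# currency that has recently appreciated; carry goes long the currency
# paying the higher interest rate.

def momentum_signal(prices: list[float], lookback: int = 3) -> int:
    """+1 = bet on continued appreciation, -1 = bet on continued decline."""
    return 1 if prices[-1] > prices[-1 - lookback] else -1

def carry_signal(rate_foreign: float, rate_domestic: float) -> int:
    """+1 = go long the foreign currency if it pays the higher rate."""
    return 1 if rate_foreign > rate_domestic else -1

print(momentum_signal([1.00, 1.01, 1.03, 1.02, 1.05]))  # recent rise -> 1
print(carry_signal(0.05, 0.02))                         # higher carry -> 1
```

Neither rule uses any economic judgement, which is exactly the contrast the study draws with Keynes's fundamentals-driven approach.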

But Keynes produced unremarkable results at the dawn of the modern foreign-exchange market, when dealings were transformed by telegraphic transfer and the emergence of a forward exchange market.

The period during which he traded was marked by considerable foreign exchange volatility and large deviations of exchange rates from their fundamental values which appear obvious to investors today. However, trading these deviations in real time was hazardous. “Implementing a currency trading strategy based on the analysis of macroeconomic fundamentals was challenging (even) for John Maynard Keynes,” said the research paper.

This was particularly the case in the 1920s. Currency traders can be judged by the return generated per unit of risk, also known as the Sharpe ratio. While Keynes generated a Sharpe ratio of approximately 0.2 (assuming his trading equity was fixed), an equal-weighted blend of the carry and momentum strategies achieved close to 1.0 after transaction costs. When Keynes resumed currency trading in 1932, after a five-year break coinciding with the return to the gold standard, he outperformed the carry strategy (whose mean return was negative) in the 1930s, but still underperformed a simple momentum strategy.
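The Sharpe ratio used in this comparison is simply mean excess return divided by the standard deviation of returns. A minimal sketch on invented returns (not Keynes's actual trade data):

```python
# Sharpe ratio: mean excess return per unit of return volatility.
# Annualisation is omitted for simplicity.

def sharpe_ratio(returns: list[float], risk_free: float = 0.0) -> float:
    """Mean excess return divided by the sample standard deviation."""
    n = len(returns)
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / n
    var = sum((r - mean) ** 2 for r in excess) / (n - 1)
    return mean / var ** 0.5

print(round(sharpe_ratio([0.02, -0.01, 0.03, 0.01, -0.02]), 2))  # 0.29
```

On this scale, the gap between Keynes's roughly 0.2 and the mechanical strategies' roughly 1.0 means the technical rules earned about five times the return for the same level of risk.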

The study also found that Keynes “experienced periods of considerable losses in both the 1920s and 1930s. Indeed, he was close to being technically bankrupt in 1920 and could only stay trading thanks to his ability to borrow funds from his social circle.”

The research is based on a detailed dataset of 354 personal currency trades made by Keynes between 1919 and 1939, mostly in five currencies against the British pound: the US dollar, French franc, German mark, Italian lira and Dutch guilder.

Details of the trades were contained in ledgers kept in the archives at King’s College, Cambridge, where Keynes managed the college endowment fund for decades, famously shifting the portfolio from property to stocks. His investment writings, based on his very successful strategy at King’s, later became a source of inspiration for David Swensen, the architect of the influential ‘Yale model’ used to manage US university endowments today.

Originally published on the Cambridge Judge Business School website.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/john-maynard-keynes-great-economist-poor-currency-trader

High Ozone Levels In Tropical Pacific Caused By Fires Burning In Africa and Asia

High ozone levels in tropical Pacific caused by fires burning in Africa and Asia

Source: www.cam.ac.uk

Study indicates ‘biomass burning’ may play a larger role in climate change than previously realised.

The measurements are now starting to produce insight into how the composition of the remote tropical atmosphere is affected by human activities occurring nearly halfway around the world.

Neil Harris

While efforts to limit emissions of greenhouse gases, including ozone, tend to focus on industrial activities and the burning of fossil fuels, a new study suggests that future regulations may need to address the burning of forests and vegetation. The study, published in the journal Nature Communications, indicates that ‘biomass burning’ may play a larger role in climate change than previously realised.

Based on observations from two aircraft missions, satellite data and a variety of models, an international research team showed that fires burning in tropical Africa and Southeast Asia caused pockets of high ozone and low water in the lower atmosphere above Guam – a remote island in the Pacific Ocean 1,700 miles east of Taiwan.

“We were very surprised to find high concentrations of ozone and chemicals that we know are only emitted by fires in the air around Guam,” said the study’s lead author Daniel Anderson, a graduate student at the University of Maryland. “We didn’t make specific flights to target high-ozone areas – they were so omnipresent that no matter where we flew, we found them.”

For the study, two research planes on complementary missions flew over Guam measuring the levels of dozens of chemicals in the atmosphere in January and February 2014. One aircraft flew up to 24,000 feet above the ocean surface during the UK Natural Environment Research Council’s Coordinated Airborne Studies in the Tropics (CAST) mission. The other flew up to 48,000 feet above the ocean surface during the CONvective Transport of Active Species in the Tropics (CONTRAST) mission.

“International collaboration is essential for studying global environmental issues these days,” said CAST Principal Investigator Neil Harris, of Cambridge’s Department of Chemistry. “This US/UK-led campaign over the western Pacific was the first of its kind in this region and collected a unique data set. The measurements are now starting to produce insight into how the composition of the remote tropical atmosphere is affected by human activities occurring nearly halfway around the world.”

Researchers examined 17 CAST and 11 CONTRAST flights and compiled over 3,000 samples from high-ozone, low-water air parcels for the study. In the samples, the team detected high concentrations of chemicals associated with biomass burning—hydrogen cyanide, acetonitrile, benzene and ethyne.

“Hydrogen cyanide and acetonitrile were the smoking guns because they are emitted almost exclusively by biomass burning. High levels of the other chemicals simply added further weight to the findings,” said study co-author Julie Nicely, a graduate student from the University of Maryland.

Next, the researchers traced the polluted air parcels backward 10 days, using the National Oceanic and Atmospheric Administration (NOAA) Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) model and precipitation data, to determine where they came from. Overlaying fire data from the Moderate Resolution Imaging Spectroradiometer (MODIS) on board NASA’s Terra satellite, the researchers connected nearly all of the high-ozone, low-water structures to regions of active biomass burning in tropical Africa and Southeast Asia.

“The investigation utilised a variety of models, including the NCAR CAM-Chem model to forecast and later analyse chemical and dynamical conditions near Guam, as well as satellite data from numerous instruments that augmented the interpretation of the aircraft observations,” said study co-author Douglas Kinnison, a project scientist at the University Corporation for Atmospheric Research.

In the paper, the researchers also offer a new explanation for the dry nature of the polluted air parcels.

“Our results challenge the explanation atmospheric scientists commonly offer for pockets of high ozone and low water: that these zones result from air having descended from the stratosphere, where air is colder and drier than elsewhere,” said University of Maryland Professor Ross Salawitch, the study’s senior author and principal investigator of CONTRAST.

“We know that the polluted air did not mix with air in the stratosphere to dry out because we found combined elevated levels of carbon monoxide, nitric oxide and ozone in our air samples, but air in the higher stratosphere does not contain much naturally occurring carbon monoxide,” said Anderson.

The researchers found that the polluted air that reached Guam never entered the stratosphere and instead simply dried out during its descent within the lower atmosphere. While textbooks show air moving upward in the tropics, according to Salawitch, this represents the net motion of air. Because this upward motion happens mostly within small storm systems, it must be balanced by air slowly descending, such as with these polluted parcels released from fires.

Based on the results of this study, global climate models may need to be reassessed to include and correctly represent the impacts of biomass burning, deforestation and reforestation, according to Salawitch. Also, future studies such as NASA’s upcoming Atmospheric Tomography Mission will add to the data collected by CAST and CONTRAST to help obtain a clearer picture of our changing environment.

In addition to those mentioned above, the study’s authors included UMD Department of Atmospheric and Oceanic Science Professor Russell Dickerson and Assistant Research Professor Timothy Canty; CAST co-principal investigator James Lee of the University of York; CONTRAST co-principal investigator Elliott Atlas of the University of Miami; and additional researchers from NASA; NOAA; the University of California, Irvine; the California Institute of Technology; the University of Manchester; the Institute of Physical Chemistry Rocasolano; and the National Research Council in Argentina.

This research was supported by the Natural Environment Research Council, National Science Foundation, NASA, and National Oceanic and Atmospheric Administration.

Reference:
Daniel C. Anderson et al. ‘A pervasive role for biomass burning in tropical high ozone/low water structures’ Nature Communications (2016). DOI: 10.1038/ncomms10267. 

Inset image: Air Tracking. Credit: Daniel Anderson

Adapted from a University of Maryland press release. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/high-ozone-levels-in-tropical-pacific-caused-by-fires-burning-in-africa-and-asia

Cocaine Addiction: Scientists Discover ‘Back Door’ Into The Brain

Cocaine addiction: Scientists discover ‘back door’ into the brain

source: www.cam.ac.uk

Individuals addicted to cocaine may have difficulty in controlling their addiction because of a previously-unknown ‘back door’ into the brain, circumventing their self-control, suggests a new study led by the University of Cambridge.

Most people who use cocaine do so initially in search of a hedonic ‘high’. In some individuals, though, frequent use leads to addiction, where use of the drug is no longer voluntary, but ultimately becomes a compulsion

David Belin

A second study from the team suggests that a drug used to treat paracetamol overdose may be able to help individuals who want to break their addiction and stop their damaging cocaine seeking habits.

Although both studies were carried out in rats, the researchers believe the findings will be relevant to humans.

Cocaine is a stimulant drug that can lead to addiction when taken repeatedly. Quitting can be extremely difficult for some people: around four in ten individuals who relapse report having experienced a craving for the drug – which means that six in ten relapse for reasons other than ‘needing’ the drug.

“Most people who use cocaine do so initially in search of a hedonic ‘high’,” explains Dr David Belin from the Department of Pharmacology at the University of Cambridge. “In some individuals, though, frequent use leads to addiction, where use of the drug is no longer voluntary, but ultimately becomes a compulsion. We wanted to understand why this should be the case.”

Drug-taking causes a release in the brain of the chemical dopamine, which helps provide the ‘high’ experienced by the user. Initially the drug taking is volitional – in other words, it is the individual’s choice to take the drug – but over time, this becomes habitual, beyond their control.

Previous research by Professor Barry Everitt from the Department of Psychology at Cambridge showed that when rats were allowed to self-administer cocaine, dopamine-related activity occurred initially in an area of the brain known as the nucleus accumbens, which plays a significant role in driving ‘goal-directed’ behaviour, as the rats sought out the drug. However, when the rats were given cocaine over an extended period, this activity transferred to the dorsolateral striatum, which plays an important role in habitual behaviour, suggesting that the rats were no longer in control but were responding automatically, having developed a drug-taking habit.

The brain mechanisms underlying the balance between goal-directed and habitual behaviour involve the prefrontal cortex, the brain region that orchestrates our behaviour. It was previously thought that this region was overwhelmed by stimuli associated with the drugs, or by the craving experienced during withdrawal; however, this does not easily explain why the majority of individuals relapsing to drug use did not experience any craving.

Chronic exposure to drugs alters the prefrontal cortex, but it also alters an area of the brain called the basolateral amygdala, which is associated with the link between a stimulus and an emotion. The basolateral amygdala stores the pleasurable memories associated with cocaine, but the prefrontal cortex manipulates this information, helping an individual to weigh up whether or not to take the drug: if an addicted individual takes the drug, this activates mechanisms in the dorsal striatum.

However, in a study published today in the journal Nature Communications, Dr Belin and Professor Everitt studied the brains of rats addicted to cocaine through self-administration of the drug and identified a previously unknown pathway within the brain that links impulse with habits.

The pathway links the basolateral amygdala indirectly with the dorsolateral striatum, circumventing the prefrontal cortex. This means that an addicted individual would not necessarily be aware of their desire to take the drug.

“We’ve always assumed that addiction occurs through a failure of our self-control, but now we know this is not necessarily the case,” explains Dr Belin. “We’ve found a back door directly to habitual behaviour.

“Drug addiction is mainly viewed as a psychiatric disorder, with treatments such as cognitive behavioural therapy focused on restoring the ability of the prefrontal cortex to control the otherwise maladaptive drug use. But we’ve shown that the prefrontal cortex is not always aware of what is happening, suggesting these treatments may not always be effective.”

In a second study, published in the journal Biological Psychiatry, Dr Belin and colleagues showed that a drug used to treat paracetamol overdose may be able to help individuals addicted to cocaine overcome their addiction – provided the individual wants to quit.

The drug, N-acetylcysteine, had previously been shown in rat studies to prevent relapse. However, the drug later failed human clinical trials, though analysis suggested that while it did not lead addicted individuals to stop using cocaine, amongst those who were trying to abstain, it helped them refrain from taking the drug.

Dr Belin and colleagues used an experiment in which rats compulsively self-administered cocaine. They found that rats given N-acetylcysteine lost the motivation to self-administer cocaine more quickly than rats given a placebo. In fact, when they had stopped working for cocaine, they tended to relapse at a lower rate. N-acetylcysteine also increased the activity in the brain of a particular gene associated with plasticity – the ability of the brain to adapt and learn new skills.

“A hallmark of addiction is that the user continues to take the drug even in the face of negative consequences – such as on their health, their family and friends, their job, and so on,” says co-author Mickael Puaud from the Department of Pharmacology of the University of Cambridge. “Our study suggests that N-acetylcysteine, a drug that we know is well tolerated and safe, may help individuals who want to quit to do so.”

Reference
Murray, JE et al. ‘Basolateral and central amygdala differentially recruit and maintain dorsolateral striatum-dependent cocaine-seeking habits.’ Nature Communications; 16 December 2015

Ducret, E et al. ‘N-acetylcysteine facilitates self-imposed abstinence after escalation of cocaine intake.’ Biological Psychiatry; 7 October 2015


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Melting of Massive Ice ‘Lid’ Resulted In Huge Release of CO2 At The End of The Ice Age

Melting of massive ice ‘lid’ resulted in huge release of CO2 at the end of the ice age

source: www.cam.ac.uk

A new study of how the structure of the ocean has changed since the end of the last ice age suggests that the melting of a vast ‘lid’ of sea ice caused the release of huge amounts of carbon dioxide into the atmosphere.

Although conditions at the end of the last ice age were very different to today, this study highlights the importance that dynamic features such as sea ice have on regulating the climate system.

Jenny Roberts

A new study reconstructing conditions at the end of the last ice age suggests that as the Antarctic sea ice melted, massive amounts of carbon dioxide that had been trapped in the ocean were released into the atmosphere.

The study includes the first detailed reconstruction of Southern Ocean density over the period and identifies how it changed as the Earth warmed. It suggests a massive reorganisation of ocean temperature and salinity, but finds that this was not the driver of increased concentration of carbon dioxide in the atmosphere. The study, led by researchers from the University of Cambridge, is published in the journal Proceedings of the National Academy of Sciences.

The ocean is made up of different layers of varying densities and chemical compositions. During the last ice age, it was thought that the deepest part of the ocean was made up of very salty, dense water, which was capable of trapping a lot of CO2. Scientists believed that a decrease in the density of this deep water resulted in the release of CO2 from the deep ocean to the atmosphere.

However, the new findings suggest that although a decrease in the density of the deep ocean did occur, it happened much later than the rise in atmospheric CO2, suggesting that other mechanisms must be responsible for the release of CO2 from the oceans at the end of the last ice age.

“We set out to test the idea that a decrease in ocean density resulted in a rise in CO2 by reconstructing how it changed across time periods when the Earth was warming,” said the paper’s lead author Jenny Roberts, a PhD student in Cambridge’s Department of Earth Sciences who is also a member of the British Antarctic Survey. “However what we found was not what we were expecting to see.”

In order to determine how the oceans have changed over time and to identify what might have caused the massive release of CO2, the researchers studied the chemical composition of microscopic shelled animals that have been buried deep in ocean sediment since the end of the ice age. Like layers of snow, the shells of these tiny animals, known as foraminifera, contain clues about what the ocean was like while they were alive, allowing the researchers to reconstruct how the ocean changed as the ice age was ending.

They found that during the cold glacial periods, the deepest water was significantly denser than it is today. However, what was unexpected was the timing of the reduction in the deep ocean density, which happened some 5,000 years after the initial increase in CO2, meaning that the density decrease couldn’t be responsible for releasing CO2 to the atmosphere.

“Before this study there were these two observations, the first was that glacial deep water was really salty and dense, and the second that it also contained a lot of CO2, and the community put two and two together and said these two observations must be linked,” said Roberts. “But it was only through doing our study, and looking at the change in both density and CO2 across the deglaciation, that we found they actually weren’t linked. This surprised us all.”

Through examination of the shells, the researchers found that changes in CO2 and density are not nearly as tightly linked as previously thought, suggesting something else must be causing CO2 to be released from the ocean.

Like a bottle of wine with a cork, sea ice can prevent CO2-rich water from releasing its CO2 to the atmosphere. The Southern Ocean is a key area of exchange of CO2 between the ocean and atmosphere. The expansion of sea ice during the last ice age acted as a ‘lid’ on the Southern Ocean, preventing CO2 from escaping. The researchers suggest that the retreat of this sea ice lid at the end of the last ice age uncorked this ‘vintage’ CO2, resulting in an increase in carbon dioxide in the atmosphere.

“Although conditions at the end of the last ice age were very different to today, this study highlights the importance that dynamic features such as sea ice have on regulating the climate system, and emphasises the need for improved understanding and prediction as we head into our ever warming world,” said Roberts.

Reference:
Roberts, J et al. ‘Evolution of South Atlantic density and chemical stratification across the last deglaciation.’ PNAS (2016). DOI: 10.1073/pnas.1511252113

 



Banning Trophy Hunting Could Do More Harm Than Good

Banning trophy hunting could do more harm than good

source: www.cam.ac.uk

Trophy hunting shouldn’t be banned, but instead it should be better regulated to ensure funds generated from permits are invested back into local conservation efforts, according to new research.

There are many concerns about trophy hunting beyond the ethical that currently limit its effectiveness as a conservation tool.

Nigel Leader-Williams

Banning trophy hunting would do more harm than good in African countries that have little money to invest in critical conservation initiatives, argue researchers from the Universities of Cambridge, Adelaide and Helsinki. Trophy hunting can be an important conservation tool, provided it can be done in a controlled manner to benefit biodiversity conservation and local people. Where political and governance structures are adequate, trophy hunting can help address the ongoing loss of species.

The researchers have developed a list of 12 guidelines that could address some of the concerns about trophy hunting and enhance its contribution to biodiversity conservation. Their paper is published in the journal Trends in Ecology & Evolution.

“The story of Cecil the lion, who was killed by an American dentist in July 2015, shocked people all over the world and reignited debates surrounding trophy hunting,” said Professor Corey Bradshaw of the University of Adelaide, the paper’s senior author.

“Understandably, many people oppose trophy hunting and believe it is contributing to the ongoing loss of species; however, we contend that banning the US$217 million per year industry in Africa could end up being worse for species conservation,” he said.

Professor Bradshaw says trophy hunting brings in more money and can be less disruptive than ecotourism. While the majority of animals hunted in sub-Saharan Africa are more common and less valuable species, the majority of hunting revenue comes from a few valuable species, particularly the charismatic ‘Big Five’: lion, leopard, elephant, buffalo and black or white rhinoceros.

“Conserving biodiversity can be expensive, so generating money is essential for environmental non-government organisations, conservation-minded individuals, government agencies and scientists,” said co-author Dr Enrico Di Minin from the University of Helsinki.

“Financial resources for conservation, particularly in developing countries, are limited,” he said. “As such, consumptive (including trophy hunting) and non-consumptive (ecotourism safaris) uses are both needed to generate funding. Without these, many natural habitats would otherwise be converted to agricultural or pastoral uses.

“Trophy hunting can also have a smaller carbon and infrastructure footprint than ecotourism, and it generates higher revenue from a smaller number of users.”

However, co-author Professor Nigel Leader-Williams from Cambridge’s Department of Geography said there is a need for the industry to be better regulated.

“There are many concerns about trophy hunting beyond the ethical that currently limit its effectiveness as a conservation tool,” he said. “One of the biggest problems is that the revenue it generates often goes to the private sector and rarely benefits protected-area management and the local communities. However, if this money was better managed, it would provide much needed funds for conservation.”

The authors’ guidelines to make trophy hunting more effective for conservation are:

  1. Mandatory levies should be imposed on safari operators by governments so that they can be invested directly into trust funds for conservation and management;
  2. Eco-labelling certification schemes could be adopted for trophies coming from areas that contribute to broader biodiversity conservation and respect animal welfare concerns;
  3. Mandatory population viability analyses should be done to ensure that harvests cause no net population declines;
  4. Post-hunt sales of any part of the animals should be banned to avoid illegal wildlife trade;
  5. Priority should be given to fund trophy hunting enterprises run (or leased) by local communities;
  6. Trusts to facilitate equitable benefit sharing within local communities and promote long-term economic sustainability should be created;
  7. Mandatory scientific sampling of hunted animals, including tissue for genetic analyses and teeth for age analysis, should be enforced;
  8. Mandatory 5-year (or more frequent) reviews of all individuals hunted and detailed population management plans should be submitted to government legislators to extend permits;
  9. There should be full disclosure to public of all data collected (including levied amounts);
  10. Independent government observers should be placed randomly and without forewarning on safari hunts as they happen;
  11. Trophies must be confiscated and permits revoked when illegal practices are disclosed; and
  12. Backup professional shooters and trackers should be present for all hunts to minimise welfare concerns.

Reference:
E. Di Minin et al. ‘Banning Trophy Hunting Will Exacerbate Biodiversity Loss.’ Trends in Ecology & Evolution (2015). DOI: 10.1016/j.tree.2015.12.006

Adapted from a University of Adelaide press release.



Global Learning Is Needed To Save Carbon Capture and Storage From Being Abandoned

Global learning is needed to save carbon capture and storage from being abandoned

source: www.cam.ac.uk

Governments should not be abandoning carbon capture and storage, argues a Cambridge researcher, as it is the only realistic way of dramatically reducing carbon emissions. Instead, they should be investing in global approaches to learn what works – and what doesn’t.

If we’re serious about meeting aggressive national or global emissions targets, the only way to do it affordably is with carbon capture and storage

David Reiner

Carbon capture and storage, which is considered by many experts as the only realistic way to dramatically reduce carbon emissions in an affordable way, has fallen out of favour with private and public sector funders. Corporations and governments worldwide, including most recently the UK, are abandoning the same technology they championed just a few years ago.

In a commentary published today (11 January) in the inaugural issue of the journal Nature Energy, a University of Cambridge researcher argues that now is not the time for governments to drop carbon capture and storage (CCS). As with many new technologies, it is only possible to learn what works and what doesn’t by building and testing demonstration projects at scale; by giving up on CCS instead of working together to develop a global ‘portfolio’ of projects, countries are turning their backs on a key part of a low-carbon future.

CCS works by separating the carbon dioxide emitted by coal and gas power plants, transporting it and then storing it underground so that the CO2 cannot escape into the atmosphere. Critically, CCS can also be used in industrial processes, such as chemical, steel or cement plants, and is often the only feasible way of reducing emissions at these facilities. While renewable forms of energy, such as solar or wind, are important to reducing emissions, until there are dramatic advances in battery technology, CCS will be essential to deliver flexible power and to build green industrial clusters.

“If we’re serious about meeting aggressive national or global emissions targets, the only way to do it affordably is with CCS,” said Dr David Reiner of Cambridge Judge Business School, the paper’s author. “But since 2008, we’ve seen a decline in interest in CCS, which has essentially been in lock step with our declining interest in doing anything serious about climate change.”

Just days before last year’s UN climate summit in Paris, the UK government cancelled a four-year, £1 billion competition to support large-scale CCS demonstration projects. And since the financial crisis of 2008, projects in the US, Canada, Australia, Europe and elsewhere have been cancelled, although the first few large-scale integrated projects have recently begun operation. The Intergovernmental Panel on Climate Change (IPCC) says that without CCS, the costs associated with slowing global warming will double.

According to Reiner, there are several reasons that CCS seems to have fallen out of favour with both private and public sector funders. The first is cost – a single CCS demonstration plant costs in the range of $1 billion. Unlike solar or wind, which can be demonstrated at a much smaller scale, CCS can only be demonstrated at a large scale, driven by the size of commercial-scale power plants and the need to characterise the geological formations which will store the CO2.

“Scaling up any new technology is difficult, but it’s that much harder if you’re working in billion-dollar chunks,” said Reiner. “At 10 or even 100 million dollars, you will be able to find ways to fund the research & development. But being really serious about demonstrating CCS and making it work means allocating very large sums at a time when national budgets are still under stress after the global financial crisis.”

Another reason is commercial pressures and timescales. “The nature of demonstration is that you work out the kinks – you find out what works and what doesn’t, and you learn from it,” said Reiner. “It’s what’s done in science or in research and development all the time: you expect that nine of ten ideas won’t work, that nine of ten oil wells you drill won’t turn up anything, that nine of ten new drug candidates will fail. Whereas firms can make ample returns on a major oil discovery or a blockbuster drug to make up for the many failures along the way, that is clearly not the case for CCS, so the answer is almost certainly government funding or mandates.

“The scale of CCS and the fact that it’s at the demonstration rather than the research and development phase also means that you don’t get to play around with the technology as such – you’re essentially at the stage where, to use a gambling analogy, you’re putting all your money on red 32 or black 29. And when a certain approach turns out to be more expensive than expected, it’s easy for nay-sayers to dismiss the whole technology, rather than to consider how to learn from that failure and move forward.”

There is also the issue that before 2008 countries thought they would each be developing their own portfolios of projects and so they focused inward, rather than working together to develop a global portfolio of large-scale CCS demonstrations. In the rush to fund CCS projects between 2005 and 2009, countries assembled projects independently, and now only a handful of those projects remain.

According to Reiner, building a global portfolio, where countries learn from each other’s projects, will assist in learning through diversity and replication, ‘de-risking’ the technology and determining whether it ever emerges from the demonstration phase.

“If we’re not going to get CCS to happen, it’s hard to imagine getting the dramatic emissions reductions we need to limit global warming to two degrees – or three degrees, for that matter,” he said. “However, there’s an inherent tension in developing CCS – it is not a single technology, but a whole suite and if there are six CCS paths we can go down, it’s almost impossible to know sitting where we are now which is the right path. Somewhat ironically, we have to be willing to invest in these high-cost gambles or we will never be able to deliver an affordable, low-carbon energy system.”

Reference:
David M. Reiner. ‘Learning through a portfolio of carbon capture and storage demonstration projects.’ Nature Energy (2016). DOI: 10.1038/nenergy.2015.11

 



New £369m Contract to Service RAF Hercules in Cambridge

New £369m contract to service RAF Hercules in Cambridge

Image: RAF Hercules transport plane, used to carry troops, supplies and equipment in support of military operations around the world. Credit: AFP/Getty

A £369m contract to maintain the RAF’s Hercules fleet has been awarded to Cambridge-based Marshall Aerospace, helping to secure 1,200 jobs.

The agreement will mean continued support for the C-130Js until 2022, the Ministry of Defence (MoD) said.

Marshall, which was founded in 1909, specialises in servicing transport planes.

RAF Hercules aircraft are used to carry troops, supplies and equipment in support of military operations.

Image: Hercules transport plane. Credit: PA

The MoD describes the Hercules as “one of RAF’s workhorses” and a “vital part of its transport fleet”.

As part of the six-year contract, work will also be undertaken by Lockheed Martin and its sub-contractors at sites in Havant, Stansted and Cheltenham.

Defence Secretary Michael Fallon said: “It (the contract) will secure around 1,200 skilled jobs and ensure our essential RAF transport aircraft are prepared for operations for years to come.”

The contract will focus on servicing the 24 RAF C-130J-type Hercules planes following the retirement of the C-130K planes.

source: http://www.bbc.co.uk/

Bango Expands Collaboration with Microsoft to Put Carrier Billed Payments on Windows 10 Devices

Bango Expands Collaboration with Microsoft to Put Carrier Billed Payments on Windows 10 Devices

Successful relationship now scales up to include PC, tablet and smartphone.


Bango offers our operator partners a sophisticated platform for launching, managing and growing carrier billing business in the Windows Store.

Bango (AIM: BGO), the mobile payments company, has expanded its agreement and integration with Microsoft Corp. to deliver carrier-billed payments across Windows 10 devices. As a result, charging payments to the user’s phone bills will become available to Windows Store customers.

With more than 110 million devices running Windows 10, consumers are using the Windows Store to acquire a vast range of applications and entertainment content that can be used across devices. For the first time, customers running Windows 10 on a PC or tablet will now be able to buy digital content by charging the costs to their mobile phone bill.

Bango is working in collaboration with Microsoft and mobile operator partners globally to ensure maximum coverage for this payment method. Specific operator availability will be announced as launches begin in January. Operators will benefit from unique Bango Boost technology, which analyzes and benchmarks a wide range of KPIs to grow payment success rates from carrier billing, in some instances by over 70%.

App stores are seeing the emergence of carrier billing as a vital enabler of mobile commerce globally, generating a 3–10x increase in conversion rates. In September 2015, Progressive Research reported approximately 280 carrier billing routes live with app stores, with over 40% running through the Bango Platform.

Commenting on the announcement, Bango CEO Ray Anderson said:

“We have enjoyed our collaboration with Microsoft for the Windows Store, so it is a major milestone that Microsoft is now adopting this payment method across Windows 10. Operators using the Bango Payment Platform will get to market quickly and can then use Bango technology to maximize their Windows 10 revenue.”

Todd Brix, Windows Store General Manager said:
“This addition to Windows 10 presents a new opportunity for app and game developers to reach millions of unbanked or under-banked consumers by enabling them to easily bill content to their existing mobile operator accounts. Bango offers our operator partners a sophisticated platform for launching, managing and growing carrier billing business in the Windows Store.”

About Bango
Bango’s mobile payment platform is vital to the global growth in digital content sales. The giants of mobile choose the Bango Payment Platform to provide a delightful and immediate payment experience that maximizes sales of digital content.

With over 140 markets activated by our partners, the Bango Payment Platform is established as the global standard for app stores to offer carrier billing. As the next billion consumers pick up their first smartphone, Bango technology will be there to unlock the universe of apps, video, games and other content that bring those smartphones to life. Global leaders plugging into Bango include Amazon, BlackBerry, Facebook, Google, Samsung, Microsoft and Mozilla.

source: http://uk.prweb.com/

Opinion: Paying People To Stay Away Is Not Always The Best Way To Protect Watersheds

Opinion: Paying people to stay away is not always the best way to protect watersheds

source: www.cam.ac.uk

Libby Blanchard and Bhaskar Vira from Cambridge’s Department of Geography argue that we need to consider alternative approaches in order to protect watersheds.

The successful management of the Wasatch demonstrates that an overreliance on markets to deliver watershed protection might be misguided.

In the American West, unprecedented droughts have caused extreme water shortages. The current drought in California and across the West is entering its fourth year, with precipitation and water storage reaching record low levels.

Such drought and water scarcity are only likely to increase with climate change, and the chances of a “megadrought” – one that lasts 35 years or longer – affecting the Southwest and central Great Plains by 2100 are above 80% if greenhouse gas emissions are not reduced.

Droughts currently rank second in the US in terms of national weather-related damages, with losses just shy of US$9 billion annually. Such economic impacts are likely to worsen as the century progresses.

As the frequency and severity of droughts increases, the successful protection of watersheds to capture, store and deliver water downstream in catchments will become increasingly important, even as the effective protection of watersheds becomes more challenging.

Since the early 2000s, the prevailing view in watershed protection has been that paying upstream resource users to avoid harmful activities, or rewarding positive action, is the most effective and direct method. This is the case in the Catskills watershed in New York, where environmentally sound economic development is incentivized.

There are, however, many different ways communities can invest in watersheds to harness the benefits they provide downstream communities.

In a recently published paper in the journal Ecosystem Services, we highlight an alternative option with the example of Salt Lake City’s successful management of the Wasatch watershed. Instead of offering financial incentives for the “ecosystem services” provided by this watershed, planners use regulations to secure the continued delivery of water, while allowing for recreational and public use.

The successful management of the Wasatch demonstrates that an overreliance on markets to deliver watershed protection might be misguided.

Perhaps part of the reason for this overreliance on market-based tools is a paucity of alternative success stories of watershed management. We note that the Wasatch story has been largely absent from much of the literature that discusses the potential of investing in watersheds for the important services that they provide. This absence results in an incomplete understanding of options to secure watershed ecosystem services, and limits the consideration of alternative watershed conservation approaches.

The Wasatch management strategy

The Wasatch is a 185-square-mile watershed that is an important drinking water source to over half a million people in Salt Lake City. This water comes from the annual snowmelt from the 11,000-foot-high peaks in the Wasatch range, which act as Salt Lake City’s virtual reservoir.

Salt Lake City’s management of the Wasatch watershed is somewhat unusual in contemporary examples of watershed protection in that it is focused on nonexclusionary regulation – that is, allowing permitted uses – and zoning to protect the urban water supply. For instance, the cities of Portland, Oregon and Santa Fe, New Mexico have worked with the US Forest Service to prohibit public access to source water watersheds within forests to protect drinking water supplies. In contrast, the governance of the Wasatch allows for public access and both commercial and noncommercial activities to occur in the watershed, such as skiing and mountain biking. It also imposes restrictions on allowable uses, such as restricting dogs in the watershed.

This permitted use, socially negotiated, helps mitigate the potential trade-offs associated with protection activities.

The suite of policies that protect the Wasatch do not include a “payments for ecosystem services” or other market-based incentives component, nor has there been any discussion of compensating potential resource users in the watershed for foregone economic opportunities. By not having a market-based incentives component, the Wasatch example provides an alternative regulatory-based solution for the protection of natural capital, which contrasts with the now prevalent market-based payments approach.

Importantly, the Wasatch example reinforces the rights of citizens to derive positive benefits from nature, without these being mediated through the mechanism of markets. In most payment-based systems, potential harm to a watershed is avoided by organizing beneficiaries so that they can compensate upstream resource users for foregone activities. In contrast, reliance on regulation and permitted activities supports the ‘polluter pays principle,’ which might be more appropriate in many circumstances.

Why we need alternative strategies

With the American West facing ever-increasing droughts, policymakers will be faced with the increasingly difficult task of protecting and preserving water supplies. Thus, awareness of alternative, successful strategies of watershed protection and management is crucially important.

The Wasatch offers an important example of how natural capital can be instrumentally and economically valued, but conserved via regulatory approaches and land use management and zoning, rather than a reliance on the creation of water markets, which are often misplaced and not suitable. Bringing stakeholders together to negotiate allowable uses that preserve critical watershed functions is an additional option within the policymaker’s toolkit, and one that is at risk of being forgotten in the rush to payment-based systems.

Libby Blanchard, Gates Cambridge Scholar and PhD Candidate, University of Cambridge, and Bhaskar Vira, Reader in Political Economy at the Department of Geography, Fellow of Fitzwilliam College, and Director of the University of Cambridge Conservation Research Institute, University of Cambridge

This article was originally published on The Conversation. Read the original article.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Second Contagious Form Of Cancer Found In Tasmanian Devils

Second contagious form of cancer found in Tasmanian devils

source: www.cam.ac.uk

Transmissible cancers – cancers which can spread between individuals by the transfer of living cancer cells – are believed to arise extremely rarely in nature. One of the few known transmissible cancers causes facial tumours in Tasmanian devils, and is threatening this species with extinction. Today, scientists report the discovery of a second transmissible cancer in Tasmanian devils.

Until now, we’ve always thought that transmissible cancers arise extremely rarely in nature, but this new discovery makes us question this belief

Elizabeth Murchison

The discovery, published in the journal Proceedings of the National Academy of Sciences, calls into question our current understanding of the processes that drive cancers to become transmissible.

Tasmanian devils are iconic marsupial carnivores that are only found in the wild on the Australian island state of Tasmania. The size of a small dog, the animals have a reputation for ferocity as they frequently bite each other during mating and feeding interactions.

In 1996, researchers observed Tasmanian devils in the north-east of the island with tumours affecting the face and mouth; soon it was discovered that these tumours were contagious between devils, spread by biting. The cancer spreads rapidly throughout the animal’s body and the disease usually causes the death of affected animals within months of the appearance of symptoms. The cancer has since spread through most of Tasmania and has triggered widespread devil population declines. The species was listed as endangered by the International Union for Conservation of Nature in 2008.

To date, only two other forms of transmissible cancer have been observed in nature: in dogs and in soft-shell clams. Cancer normally occurs when cells in the body start to proliferate uncontrollably; occasionally, cancers can spread and invade the body in a process known as ‘metastasis’. However, cancers do not normally survive beyond the body of the host from whose cells they originally derived. Transmissible cancers arise when cancer cells gain the ability to spread beyond the body of the host that first spawned them, passing as living cells to new hosts.

Now, a team led by researchers from the University of Tasmania, Australia, and the University of Cambridge, UK, has identified a second, genetically distinct transmissible cancer in Tasmanian devils.

“The second cancer causes tumours on the face that are outwardly indistinguishable from the previously-discovered cancer,” said first author Dr Ruth Pye from the Menzies Institute for Medical Research at the University of Tasmania. “So far it has been detected in eight devils in the south-east of Tasmania.”

“Until now, we’ve always thought that transmissible cancers arise extremely rarely in nature,” says Dr Elizabeth Murchison from the Department of Veterinary Medicine at the University of Cambridge, a senior author on the study, “but this new discovery makes us question this belief.

“Previously, we thought that Tasmanian devils were extremely unlucky to have fallen victim to a single runaway cancer that emerged from one individual devil and spread through the devil population by biting. However, now that we have discovered that this has happened a second time, it makes us wonder if Tasmanian devils might be particularly vulnerable to developing this type of disease, or that transmissible cancers may not be as rare in nature as we previously thought.”

Professor Gregory Woods, joint senior author from the Menzies Institute for Medical Research at the University of Tasmania, adds: “It’s possible that in the Tasmanian wilderness there are more transmissible cancers in Tasmanian devils that have not yet been discovered. The potential for new transmissible cancers to emerge in this species has important implications for Tasmanian devil conservation programmes.”

The discovery of the second transmissible cancer began in 2014, when a devil with facial tumours was found in south-east Tasmania. Although this animal’s tumours were outwardly very similar to those caused by the first-described Tasmanian devil transmissible cancer, the scientists found that this devil’s cancer carried different chromosomal rearrangements and was genetically distinct. Since then, eight additional animals have been found with the new cancer in the same area of south-east Tasmania.

The research was primarily supported by the Wellcome Trust and the Australian Research Council, with additional support provided by Dr Eric Guiler Tasmanian Devil Research Grants and by the Save the Tasmanian Devil Program.

For more information about the research into Tasmanian devils, see T is for Tasmanian Devil.

Reference
Pye, RJ et al. A second transmissible cancer in Tasmanian devils. PNAS; 28 Dec 2015




Opinion: How Frugal Innovation Can Kickstart The Global Economy In 2016

Opinion: How frugal innovation can kickstart the global economy in 2016

source: www.cam.ac.uk

Jaideep Prabhu (Cambridge Judge Business School) discusses the frugal innovation revolution that is taking the world by storm.

In late 2015 a Cambridge-based nonprofit released the Raspberry Pi Zero, a tiny £4 computer that was a whole £26 cheaper than the original 2012 model. The Zero is not only remarkable for its own sake – a computer so cheap it comes free with a £5.99 magazine – it is also symptomatic of a larger “frugal innovation” revolution that is taking the world by storm.

With the global economy struggling, this is the kind of innovation that could kickstart it in 2016. Empowered by cheap computers such as the Raspberry Pi and other ubiquitous tools such as smartphones, cloud computing, 3D printers, crowdfunding, and social media, small teams with limited resources are now able to innovate in ways that only large companies and governments could in the past. This frugal innovation – the ability to create faster, better and cheaper solutions using minimal resources – is poised to drive global growth in 2016 and beyond.

More than four billion people around the world, most of them in developing countries, live outside the formal economy and face significant unmet needs when it comes to health, education, energy, food, and financial services. For years this large population was either the target of aid or was left to the mercy of governments.

More recently, large firms and smaller social enterprises have begun to see these four billion as an enormous opportunity to be reached through market-based solutions. These solutions must, however, be frugal – highly affordable and flexible in nature. They typically include previously excluded groups both as consumers and producers. Bringing the next four billion into the formal economy through frugal innovation has already begun to unleash growth and create unprecedented wealth in Asia, Africa and Latin America. But there’s much, much more to come.

Good news

Take the case of telecommunications. Over the last decade or so, highly affordable handsets and cheap calling rates have made mobile phones as commonplace as toothbrushes. In addition to bringing massive productivity gains to farmers and small businesses – not to mention creating new sources of employment – mobile phones also enable companies to roll out financial, healthcare and educational services affordably and at scale.

Take the case of Safaricom, Vodafone’s subsidiary in Kenya. In 2007 the company introduced M-Pesa, a service that enables anyone with a basic, SMS-enabled mobile phone to send and receive money that can be cashed in at a corner shop acting as an M-Pesa agent.

This person-to-person transfer of small amounts of money between people who are often outside the banking system has increased financial inclusion in Kenya in a highly affordable and rapid way. So much so that more than 20m Kenyans now use M-Pesa and the volume of transactions on the system is more than US$25 billion, more than half the country’s GDP. M-Pesa (and services like it) have now spread to several other emerging markets in Africa and Asia.
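The mechanics described above – agents converting cash into wallet credit, SMS-triggered person-to-person transfers, and cash-out at another agent – can be sketched as a simple ledger. This is a hypothetical, highly simplified illustration of the general model, not the actual M-Pesa system, which involves fees, agent float, and regulatory safeguards:

```python
# Illustrative sketch of a mobile-money ledger (hypothetical; the real
# M-Pesa system is far more complex).

class MobileMoneyLedger:
    def __init__(self):
        self.balances = {}  # phone number -> balance (e.g. in Kenyan shillings)

    def deposit(self, phone, amount):
        """An agent takes cash and credits the customer's wallet."""
        self.balances[phone] = self.balances.get(phone, 0) + amount

    def transfer(self, sender, receiver, amount):
        """Person-to-person transfer, e.g. triggered by an SMS instruction."""
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

    def withdraw(self, phone, amount):
        """The recipient cashes out at another corner-shop agent."""
        if self.balances.get(phone, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[phone] -= amount
        return amount

ledger = MobileMoneyLedger()
ledger.deposit("+254700000001", 1000)                   # sender pays cash to an agent
ledger.transfer("+254700000001", "+254700000002", 600)  # SMS-style P2P transfer
cash = ledger.withdraw("+254700000002", 600)            # recipient cashes out
```

The point of the sketch is that no bank account is needed on either side: the phone number itself is the account identifier, which is what made the model work for people outside the banking system.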

Similar frugal innovations in medical devices, transport, solar lighting and heating, clean cookstoves, cheap pharmaceuticals, sanitation, consumer electronics and so on, have driven growth in Asia and Africa over the past decade and will continue to do so in the decades to come.

Catching on

Meanwhile the developed world is catching up. Declining real incomes and government spending, accompanied by greater concern for the environment, are making Western consumers both value and values conscious.

The rise of two massive movements in recent years, the sharing economy and the maker movement, shows the potential of frugal innovation in the West. The sharing economy, exemplified by Airbnb, BlaBlaCar and Kickstarter, has empowered consumers to trade spare assets with each other and thus generate new sources of income. The maker movement, meanwhile, features proactive consumers who tinker in spaces such as FabLabs, TechShops and MakeSpaces, designing solutions to problems they encounter.

Square, a small, white, square device that fits into the audio jack of a smartphone and uses the phone’s computing power and connectivity to take credit card payments, is an example of a product that was developed in a TechShop. Launched in 2010, Square was on track to make US$1 billion in revenue in 2015.

Frugal innovation not only has the power to drive more inclusive growth by tackling poverty and inequality around the world, it is also increasingly the key to growth that will not simultaneously wreck the planet. The big issue at the Paris climate summit was the widening wedge between the developed and the developing world. On the one hand, the rich countries cannot stop the poor ones from attempting to achieve the West’s levels of prosperity. On the other hand, poor countries cannot grow in the way the West did without wrecking the planet.

The only way to square this circle is to ensure that the growth is sustainable. The need for frugal innovation is therefore all the more vital in areas such as energy generation and use, manufacturing systems that are more local, and a move to a circular economy where companies (and consumers) reduce, reuse and recycle materials in a potentially endless loop.

Never before have so many been able to do so much for so little. Aiding and stimulating this frugal innovation revolution holds the key to driving global growth by employing more people to solve some of the big problems of poverty, inequality and climate change that stalk the planet.

Jaideep Prabhu, Director, Centre for India & Global Business at Cambridge Judge Business School, University of Cambridge

This article was originally published on The Conversation. Read the original article.

The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.

Inset images: M-Pesa stand (Fiona Bradley); Square readers (Dom Sagolla).


