
From Robot Intelligence To Sex By Numbers: Cambridge Heads For Hay


Source: www.cam.ac.uk

For the eighth year running, the Cambridge Series at the prestigious Hay Festival will showcase a broad range of the University’s research excellence.


A record number of Cambridge academics will take part in this year’s Hay Festival, one of the most prestigious literary festivals in the world.

This is the eighth year running that the Series has formed part of the Festival. This year it features speakers on topics ranging from climate change, robotics, maternal health and risk to Classics, European politics, nuclear power, playfulness in education and digital media.

The Series is part of the University of Cambridge’s commitment to public engagement. The Festival runs from 26th May to 5th June and is now open for bookings. Twenty-seven academics from the University of Cambridge and several alumni will be speaking.

This year’s line-up includes Professor Peter Mandler on education and social mobility; Professor Ashley Moffett on immunity in pregnancy; Dame Carol Black, Principal of Newnham College, on addiction, obesity and employment; Professor Susan Gathercole on working memory; Fumiya Iida on robot intelligence; Professor Paul Cartledge on ancient Greek democracy; Professor Eric Wolff on climate change, past, present and future; Professor Jim Huntington on breakthrough research into blood clotting and how the insights are being used to prevent heart attacks and stroke; Topun Austin on the development of the human brain; Kathelijne Koops on what chimpanzees and bonobos can tell us about human culture; Suman-Lata Sahonta on LEDs; and Giles Yeo on genetic predisposition to obesity. Dr Yeo will be presenting a BBC Horizon programme on his research in June. Neuroscientist Hannah Critchlow also returns after being singled out as one of the highlights of Hay 2015.

In addition, there will be a series of discussions: Sharath Srinivasan, Director of the Centre of Governance and Human Rights, will be joined by blogger, technologist and social entrepreneur Marieme Jamme and Rob Burnet, CEO and Founder of Well Told Story, to talk about Africa’s digital revolution. David Whitebread, Jenny Gibson and Sara Baker from the PEDAL Research Centre will ask if the consequences of curtailing play, in schools, at home and in the outdoors, could be catastrophic for healthy child development. Madeline Abbas, Chris Bickerton and Katharina Karcher will debate the future of Europe. And theatre director and academic Zoe Svendsen and journalist and economist Paul Mason will explore the theatricality of capitalism through examining what an economic analysis of Shakespeare’s plays might tell us about character and how the human is represented. They are collaborating on a research and development project at the Young Vic Theatre.

Several of the speakers have new books out – Dame Fiona Reynolds, Master of Emmanuel College, will discuss the fight for beauty; Professor David Spiegelhalter will address the statistics of sexual behaviour and whether we can believe them; Professor Paul Murdin will speak about his book on the landscapes of other worlds as imaged close-up by space probes; Simon Taylor will discuss the strange rebirth of nuclear power in Britain; and Matt Wilkinson will explain how the need to move has driven the evolution of life on Earth. Jennifer Wallace, author of the novel Digging up Milton, will be joined by Professor Adrian Poole to discuss literary celebrity in the 18th and 19th centuries. Chris Bickerton’s book The European Union: a citizen’s guide is out in June.

Also taking part in the Festival from the University of Cambridge are Professor Richard Evans, Professor Tim Whitmarsh and Dr Christine Corton.

Peter Florence, director of the Hay Festival, said: “Cambridge University nurtures and challenges the world’s greatest minds, and offers the deepest understanding of the most intractable problems and the most thrilling opportunities. And for one week a year they bring that thinking to a field in Wales and share it with everyone. That’s a wonderful gift.”

Dane Comerford, head of public engagement at the University of Cambridge, said: “The Cambridge series is a fantastic way to share fascinating research from the University with the public. The Hay Festival draws an international cross-section of people, from policy makers to prospective university students. We have found that Hay audiences are highly interested in the diversity of Cambridge speakers, and ask some great questions. We look forward to another wonderful series of speakers, with talks and debates covering so many areas of research and key ideas emerging from Cambridge, relevant to key issues faced globally today.”

To book tickets, go to www.hayfestival.org, where the full line-up and times for the Cambridge Series can also be found.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Early-Stage Embryos With Abnormalities May Still Develop Into Healthy Babies


Source: www.cam.ac.uk

Abnormal cells in the early embryo are not necessarily a sign that a baby will be born with a birth defect such as Down’s syndrome, suggests new research carried out in mice at the University of Cambridge. In a study published today in the journal Nature Communications, scientists show that abnormal cells are eliminated and replaced by healthy cells, repairing – and in some cases completely fixing – the embryo.


Researchers at the Department of Physiology, Development and Neuroscience at Cambridge report a mouse model of aneuploidy, where some cells in the embryo contain an abnormal number of chromosomes. Normally, each cell in the human embryo should contain 23 pairs of chromosomes (22 pairs of autosomes and one pair of sex chromosomes), but some cells can carry extra copies of individual chromosomes, which can lead to developmental disorders. For example, children born with three copies of chromosome 21 develop Down’s syndrome.
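The chromosome arithmetic described above can be sketched as a toy check (an illustration only, not a diagnostic method; the cell and its copy numbers are hypothetical examples):

```python
# Toy illustration of aneuploidy: a normal human cell carries two copies
# of each autosome (chromosomes 1-22) plus a pair of sex chromosomes.
def find_aneuploidies(copy_counts):
    """Return the chromosomes whose copy number deviates from the usual two."""
    return {chrom: n for chrom, n in copy_counts.items() if n != 2}

# A hypothetical cell with trisomy 21 (three copies of chromosome 21),
# the abnormality seen in Down's syndrome.
cell = {str(c): 2 for c in range(1, 23)}
cell["21"] = 3

print(find_aneuploidies(cell))  # {'21': 3}
```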

Pregnant mothers – particularly older mothers, whose offspring are at greatest risk of developing such disorders – are offered tests to predict the likelihood of genetic abnormalities. Between the 11th and 14th weeks of pregnancy, mothers may be offered chorionic villus sampling (CVS), a test that involves removing and analysing cells from the placenta. A later test, known as amniocentesis, involves analysing cells shed by the foetus into the surrounding amniotic fluid – this test is more accurate, but is usually carried out during weeks 15-20 of the pregnancy, when the foetus is further developed.

Professor Magdalena Zernicka-Goetz, the study’s senior author, was inspired to carry out the research following her own experience when pregnant with her second child. “I am one of the growing number of women having children over the age of 40 – I was pregnant with my second child when I was 44,” says Professor Zernicka-Goetz.

At the time, a CVS test found that as many as a quarter of the cells in the placenta that joined her and her developing baby were abnormal: could the developing baby also have abnormal cells? When Professor Zernicka-Goetz spoke to geneticists about the potential implications, she found that very little was understood about the fate of embryos containing abnormal cells and about the fate of these abnormal cells within the developing embryos.

Fortunately for Professor Zernicka-Goetz, her son, Simon, was born healthy. “I know how lucky I was and how happy I felt when Simon was born healthy,” she says.

“Many expectant mothers have to make a difficult choice about their pregnancy based on a test whose results we don’t fully understand,” says Professor Zernicka-Goetz. “What does it mean if a quarter of the cells from the placenta carry a genetic abnormality – how likely is it that the child will have cells with this abnormality, too? This is the question we wanted to answer. Given that the average age at which women have their children is rising, this is a question that will become increasingly important.”

“In fact, abnormal cells with numerical and/or structural anomalies of chromosomes have been observed in as many as 80-90% of human early stage embryos following in vitro fertilization,” says Professor Thierry Voet from the Wellcome Trust Sanger Institute, UK, and the University of Leuven, Belgium, another senior author of this paper, “and CVS tests may expose some degree of these abnormalities.”

In research funded by the Wellcome Trust, Professor Zernicka-Goetz and colleagues developed a mouse model of aneuploidy by mixing 8-cell stage mouse embryos in which the cells were normal with embryos in which the cells were abnormal. Abnormal mouse embryos are relatively unusual, so the team used a molecule known as reversine to induce aneuploidy.

In embryos where the mix of normal and abnormal cells was half and half, the researchers observed that the abnormal cells within the embryo were killed off by ‘apoptosis’, or programmed cell death, even when placental cells retained abnormalities. This allowed the normal cells to take over, resulting in an embryo where all the cells were healthy. When the mix was three abnormal cells to one normal cell, some of the abnormal cells continued to survive, but the proportion of normal cells increased.

“The embryo has an amazing ability to correct itself,” explains Professor Zernicka-Goetz. “We found that even when half of the cells in the early stage embryo are abnormal, the embryo can fully repair itself. If this is the case in humans, too, it will mean that even when early indications suggest a child might have a birth defect because there are some, but importantly not all abnormal cells in its embryonic body, this isn’t necessarily the case.”

The researchers will now try to determine the exact proportion of healthy cells needed to completely repair an embryo and the mechanism by which the abnormal cells are eliminated.

Reference
Bolton, H et al. ‘Mouse model of chromosome mosaicism reveals lineage-specific depletion of aneuploid cells and normal developmental potential.’ Nature Communications (2016). DOI: 10.1038/ncomms11165




Solar Cell Material Can Recycle Light to Boost Efficiency


Source: www.cam.ac.uk

Perovskite materials can recycle light particles – a finding which could lead to a new generation of affordable, high-performance solar cells.


Scientists have discovered that a highly promising group of materials known as hybrid lead halide perovskites can recycle light – a finding that they believe could lead to large gains in the efficiency of solar cells.

Hybrid lead halide perovskites are a particular group of synthetic materials which have been the subject of intensive scientific research, as they appear to promise a revolution in the field of solar energy. As well as being cheap and easy to produce, perovskite solar cells have, in the space of a few years, become almost as energy-efficient as silicon – the material currently used in most household solar panels.

By showing that they can also be optimised to recycle light, the new study suggests that this could just be the beginning. Solar cells work by absorbing photons from the sun to create electrical charges, but the process also works in reverse, because when the electrical charges recombine, they can create a photon. The research shows that perovskite cells have the extra ability to re-absorb these regenerated photons – a process known as “photon recycling”. This creates a concentration effect inside the cell, as if a lens has been used to focus lots of light in a single spot.

According to the researchers, this ability to recycle photons could be exploited with relative ease to create cells capable of pushing the limits of energy efficiency in solar panels.

The study builds on an established collaboration, focusing on the use of these materials not only in solar cells but also in light-emitting diodes, and was carried out in the group of Richard Friend, Cavendish Professor of Physics and Fellow of St John’s College at the University of Cambridge. The research was undertaken in partnership with the team of Henry Snaith at the University of Oxford and Bruno Ehrler at the FOM Institute, AMOLF, Amsterdam.

Felix Deschler, who is one of the corresponding authors of the study and works with a team studying perovskites at the Cavendish Laboratory, said: “It’s a massive demonstration of the quality of this material and opens the door to maximising the efficiency of solar cells. The fabrication methods that would be required to exploit this phenomenon are not complicated, and that should boost the efficiency of this technology significantly beyond what we have been able to achieve until now.”

Perovskite-based solar cells were first tested in 2012, and were so successful that in 2013, Science Magazine rated them one of the breakthroughs of the year.

Since then, researchers have made rapid progress in improving the efficiency with which these cells convert light into electrical energy. Recent experiments have produced power conversion efficiencies of around 20% – a figure already comparable with silicon cells.

By showing that perovskite-based cells can also recycle photons, the new research suggests that they could reach efficiencies well beyond this.

The study, which is reported in Science, involved shining a laser on to one part of a 500 nanometre-thick sample of lead-iodide perovskite. Because perovskites emit light when light falls on them, the team was able to measure photon activity inside the sample from the light it emitted.

Close to where the laser light had shone on to the film, the researchers detected a near-infrared light emission. Crucially, however, this emission was also detected further away from the point where the laser hit the sample, together with a second emission composed of lower-energy photons.

“The low-energy component enables charges to be transported over a long distance, but the high-energy component could not exist unless photons were being recycled,” Luis Miguel Pazos Outón, lead author on the study, said. “Recycling is a quality that materials like silicon simply don’t have. This effect concentrates a lot of charges within a very small volume. These are produced by a combination of incoming photons and those being made within the material itself, and that’s what enhances its energy efficiency.”

As part of the study, Pazos Outón also manufactured the first demonstration of a perovskite-based back-contact solar cell. This single cell proved capable of transporting an electrical current more than 50 micrometres away from the laser contact point, a distance far greater than the researchers had predicted and a direct result of multiple photon recycling events taking place within the sample.

The researchers now believe that perovskite solar cells may be able to reach considerably higher efficiencies than they have to date. “The fact that we were able to show photon recycling happening in our own cell, which had not been optimised to produce energy, is extremely promising,” Richard Friend, a corresponding author, said. “If we can harness this it would lead to huge gains in terms of energy efficiency.”



Quantum Effects At Work In The World’s Smelliest Superconductor


Source: www.cam.ac.uk

Researchers have found that quantum effects are the reason that hydrogen sulphide – which has the distinct smell of rotten eggs – behaves as a superconductor at record-breaking temperatures, a finding that may aid the search for room temperature superconductors.


The quantum behaviour of hydrogen affects the structural properties of hydrogen-rich compounds, which are possible candidates for the elusive room temperature superconductor, according to new research co-authored at the University of Cambridge.

New theoretical results, published online in the journal Nature, suggest that the quantum nature of hydrogen – meaning that it can behave like a particle or a wave – strongly affects the recently discovered hydrogen sulphide superconductor, a compound that, when subjected to extremely high pressure, is the highest-temperature superconductor yet identified. This new step towards understanding the underlying physics of high temperature superconductivity may aid the search for a room temperature superconductor, which could be used for applications such as levitating trains, lossless electrical grids and next-generation supercomputers.

Superconductors are materials that carry electrical current with zero electrical resistance. Low-temperature, or conventional, superconductors were first identified in the early 20th century, but they need to be cooled close to absolute zero (zero kelvin, or -273 degrees Celsius) before they start to display superconductivity. For the past century, researchers have been searching for materials that behave as superconductors at higher temperatures, which would make them more suitable for practical applications. The ultimate goal is to identify a material which behaves as a superconductor at room temperature.

Last year, German researchers identified the highest temperature superconductor yet – hydrogen sulphide, the same compound that gives rotten eggs their distinctive odour. When subjected to extreme pressure – about one million times higher than the Earth’s atmospheric pressure – this stinky compound displays superconducting behaviour at temperatures as high as 203 Kelvin (-70 degrees Celsius), which is far higher than any other high temperature superconductor yet discovered.
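As an aside, the Kelvin and Celsius figures quoted in this piece differ by a fixed offset of 273.15, so converting between them is simple arithmetic (a minimal sketch, using the 203 K transition temperature from the article):

```python
def kelvin_to_celsius(kelvin):
    """Convert a temperature in kelvin to degrees Celsius."""
    return kelvin - 273.15

# Transition temperature of the hydrogen sulphide superconductor:
# 203 K is roughly -70 degrees Celsius, as quoted above.
print(round(kelvin_to_celsius(203)))  # -70
```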

Since this discovery, researchers have attempted to understand what it is about hydrogen sulphide that makes it capable of superconducting at such high temperatures. Now, new theoretical results suggest that the quantum behaviour of hydrogen may be the reason, as it changes the structure of the chemical bonds between atoms. The results were obtained by an international collaboration of researchers led by the University of the Basque Country and the Donostia International Physics Center, and including researchers from the University of Cambridge.

The behaviour of objects in our daily life is governed by classical, or Newtonian, physics. If an object is moving, we can measure both its position and momentum, to determine where an object is going and how long it will take to get there. The two properties are inherently linked.

However, in the strange world of quantum physics, things are different. According to Heisenberg’s uncertainty principle, pairs of linked properties such as position and momentum cannot both be measured precisely at the same time: the more precisely one is known, the more uncertain the other becomes.

Hydrogen, being the lightest element of the periodic table, is the atom most strongly subjected to quantum behaviour. Its quantum nature affects structural and physical properties of many hydrogen compounds. An example is high-pressure ice, where quantum fluctuations of the proton lead to a change in the way that the molecules are held together, so that the chemical bonds between atoms become symmetrical.

The researchers behind the current study believe that a similar quantum hydrogen-bond symmetrisation occurs in the hydrogen sulphide superconductor.

Theoretical models that treat hydrogen atoms as classical particles predict that at extremely high pressures – even higher than those used by the German researchers for their record-breaking superconductor – the atoms sit exactly halfway between two sulphur atoms, making a fully symmetrical structure. However, at lower pressures, hydrogen atoms move to an off-centre position, forming one shorter and one longer bond.

The researchers have found that when considering the hydrogen atoms as quantum particles behaving like waves, they form symmetrical bonds at much lower pressures – around the same as those used for the German-led experiment, meaning that quantum physics, and symmetrical hydrogen bonds, were behind the record-breaking superconductivity.

“That we are able to make quantitative predictions with such a good agreement with the experiments is exciting and means that computation can be confidently used to accelerate the discovery of high temperature superconductors,” said study co-author Professor Chris Pickard of Cambridge’s Department of Materials Science & Metallurgy.

According to the researchers’ calculations, the quantum symmetrisation of the hydrogen bond has a tremendous impact on the vibrational and superconducting properties of hydrogen sulphide. “In order to theoretically reproduce the observed pressure dependence of the superconducting critical temperature the quantum symmetrisation needs to be taken into account,” said the study’s first author, Ion Errea, from the University of the Basque Country and Donostia International Physics Center.

The discovery of such a high temperature superconductor suggests that room temperature superconductivity might be possible in other hydrogen-rich compounds. The current theoretical study shows that in all these compounds, the quantum motion of hydrogen can strongly affect the structural properties, even modifying the chemical bonding, and the electron-phonon interaction that drives the superconducting transition.

“Theory and computation have played an important role in the hunt for superconducting hydrides under extreme compression,” said Pickard. “The challenges for the future are twofold – increasing the temperature towards room temperature, but, more importantly, dramatically reducing the pressures required.”

Reference:
Ion Errea et al. ‘Quantum hydrogen-bond symmetrization in the superconducting hydrogen sulfide system.’ Nature (2016). DOI: 10.1038/nature17175



Spectral Edge Announces Successful £1.5m Funding Round


source: www.realwire.com

Cambridge-based image fusion pioneer attracts major backing to commercialise product portfolio

Cambridge, 22nd March 2016 – Spectral Edge (http://www.spectraledge.co.uk/) today announced the successful completion of an oversubscribed £1.5 million second funding round. New lead investors IQ Capital and Parkwalk Advisors, along with angel investors from Cambridge Angels, Wren Capital, Cambridge Capital Group and Martlet, the Marshall of Cambridge Corporate Angel investment fund, join the Rainbow Seed Fund/Midven and Iceni in backing the company.

Spectral Edge Phusion

Spun out of the University of East Anglia (UEA) Colour Lab, Spectral Edge has developed innovative image fusion technology. This combines different types of image, ranging from the visible to invisible (such as infrared and thermal), to enhance detail, aid visual accessibility, and create ever more beautiful pictures.

Spectral Edge’s Phusion technology platform has already been proven in the visual accessibility market, where independent studies have shown that it can transform the TV viewing experience for the estimated 4% of the world’s population that suffers from colour-blindness. It enhances live TV and video, allowing colour-blind viewers to differentiate between colour combinations such as red-green and pink-grey so that otherwise inaccessible content such as sport can be enjoyed.

The new funding will be used to expand Spectral Edge’s team, increase investment in sales and marketing, and underpin development of its product portfolio into IP-licensable products and reference designs. Spectral Edge is mainly targeting computational photography, where blending near-infrared and visible images gives higher quality, more beautiful results with greater depth. Other applications include security, where the combination of visible and thermal imaging enhances details to provide easier identification of people filmed on surveillance cameras, as well as visual accessibility through its Eyeteq brand.

“Spectral Edge is a true pioneer in the field of photography. They are set to disrupt and transform the imaging sector, not just within consumer and professional photography, but also across a broad range of business sectors,” said Max Bautin, Managing Partner at IQ Capital. “Backed by a robust catalogue of IP, Spectral Edge’s technology enables individuals and companies to take pictures and record videos with unparalleled detail by taking advantage of non-visible information like near-infrared and heat. We are proud to add Spectral Edge to our portfolio of companies. We back cutting-edge IP-rich technology which pushes the boundaries but also has a proven track record of stable growth, and Spectral Edge fits that mould perfectly.”

“We are delighted to support Professor Graham Finlayson and his team at Spectral Edge,” said Alastair Kilgour, CIO of Parkwalk Advisors. “We believe Phusion could prove to be a substantial enhancement to the quality of digital imaging and as such has significant commercial prospects.”

Spectral Edge is led by an experienced team that combines deep technical and business experience. It includes Professor Graham Finlayson, Head of the Vision Group and Professor of Computing Science at UEA; Christopher Cytera (managing director); and serial entrepreneur Dr Robert Swann (chairman).

“After having proved the potential of our innovative Phusion technology, this new funding provides Spectral Edge with a springboard for growth,” said Christopher Cytera, Managing Director, Spectral Edge. “The significant investment from IQ Capital, Parkwalk Advisors, Midven, Iceni and Cambridge angel groups demonstrates their faith in our technology, approach and overall strategy. We can now accelerate commercialisation of our intellectual property portfolio and grow by licensing our technology to consumer electronics manufacturers and service providers in our key markets of computational photography, visual accessibility and security.”

-ends-

About Spectral Edge
Formed in February 2011, Spectral Edge is a spin-out company of the Colour Group of the School of Computing Sciences at the University of East Anglia (United Kingdom). It operates from offices in Cambridge.

Spectral Edge Phusion technology enhances images and video by using information outside the normal visible spectrum or applying transformations to that within it. Applications range from computational photography and security to consumer uses such as enhancing TV pictures to improve content accessibility.

Website: http://www.spectraledge.co.uk/
Pictures & Media Pack: http://www.spectraledge.co.uk/about/media-pack
LinkedIn: https://www.linkedin.com/company/spectral-edge-ltd/
Facebook: https://www.facebook.com/SpectralEdge
Twitter: @SpectralEdgeLtd
Email: pr@spectraledge.co.uk

About IQ Capital
IQ Capital is a UK focused venture capital investor, based in Cambridge and London. We invest in B2B software including machine learning & AI, data analytics, cyber security, AdTech, FinTech and e-health, as well as embedded systems and robotics. Recent exits include trade sales to Google, Apple, Becton Dickinson and Huawei.

IQ Capital always invests alongside an experienced, sector-expert entrepreneur who has recently made a significant exit in the same sector and who now has the right skills, mind-set and motivation to support an early stage business. We are currently investing from our 2015 IQ Capital Fund II which is actively looking for new investment opportunities.

About Parkwalk Advisors
Parkwalk is an independent investment firm dedicated to providing clients with access to some of the most exciting deal-flow emanating from British R&D intensive institutions and Universities. Parkwalk invests in, and raises capital for, innovative UK technology companies. The Funds are investment-driven venture capital funds, seeking capital appreciation. Parkwalk portfolio companies all have deeply-embedded IP and commercial potential, and range from early stage seed capital, through development and commercial capital to AIM-listed investments. In the last 12 months Parkwalk has invested over £20m into this investment strategy.

More information can be found at www.parkwalkadvisors.com

About Wren Capital
Wren Capital, whose managing partner is Rajat Malhotra (UK Business Angels Association’s Angel Investor of the Year for 2013), specialises in early stage investing across science, engineering and software. Our involvement is tailored to the needs of each business and we aim to be a supportive value-adding investor. We generally follow a co-investment model and have links with a number of universities, business schools, angel networks and a trusted network of high quality investors who share our investment philosophy.

For more information please see: www.wrencapital.co.uk

For more information:
Chris Measures (PR for Spectral Edge)
+44 7976 535147
chris@measuresconsulting.com

XAAR Launches New Family Of Printheads


22nd March 2016 – Xaar, the world leader in industrial inkjet technology, is pleased to announce the launch of the Xaar 1003 family of printheads.

The introduction of the Xaar 1003 is in line with the Company’s 2020 vision, recently outlined at the Full Year results, and reiterates Xaar’s commitment to investing significantly in Research & Development.

The Xaar 1003 sets a new benchmark for industrial inkjet printing, with upgrades that deliver higher productivity, greater versatility and all-round performance superior to the previous Xaar 1001 and 1002 models. These upgrades include:

— The XaarGuard™, the ultimate in nozzle plate protection, providing the longest maintenance-free production runs in the industry*.

— A step forward in consistent print quality across the wide print widths used in many single-pass applications, due to Xaar’s new X-ACT™ Micro-Electro-Mechanical Systems (MEMS) manufacturing process.

Like its predecessors, the new Xaar 1003 family of printheads combines Xaar’s unique TF Technology® with Xaar’s Hybrid Side Shooter® architecture so that ink is recirculated directly past the back of the nozzle during drop ejection at the highest flow rates in the industry. This ensures that the printhead operates reliably even in the harshest industrial environments and also in horizontal and vertical (skyscraper) jetting modes. Ink is in constant circulation, preventing sedimentation and subsequent blocking of the nozzles when jetting.

In response to market demand, the Xaar 1003 will be available in three variants. The Xaar 1003 GS12 (rich colours or higher speeds) for ceramics applications is first to be launched, closely followed by the Xaar 1003 GS6 (for fine detail) and the Xaar 1003 GS40 (for special effects). The other variants for UV applications will also be available later in the first half of this year.

Gillian Ewers, Xaar’s Director of Marketing, said:

“We are delighted to introduce the new and exciting Xaar 1003 printhead family to the market. This launch is further evidence of our commitment to our customers, and to ensuring Xaar remains at the leading edge of single pass industrial inkjet printing.”

FDA 510(k) Clearance Granted For PneumaCare’s Ground-Breaking Thora-3DI™ System For Non-Contact Respiratory Measurement

FDA 510(k) Clearance Granted For PneumaCare’s Ground-Breaking Thora-3DI™ System For Non-Contact Respiratory Measurement


We are pleased to write to you with some very exciting news about PneumaCare Ltd ( www.pneumacare.com). Please see the attached Press Release issued by the Company.

Thora-3DI™ is a non-invasive, non-contact device that uses a patented technology known as Structured Light Plethysmography (SLP) to measure breathing through detection of movement of the chest and abdomen. The technology can be used to accurately measure respiratory status in patients with a wide range of respiratory conditions, including asthma, chronic obstructive pulmonary disease (COPD), pneumonia and lung failure, and to assess patients before and after surgery. The SLP technology uses safe white light to project a grid pattern onto the chest, and record accurate 3D images of chest wall movements over time. The measurements are converted into visual and numerical outputs, which can help clinicians to make faster diagnoses and treatment decisions, and continually monitor patients in real time, without direct patient contact or intervention. The Thora-3DI™ is mobile, and can easily be moved between wards, or dismantled for transport and use in the community or in clinics.
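The pipeline described above, from projected grid to numerical output, can be sketched in broad strokes. The code below is a hypothetical illustration of the SLP principle only; the function name, grid size and zero-crossing method are assumptions for this sketch, not PneumaCare's implementation.

```python
import numpy as np

# Hypothetical sketch of the SLP principle: each video frame yields a grid of
# surface heights over the chest and abdomen. Summing the grid approximates a
# relative thoraco-abdominal volume; counting upward zero-crossings of the
# centred trace counts completed breaths, giving a respiratory rate.

def respiratory_rate(grid_frames, fps):
    """grid_frames: array of shape (n_frames, rows, cols) of surface heights."""
    volume = grid_frames.reshape(len(grid_frames), -1).sum(axis=1)
    volume = volume - volume.mean()                 # centre the trace
    breaths = np.sum((volume[:-1] < 0) & (volume[1:] >= 0))
    duration_min = len(grid_frames) / fps / 60.0
    return breaths / duration_min                   # breaths per minute

# Synthetic test signal: 12 breaths/min recorded at 30 fps for 10 seconds
t = np.arange(0, 10, 1 / 30)
breathing = -np.cos(2 * np.pi * 0.2 * t)            # 0.2 Hz = 12 breaths/min
frames = 1 + 0.1 * breathing[:, None, None] * np.ones((1, 8, 8))
```

On the synthetic trace above the function recovers the simulated rate of 12 breaths per minute; a real device must additionally handle noise, patient movement and irregular breathing.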

We would be delighted to discuss any aspect of our business and products with you in light of this great development for the Company.

Read more about PneumaCare and Thora-3DI™ here:

 

No Evidence That Genetic Tests Change People’s Behaviour

No evidence that genetic tests change people’s behaviour

source: www.cam.ac.uk

Genetic tests that provide an estimate of an individual’s risk of developing diseases such as lung cancer and heart disease do not appear to motivate a change in behaviour to reduce the risk, according to a study led by the University of Cambridge and published in The BMJ today.

Expectations have been high that giving people information about their genetic risk will empower them to change their behaviour, but we have found no evidence that this is the case

Theresa Marteau

Researchers at the Behaviour and Health Research Unit analysed a number of studies that looked at whether testing an individual’s DNA for genetic variants that increased their risk of developing so-called ‘common complex diseases’ influenced their health-related behaviour. Complex diseases are those such as heart disease, most cancers and diabetes, where no single gene causes the disease, but rather it is the interaction of dozens – possibly hundreds – of genes together with an individual’s environment and behaviour that leads to the disease.

Genome sequencing – reading an individual’s entire DNA – has opened up the potential to provide individuals with information on whether or not they carry genes known to increase their risk of disease. Such tests are controversial – knowing that an individual carries these variants does not mean that individual will develop the disease; however, proponents argue that if an individual knows that he or she is at a greater risk of a particular disease, they can make an informed decision about whether or not to change their behaviour.

In the early 2000s, several companies launched direct-to-consumer tests for a range of common complex disorders, and these tests continue to be sold in Canada, the United Kingdom, and other European countries. In 2013 in the United States, the Food and Drug Administration ordered the company 23andMe to stop selling its health-related testing kits because of concerns about their accuracy and usefulness, but in October 2015 the company resumed selling some health-related services.

The Cambridge researchers examined over 10,000 abstracts from relevant studies and from these identified 18 studies that matched their criteria for inclusion in their analysis. By compiling the data, they found that informing individuals of their genetic risk had little or no effect on their health-related behaviour, particularly for smoking cessation and physical activity.
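The pooling step of a meta-analysis like this one can be illustrated with standard inverse-variance weighting. The effect sizes below are hypothetical, not data from the BMJ paper; the sketch only shows how per-study estimates are combined into a single pooled estimate.

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Fixed-effect inverse-variance pooling of per-study estimates
    (e.g. log odds ratios for behaviour change after genetic testing)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies, each reporting a near-zero effect
effects = [0.05, -0.02, 0.01]          # log odds ratios (invented)
ses = [0.10, 0.08, 0.12]               # standard errors (invented)
est, se = pool_fixed_effect(effects, ses)
ci95 = (est - 1.96 * se, est + 1.96 * se)
```

A pooled estimate whose 95% confidence interval straddles zero, as with these invented numbers, is the statistical pattern behind a "little or no effect" conclusion.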

Professor Theresa Marteau, who led the study, says: “Expectations have been high that giving people information about their genetic risk will empower them to change their behaviour – to eat more healthily or to stop smoking, for example – but we have found no evidence that this is the case. But nor does the evidence support concerns that such information might demotivate people and discourage them from changing their behaviour.”

However, the researchers recognise that DNA testing may still play a role in improving people’s health. “DNA testing, alone or in combination with other assessments of disease risk, may help clinicians identify individuals at greatest risk and allow them to target interventions such as screening tests, surgery, and drug treatments,” explains co-author Dr Gareth Hollands.

The team argue that these results are consistent with other evidence that risk communication typically has at best only a small effect on health behaviour.

The study was funded by the Medical Research Council and the National Institute for Health Research.

Reference
Hollands, GJ et al. The impact of communicating genetic risks of disease on risk-reducing health behaviour: systematic review with meta-analysis. BMJ; 15 March 2016; DOI: 10.1136/bmj.i1102


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/no-evidence-that-genetic-tests-change-peoples-behaviour

Researchers Identify When Parkinson’s Proteins Become Toxic To Brain Cells

Researchers identify when Parkinson’s proteins become toxic to brain cells

Source: www.cam.ac.uk

Observation of the point at which proteins associated with Parkinson’s disease become toxic to brain cells could help identify how and why people develop the disease, and aid in the search for potential treatments.

The damage appears to be done before visible fibrils are even formed.

Dorothea Pinotsi

Researchers have used a non-invasive method of observing how the process leading to Parkinson’s disease takes place at the nanoscale, and identified the point in the process at which proteins in the brain become toxic, eventually leading to the death of brain cells.

The results suggest that the same protein can either cause, or protect against, the toxic effects that lead to the death of brain cells, depending on the specific structural form it takes, and that toxic effects take hold when there is an imbalance of the level of protein in its natural form in a cell. The work could help unravel how and why people develop Parkinson’s, and aid in the search for potential treatments. The study is published in the journal Proceedings of the National Academy of Sciences.

Using super-resolution microscopy, researchers from the University of Cambridge were able to observe the behaviour of different types of alpha-synuclein, a protein closely associated with Parkinson’s disease, in order to find how it affects neurons, and at what point it becomes toxic.

Parkinson’s disease is one of a number of neurodegenerative diseases caused when naturally occurring proteins fold into the wrong shape and stick together with other proteins, eventually forming thin filament-like structures called amyloid fibrils. These amyloid deposits of aggregated alpha-synuclein, also known as Lewy bodies, are the hallmark of Parkinson’s disease.

Parkinson’s disease is the second-most common neurodegenerative disease worldwide (after Alzheimer’s disease). Close to 130,000 people in the UK, and more than seven million worldwide, have the disease. Symptoms include muscle tremors, stiffness and difficulty walking. Dementia is common in later stages of the disease.

“What hasn’t been clear is whether once alpha-synuclein fibrils have formed they are still toxic to the cell,” said Dr Dorothea Pinotsi of Cambridge’s Department of Chemical Engineering and Biotechnology, the paper’s first author.

Pinotsi and her colleagues from Cambridge’s Department of Chemical Engineering & Biotechnology and Department of Chemistry, and led by Dr Gabriele Kaminski Schierle, have used optical ‘super-resolution’ techniques to look into live neurons without damaging the tissue. “Now we can look at how proteins associated with neurodegenerative conditions grow over time, and how these proteins come together and are passed on to neighbouring cells,” said Pinotsi.

The researchers used different forms of alpha-synuclein and observed their behaviour in neurons from rats. They were then able to correlate what they saw with the amount of toxicity that was present.

They found that when they added alpha-synuclein fibrils to the neurons, they interacted with alpha-synuclein protein that was already in the cell, and no toxic effects were present.

“It was believed that amyloid fibrils that attack the healthy protein in the cell would be toxic to the cell,” said Pinotsi. “But when we added a different, soluble form of alpha-synuclein, it didn’t interact with the protein that was already present in the neuron and interestingly this was where we saw toxic effects and cells began to die. So somehow, when the soluble protein was added, it created this toxic effect. The damage appears to be done before visible fibrils are even formed.”

The researchers then observed that by adding the soluble form of alpha-synuclein together with amyloid fibrils, the toxic effect of the former could be overcome. It appeared that the amyloid fibrils acted like magnets for the soluble protein and mopped up the soluble protein pool, shielding against the associated toxic effects.

“These findings change the way we look at the disease, because the damage to the neuron can happen when there is simply extra soluble protein present in the cell – it’s the excess amount of this protein that appears to cause the toxic effects that lead to the death of brain cells,” said Pinotsi. Extra soluble protein can be caused by genetic factors or ageing, although there is some evidence that it could also be caused by trauma to the head.

The research shows how important it is to fully understand the processes at work behind neurodegenerative diseases, so that the right step in the process can be targeted.

“With these optical super-resolution techniques, we can really see details we couldn’t see before, so we may be able to counteract this toxic effect at an early stage,” said Pinotsi.

The research was funded by the Medical Research Council, the Engineering and Physical Sciences Research Council, and the Wellcome Trust.

Reference:
Dorothea Pinotsi et al. ‘Nanoscopic insights into seeding mechanisms and toxicity of α-synuclein species in neurons.’ Proceedings of the National Academy of Sciences (2016). DOI: 10.1073/pnas.1516546113


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/researchers-identify-when-parkinsons-proteins-become-toxic-to-brain-cells

‘Good’ Cholesterol Doesn’t Always Lower Heart Attack Risk

‘Good’ cholesterol doesn’t always lower heart attack risk

source: www.cam.ac.uk

Some people with high levels of ‘good’ high density lipoprotein cholesterol (HDL-C) are at increased risk of coronary heart disease, contrary to earlier evidence that people with more HDL-C are usually at lower heart disease risk. This finding comes from an international study involving researchers at the University of Cambridge, funded by the British Heart Foundation (BHF).

Large-scale collaborative research like this paves the way for further studies of rare mutations that might be significantly increasing people’s risk of a deadly heart attack

Adam Butterworth

The discovery, published today in Science, could move researchers away from potentially ineffective HDL-raising drugs for treating coronary heart disease, and lead to the development of new treatments that help reduce people’s risk of heart attack.

The researchers studied people with a rare genetic mutation in the SCARB1 gene, called the P376L variant, which causes the body to have high levels of ‘good’ HDL-C. High levels of ‘good’ cholesterol are commonly associated with reduced risk for coronary heart disease. Challenging this view, the researchers unexpectedly found that people with the rare mutation, who had increased levels of HDL-C, had an 80 per cent increased relative risk of the disease – a figure almost equivalent to the increased risk caused by smoking.

Coronary heart disease is responsible for nearly 70,000 deaths every year, almost entirely through heart attacks, making it the UK’s single biggest killer. The disease involves the build-up of fatty material, or plaque, in the coronary artery walls. If large quantities accumulate in the vessel walls, blood flow to the heart can become restricted or blocked, increasing risk of a heart attack.

The international team of scientists included BHF-funded researchers Professor Sir Nilesh Samani at the University of Leicester and Professor John Danesh at the University of Cambridge. They initially looked at the DNA of 328 individuals with very high levels of HDL-C in the blood and compared them to 398 people with relatively low HDL-C. As the P376L variant they found was so rare, they then looked at its effects on HDL-C and heart disease in more than half a million additional people.

Dr Adam Butterworth, from the Cardiovascular Epidemiology Unit, University of Cambridge, and co-investigator of this study, said: “We found that people carrying a rare genetic mutation causing higher levels of the so-called ‘good’ HDL-cholesterol are, unexpectedly, at greater risk of heart disease. This discovery could lead to new drugs that improve the processing of HDL-C to prevent devastating heart attacks.

“Large-scale collaborative research like this paves the way for further studies of rare mutations that might be significantly increasing people’s risk of a deadly heart attack. These discoveries also give researchers the knowledge we need to develop better treatments.”

Professor Peter Weissberg, Medical Director at the BHF, added: “This is an important study that sheds light on one of the major puzzles relating to cholesterol and heart disease, which is that despite strong evidence showing HDL-C reduces heart disease risk, clinical trials on the effects of HDL-C-raising drugs have been disappointing.

“These new findings suggest that the way in which HDL-C is handled by the body is more important in determining risk of a heart attack than the levels of HDL-C in the blood. Only by understanding the underlying biology that links HDL-C with heart attacks can we develop new treatments to prevent them. These unexpected findings pave the way for further research into the SCARB1 pathway to identify new treatments to reduce heart attacks in the future.”

Additional funding for the study in the USA came from the National Center for Research Resources and the National Center for Advancing Translational Sciences of the National Institute of Health.

Reference
Zanoni, P et al. Rare variant in scavenger receptor BI raises HDL cholesterol and increases risk of coronary heart disease. Science; 10 Mar 2016; DOI: 10.1126/science.aad3517

Adapted from a press release from the British Heart Foundation


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

Lines of Thought: Discoveries That Changed the World

Lines of Thought: Discoveries that Changed the World

source: www.cam.ac.uk

Some of the world’s most valuable books and manuscripts – texts which have altered the very fabric of our understanding – will go on display in Cambridge this week as Cambridge University Library celebrates its 600th birthday with a once-in-a-lifetime exhibition of its greatest treasures.

What started in 1416 as a small collection of manuscripts locked in wooden chests, has now grown into a global institution housing eight million books and manuscripts, billions of words, and millions of images, all communicating thousands of years of human thought.

Anne Jarvis

Lines of Thought: Discoveries that Changed the World opens free to the public this Friday (March 11) and celebrates 4,000 years of recorded thought through the Library’s unique and irreplaceable collections. More than 70 per cent of the exhibits are displayed to the public for the first time in this exhibition.

Tracing the connections between Darwin and DNA, Newton and Hawking, and 3,000-year-old Chinese oracle bones and Twitter, the exhibition investigates, through six distinct themes, how Cambridge University Library’s millions of books and manuscripts have transformed our understanding of life here on earth and our place among the stars.

The iconic Giles Gilbert Scott building, opened in the 1930s, now holds more than eight million books, journals, maps and magazines – as well as some of the world’s most iconic scientific, literary and cultural treasures.

The new exhibition puts on display Newton’s own annotated copy of Principia Mathematica, Darwin’s papers on evolution, 3,000-year-old Chinese oracle bones, a cuneiform tablet from 2,000 BC, and the earliest reliable text for 20 of Shakespeare’s plays.

Other items going on display include:

  • Edmund Halley’s handwritten notebook/sketches of Halley’s Comet (1682)
  • Stephen Hawking’s draft typescript of A Brief History of Time
  • Darwin’s first pencil sketch of Species Theory and his Primate Tree
  • A 2nd-century AD fragment of Homer’s Odyssey
  • The Nash Papyrus – a 2,000-year-old copy of the Ten Commandments
  • Codex Bezae – a 5th-century New Testament manuscript, crucial to our understanding of the Bible
  • A hand-coloured copy of Vesalius’ 1543 Epitome – one of the most influential works in western medicine
  • The earliest known record of a human dissection in England (1564)
  • A Babylonian tablet dated 2039 BCE (the oldest object in the library)
  • The Gutenberg Bible – the earliest substantive printed book in Western Europe (1455)
  • The Book of Deer, a 10th-century gospel book thought to be the oldest Scottish book and the first example of written Gaelic
  • The first catalogue listing the contents of the Library in 1424, barely a decade after it was first identified in the wills of William Loring and William Hunden

The six Lines of Thought featured in the exhibition are: From clay tablets to Twitter feed (Revolutions in human communication); The evolution of genetics (From Darwin to DNA); Beginning with the word (Communicating faith); On the shoulders of giants (Understanding gravity); Eternal lines (Telling the story of history) and Illustrating anatomy (Understanding the body).

University Librarian Anne Jarvis said: “It’s extraordinary to think that the University Library, which started in 1416 as a small collection of manuscripts locked in wooden chests, has now grown into a global institution housing eight million books and manuscripts, billions of words, and millions of images, all communicating thousands of years of human thought.

“Our spectacular exhibition showcases six key concepts in human history that have been critical in shaping the world and culture we know today, illustrating the myriad lines of thought that take us back into the past, and forward to tomorrow’s research, innovation and literature.”

The University Library, which is older than both the British Library and the Vatican Library, has more than 125 miles of shelving and more than two million books immediately available to readers – making it the largest open-access library in Europe.

The first Line of Thought featured in the exhibition, From clay tablet to Twitter, begins with a tiny 4,000-year-old tablet used as a receipt for wool: evidence of an advanced civilisation using a cuneiform script and the Sumerian language, probably written in Girsu (southern Iraq) and precisely dated to 2039 BCE. The tablet is on public display for the first time in this exhibition.

From there, it charts the many and varied revolutions in communications throughout history, taking in Chinese oracle bones, the Gutenberg Bible, a palm-leaf manuscript written in 1015 AD, newspapers, chapbooks and 20th-century Penguin paperbacks, before ending with a book containing Shakespeare’s Hamlet written in tweets.

Objects going on display for the first time during Lines of Thought include: the Book of Deer, Vesalius’s 3D manikin of the human body, William Morris’s extensively annotated proofs of his edition of Beowulf, a wonderful caricature of Darwin, and works by Copernicus, Galileo and Jocelyn Bell Burnell, the discoverer of pulsars.

“For six centuries, the collections of Cambridge University Library have challenged and changed the world around us,” added Jarvis.  “Across science, literature and the arts, the millions of books, manuscripts and digital archives we hold have altered the very fabric of our understanding.

“Only in Cambridge can you find Newton’s greatest works sitting alongside Darwin’s most important papers on evolution, or Sassoon’s wartime poetry books taking their place next to the Gutenberg Bible and the archive of Margaret Drabble.”

To celebrate the Library’s 600th anniversary, the Library has selected one iconic item from each theme within the exhibition to be digitised and made available within a free iPad app, Words that Changed the World. Readers can turn the pages of these masterworks of culture and science, from cover to cover, accompanied by University experts explaining their importance and giving contextual information.

Lines of Thought: Discoveries that Changed the World opens to the public on Friday, March 11, 2016 and runs until Friday, September 30, 2016. Entry is free.

The exhibition is also available to view online, and items from the exhibition have also been digitised and made available on the Cambridge Digital Library.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/lines-of-thought-discoveries-that-changed-the-world

AI Crossword-Solving Application Could Make Machines Better at Understanding Language

AI crossword-solving application could make machines better at understanding language

source: www.cam.ac.uk

A web-based machine-learning system solves crossword puzzles far better than commercially available products, and may help machines better understand language.

Despite recent progress in AI, problems involving language understanding are particularly difficult.

Felix Hill

Researchers have designed a web-based platform which uses artificial neural networks to answer standard crossword clues better than existing commercial products specifically designed for the task. The system, which is freely available online, could help machines understand language more effectively.

In tests against commercial crossword-solving software, the system, designed by researchers from the UK, US and Canada, was more accurate at answering clues that were single words (e.g. ‘culpability’ – guilt), a short combination of words (e.g. ‘devil devotee’ – Satanist), or a longer sentence or phrase (e.g. ‘French poet and key figure in the development of Symbolism’ – Baudelaire). The system can also be used as a ‘reverse dictionary’, in which the user describes a concept and the system returns possible words to describe that concept.

The researchers used the definitions contained in six dictionaries, plus Wikipedia, to ‘train’ the system so that it could understand words, phrases and sentences – using the definitions as a bridge between words and sentences. Their results, published in the journal Transactions of the Association for Computational Linguistics, suggest that a similar approach may lead to improved output from more general language understanding and dialogue systems and information retrieval engines in general. All of the code and data behind the application has been made freely available for future research.
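A reverse dictionary of the kind described can be sketched with word embeddings: average the vectors of the words in the user's description, then return the nearest vocabulary word. The tiny hand-made 3-d vectors below are purely illustrative; the actual system learns high-dimensional representations from hundreds of thousands of dictionary definitions with neural networks.

```python
import numpy as np

# Toy reverse dictionary: describe a concept, get back candidate words.
# WORD_VECTORS is an invented miniature vocabulary for illustration only.

WORD_VECTORS = {
    "guilt": np.array([0.9, 0.1, 0.0]),
    "joy":   np.array([0.0, 0.9, 0.1]),
    "blame": np.array([0.8, 0.2, 0.1]),
    "fault": np.array([0.85, 0.15, 0.05]),
}

def embed(description):
    """Average the vectors of the known words in a description."""
    vecs = [WORD_VECTORS[w] for w in description.lower().split() if w in WORD_VECTORS]
    return np.mean(vecs, axis=0)

def reverse_dictionary(description, k=1):
    """Return the k vocabulary words closest (by cosine) to the description."""
    query = embed(description)
    def cosine(v):
        return float(query @ v / (np.linalg.norm(query) * np.linalg.norm(v)))
    ranked = sorted(WORD_VECTORS, key=lambda w: cosine(WORD_VECTORS[w]), reverse=True)
    return ranked[:k]
```

The averaging step is the "bridge" idea in miniature: a multi-word description and a single word end up in the same vector space, so they can be compared directly.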

“Over the past few years, there’s been a mini-revolution in machine learning,” said Felix Hill of the University of Cambridge’s Computer Laboratory, one of the paper’s authors. “We’re seeing a lot more usage of deep learning, which is especially useful for language perception and speech recognition.”

Deep learning refers to an approach in which artificial neural networks with little or no prior ‘knowledge’ are trained to recreate human abilities using massive amounts of data. For this particular application, the researchers used dictionaries – training the model on hundreds of thousands of definitions of English words, plus Wikipedia.

“Dictionaries contain just about enough examples to make deep learning viable, but we noticed that the models get better and better the more examples you give them,” said Hill. “Our experiments show that definitions contain a valuable signal for helping models to interpret and represent the meaning of phrases and sentences.”

Working with Anna Korhonen from Cambridge’s Department of Theoretical and Applied Linguistics, and researchers from the Université de Montréal and New York University, Hill used the model as a way of bridging the gap between machines that understand the meanings of individual words and machines that can understand the meanings of phrases and sentences.

“Despite recent progress in AI, problems involving language understanding are particularly difficult, and our work suggests many possible applications of deep neural networks to language technology,” said Hill. “One of the biggest challenges in training computers to understand language is recreating the many rich and diverse information sources available to humans when they learn to speak and read.”

However, there is still a long way to go. For instance, when Hill’s system receives a query, the machine has no idea about the user’s intention or the wider context of why the question is being asked. Humans, on the other hand, can use their background knowledge and signals like body language to figure out the intent behind the query.

Hill describes recent progress in learning-based AI systems in terms of behaviourism and cognitivism: two movements in psychology that affect how one views learning and education. Behaviourism, as the name implies, looks at behaviour without looking at what the brain and neurons are doing, while cognitivism looks at the mental processes that underlie behaviour. Deep learning systems like the one built by Hill and his colleagues reflect a cognitivist approach, but for a system to have something approaching human intelligence, it would have to have a little of both.

“Our system can’t go too far beyond the dictionary data on which it was trained, but the ways in which it can are interesting, and make it a surprisingly robust question and answer system – and quite good at solving crossword puzzles,” said Hill. While it was not built with the purpose of solving crossword puzzles, the researchers found that it actually performed better than commercially-available products that are specifically engineered for the task.

Existing commercial crossword-answering applications function in a similar way to a Google search, with one system able to reference over 1100 dictionaries. While this approach has advantages if you want to look up a definition verbatim, it works less well when you input a question or query that the model has never seen in training. It also makes it incredibly ‘heavy’ in terms of the amount of memory it requires. “Traditional approaches are like lugging many heavy dictionaries around with you, whereas our neural system is incredibly light,” said Hill.

According to the researchers, the results show the effectiveness of definition-based training for developing models that understand phrases and sentences. They are currently looking at ways of enhancing their system, specifically by combining it with more behaviourist-style models of language learning and linguistic interaction.

Reference:
Hill, Felix et al. Learning to Understand Phrases by Embedding the Dictionary. Transactions of the Association for Computational Linguistics, v. 4, pp. 17–30, Feb. 2016. ISSN 2307-387X.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/ai-crossword-solving-application-could-make-machines-better-at-understanding-language

Overcrowded Internet Domain Space is Stifling Demand, Suggesting a Future ‘not-com’ Boom

Overcrowded Internet domain space is stifling demand, suggesting a future ‘not-com’ boom

source: www.cam.ac.uk

New research suggests that a lack of remaining domain names with easy-to-remember – and consequently valuable – word combinations is restricting Internet growth, with untapped demand equivalent to as much as 25% of all current domains being held back. The study’s author contends that the findings show ICANN’s release of new top-level domains could prove a wise policy.

 

What I find fascinating is that the observed transaction prices of domain names reveal a free market valuation of linguistic characteristics and language itself

Thies Lindenthal

As the digital age dawned, pioneers successfully snapped up broad swathes of the most popular and memorable domain names, such as nouns, places and combinations thereof – claiming valuable ‘virtual real estate’ under top-level domains such as dot-com, dot-co-dot-uk and so on.

Now, the first research to try and define current demand for Internet domain names suggests that the drying up of intuitive and familiar word combinations has seen domain registration drop far below the expected appetite, given the extent to which we now live online, with new entrepreneurs struggling to find “their slice” of virtual space.

In fact, the study estimates that the lack of available high quality domains featuring popular names, locations and things could be stifling as much as a further 25% of the total current registered domains.

With the total standing at around 294 million as of summer 2015, this could mean over 73m potential domains stymied due to an inability to register relevant word combinations likely to drive traffic for personal or professional purposes.

However, as the Internet Corporation for Assigned Names and Numbers (ICANN) has begun to roll out the option to issue brand new top-level domains for almost any word – whether it’s dot-hotel, dot-books or dot-sex, dubbed the ‘not-coms’ – the research suggests there is substantial untapped demand that could fuel additional growth in domain registrations.

Dr Thies Lindenthal from the University of Cambridge, who conducted the study, says that – while the domain name market may be new – the economics is not. The market fits nicely onto classic models of urban economics, he says, and – as with property – a lot rides on location.

“Cyberspace is no different from traditional cities, at least in economic terms. In a basic city model, you have a business district to which all residents commute, and property value is determined by proximity to that hub,” said Lindenthal, from Cambridge’s Department of Land Economy.

“It’s similar in cyberspace. The commute to, and consequent value of, virtual locations depend on linguistic attributes: familiarity, memorability and, importantly, length. A virtual commute is about the ease with which a domain name is remembered and the time it takes to type.

“The snappier and more recognisable a domain, the more it is going to be worth. What I find fascinating is that the observed transaction prices of domain names reveal a free market valuation of linguistic characteristics and language itself,” he said.

From 2007 onwards, annual additions to the domain stock began to lag, while between 2006 and 2012 re-sale prices of domain names already registered rose 63% – indicating a demand for virtual ‘locations’ outpacing the supply of available attractive names, with competition driving up prices.

Recently, ICANN began releasing 1,400 new top-level domains, the ‘not-coms’, to supplement existing extensions such as dot-com and dot-org, with the aim of expanding the supply of domains.

Google was one of the first to use a ‘not-com’ to get around the domain name shortage. Finding all obvious domains taken for its new parent company ‘Alphabet’, the company acquired space on the new dot-xyz domain to create the canny web address: www.abc.xyz.

Serious money is currently being invested in ‘not-coms’. With initial application fees around the $185k mark, Lindenthal says it could be as much as $2m before you have the necessary infrastructure to secure and manage your new top level domain, but, once owned, you are able to set prices for anyone who wants to acquire virtual real estate under that domain.

“By 2013, as much as $350m had already been put down in application fees alone, and further billions must have been invested. Clearly, corporations and entrepreneurs have trust in the new domains being able to serve a previously unmet demand, and from this research it appears some of them may be right,” said Lindenthal.

The set of catchy keywords that appeal to humans is still bound by the way we process language

For the study, published today in the Journal of Real Estate Finance & Economics, Lindenthal set out to get a rough idea of the demand for name registration not served by current top-level domains.

Looking at just dot-coms, he compared existing registrations with census data for popular family names in the US. “You have to assume someone called Miller is as likely to register a domain with their name in it as someone called Smith, for example. So, roughly speaking, if there are twice as many Millers, you would expect to see twice as many domains with that name in it.”

Lindenthal found that the more prevalent the family name, the lower the number of domains featuring that name per head of population. Moreover, a one per cent increase in the frequency of a surname pushes up domains featuring that name by only 0.74% – suggesting a substantial gap between likely demand and current domain registration.
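As a rough numerical sketch of that constant-elasticity (log-log) relationship: only the 0.74 elasticity below comes from the study; the surname counts and baseline registration figures are invented purely for illustration.

```python
# Illustrative constant-elasticity model of surname-based domain demand.
# Only the 0.74 elasticity is from the study; all other numbers are
# hypothetical.

def expected_domains(freq, base_freq, base_domains, elasticity=0.74):
    """Domains predicted for a surname `freq / base_freq` times as common
    as a baseline name, under a constant-elasticity (log-log) model."""
    return base_domains * (freq / base_freq) ** elasticity

# Suppose a baseline surname with 100,000 bearers accounts for 1,000
# domains. Under proportional demand, a name twice as common would get
# 2,000 domains; the fitted elasticity of 0.74 predicts only ~1,670.
predicted = expected_domains(200_000, 100_000, 1_000)
shortfall = 2_000 - predicted  # the 'missing' registrations the study infers
```

An elasticity below one means the more common the name, the larger this proportional shortfall – which is how the study arrives at an aggregate gap of up to 25% of current registrations.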

Lindenthal also explored domain registration featuring city names compared to size of the population, and found a similar gap between expected demand and current domains.

Using statistical modelling, he concludes that – based on the available data – demand for domain names not met by available word combinations amounts to as much as 25% of all currently registered Internet domains.

A shorter ‘cyber-commute’ was found to be more desirable. Increasing the length of a surname by just one character, from six to seven, reduces the number of registrations by a remarkable 24%.
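The per-character effect compounds multiplicatively, which can be sketched as follows (only the 24% per-character drop is from the study; the baseline figure is hypothetical):

```python
# Hypothetical sketch of the name-length ('cyber-commute') effect:
# each extra character cuts registrations by roughly 24%, compounding.
# The 10,000 baseline is an invented figure for illustration.

def registrations(base, extra_chars, drop_per_char=0.24):
    """Expected registrations after lengthening a name by extra_chars."""
    return base * (1 - drop_per_char) ** extra_chars

six_letter = 10_000
nine_letter = registrations(six_letter, 3)  # fewer than half survive
```

Three extra characters leave roughly 0.76³ ≈ 44% of the original registrations, which is why concise names dominate the market.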

The shorter the better holds true for emerging ‘not-coms’, says Lindenthal. “Shorter names are more valuable and lead to greater registrations. With new top level domains named for cities, for example, it was the concise city names – dot-london; dot-miami; dot-berlin – that went first, and now anyone who wants to buy virtual space under those domains has to buy it from the new owner.

“More cumbersome city names are not seen as a good investment. For example, despite being a global centre for Internet technology, dot-sanfrancisco is still up for grabs. Do you want to be the digital mayor of a new San Francisco domain? $2 million and it’s yours!”

However, while the new ‘not-com’ boom will open up huge new areas of the Internet, Lindenthal says that the overarching constraints will kick in again further down the line – which may be precisely what makes the new top level domains a worthy investment.

“The set of catchy keywords that appeal to humans is still bound by the way we process language, even if we had unlimited choice in top level domains,” he said.

“Legend has it that Mark Twain advised to buy land, since ‘they have stopped making it’. Similarly, one can argue that investing in top-level domains is a promising business venture, since we have stopped inventing language, at least at a large scale.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/overcrowded-internet-domain-space-is-stifling-demand-suggesting-a-future-not-com-boom

520 Million-Year-Old Fossilised Nervous System Is Most Detailed Example Yet Found

520 million-year-old fossilised nervous system is most detailed example yet found

source: www.cam.ac.uk

A 520 million-year-old fossilised nervous system – so well-preserved that individually fossilised nerves are visible – is the most complete and best example yet found, and could help unravel how the nervous system evolved in early animals.

The more of these fossils we find, the more we will be able to understand how the nervous system – and how early animals – evolved.

Javier Ortega-Hernández

Researchers have found one of the oldest and most detailed fossils of the central nervous system yet identified, from a crustacean-like animal that lived more than 500 million years ago. The fossil, from southern China, has been so well preserved that individual nerves are visible, the first time this level of detail has been observed in a fossil of this age.

The findings, published in the Proceedings of the National Academy of Sciences, are helping researchers understand how the nervous system of arthropods – creepy crawlies with jointed legs – evolved. Finding any fossilised soft tissue is rare, but this particular find, by researchers in the UK, China and Germany, represents the most detailed example of a preserved nervous system yet discovered.

The animal, called Chengjiangocaris kunmingensis, lived during the Cambrian ‘explosion’, a period of rapid evolutionary development about half a billion years ago when most major animal groups first appear in the fossil record. C. kunmingensis belongs to a group of animals called fuxianhuiids, and was an early ancestor of modern arthropods – the diverse group that includes insects, spiders and crustaceans.

“This is a unique glimpse into what the ancestral nervous system looked like,” said study co-author Dr Javier Ortega-Hernández, of the University of Cambridge’s Department of Zoology. “It’s the most complete example of a central nervous system from the Cambrian period.”

Over the past five years, researchers have identified partially-fossilised nervous systems in several different species from the period, but these have mostly been fossilised brains. And in most of those specimens, the fossils only preserved details of the profile of the brain, meaning the amount of information available has been limited.

C. kunmingensis looked like a sort of crustacean, with a broad, almost heart-shaped head shield, and a long body with pairs of legs of varying sizes. Through careful preparation of the fossils, which involved chipping away the surrounding rock with a fine needle, the researchers were able to view not only the hard parts of the body, but fossilised soft tissue as well.

The vast majority of fossils we have are bone and other hard body parts, such as teeth or exoskeletons. Since the nervous system and soft tissues are essentially made of fat-like substances, finding them preserved as fossils is extremely rare. The researchers behind this study first identified a fossilised central nervous system in 2013, but the new material has allowed them to investigate the significance of these findings in much greater depth.


The central nervous system coordinates all neural and motor functions. In vertebrates, it consists of the brain and spinal cord, but in arthropods it consists of a condensed brain and a chain-like series of interconnected masses of nervous tissue called ganglia that resemble a string of beads.

Like modern arthropods, C. kunmingensis had a nerve cord – which is analogous to a spinal cord in vertebrates – running throughout its body, with each one of the bead-like ganglia controlling a single pair of walking legs.

Closer examination of the exceptionally preserved ganglia revealed dozens of spindly fibres, each measuring about five thousandths of a millimetre in length. “These delicate fibres displayed a highly regular distribution pattern, and so we wanted to figure out if they were made of the same material as the ganglia that form the nerve cord,” said Ortega-Hernández. “Using fluorescence microscopy, we confirmed that the fibres were in fact individual nerves, fossilised as carbon films, offering an unprecedented level of detail. These fossils greatly improve our understanding of how the nervous system evolved.”

For Ortega-Hernández and his colleagues, a key question is what this discovery tells us about the evolution of early animals, since the nervous system contains so much information. Further analysis revealed that some aspects of the nervous system in C. kunmingensis appear to be structured similarly to those of modern priapulids (penis worms) and onychophorans (velvet worms), with regularly spaced nerves coming out from the ventral nerve cord.

In contrast, these dozens of nerves have been lost independently in the tardigrades (water bears) and modern arthropods, suggesting that simplification played an important role in the evolution of the nervous system.

Possibly one of the most striking implications of the study is that the exceptionally preserved nerve cord of C. kunmingensis represents a unique structure that is otherwise unknown in living organisms. The specimen demonstrates the unique contribution of the fossil record towards understanding the early evolution of animals during the Cambrian period. “The more of these fossils we find, the more we will be able to understand how the nervous system – and how early animals – evolved,” said Ortega-Hernández.

The research was supported in part by Emmanuel College, Cambridge.

Reference:
Jie Yang et al. ‘The fuxianhuiid ventral nerve cord and early nervous system evolution in Panarthropoda.’ PNAS (2016). DOI: 10.1073/pnas.1522434113



– See more at: http://www.cam.ac.uk/research/news/520-million-year-old-fossilised-nervous-system-is-most-detailed-example-yet-found

Pollinator Species Vital To Our Food Supply Are Under Threat, Warn Experts

Pollinator species vital to our food supply are under threat, warn experts

source: www.cam.ac.uk

A new report from experts and Governments around the world addresses threats to animal pollinators such as bees, birds and bats, which are vital to more than three-quarters of the world’s food crops and intimately linked to human nutrition, culture and millions of livelihoods. Scientists say simple strategies could harness pollinator power to boost agricultural yields.

People’s livelihoods and culture are intimately linked with pollinators around the world. All the major world religions have sacred passages that mention bees

Lynn Dicks

Delegates from almost 100 national Governments have gathered in Kuala Lumpur to discuss how to address the threats facing animal pollinators: the bees, flies, birds, butterflies, moths, wasps, beetles and bats that transport the pollen essential to the reproduction of much of the world’s crops and plant life.

It is the first time the global community has gathered on this scale to focus on the preservation of the small species that help fertilise more than three quarters of the leading kinds of global food crops and nearly 90% of flowering wild plant species.

A report on pollinator species produced over two years by an international team of 77 scientists, including Cambridge’s Dr Lynn Dicks, has been adopted by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) today. IPBES has 124 member Governments.

The report is the first assessment ever issued by IPBES, and the first time that such an assessment has brought together multiple knowledge systems comprehensively, including scientific, and indigenous and local knowledge. It will highlight the threats to animal pollinators, and the major implications of these species’ declines for the world’s food supply and economy.

But the report also details the ways that pollinator power can be used for the benefit of biodiversity, food security and people: by harnessing natural relationships between plants and animals to improve agricultural yields and strengthen local communities.

“It is incredible to see international Governments coming together to discuss the problem of pollinators in this way,” says Lynn Dicks, from Cambridge University’s Department of Zoology.

“Without pollinators, many of us would not be able to enjoy chocolate, coffee and vanilla ice cream, or healthy foods like blueberries and brazil nuts. The value of pollinators goes way beyond this. People’s livelihoods and culture are intimately linked with pollinators around the world. All the major world religions have sacred passages that mention bees.”

The volume of pollinator-dependent food produced has increased by 300% over the past 50 years, including most fruits from apple to avocado, as well as coffee, cocoa, and nuts such as cashews. This shows an increasing dependence of agriculture on pollinators.

Such crops now occupy around 35% of all agricultural land. While these crops rely on animal pollination to varying degrees – along with, for example, wind-blown pollination – the scientists estimate that between 5 and 8% of all global crop production is directly attributable to animal pollinators, with an annual market value that may be as much as 577 billion US dollars.

However, the experts warn that a variety of agricultural practices are contributing to steep declines in key pollinating species across Europe and North America. In Europe, populations are declining for at least 37% of bee and 31% of butterfly species.

A lack of data for Africa, Latin America and Asia means we are currently in the dark about the status of pollinators in many parts of the world, say the scientists. Where national ‘red lists’ are available, they show that up to 50% of global bee species, for example, may be threatened with extinction.

For some crops, including cocoa, wild pollinators contribute more to global crop production than managed honey bees. Wild bee populations are of particular concern, as bees are “dominant” pollinators, say scientists, and visit over 90% of the leading global crop types.

Changes in land-use and habitat destruction are key drivers of pollinator decline. Increasing crop monocultures – where the same plant is grown homogeneously across vast swathes of land – mean that the plant diversity required by many pollinators is dwindling.

Increased use of pesticides is a big problem for many species – insecticides such as neonicotinoids have been shown to harm the survival of wild bees, for example – and climate change is shifting the seasonal activities of key pollinators, the full effects of which may not be apparent for several decades.

The decline of practices based on indigenous and local knowledge also threatens pollinators. These practices include traditional farming systems, maintenance of diverse landscapes and gardens, kinship relationships that protect specific pollinators, and cultures and languages that are connected to pollinators.

Everyone should think carefully about whether they need to use insecticides and herbicides in their own gardens

Many livelihoods across the world depend on pollinating animals, say scientists. Pollinator-dependent crops include leading export products in developing countries (such as coffee and cocoa) and developed countries (such as almonds), providing employment and income for millions of people.

If the worst-case scenario – a complete loss of animal pollinators – occurred, not only would between 5 and 8% of the world’s food production be wiped out, it would lower the availability of crops and wild plants that provide essential micronutrients to human diets, risking vastly increased numbers of people suffering from Vitamin A, iron and folate deficiency.

However, the assessment says that by deploying strategies for supporting pollinators, we could not only preserve the volume of food they help us produce, but we could boost populations and in doing so could even improve production in sustainable farming systems, so-called “ecological intensification”.

Many pollinator-friendly strategies are relatively straightforward. Maintaining patches of semi-natural habitat throughout productive agricultural land would provide nesting sites and ‘floral resources’ for many pollinators. This could be as simple as strips of wild flowers breaking up crop monocultures, for example, or identifying and tending to nest trees in farming settings.

Certain traditional crop rotation practices using seasonal indicators such as flowering to trigger planting also help to maintain diversity – and it is diversity that is at the heart of flourishing pollinator populations.

There are actions that Governments around the world could take, says Dr Dicks, such as raising the standards of pesticide and GMO risk assessment, or supporting training for farmers in how to manage pollination and reduce pesticide use. National-level monitoring of wild pollinators, especially bees, would help to address the lack of long term data on pollinator numbers.

“There are many things individual people can do to help pollinators, and safeguard them for the future,” says Dr Dicks.

“Planting flowers that pollinators use for food, or looking after their habitats in urban and rural areas, will help. Everyone should also think carefully about whether they need to use insecticides and herbicides in their own gardens.”

More information about how to help wild pollinators can be found at the Bees Needs website, which is part of the National Pollinator Strategy for England.

Inset image: Lynn Dicks at the IPBES meeting in Kuala Lumpur. 



– See more at: http://www.cam.ac.uk/research/news/pollinator-species-vital-to-our-food-supply-are-under-threat-warn-experts

Being Overweight Linked To Poorer Memory

Being overweight linked to poorer memory

source: www.cam.ac.uk

Overweight young adults may have poorer episodic memory – the ability to recall past events – than their peers, suggests new research from the University of Cambridge, adding to increasing evidence of a link between memory and overeating.

How vividly we remember a recent meal, for example today’s lunch, can make a difference to how hungry we feel

Lucy Cheke

In a preliminary study published in The Quarterly Journal of Experimental Psychology, researchers from the Department of Psychology at Cambridge found an association between high body mass index (BMI) and poorer performance on a test of episodic memory.

Although only a small study, its results support existing findings that excess bodyweight may be associated with changes to the structure and function of the brain and its ability to perform certain cognitive tasks optimally. In particular, obesity has been linked with dysfunction of the hippocampus, an area of the brain involved in memory and learning, and of the frontal lobe, the part of the brain involved in decision making, problem solving and emotions, suggesting that it might also affect memory; however, evidence for memory impairment in obesity is currently limited.

Around 60% of UK adults are overweight or obese: this number is predicted to rise to approximately 70% by 2034. Obesity increases the risk of physical health problems, such as diabetes and heart disease, as well as psychological health problems, such as depression and anxiety.

“Understanding what drives our consumption and how we instinctively regulate our eating behaviour is becoming more and more important given the rise of obesity in society,” says Dr Lucy Cheke. “We know that to some extent hunger and satiety are driven by the balance of hormones in our bodies and brains, but psychological factors also play an important role – we tend to eat more when distracted by television or working, and perhaps to ‘comfort eat’ when we are sad, for example.

“Increasingly, we’re beginning to see that memory – especially episodic memory, the kind where you mentally relive a past event – is also important. How vividly we remember a recent meal, for example today’s lunch, can make a difference to how hungry we feel and how much we are likely to reach out for that tasty chocolate bar later on.”

The researchers tested 50 participants aged 18-35, with BMIs ranging from 18 to 51 – a BMI of 18-25 is considered healthy, 25-30 overweight, and over 30 obese. The participants took part in a memory test known as the ‘Treasure-Hunt Task’, where they were asked to hide items around complex scenes (for example, a desert with palm trees) across two ‘days’. They were then asked to remember which items they had hidden, where they had hidden them, and when they were hidden. Overall, the team found an association between higher BMI and poorer performance on the tasks.

The researchers say that the results could suggest that the structural and functional changes in the brain previously found in those with higher BMI may be accompanied by a reduced ability to form and/or retrieve episodic memories. As the effect was shown in young adults, it adds to growing evidence that the cognitive impairments that accompany obesity may be present early in adult life.

This was a small, preliminary study and so the researchers caution that further research will be necessary to establish whether the results of this study can be generalised to overweight individuals in general, and to episodic memory in everyday life rather than in experimental conditions.

“We’re not saying that overweight people are necessarily more forgetful,” cautions Dr Cheke, “but if these results are generalizable to memory in everyday life, then it could be that overweight people are less able to vividly relive details of past events – such as their past meals. Research on the role of memory in eating suggests that this might impair their ability to use memory to help regulate consumption.

“In other words, it is possible that becoming overweight may make it harder to keep track of what and how much you have eaten, potentially making you more likely to overeat.”

Dr Cheke believes that this work is an important step in understanding the role of psychological factors in obesity. “The possibility that there may be episodic memory deficits in overweight individuals is of concern, especially given the growing evidence that episodic memory may have a considerable influence on feeding behaviour and appetite regulation,” she says.

Co-author Dr Jon Simons adds: “By recognising and addressing these psychological factors head-on, not only can we come to understand obesity better, but we may enable the creation of interventions that can make a real difference to health and wellbeing.”

The study was funded by the Medical Research Council and Girton College, University of Cambridge, and the James S McDonnell Foundation.

Reference
Cheke, LG et al. ‘Higher BMI is Associated with Episodic Memory Deficits in Young Adults.’ The Quarterly Journal of Experimental Psychology; 22 Feb 2016. DOI: 10.1080/17470218.2015.1099163



– See more at: http://www.cam.ac.uk/research/news/being-overweight-linked-to-poorer-memory

Flowers Tone Down The Iridescence of Their Petals and Avoid Confusing Bees

Flowers tone down the iridescence of their petals and avoid confusing bees

source: www.cam.ac.uk

Latest research shows that flowers’ iridescent petals, which may look plain to human eyes, are perfectly tailored to a bee’s-eye-view.

There are lots of optical effects in nature that we don’t yet understand… we are finding out that animals and plants have a lot more to say to the world and to each other

Beverley Glover

Iridescent flowers are never as dramatically rainbow-coloured as iridescent beetles, birds or fish, but their petals produce the perfect signal for bees, according to a new study published today in Current Biology.

Bees buzzing around a garden, looking for nectar, need to be able to spot flower petals and recognise which coloured flowers are full of food for them. Professor Beverley Glover from the University of Cambridge’s Department of Plant Sciences and Dr Heather Whitney from the University of Bristol found that iridescence – the shiny, colour-shifting effect seen on soap bubbles – makes flower petals more obvious to bees, but that too much iridescence confuses bees’ ability to distinguish colours.

Whitney, Glover and their colleagues found that flowers use more subtle, or imperfect, iridescence on their petals, which doesn’t interfere with the bees’ ability to distinguish subtly different colours, such as different shades of purple. Perfect iridescence, for example as found on the back of a CD, would make it more difficult for bees to distinguish between subtle colour variations and cause them to make mistakes in their flower choices.

“In 2009 we showed that some flowers can be iridescent and that bees can see that iridescence, but since then we have wondered why floral iridescence is so much less striking than other examples of iridescence in nature,” says Glover. “We have now discovered that floral iridescence is a trade-off that makes flower detection by bumblebees easier, but won’t interfere with their ability to recognise different colours.”

Bees use ‘search images’, based on previously-visited flowers, to remember which coloured flowers are a good source of nectar.

“On each foraging trip a bee will usually retain a single search image of a particular type of flower,” explains Glover, “so if they find a blue flower that is rich in nectar, they will then visit more blue flowers on that trip rather than hopping between different colours. If you watch a bee on a lavender plant, for example, you’ll see it visit lots of lavender flowers and then fly away – it won’t usually move from a lavender flower to a yellow or red flower.”

This colour recognition is vital for both the bees and the plants, which rely on the bees to pollinate them. If petals were perfectly iridescent, then bees could struggle to identify and recognise which colours are worthwhile visiting for nectar – instead, flowers have developed an iridescence signal that allows them to talk to bees in their own visual language.

The researchers created replica flowers that were either perfectly iridescent (using a cast of the back of a CD), imperfectly iridescent (using casts of natural flowers), or non-iridescent. They then tested how long it took for individual bees to find the flowers.

They found that the bees were much quicker to locate the iridescent flowers than the non-iridescent flowers, but it didn’t make a difference whether the flowers were perfectly or imperfectly iridescent. The bees were just as quick to find the replicas modelled on natural petals as they were to find the perfectly iridescent replicas.

When they tested how fast the bees were to find nectar-rich flowers amongst other, similarly-coloured flowers, they found that perfect iridescence impeded the bees’ ability to distinguish between the flowers – the bees were often confused and visited the similarly-coloured flowers that contained no nectar. However, imperfect iridescence, found on natural petals, didn’t interfere with this ability, and the bees were able to successfully locate the correct flowers that were full of nectar.

“Bees are careful shoppers in the floral supermarket, and floral advertising has to tread a fine line between dazzling its customers and being recognisable,” says Lars Chittka from Queen Mary University of London, another co-author of the study.

“To our eyes most iridescent flowers don’t look particularly striking, and we had wondered whether this is simply because flowers aren’t very good at producing iridescence,” says Glover. “But we are not the intended target – bees are, and they see the world differently from humans.”

“There are lots of optical effects in nature that we don’t yet understand. We tend to assume that colour is used for either camouflage or sexual signalling, but we are finding out that animals and plants have a lot more to say to the world and to each other.”

Glover and her colleagues are now working towards developing real flowers that vary in their amount of iridescence so that they can examine how bees interact with them.

“The diffraction grating that the flowers produce is not as perfectly regular as those we can produce on things like CDs, but this ‘advantageous imperfection’ appears to benefit the flower-bee interaction,” says Whitney.

Reference: Whitney, Heather et al. ‘Flower Iridescence Increases Object Detection in the Insect Visual System without Compromising Object Identity.’ Current Biology (2016). DOI: 10.1016/j.cub.2016.01.026

Professor Glover will be giving the talk ‘Can we improve crop pollination by breeding better flowers?’ at the Cambridge Science Festival on Sunday 20 March 2016. More information can be found here: http://www.sciencefestival.cam.ac.uk/events/can-we-improve-crop-pollinat…

Inset images: Iridescent flower (Copyright Howard Rice); Bee on non-iridescent flower (Edwige Moyroud).



– See more at: http://www.cam.ac.uk/research/news/flowers-tone-down-the-iridescence-of-their-petals-and-avoid-confusing-bees

Honeypot Britain? EU Migrants’ Benefits and The UK Referendum

Honeypot Britain? EU migrants’ benefits and the UK referendum

source: www.cam.ac.uk

Ahead of Britain’s EU referendum, research will explore the experiences of EU migrants working in the UK, and attitudes to employment and social security – for which there is little empirical evidence, despite intense political rhetoric. An initial study suggests workers from the EU are significantly under-represented in employment tribunals.

Accusations that the UK has become a ‘honeypot nation’ have become a key issue in the debate about the UK’s membership of the EU

Amy Ludlow

A new Cambridge University research project is gathering “robust empirical evidence” on the experience of EU migrant workers in the UK, exploring everything from hopes and expectations to how they find work and what use EU migrants make of benefits.

The research is timely, as perceptions of EU migrants undercutting British workers or acting as ‘benefits tourists’ are fuelling much of the debate in the lead-up to June’s EU Referendum.

Some MPs are warning that Britain has become a “honeypot nation” with its social security system acting as a primary pull factor, leading to David Cameron’s negotiation of a so-called ‘emergency brake’ on benefits for EU migrants.

However, critics argue that the government have been consistently unable to provide any evidence that this is the case. For example, last week’s response to a Parliamentary question on the amount spent on benefits to EU migrants was simply: “the information is not available”.

The EU Migrant Worker Project will aim to fill some of that knowledge gap. By combining interviews and focus groups with new methodologies for analysing available data, the research team hope to build an evidential base for EU migrants’ experiences of and attitudes toward Britain’s employment and social security systems.

The project, funded by the Economic and Social Research Council, is led by Professor Catherine Barnard and Dr Amy Ludlow from Cambridge’s Faculty of Law, and is launched today (Friday 26th February) with a roundtable discussion involving Labour former Home Secretary Charles Clarke and current Conservative MP Heidi Allen among others.

Professor Barnard said: “We hope to shed new light on the big question of how we adequately regulate migration within a socio-economically diverse EU and a post-financial crisis context. This question is central to Brexit and to the outcome of the UK’s referendum on EU membership.”

Initial work has already been carried out, and a study published last October in the Industrial Law Journal shows that EU migrants are using UK employment tribunals at much lower rates than would be expected relative to population size.

The study, the only one of its kind, is based on analysis of three years of Employment Tribunal decisions alongside field interviews. It suggests that migrant workers from EU-8 nations use employment tribunals over 85% less than would be expected, given the size of the workforce they represent.

The researchers identified various factors affecting migrants’ willingness and ability to use tribunals, including: lack of knowledge of their rights, reluctance to engage with the judicial system and, for those in the UK for a short time, a desire to maximise their earnings that is prioritised over complaints about mistreatment.

Under current EU law, EU migrants have the right to equal treatment with domestic workers in their terms and conditions of employment.

However, this initial study suggests that when it comes to employment conditions these may be rights that “exist more ‘on paper’ than in practice”, write the researchers.

“While we found good evidence to suggest that EU-8 workers were fairly treated by Employment Tribunal judges, navigating the system and accessing enough advice to understand the basic elements of the rights these workers are due is deeply problematic,” said Dr Ludlow.

“In interviews, we were told that large-scale cuts to local authorities have had a negative impact on resources such as Citizens Advice Bureaux. These are important sources of guidance for workers who cannot afford legal advice, including workers from the EU.”

Professor Barnard said that the introduction of Employment Tribunal fees has meant that many workers are now priced out of claiming their employment rights. “If the Government is concerned about migrant workers undercutting employment terms and conditions and labour standards for domestic workers, our research suggests that resource needs to be directed to enabling migrant workers to enforce their rights, and to properly resourcing enforcement organisations such as the Gangmasters Licensing Authority.”

Unlike some other EU Member States, the UK did not impose restrictions on the admission of workers coming from the so-called EU-8 countries (such as Poland and the Czech Republic), apart from the requirement to register under the Worker Registration Scheme.

Over a million EU-8 workers, taking advantage of their free movement rights under Article 45 of the Treaty on the Functioning of the European Union (TFEU), have arrived in the UK since 2004. They enjoy rights to equal treatment in any social and tax advantages offered to domestic workers – including the payment of child benefit and ‘in-work benefits’ such as tax credits.

Barnard and Ludlow plan to use the research design from their employment enforcement study and apply it to social security tribunals, to help give some sense of the number of EU migrants who claim benefits and the nature of the cases in which they are involved. They will also interview EU migrants and those that work closely with them, to explore migrants’ hopes, expectations and experiences.

I didn’t come to the UK just to work in any kind of job

Early interviews have highlighted the importance of online grassroots communities such as Facebook groups for migrant workers seeking advice. They also suggest that stopping child benefit for EU migrants may result in fewer family units making the transition to the UK, and in an increase in younger, unattached men working in the UK, who are likely to integrate less permanently within their host community.

While some interviewees are preparing to leave Britain, citing a better quality of life in their home nation (“I’m not interested in staying in the UK just because it’s possible”), the researchers also found migrant success stories.

One interviewee spoke of her determination to work in nursing: “I didn’t come to the UK just to work in any kind of job. Either I’m working my way towards nursing or, if that’s not possible, I’m going back.” After struggling through bar work and learning medical English on her days off, the woman is now a nurse in a local hospital.

“Many of the EU migrants we’ve talked to so far don’t understand our complex social security system; their only interest is in finding work,” said Dr Ludlow.

As well as one-to-one interviews and focus groups, the researchers will be making a documentary and providing migrant workers with disposable cameras. “It’s another way of trying to capture the migrant experience that offers an alternative insight to words on paper,” said Professor Barnard.

The project is a two-way process, she says, with minute-long podcasts summarising relevant aspects of the law, which will be available on the EU Migrant Worker Project website later this month.

“What we can offer the migrant community in return is quite detailed knowledge of the law and their rights and how to enforce those rights, both to claim employment rights but also social security benefits.”

Added Dr Ludlow: “Accusations that the UK has become a ‘honeypot nation’ have become a key issue in the debate about the UK’s membership of the EU.

“By gathering empirical evidence about EU migrants’ experiences of navigating the labour market and social security system in the UK, we hope to increase our understanding of EU and domestic law as it works in practice and to inform public opinion in anticipation of the referendum on 23 June and beyond.”

If you are interested in learning more about Professor Barnard and Dr Ludlow’s work, please email euworker@hermes.cam.ac.uk, tweet @eumigrantworker, or contact them on their Facebook page https://www.facebook.com/eu.migrantworker/. Their project website is: www.eumigrantworker.law.cam.ac.uk.



– See more at: http://www.cam.ac.uk/research/news/honeypot-britain-eu-migrants-benefits-and-the-uk-referendum

If General Practice Fails, The Whole NHS Fails, Argue Healthcare Experts

If general practice fails, the whole NHS fails, argue healthcare experts

source: www.cam.ac.uk

The current focus on financial crises in hospitals diverts attention from the crisis in general practice, argue Professor Martin Roland from the University of Cambridge and GP Sir Sam Everington in an editorial published in The BMJ today.

Hospitals’ £2bn deficit “certainly sounds dramatic”, they argue, “but hospitals don’t go bust – someone usually picks up the bill.” General practice doesn’t have that luxury, and its share of the NHS budget has fallen from 11% in 2006 to under 8.5% now.

Recent research shows that GPs are experiencing unprecedented levels of stress with increasing workload and overwhelming bureaucracy. A GP’s comment at a recent national conference encapsulates the sense of despair: “The pressure of work leaves me in constant fear of making mistakes”.

GPs are finding it harder to recruit trainees and to find partners to replace those who are increasingly retiring in their 50s.

Politicians and NHS leaders want more care to be moved into primary care, yet the share of funding devoted to general practice is falling as a high proportion of the NHS budget is channelled into hospitals, and in the past 10 years, the number of hospital consultants has increased at twice the rate of GPs.

GPs currently manage the great majority of patients without referral or admission to hospital but if this balance shifted only slightly, hospitals would be overwhelmed.

“It is general practice that makes the NHS one of the world’s most cost effective health services,” they say. The £136 cost per patient per year for unlimited general practice care is less than the cost of a single visit to a hospital outpatient department.

The authors, who are both internationally renowned experts in general practice, present a number of solutions. They say GPs need a “substantial injection of new funding” to provide more staff in primary care.

In addition, new roles are needed to take the strain off clinical staff, for example physician associates, pharmacists, and advanced practice nurses. They also argue that reviews of practices’ contracts that threaten serious financial destabilisation should be put on hold while a fair funding formula is developed to replace the 25-year-old ‘Carr-Hill’ formula.

NHS England should tackle spiralling indemnity costs by providing Crown Indemnity similar to that for hospital doctors, as GPs increasingly do work previously done by specialists.

Bureaucracy should be slashed, in part by changing the £224m Care Quality Commission inspection regime to one where only the 5-10% of practices found to be struggling are revisited within five years.

In hospitals, the ‘Choose and Book’ referral system needs radical reform – the authors estimate that communicating by phone, email, and online video link could reduce outpatient attendance by as much as 50% in some specialties. And the ‘Payment by Results’ system for funding hospitals must become a population based, capitated budget that incentivises hospitals to support patients and clinicians in the community.

The authors identify two ‘elephants in the room’ that can no longer be ignored. First, cuts to social care make it increasingly difficult for hospitals to discharge patients.

Second, the UK’s funding for healthcare has fallen well behind its European neighbours – now thirteenth out of 15 in healthcare expenditure as a percentage of gross domestic product. In 2000, Tony Blair promised to raise NHS spending to mid-European levels. Today, this would require another £22bn a year.

“Urgent action is needed to restore the NHS,” warn the authors. “But the crisis will not be averted by focusing on hospitals. If general practice fails, the whole NHS fails.”

Professor Martin Roland, Professor of Health Services Research at the University of Cambridge, adds: “GPs need to feel valued rather than continually criticised by politicians and regulators. Many other countries see primary care as the jewel in the crown of the NHS, yet many practices are at breaking point, with an increasing number simply handing in their contracts and closing.”

Sir Sam Everington, Tower Hamlets GP and chair of Tower Hamlets CCG, says: “Patients really value the support of their family doctor, particularly in crises like end of life care. Moving care into the community means supporting patients to die at home surrounded by their loved ones – this is one of many reasons why family medicine is critical to the NHS.

“Family medicine and new developments like social prescribing show the strengths of general practice in supporting vulnerable patients in all aspects of their physical and mental well-being.”

Reference
Martin Roland and Sam Everington. Tackling the crisis in general practice. The BMJ. 18 Feb 2016. dx.doi.org/10.1136/bmj.i942

Adapted from a press release by The BMJ.



– See more at: http://www.cam.ac.uk/research/news/if-general-practice-fails-the-whole-nhs-fails-argue-healthcare-experts

UK Online Alternative Finance Market Grows To £3.2 Billion In 2015

UK online alternative finance market grows to £3.2 billion in 2015

source: www.cam.ac.uk

The UK online alternative finance sector grew 84% in 2015, facilitating £3.2 billion in investments, loans and donations, according to a new report published today.

These areas of finance are increasingly becoming part of our everyday economic life.

Robert Wardrop

This is a significant increase in volume, but growth of the online alternative finance market is slowing: the 84% rise in 2014/15 is little more than half the 161% growth recorded in 2013/14. Although the year-on-year growth rate is slowing, the report said, the alternative finance industry still recorded substantive expansion across almost all models.

The report also highlights the rapid expansion of donations-based crowdfunding, the perceived risk of fraud and malpractice by the industry, and increasing institutionalisation – as around a quarter of P2P (peer-to-peer) loans are now funded by institutional investors, including traditional banks and government through organisations such as the British Business Bank.

Pushing Boundaries – 2015 UK Alternative Finance is jointly published by the Cambridge Centre for Alternative Finance at the University of Cambridge and UK innovation foundation Nesta, in partnership with KPMG and with the support of CME Group Foundation. It is the latest in an annual series of reports from the University of Cambridge Judge Business School and Nesta, which track the size and development of online alternative finance, such as P2P lending and crowdfunding, in the UK.

Key findings of the report, a survey of 94 crowdfunding and P2P lending platforms, include:

  • Increased share of the market for business finance: in 2015 it is estimated that online alternative finance platforms provided the equivalent of over 3% of all lending to SMEs (small and medium-sized enterprises) in the UK. For small businesses – those with a turnover of less than £1 million a year – P2P platforms provided lending equivalent to 13% of all new bank loans.
  • Institutionalisation is taking off: 2015 saw increased involvement from institutional investors in the online alternative finance market. The report shows that 32% of loans in P2P consumer lending and 26% of P2P business lending were funded by institutional investors.
  • Donation-based crowdfunding is the fastest growing model: although starting from a relatively small base (£2 million), donation-based crowdfunding is the fastest growing model in the 2015 study, up by 500% to £12 million.
  • Real estate is the single most popular sector: in 2014/2015 the most popular sector for online alternative finance investments and loans was real estate, with the combined debt and equity-based funding for this sector reaching £700m in 2015.
  • The equity market is growing fast: the second fastest growing area of the alternative finance market is equity-based crowdfunding, up by 295% – from £84 million raised in 2014 to £332 million in 2015. Excluding real estate crowdfunding, the equity-based crowdfunding sector contributed £245 million worth of venture financing in 2015 – equivalent to over 15% of total UK seed and venture equity investment.
  • The industry is generally satisfied with current regulation: when asked what they thought of existing regulation, more than 90% of P2P lending and equity-based crowdfunding platforms stated that they thought the current level was appropriate.
  • The biggest risk to market growth is fraud or malpractice: when asked what they saw as the biggest risk to the future growth of the market, 57% of P2P lending and equity-based crowdfunding platforms cited the potential collapse of one or more of the well-known industry players due to fraud or malpractice.
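The percentage growth figures quoted above follow directly from the raw volumes in the report. A minimal arithmetic sketch (Python; the `growth_rate` helper is my own naming, using only the figures given in the bullet points):

```python
def growth_rate(old: float, new: float) -> float:
    """Year-on-year growth, expressed as a percentage of the earlier figure."""
    return (new - old) / old * 100

# Equity-based crowdfunding: £84m (2014) -> £332m (2015)
print(round(growth_rate(84, 332)))  # ~295, matching the reported "up by 295%"

# Donation-based crowdfunding: £2m (2014) -> £12m (2015)
print(round(growth_rate(2, 12)))    # 500, matching the reported "up by 500%"
```

Note that "up by 500%" from a £2 million base means a six-fold increase, not five-fold: the percentage measures the change, not the final ratio.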

“The substantive growth of alternative finance in the UK last year is not surprising, given that these new channels of finance are increasingly moving mainstream,” said Robert Wardrop, Executive Director of Cambridge Centre for Alternative Finance. “One of the key drivers underpinning this development is the growing institutionalisation of the sector. The Cambridge Centre for Alternative Finance is proud to shed light on this fascinating and dynamic industry, to help inform policymakers, regulators and the general public about how these areas of finance are increasingly becoming part of our everyday economic life.”

“2015 has seen another year of remarkable growth for Alternative Finance in the UK,” said Stian Westlake, Nesta’s Executive Director of Policy & Research. “Little more than a collection of plucky startups just six years ago, the sector now does £3.2 billion of business a year. As the sector grows and matures it is sure to face challenges – investors will be keen to see returns, and another financial crisis would certainly test the robustness of P2P lending.”

Warren Mead, Global co-lead of Fintech at KPMG, said: “After years of pushing boundaries, 2016 will be the year where ‘alternative’ financial options finally join the ranks of the mainstream. But while this evolution gives the industry the platform to grow, it also brings its own set of challenges. Being part of the financial establishment doesn’t sit well with its original social purpose. Incumbents are also playing catch up with their own digital investment, and are closing in on the disrupters’ lead. Meanwhile, platform failures within these growing networks are inevitable. So the question is, will the hard won enthusiasm for these platforms start to wane?”

The report was led by Bryan Zhang, a Director of the Cambridge Centre for Alternative Finance, and Peter Baeck, Principal Researcher at Nesta. It has been supported in part by funding from audit, tax and advisory service KPMG and CME Group Foundation, the foundation affiliated with CME Group.

Originally published on the Cambridge Judge Business School website. 



– See more at: http://www.cam.ac.uk/research/news/uk-online-alternative-finance-market-grows-to-ps32-billion-in-2015

Students Share Their Lives At Cambridge

Students share their lives at Cambridge

Shadows and mentors walking along King's Parade in Cambridge

source: www.cam.ac.uk

Some 350 Year 12s and mature learners have just experienced life at Cambridge thanks to one of the UK’s largest student-led access initiatives.

This university is for anyone, no matter what their background is, as long as they love their subject and have real academic potential

Helena Blair, CUSU Access Officer

Cambridge University Students’ Union’s (CUSU) annual Shadowing Scheme gives state school-educated Year 12s and mature prospective applicants from across the UK the opportunity to come to Cambridge, stay in one of its Colleges and ‘shadow’ a current undergraduate for three days.

By giving participants the chance to see what undergraduate life is really like, the Shadowing Scheme shatters the popular myths and misconceptions that might otherwise deter them from applying.

Established in 2000, the Scheme is run over three long weekends in January and February. During this time, each ‘shadow’ accompanies a ‘mentor’, a current undergraduate studying a subject they are interested in.

During their stay, shadows get a taste of lectures, supervisions and, for the scientists, laboratory classes. They also have the opportunity to sample some of the University’s student societies and chat to current students from a wide range of backgrounds and courses.

The CUSU Shadowing Scheme targets academically strong Year 12s and mature learners who have little or no family experience of higher education, and who attend schools and colleges which do not have a tradition of progression to leading universities.

Helena Blair, a Cambridge graduate and now CUSU’s Access Officer, says: ‘Our student mentors are eager to spread the word that this university is for anyone, no matter what their background is, as long as they love their subject and have real academic potential. No one from my school had applied before and the myths convinced me that I’d never get in, or fit in, but meeting so many friendly, accepting and diverse Cambridge students changed my outlook’.

This year, Michaela Chan, a Trinity Hall engineering student from Luton, has taken two shadows under her wing – Talent from Welwyn Garden City and Sam from Bradford. Sam hopes to study engineering and got the chance to attend a few second year lectures in the Faculty. Talent kept her options open, attending a maths lecture and joining Cambridge medics at Addenbrooke’s Hospital.

Meanwhile, Alia Khalid, a Sidney Sussex philosophy student from Harrogate, is mentoring Josh, a Year 12 from Stoke. Josh is trying to work out whether he’s more interested in Philosophy or Psychological and Behavioural Sciences. The pair emerge from a morning lecture on Physicalism, the philosophical position that everything which exists is no more extensive than its physical properties.

Unfazed but hungry, they head towards Sidney Sussex College, where an informal ‘Meet the Students’ pizza event has taken over the College bar. Joining them on the stroll along the iconic King’s Parade is Olivia, a philosophy student, and her shadow, Adriana, a fellow Londoner.

Olivia applied to Cambridge after taking part in one of the Summer Schools run by the University with the Sutton Trust and is now working hard to give Adriana, who hasn’t decided whether to study English, Philosophy or Law, as much information as possible. At Sidney Sussex, she introduces her to Damian, a first year English student at Christ’s College. “He said some useful things about his course and how to prepare for applying,” reports Adriana. “I’ve got some more reading to do … Is that pizza spicy?”

At the University’s Careers Service, shadows from Norwich, Wales, Liverpool and London have gathered to hear from its Director, Gordon Chesterman, about the opportunities which a Cambridge degree can offer. Cambridge graduates are some of the most employable anywhere in the world, but Gordon is also keen to emphasise that a Cambridge degree provides flexibility and choice. Luke, a Year 12 from Swansea, would like to study Natural Sciences but is worried that if he makes the wrong choice now, he’ll struggle to get the job he wants later on. Gordon immediately reassures him.

“What do you think these people studied?” he asks. “A fraud investigator, an investment banker, a long-haul pilot, a community outreach officer in Iraq?”

“Maths?” someone suggests.

“No, they actually all studied music.”

Everyone is surprised, but as Gordon explains, studying Music at Cambridge develops the analytical skills, organisation and self-discipline which all of these careers demand.

At the end of the session, conversation turns to life in Cambridge, and Lara Grace, a Year 12 from Streatham, admits: “If I got in, I’d have to learn how to ride a bike. I’ve never cycled in London”. Lara Grace wants to apply to study Human, Social, and Political Sciences and then pursue a career in Human Rights. If she’s only anxious about the cycling, the Shadowing Scheme has done its job – busting myths and inspiring confidence.


– See more at: http://www.cam.ac.uk/news/students-share-their-lives-at-cambridge

Five-Dimensional Black Hole Could ‘Break’ General Relativity

Five-dimensional black hole could ‘break’ general relativity

source: www.cam.ac.uk

Researchers have successfully simulated how a ring-shaped black hole could cause general relativity to break down – assuming the universe contains at least five dimensions, that is.

As long as singularities stay hidden behind an event horizon, they do not cause trouble and general relativity holds

Markus Kunesch

Researchers have shown how a bizarrely shaped black hole could cause Einstein’s general theory of relativity, a foundation of modern physics, to break down. However, such an object could only exist in a universe with five or more dimensions.

The researchers, from the University of Cambridge and Queen Mary University of London, have successfully simulated a black hole shaped like a very thin ring, which gives rise to a series of ‘bulges’ connected by strings that become thinner over time. These strings eventually become so thin that they pinch off into a series of miniature black holes, similar to how a thin stream of water from a tap breaks up into droplets.

Ring-shaped black holes were ‘discovered’ by theoretical physicists in 2002, but this is the first time that their dynamics have been successfully simulated using supercomputers. Should this type of black hole form, it would lead to the appearance of a ‘naked singularity’, which would cause the equations behind general relativity to break down. The results are published in the journal Physical Review Letters.


General relativity underpins our current understanding of gravity: everything from the estimation of the age of the stars in the universe, to the GPS signals we rely on to help us navigate, is based on Einstein’s equations. In part, the theory tells us that matter warps its surrounding spacetime, and what we call gravity is the effect of that warp. In the 100 years since it was published, general relativity has passed every test that has been thrown at it, but one of its limitations is the existence of singularities.

A singularity is a point where gravity is so intense that space, time, and the laws of physics, break down. General relativity predicts that singularities exist at the centre of black holes, and that they are surrounded by an event horizon – the ‘point of no return’, where the gravitational pull becomes so strong that escape is impossible, meaning that they cannot be observed from the outside.

“As long as singularities stay hidden behind an event horizon, they do not cause trouble and general relativity holds – the ‘cosmic censorship conjecture’ says that this is always the case,” said study co-author Markus Kunesch, a PhD student at Cambridge’s Department of Applied Mathematics and Theoretical Physics (DAMTP). “As long as the cosmic censorship conjecture is valid, we can safely predict the future outside of black holes. Because ultimately, what we’re trying to do in physics is to predict the future given knowledge about the state of the universe now.”

But what if a singularity existed outside of an event horizon? If it did, not only would it be visible from the outside, but it would represent an object that has collapsed to an infinite density, a state which causes the laws of physics to break down. Theoretical physicists have hypothesised that such a thing, called a naked singularity, might exist in higher dimensions.

“If naked singularities exist, general relativity breaks down,” said co-author Saran Tunyasuvunakool, also a PhD student from DAMTP. “And if general relativity breaks down, it would throw everything upside down, because it would no longer have any predictive power – it could no longer be considered as a standalone theory to explain the universe.”

We think of the universe as existing in three dimensions, plus the fourth dimension of time, which together are referred to as spacetime. But, in branches of theoretical physics such as string theory, the universe could be made up of as many as 11 dimensions. Additional dimensions could be large and expansive, or they could be curled up, tiny, and hard to detect. Since humans can only directly perceive three dimensions, the existence of extra dimensions can only be inferred through very high energy experiments, such as those conducted at the Large Hadron Collider.

Einstein’s theory itself does not state how many dimensions there are in the universe, so theoretical physicists have been studying general relativity in higher dimensions to see if cosmic censorship still holds. The discovery of ring-shaped black holes in five dimensions led researchers to hypothesise that they could break up and give rise to a naked singularity.

What the Cambridge researchers, along with their co-author Pau Figueras from Queen Mary University of London, have found is that if the ring is thin enough, it can lead to the formation of naked singularities.

Using the COSMOS supercomputer, the researchers were able to perform a full simulation of Einstein’s complete theory in higher dimensions, allowing them to not only confirm that these ‘black rings’ are unstable, but to also identify their eventual fate. Most of the time, a black ring collapses back into a sphere, so that the singularity would stay contained within the event horizon. Only a very thin black ring becomes sufficiently unstable as to form bulges connected by thinner and thinner strings, eventually breaking off and forming a naked singularity. New simulation techniques and computer code were required to handle these extreme shapes.

“The better we get at simulating Einstein’s theory of gravity in higher dimensions, the easier it will be for us to help with advancing new computational techniques – we’re pushing the limits of what you can do on a computer when it comes to Einstein’s theory,” said Tunyasuvunakool. “But if cosmic censorship doesn’t hold in higher dimensions, then maybe we need to look at what’s so special about a four-dimensional universe that means it does hold.”

The cosmic censorship conjecture is widely expected to be true in our four-dimensional universe, but should it be disproved, an alternative way of explaining the universe would then need to be identified. One possibility is quantum gravity, which approximates Einstein’s equations far away from a singularity, but also provides a description of new physics close to the singularity.

The COSMOS supercomputer at the University of Cambridge is part of the Science and Technology Facilities Council (STFC) DiRAC HPC Facility.

Inset image: A video of a very thin black ring starting to break up into droplets. In this process a naked singularity is created and weak cosmic censorship is violated. Credit: Pau Figueras, Markus Kunesch, and Saran Tunyasuvunakool

Reference:
Pau Figueras, Markus Kunesch, and Saran Tunyasuvunakool ‘End Point of Black Ring Instabilities and the Weak Cosmic Censorship Conjecture.’ Physical Review Letters (2016). DOI: 10.1103/PhysRevLett.116.071102


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/five-dimensional-black-hole-could-break-general-relativity

Researchers Identify ‘Neurostatin’ That May Reduce The Risk of Alzheimer’s Disease

Researchers identify ‘neurostatin’ that may reduce the risk of Alzheimer’s disease

source: www.cam.ac.uk

An approved anti-cancer drug successfully targets the first step in the toxic chain reaction that leads to Alzheimer’s disease, suggesting that treatments may be found to lower the risk of developing the neurodegenerative condition.

The body has a variety of natural defences to protect itself against neurodegeneration, but as we age, these defences become progressively impaired and can get overwhelmed.

Michele Vendruscolo

Researchers have identified a drug that targets the first step in the toxic chain reaction leading to the death of brain cells, suggesting that treatments could be developed to protect against Alzheimer’s disease, in a similar way to how statins are able to reduce the risk of developing heart disease.

The drug, which is an approved anti-cancer treatment, has been shown to delay the onset of Alzheimer’s disease, both in a test tube and in nematode worms. It has previously been suggested that statin-like drugs – which are safe and can be taken widely by those at risk of developing disease – might be a prospect, but this is the first time that a potential ‘neurostatin’ has been reported.

When the drug was given to nematode worms genetically programmed to develop Alzheimer’s disease, it had no effect once symptoms had already appeared. But when the drug was given to the worms before any symptoms became apparent, no evidence of the condition appeared, raising the possibility that this drug, or other molecules like it, could be used to reduce the risk of developing Alzheimer’s disease. The results are reported in the journal Science Advances.

By analysing the way the drug, called bexarotene, works at the molecular level, the international team of researchers, from the University of Cambridge, Lund University and the University of Groningen, found that it stops the first step in the molecular cascade that leads to the death of brain cells. This step, called primary nucleation, occurs when naturally occurring proteins in the body fold into the wrong shape and stick together with other proteins, eventually forming thin filament-like structures called amyloid fibrils. This process also creates smaller clusters called oligomers, which are highly toxic to nerve cells and are thought to be responsible for brain damage in Alzheimer’s disease.
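
The cascade described above can be captured in a minimal kinetic sketch. The Python toy model below (with entirely illustrative rate constants and concentrations; it is not the kinetic model used in the study) integrates the two steps, primary nucleation creating new fibrils and elongation growing them, and shows why blocking primary nucleation alone dramatically slows aggregation.

```python
# Toy sketch of nucleation-and-elongation aggregation kinetics.
# All rate constants, concentrations and the nucleus size nc are
# illustrative values, not numbers taken from the study.

def aggregated_fraction(k_n, k_plus=1e4, m_tot=5e-6, nc=2, dt=100.0, steps=5000):
    """Fraction of monomer converted to fibrils after steps*dt seconds,
    using simple forward-Euler integration."""
    P, M = 0.0, 0.0                    # fibril number / fibril mass (molar)
    for _ in range(steps):
        m = max(m_tot - M, 0.0)        # free monomer remaining
        P += k_n * m**nc * dt          # primary nucleation makes new fibrils
        M += 2 * k_plus * m * P * dt   # elongation at both fibril ends
    return M / m_tot

normal = aggregated_fraction(k_n=2e-5)
suppressed = aggregated_fraction(k_n=2e-7)   # nucleation blocked 100-fold
assert suppressed < normal   # blocking the first step delays the whole cascade
```

Because fibril mass grows roughly quadratically in time while nucleation dominates, even a modest suppression of the first step pushes aggregation far beyond the simulated window, which is the behaviour a ‘neurostatin’ would exploit.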

“The body has a variety of natural defences to protect itself against neurodegeneration, but as we age, these defences become progressively impaired and can get overwhelmed,” said Professor Michele Vendruscolo of Cambridge’s Department of Chemistry, the paper’s senior author. “By understanding how these natural defences work, we might be able to support them by designing drugs that behave in similar ways.”

For the past two decades, researchers have attempted to develop treatments for Alzheimer’s that could stop the aggregation and proliferation of oligomers. However, these attempts have all failed, in part because the mechanism by which the disease develops was not precisely understood: Vendruscolo and his colleagues have been working to understand exactly that.

Using a test developed by study co-author Professor Tuomas Knowles, also from the Department of Chemistry, and by Professor Sara Linse, from Lund University, the researchers were able to determine what happens during each stage of the disease’s development, and also what might happen if one of those stages was somehow switched off.

“In order to block protein aggregation, we need accurate understanding of exactly what is happening and when,” said Vendruscolo. “The test that we have developed not only measures the rates of the process as a whole, but also the rates of its specific component sub-processes, so that we can reduce the toxicity of the aggregates rather than simply stopping them forming.”

Johnny Habchi, the first author of the paper, and colleagues assembled a library of more than 10,000 small molecules which interact in some way with amyloid-beta, a molecule that plays a vital role in Alzheimer’s disease. Using the test developed by Knowles and Linse, the researchers first analysed molecules that were either drugs already approved for some other purpose, or drugs developed for Alzheimer’s disease or other similar conditions which had failed clinical trials.

The first successful molecule they identified was bexarotene, which is approved by the US Food and Drug Administration for the treatment of lymphoma. “One of the real steps forward was to take a molecule that we thought could be a potential drug and work out exactly what it does. In this case, what it does is suppress primary nucleation, which is the aim for any neurostatin-type molecule,” said Vendruscolo. “If you stop the process before aggregation has started, you can’t get proliferation.”

One of the key advances of the current work is that by understanding the mechanisms of how Alzheimer’s disease develops in the brain, the researchers were able to target bexarotene to the correct point in the process.

“Even if you have an effective molecule, if you target the wrong step in the process, you can actually make things worse by causing toxic protein assemblies to build up elsewhere,” said study co-author Professor Chris Dobson, Master of St John’s College, University of Cambridge. “It’s like traffic control – if you close a road to try to reduce jams, you can actually make the situation worse if you put the block in the wrong place. It is not necessarily the case that all the molecules in earlier drug trials were ineffective, but it may be that in some cases the timing of the delivery was wrong.”

Earlier studies of bexarotene had suggested that the drug could actually reverse Alzheimer’s symptoms by clearing amyloid-beta aggregates in the brain, a claim which received a great deal of attention. However, those earlier results, which were later called into question, were based on a completely different mode of action – the clearance of aggregates – from the one reported in the current study. By exploiting their novel approach, which enables highly quantitative analysis of the aggregation process, the researchers have now shown that compounds such as bexarotene could instead be developed as preventive drugs, because their primary action is to inhibit the crucial first step in the aggregation of amyloid-beta.

“We know that the accumulation of amyloid is a hallmark feature of Alzheimer’s and that drugs to halt this build-up could help protect nerve cells from damage and death,” said Dr Rosa Sancho, Head of Research at Alzheimer’s Research UK. “A recent clinical trial of bexarotene in people with Alzheimer’s was not successful, but this new work in worms suggests the drug may need to be given very early in the disease. We will now need to see whether this new preventative approach could halt the earliest biological events in Alzheimer’s and keep damage at bay in further animal and human studies.”

Over the next 35 years, the number of people with Alzheimer’s disease is predicted to go from 40 million to 130 million, with 70% of those in middle or low-income countries. “The only way of realistically stopping this dramatic rise is through preventive measures: treating Alzheimer’s disease only after symptoms have already developed could overwhelm healthcare systems around the world.”

The body has a number of natural defences designed to keep proteins in check. But as we get older, these processes can become impaired and get overwhelmed, and some proteins can slip through the safety net, resulting in Alzheimer’s disease and other protein misfolding conditions. While neurostatins are not a cure for Alzheimer’s disease, the researchers say that they could reduce its risk by acting as a backup for the body’s natural defences against misfolding proteins.

“You wouldn’t give statins to someone who had just had a heart attack, and we doubt that giving a neurostatin to an Alzheimer’s patient who could no longer recognise a family member would be very helpful,” said Dobson. “But if it reduces the risk of the initial step in the process, then it has a serious prospect of being an effective preventive treatment.”

But is there hope for those already affected by the disease? The methods that have led to the present advance have enabled the researchers to identify compounds that, rather than preventing the disease, could slow down its progression even when symptoms have become evident. “The next target of our research is also to be able to treat victims of this dreadful disease,” said Vendruscolo.

Reference:
Johnny Habchi et al. ‘An anti-cancer drug suppresses the primary nucleation reaction that produces the toxic Aβ42 aggregates linked with Alzheimer’s disease.’ Science Advances (2016). DOI: 10.1126/sciadv.1501244


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/researchers-identify-neurostatin-that-may-reduce-the-risk-of-alzheimers-disease

Could The Food We Eat Affect Our Genes? Study In Yeast Suggests This May Be The Case

Could the food we eat affect our genes? Study in yeast suggests this may be the case

source: www.cam.ac.uk

Almost all of our genes may be influenced by the food we eat, according to new research published in the journal Nature Microbiology. The study, carried out in yeast – which can be used to model some of the body’s fundamental processes – shows that while the activity of our genes influences our metabolism, the opposite is also true and the nutrients available to cells influence our genes.

In many cases the effects were so strong that changing a cell’s metabolic profile could make some of its genes behave in a completely different manner

Markus Ralser

The behaviour of our cells is determined by a combination of the activity of their genes and the chemical reactions needed to maintain them, known as metabolism. Metabolism works in two directions: the breakdown of molecules to provide energy for the body, and the production of all the compounds the cells need.

Knowing the genome – the complete DNA ‘blueprint’ of an organism – can provide a substantial amount of information about how a particular organism will look. However, this does not give the complete picture: genes can be regulated by other genes or regions of DNA, or by ‘epigenetic’ modifiers – small molecules attached to the DNA that act like switches to turn genes on and off.

Previous studies have suggested that another player in gene regulation may exist: the metabolic network – the biochemical reactions that occur within an organism. These reactions mainly depend on the nutrients a cell has available – the sugars, amino acids, fatty acids and vitamins that are derived from the food we eat.

To examine the scale at which this happens, an international team of researchers, led by Dr Markus Ralser at the University of Cambridge and the Francis Crick Institute, London, addressed the role of metabolism in the most basic functionality of a cell. They did so using yeast cells. Yeast is an ideal model organism for large-scale experiments, as it is much simpler to manipulate than animal models, yet many of its important genes and fundamental cellular mechanisms are the same as, or very similar to, those in animals and humans.

The researchers manipulated the levels of important metabolites – the products of metabolic reactions – in the yeast cells and examined how this affected the behaviour of the genes and the molecules they produced. Almost nine out of ten genes and their products were affected by changes in cellular metabolism.

“Cellular metabolism plays a far more dynamic role in the cells than we previously thought,” explains Dr Ralser. “Nearly all of a cell’s genes are influenced by changes to the nutrients they have access to. In fact, in many cases the effects were so strong that changing a cell’s metabolic profile could make some of its genes behave in a completely different manner.

“The classical view is that genes control how nutrients are broken down into important molecules, but we’ve shown that the opposite is true, too: how the nutrients break down affects how our genes behave.”

The researchers believe that the findings may have wide-ranging implications, including on how we respond to certain drugs. In cancers, for example, tumour cells develop multiple genetic mutations, which change the metabolic network within the cells. This in turn could affect the behaviour of the genes and may explain why some drugs fail to work for some individuals.

“Another important aspect of our findings is a practical one for scientists,” says Dr Ralser. “Biological experiments are often not reproducible between laboratories, and we often blame sloppy researchers for that. It appears, however, that small metabolic differences can change the outcomes of the experiments. We need to establish new laboratory procedures that control better for differences in metabolism. This will help us to design better and more reliable experiments.”

Reference
Alam, MT et al. ‘The metabolic background is a global player in Saccharomyces gene expression epistasis.’ Nature Microbiology; 1 Feb. DOI: 10.1038/nmicrobiol.2015.30


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/could-the-food-we-eat-affect-our-genes-study-in-yeast-suggests-this-may-be-the-case

Gravitational Waves Detected 100 Years After Einstein’s Prediction

Gravitational waves detected 100 years after Einstein’s prediction

source: www.cam.ac.uk

New window on the universe is opened with the observation of gravitational waves – ripples in spacetime – caused by the collision of two black holes.

I feel incredibly lucky to be part of the team – this discovery will change the way we do astronomy.

Christopher Moore

An international team of scientists has observed ripples in the fabric of spacetime called gravitational waves, arriving at Earth from a cataclysmic event in the distant universe. This confirms a major prediction of Albert Einstein’s 1915 general theory of relativity and opens an unprecedented new window onto the cosmos.

The gravitational waves were detected on 14 September 2015 at 09:51 UK time by both LIGO (Laser Interferometer Gravitational-wave Observatory) detectors in Louisiana and Washington State in the US. They originated from two black holes, each around 30 times the mass of the Sun and located more than 1.3 billion light years from Earth, coalescing to form a single, even more massive black hole.

The LIGO Observatories are funded by the National Science Foundation (NSF), and were conceived, built, and are operated by Caltech and MIT. The discovery, published in the journal Physical Review Letters, was made by the LIGO Scientific Collaboration (which includes the GEO Collaboration and the Australian Consortium for Interferometric Gravitational Astronomy) and the Virgo Collaboration using data from the two LIGO detectors.

“The discovery of gravitational waves by the LIGO team is an incredible achievement,” said Professor Stephen Hawking, the Dennis Stanton Avery and Sally Tsui Wong-Avery Director of Research at the Department of Applied Mathematics and Theoretical Physics at the University of Cambridge. “It is the first observation of gravitational waves as predicted by Einstein and will allow us new insights into our universe. The gravitational waves were released from the collision of two black holes, the properties of which are consistent with predictions I made in Cambridge in the 1970s, such as the black hole area and uniqueness theorems. We can expect this observation to be the first of many as LIGO sensitivity increases, keeping us all busy with many further surprises.”

Gravitational waves carry unique information about the origins of our Universe and studying them is expected to provide important insights into the evolution of stars, supernovae, gamma-ray bursts, neutron stars and black holes. However, they interact very weakly with particles and require incredibly sensitive equipment to detect. British and German teams, including researchers from the University of Cambridge, working with US, Australian, Italian and French colleagues as part of the LIGO Scientific Collaboration and the Virgo Collaboration, are using a technique called laser interferometry.

Each LIGO site comprises two tubes, each four kilometres long, arranged in an L-shape. A laser is beamed down each tube to very precisely monitor the distance between mirrors at each end. According to Einstein’s theory, the distance between the mirrors will change by a tiny amount when a gravitational wave passes by the detector. A change in the lengths of the arms of close to 10⁻¹⁹ metres (just one-ten-thousandth the diameter of a proton) can be detected.
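
The figures quoted above imply an extraordinarily small fractional change in arm length, or strain. A quick arithmetic sketch in Python (the proton diameter of roughly 1.7 × 10⁻¹⁵ m is an assumed textbook value, not a number from the text):

```python
# Arithmetic check of the interferometer sensitivity figures quoted above.
arm_length = 4_000.0   # metres: length of each LIGO arm
delta_L    = 1e-19     # metres: smallest detectable arm-length change
proton_d   = 1.7e-15   # metres: approximate proton diameter (assumed value)

strain = delta_L / arm_length          # dimensionless fractional change
print(f"strain h ≈ {strain:.1e}")      # prints: strain h ≈ 2.5e-23
print(f"ΔL is about 1/{proton_d / delta_L:,.0f} of a proton diameter")
```

The ratio of about 1/17,000 is consistent with the article’s “one-ten-thousandth the diameter of a proton” to within the order of magnitude such comparisons are meant to convey.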

According to general relativity, a pair of black holes orbiting around each other lose energy through the emission of gravitational waves, causing them to gradually approach each other over billions of years, and then much more quickly in the final minutes. During the final fraction of a second, the two black holes collide into each other at nearly one-half the speed of light and form a single more massive black hole, converting a portion of the combined black holes’ mass to energy, according to Einstein’s formula E=mc². This energy is emitted as a final strong burst of gravitational waves. It is these gravitational waves that LIGO has observed.
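
To get a feel for the scale of this energy release, here is a worked E = mc² calculation. The figure of about three solar masses radiated is an assumption for illustration, not a number stated in the text above.

```python
# Worked E = mc^2 example for a black hole merger like the one described.
# The ~3 solar masses of radiated mass is an illustrative assumption.
M_SUN = 1.989e30   # kg, mass of the Sun
C     = 2.998e8    # m/s, speed of light

m_radiated = 3 * M_SUN          # assumed mass converted to gravitational waves
energy = m_radiated * C**2      # Einstein's mass-energy relation
print(f"E ≈ {energy:.2e} J")    # prints: E ≈ 5.36e+47 J
```

For comparison, that is vastly more energy than our Sun will radiate as light over its entire lifetime, which is why such a distant event is detectable at all.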

Independent and widely separated observatories are necessary to verify the direction of the event causing the gravitational waves, and also to determine that the signals come from space and are not from some other local phenomenon.

To ensure absolute accuracy, the consortium of nearly 1,000 scientists from 16 countries spent several months carefully checking and re-checking the data before submitting their findings for publication.

Christopher Moore, a PhD student from Cambridge’s Institute of Astronomy, was part of the discovery team who worked on the data analysis.

“Since September, we’ve known that something was detected, but it took months of checking to confirm that it was actually gravitational waves,” he said. “This team has been looking for evidence of gravitational waves for decades – a huge amount of work has gone into it, and I feel incredibly lucky to be part of the team. This discovery will change the way we do astronomy.”

Over coming years, the Advanced LIGO detectors will be ramped up to full power, increasing their sensitivity to gravitational waves, and in particular allowing more distant events to be measured. With the addition of further detectors, initially in Italy and later in other locations around the world, this first detection is surely just the beginning. UK scientists continue to contribute to the design and development of future generations of gravitational wave detectors.

The UK Minister for Universities and Science, Jo Johnson MP, said: “Einstein’s theories from over a century ago are still helping us to understand our universe. Now that we have the technological capability to test his theories with the LIGO detectors his scientific brilliance becomes all the more apparent. The Government is increasing support for international research collaborations, and these scientists from across the UK have played a vital part in this discovery.”

LIGO was originally proposed as a means of detecting these gravitational waves in the 1980s by Kip Thorne, Caltech’s Richard P. Feynman Professor of Theoretical Physics, Emeritus; Ronald Drever, professor of physics, emeritus also from Caltech; and Rainer Weiss, professor of physics, emeritus, from MIT.

“The description of this observation is beautifully described in the Einstein theory of General Relativity formulated 100 years ago and comprises the first test of the theory in strong gravitation. It would have been wonderful to watch Einstein’s face had we been able to tell him,” said Weiss.

“With this discovery, we humans are embarking on a marvelous new quest: the quest to explore the warped side of the universe—objects and phenomena that are made from warped spacetime. Colliding black holes and gravitational waves are our first beautiful examples,” said Thorne.

The discovery was made possible by the enhanced capabilities of Advanced LIGO, a major upgrade that increases the sensitivity of the instruments compared to the first generation LIGO detectors, enabling a large increase in the volume of the universe probed—and the discovery of gravitational waves during its first observation run.

The US National Science Foundation leads in financial support for Advanced LIGO. Funding organisations in Germany (Max Planck Society), the UK (Science and Technology Facilities Council, STFC) and Australia (Australian Research Council) also have made significant commitments to the project.

Several of the key technologies that made Advanced LIGO so much more sensitive have been developed and tested by the German-UK GEO collaboration. Significant computer resources have been contributed by the AEI Hannover Atlas Cluster, the LIGO Laboratory, Syracuse University, and the University of Wisconsin-Milwaukee.

Several universities designed, built, and tested key components for Advanced LIGO: The Australian National University, the University of Adelaide, the University of Florida, Stanford University, Columbia University of New York, and Louisiana State University.

Cambridge has a long-standing involvement in the field of gravitational wave science, and specifically with the LIGO experiment. Until recently these efforts were spearheaded by Dr Jonathan Gair, who left last year for a post at the University of Edinburgh and who has made significant contributions to a wide range of gravitational wave and LIGO science; he is one of the authors on the new paper. Several scientists in Cambridge are current members of the collaboration, including PhD students Christopher Moore and Alvin Chua from the Institute of Astronomy; Professor Anthony Lasenby and PhD student Sonke Hee from the Cavendish Laboratory and the Kavli Institute of Cosmology; and Professor Mike Hobson from the Cavendish Laboratory.

Further members of the collaboration, until recently based at Cambridge, include Dr Philip Graff (author on the detection paper) and Dr Farhan Feroz, who, jointly with Mike Hobson and Anthony Lasenby, developed a machine learning method of analysis currently used within LIGO, as well as Dr Christopher Berry (author) and Dr Priscilla Canizares.

These findings will be discussed at next month’s Cambridge Science Festival during the open afternoon at the Institute of Astronomy.

Reference:
B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration) ‘Observation of Gravitational Waves from a Binary Black Hole Merger.’ Physical Review Letters (2016). DOI: 10.1103/PhysRevLett.116.061102. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/gravitational-waves-detected-100-years-after-einsteins-prediction