All posts by Admin

First Evidence of Icy Comets Orbiting a Sun-Like Star

source: www.cam.ac.uk

Astronomers have found the first evidence of comets around a star similar to the sun, providing an opportunity to study what our solar system was like as a ‘baby’.

The system has a similar ice composition to our own, so it’s a good one to study in order to learn what our solar system looked like early in its existence.

Sebastián Marino

An international team of astronomers have found evidence of ice and comets orbiting a nearby sun-like star, which could give a glimpse into how our own solar system developed.

Using data from the Atacama Large Millimeter Array (ALMA), the researchers, led by the University of Cambridge, detected very low levels of carbon monoxide gas around the star, in amounts that are consistent with the comets in our own solar system.

The results, which will be presented today at the ‘Resolving Planet Formation in the era of ALMA and extreme AO’ conference in Santiago, Chile, are a first step in establishing the properties of comet clouds around sun-like stars just after the time of their birth.

Comets are essentially ‘dirty snowballs’ of ice and rock, sometimes with a tail of dust and evaporating ice trailing behind them, and are formed early in the development of stellar systems. They are typically found in the outer reaches of our solar system, but become most clearly visible when they visit the inner regions. For example, Halley’s Comet visits the inner solar system every 75 years; some comets take as long as 100,000 years between visits, while others visit only once before being thrown out into interstellar space.

It’s believed that when our solar system was first formed, the Earth was a rocky wasteland, similar to how Mars is today, and that as comets collided with the young planet, they brought many elements and compounds, including water, along with them.

The star in this study, HD 181327, has a mass about 30% greater than the sun and is located 160 light years away in the Painter constellation. The system is about 23 million years old, whereas our solar system is 4.6 billion years old.

“Young systems such as this one are very active, with comets and asteroids slamming into each other and into planets,” said Sebastián Marino, a PhD student from Cambridge’s Institute of Astronomy and the paper’s lead author. “The system has a similar ice composition to our own, so it’s a good one to study in order to learn what our solar system looked like early in its existence.”

Using ALMA, the astronomers observed the star, which is surrounded by a ring of dust caused by the collisions of comets, asteroids and other bodies. It’s likely that this star has planets in orbit around it, but they are impossible to detect using current telescopes.

“Assuming there are planets orbiting this star, they would likely have already formed, but the only way to see them would be through direct imaging, which at the moment can only be used for very large planets like Jupiter,” said co-author Luca Matrà, also a PhD student at Cambridge’s Institute of Astronomy.

In order to detect the possible presence of comets, the researchers used ALMA to search for signatures of gas, since the same collisions which caused the dust ring to form should also cause the release of gas. Until now, such gas has only been detected around a few stars, all substantially more massive than the sun. By using simulations to model the composition of the system, they were able to increase the signal-to-noise ratio in the ALMA data and detect very low levels of carbon monoxide gas.
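
The underlying statistics are simple: averaging many independent noisy samples where a model predicts emission reduces the noise roughly as the square root of the number of samples averaged. Below is a minimal Python illustration of that stacking idea, with invented numbers; it is not the authors’ actual ALMA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative numbers only: a faint CO signal buried in noise, spread over
# many pixel/velocity-channel samples where a model of the comet belt
# predicts line emission.
n_samples = 10_000
true_signal = 0.05     # per-sample line amplitude (arbitrary units)
noise_sigma = 1.0      # per-sample noise, 20x larger than the signal

data = true_signal + noise_sigma * rng.standard_normal(n_samples)

snr_single = true_signal / noise_sigma   # ~0.05, far below detectability
# Averaging N independent samples selected by the model shrinks the noise by
# sqrt(N), so the stacked signal-to-noise ratio grows by the same factor.
snr_stacked = data.mean() / (noise_sigma / np.sqrt(n_samples))

print(f"per-sample SNR ~ {snr_single:.2f}, stacked SNR ~ {snr_stacked:.1f}")
# The stacked SNR comes out well above 1, turning an invisible signal
# into a detection.
```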

“This is the lowest gas concentration ever detected in a belt of asteroids and comets – we’re really pushing ALMA to its limits,” said Marino.

“The amount of gas we detected is analogous to a 200 kilometre diameter ice ball, which is impressive considering how far away the star is,” said Matrà. “It’s amazing that we can do this with exoplanetary systems now.”

The results have been accepted for publication in the Monthly Notices of the Royal Astronomical Society.

Reference:
S. Marino et al. ‘Exocometary gas in the HD 181327 debris ring.’ Paper presented to the Resolving Planet Formation in the era of ALMA and extreme AO conference, Santiago, May 16-20, 2016. http://www.eso.org/sci/meetings/2016/Planet-Formation2016/program.html

Inset image: ALMA image of the ring of comets around HD 181327 (colours have been changed). The white contours represent the size of the Kuiper Belt in the Solar System. Credit: Amanda Smith, University of Cambridge.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/first-evidence-of-icy-comets-orbiting-a-sun-like-star

Body-Worn Cameras Associated With Increased Assaults Against Police, and Increase In Use-Of-Force If Officers Choose When To Activate Cameras

source: www.cam.ac.uk

Preliminary results from eight UK and US police forces reveal rates of assault against officers are 15% higher when they use body-worn cameras. The latest findings, from one of the largest randomised-controlled trials in criminal justice research, highlight the need for cameras to be kept on and recording at all stages of police-public interaction – not just when an individual officer deems it necessary – if police use-of-force and assaults against police are to be reduced.

It may be that in some places it’s a bad idea to use body-worn cameras, and the only way you can find that out is to keep doing these tests in different kinds of places

Barak Ariel

New evidence from the largest-yet series of experiments on use of body-worn cameras by police has revealed that rates of assault against police by members of the public actually increased when officers wore the cameras.

The research also found that on average across all officer-hours studied, and contrary to current thinking, the rate of use-of-force by police on citizens was unchanged by the presence of body-worn cameras, but a deeper analysis of the data showed that this finding varied depending on whether or not officers chose when to turn cameras on.

If officers turned cameras on and off during their shift then use-of-force increased, whereas if they kept the cameras rolling for their whole shift, use-of-force decreased.

The findings are released today across two articles published in the European Journal of Criminology and the Journal of Experimental Criminology.

While researchers describe these findings as unexpected, they also urge caution as the work is ongoing, and say these early results demand further scrutiny. However, gathering evidence for what works in policing is vital, they say.

“At present, there is a worldwide uncontrolled social experiment taking place – underpinned by feverish public debate and billions of dollars of government expenditure. Robust evidence is only just keeping pace with the adoption of new technology,” write criminologists from the University of Cambridge and RAND Europe, who conducted the study.

For the latest findings, researchers worked with eight police forces across the UK and US – including West Midlands, Cambridgeshire and Northern Ireland’s PSNI, as well as Ventura, California and Rialto, California PDs in the United States – to conduct ten randomised-controlled trials.

Over the ten trials, the research team found that rates of assault against officers wearing cameras on their shift were an average of 15% higher, compared to shifts without cameras.

The researchers say this could be due to officers feeling more able to report assaults once they are captured on camera – providing them the impetus and/or confidence to do so.

The monitoring by camera also may make officers less assertive and more vulnerable to assault. However, they point out these are just possible explanations, and much more work is needed to unpick the reasons behind these surprising findings.

In the experimental design, the shift patterns of 2,122 participating officers across the forces were split at random between those allocated a camera and those without a camera. A total of 2.2 million officer-hours policing a total population of more than 2 million citizens were covered in the study.

The researchers set out a protocol for officers allocated cameras during the trials: record all stages of every police-public interaction, and issue a warning of filming at the outset. However, many officers preferred to use their discretion, activating cameras depending on the situation.

Researchers found that during shifts with cameras in which officers stuck closer to the protocol, police use-of-force fell by 37% over camera-free shifts. During shifts in which officers tended to use their discretion, police use-of-force actually rose 71% over camera-free shifts.
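
As an illustration of how such figures are derived, the sketch below computes use-of-force rates per 1,000 officer-hours for camera and camera-free shifts and the percentage change between them. The counts are invented for illustration and are not the study’s data.

```python
# Hypothetical counts, not the study's data: use-of-force incidents and
# officer-hours for camera-free (control) vs. camera (treatment) shifts.
def rate_per_1000_hours(incidents, officer_hours):
    return 1000.0 * incidents / officer_hours

control = rate_per_1000_hours(incidents=200, officer_hours=1_000_000)
treatment = rate_per_1000_hours(incidents=126, officer_hours=1_000_000)

# Percentage change relative to camera-free shifts (negative = reduction).
change = 100.0 * (treatment - control) / control
print(f"use-of-force changed by {change:.0f}% on camera shifts")  # -37% here
```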

“The combination of the camera plus the early warning creates awareness that the encounter is being filmed, modifying the behaviour of all involved,” said principal investigator Barak Ariel from the University of Cambridge’s Institute of Criminology.

“If an officer decides to announce mid-interaction they are beginning to film, for example, that could provoke a reaction that results in use-of-force,” Ariel said. “Our data suggests this could be what is driving the results.”

The new results are the latest to come from the research team since their ground-breaking work reporting the first experimental evidence on body-worn cameras with Rialto PD in California – a study widely-cited as part of the rationale for huge investment in this policing technology.

“With so much at stake, these findings must continue to be scrutinised through further research and more studies. In the meantime, it’s clear that more training and engagement with police officers are required to ensure they are confident in the decisions they make while wearing cameras, and are safe in their job,” said co-author and RAND Europe researcher Alex Sutherland.

Ariel added, “It may be that in some places it’s a bad idea to use body-worn cameras, and the only way you can find that out is to keep doing these tests in different kinds of places. After all, what might work for a sheriff’s department in Iowa may not necessarily apply to the Tokyo PD.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/body-worn-cameras-associated-with-increased-assaults-against-police-and-increase-in-use-of-force-if

A Shaggy Dog Story: The Contagious Cancer That Conquered The World

source: www.cam.ac.uk

A contagious form of cancer that can spread between dogs during mating has highlighted the extent to which dogs accompanied human travellers throughout our seafaring history. But the tumours also provide surprising insights into how cancers evolve by ‘stealing’ DNA from their host.

It is remarkable that this unusual and long-lived cancer can teach us so much about the history of dogs, and also about the genetic and evolutionary processes that underlie cancer more generally

Elizabeth Murchison

‘Canine transmissible venereal tumour’ (CTVT) is a cancer that spreads between dogs through the transfer of living cancer cells, primarily during mating. The disease usually manifests as genital tumours in both male and female domestic dogs. The cancer first arose approximately 11,000 years ago from the cells of one individual dog; remarkably, it survived beyond the death of this original dog by spreading to new dogs. The cancer is now found in dog populations worldwide, and is the oldest and most prolific cancer lineage known in nature.

In a study published today in the journal eLife, an international team led by researchers at the University of Cambridge studied the DNA of mitochondria – the ‘batteries’ that provide cells with their energy – in 449 CTVT tumours from dogs in 39 countries across six continents. Previous research has shown that at occasional points in history, mitochondrial DNA has transferred from infected dogs to their tumours – and hence to tumour cells in subsequently-infected dogs.

In the new study, the researchers show that this process of swapping mitochondrial DNA has occurred at least five times since the original cancer arose. This discovery has allowed them to create an evolutionary ‘family tree’, showing how the tumours are related to each other. In addition, the unusual juxtaposition of different types of mitochondrial DNA within the same cell unexpectedly revealed that cancer cells can shuffle or ‘recombine’ DNA from different mitochondria.
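
For readers curious how such a ‘family tree’ can be built, the sketch below clusters a handful of toy mitochondrial sequences by their pairwise differences using standard hierarchical clustering. The sequence data and names are invented, and this is only a rough stand-in for the phylogenetic methods a study like this would actually use.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy alignment: each row is one tumour's mitochondrial sequence encoded as
# integers (0-3 for A, C, G, T); names and sequences are invented.
names = ["tumour_A", "tumour_B", "tumour_C", "tumour_D"]
seqs = np.array([
    [0, 1, 2, 3, 0, 1],
    [0, 1, 2, 3, 0, 2],   # one difference from tumour_A
    [1, 1, 2, 0, 0, 1],
    [1, 1, 2, 0, 3, 1],
])

# Pairwise Hamming distances (fraction of differing sites)...
dists = pdist(seqs, metric="hamming")
# ...then average-linkage clustering gives a rough tree that groups tumours
# sharing similar mitochondrial DNA into clades.
tree = linkage(dists, method="average")
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])
```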

“At five distinct time-points in its history, the cancer has ‘stolen’ mitochondrial DNA from its host, perhaps to help the tumour survive,” explains Andrea Strakova, from the Department of Veterinary Medicine at the University of Cambridge, co-first author of the study. “This provides us with a set of unique genetic tags to trace how dogs have travelled the globe over the last few hundred years.”

In the evolutionary ‘family tree’, the five main branches are known as ‘clades’, each representing a point in history when mitochondria transferred between dog and tumour. By mapping tumours within these clades to the geographical location where they were found, the researchers were able to see how the cancers have spread across the globe. The distance and speed with which the clades have spread suggests that the dogs commonly travelled with human companions, often by sea.

One branch of the CTVT evolutionary tree appears to have spread from Russia or China around 1,000 years ago, but probably only came to the Americas within the last 500 years, suggesting that it was taken there by European colonialists. Conquistadors are known to have travelled with dogs – contemporary artworks have portrayed them both as attack dogs and as a source of food.

Image: 1598 fictional engraving by Theodor de Bry supposedly depicting a Spaniard feeding Indian children to his dogs. Wikipedia

The disease probably arrived in Australia around the turn of the twentieth century, most likely imported inadvertently by dogs accompanying European settlers.

One of the most surprising findings from the study related to how mitochondrial DNA transfers – and mixes – between the tumour and the host. The researchers found that mitochondrial DNA molecules from host cells that have migrated into tumour cells occasionally fuse with the tumour’s own mitochondrial DNA, sharing host and tumour DNA in a process known as ‘recombination’. This is the first time this process has been observed in cancers.

Máire Ní Leathlobhair, the study’s co-first author, explains: “Mitochondrial DNA recombination could be happening on a much wider scale, including in human cancers, but it may usually be very difficult to detect. When recombination occurs in transmissible cancers, two potentially very different mitochondrial DNAs – one from the tumour, one from the host – are merging and so the result is more obvious. In human cancer, the tumour’s mitochondrial DNA is likely to be very similar to the mitochondrial DNA in the patient’s normal cells, so the result of recombination would be almost impossible to recognise.”

Although the significance of mitochondrial DNA recombination in cancer is not yet known, its discovery is now leading scientists to explore how this process may help cancer cells to survive – and if blocking it may stop cancer cells from growing.

Dr Elizabeth Murchison, senior author of the study, said: “The genetic changes in CTVT have allowed us to reconstruct the global journeys taken by this cancer over two thousand years. It is remarkable that this unusual and long-lived cancer can teach us so much about the history of dogs, and also about the genetic and evolutionary processes that underlie cancer more generally.”

The research was funded by the Wellcome Trust, the Leverhulme Trust and the Royal Society.

Reference
Strakova, A et al. Mitochondrial genetic diversity, selection and recombination in a canine transmissible cancer. eLife; 17 May 2016; DOI: 10.7554/eLife.14552


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/a-shaggy-dog-story-the-contagious-cancer-that-conquered-the-world

Cambridge Oncometrix News

Cambridge Oncometrix is developing a non-invasive, accurate, sensitive and affordable diagnostic test for the early detection of prostate cancer – the CAMONX test.

This highly accurate diagnostic test can be performed by men in the comfort of their own homes to check the health of their prostate, or by their GP or urologist. It can be used on a sample of either seminal fluid or prostatic secretion.

The People’s Choice Award at the Pitch@Palace business competition shows how much such a test is needed.

Photo from St James’s Palace, 9 March 2016. From right to left: D. Soloviev and M. Rossmann from Cambridge Oncometrix, other award winners, and HRH The Duke of York announcing the awards.

The results of the test will show whether or not the prostate is in a healthy condition. Importantly, the test will specifically distinguish between prostate cancer and benign prostatic hyperplasia. Benign prostatic hyperplasia is the most common prostatic condition in men over 45 and it mimics the symptoms of prostate cancer. The test will be significantly more precise, convenient and accurate than the PSA blood test, which is currently the gold-standard screening test used before biopsies are taken.

Our vision:

– The CAMONX-Home test will be a simple colour-based dipstick test that will be as easy to use as a pregnancy test. It will provide peace of mind for men and their families who have concerns about the condition of their prostate. When used at home, the CAMONX test will give men a very accurate indication of whether they need to visit their doctor for further diagnosis.

– The CAMONX-Pro test will be an important, non-invasive and accurate tool for doctors managing prostate cancer patients who have been offered an active surveillance programme. The instrument is very easy to use and the GP or urologist will receive the results of the test immediately.

– Our test will become a very important non-invasive diagnostic tool to determine whether a patient is at high risk of prostate cancer, and will prevent unnecessary repeat biopsies.

– Being easy to use and affordable, we hope that our test will become available to men all over the world, even in the most remote of locations.

– Our test has the potential to become a life-saving prostate cancer screening test.

Current status of development:

– We have discovered a set of novel biomarkers for the diagnosis of prostate cancer.

– We have developed an accurate and robust assay for an instrumental version of the CAMONX test, which will be performed at the point of care.

– We have set up an international collaboration with urologists and specialised clinics.

– We have started a proof of concept clinical investigation to complete the characterisation of cancer biomarkers and refine the corresponding assays.

– We have set up collaboration with an academic partner.

University of Central Lancashire, March 2016. Dr Carole Rolph, Dr Maxim Rossmann and Joe Mather, BSc.

What are the Funds needed for?

To make our test available to every man on the planet we need £4.5 million. This crowdfunding campaign will speed up the completion of stage 1 and kickstart the project towards stage 2 and beyond.

  1. £150,000 is needed to complete the ongoing proof-of-concept clinical investigation of CAMONX biomarkers.
  2. £350,000 is needed to complete the development of measurement methods and data-processing algorithms.
  3. £1 million is needed to develop test device prototypes.
  4. £3 million is needed to conduct clinical trials and obtain regulatory approvals for the test device.

Rewards:

We can’t offer much in the way of material rewards but as a group of scientists fighting the most prevalent cancer among men we can offer the following:

Gratitude: A personal thank you letter from the test developers to the pledger for supporting those hoping to help millions of people.

Inspiration: A download of “My Life” single composed by inspirational prostate cancer fighter and campaigner Kevin Vardy.

Readiness: A download of Cambridge Oncometrix’s guide to prostate health and longevity.

Awareness: A specially commissioned Citrus Friends T-shirt.

Peace of mind: The opportunity to be among the first men to receive the test.

Hope: An original painting by Mr. Shaun George, a double cancer survivor.

We realise that most of you want to help us selflessly. You can also choose to pledge anonymously without asking for any material rewards: just choose the £1 pledge and add any amount to it. Your support is very important to us!

To our backers:

Huge thank you to everybody who has supported us!

We have already raised enough funds to support an MSc fellowship at the University of Central Lancashire for a very gifted student, Joe Mather, featured in the photo below (fourth from left).

From left to right: Mr Kevin Vardy, prostate cancer fighter and campaigner; Dmitry Soloviev, PhD, Cambridge Oncometrix Chief Scientist; Maxim Rossmann, PhD, Cambridge Oncometrix CEO; UCLan biomedical master’s student Joe Mather; and Dr Carole Rolph, Senior Lecturer in Clinical Biochemistry at the University of Central Lancashire.

About our Citrus Friends supporters on the T-shirt:

Did you know that a healthy prostate is the size of a walnut and produces large amounts of citric acid? This is the same acid that gives lemons their taste. Ranking citric acid concentrations in increasing order gives the following sequence:

Orange (5 mM) < Healthy prostate (75 mM) < Lemon (300 mM).

When cancer resides inside the prostate, this picture changes dramatically – as you can see on the banner above. Our Citrus Friends are campaigning against prostate cancer.

The bad grey guy represents the cancerous prostate. If detected early, it can be treated!

By the way, the bad grey guy on the banner was designed by our main supporter, prostate cancer fighter and prostate cancer test campaigner Mr Kevin Vardy. Please visit his page and say hello to him!

Our supporters on the world map:

People watch our campaign video all over the world: French Polynesia, the Caribbean, South Africa, New Zealand and, of course, the whole of Europe!

With your help we have already started to make a difference by raising prostate cancer awareness! Please share this project with your friends. To win big we need to keep up the momentum!

Natural Selection Sculpts Genetic Information To Limit Diversity

source: www.cam.ac.uk

A study of butterflies suggests that when a species adapts, other parts of its genetic make-up can be linked to that adaptation, limiting diversity in the population.

While we cannot forecast the future, an emerging idea is that mutations that have no effect on survival today may be a source of beneficial variation in the future

Simon Martin

A study of tropical butterflies has added to growing evidence that natural selection reduces species’ diversity by moulding parts of their genetic structure, including elements that have no immediate impact on their survival.

The research, by a University of Cambridge-led team of academics, focused on genetic data from South American Heliconius butterflies. It showed that when these butterflies develop a beneficial adaptation through a mutation in their DNA, other parts of the same chromosome – the long strings of DNA that make up the butterfly’s genome – may end up being defined by the fact that they are “linked” to the point where the mutation took place. Natural selection ends up influencing the fate of these linked sites, even though they have no impact on the species’ fitness and long-term survival prospects.

As the adaptation is passed down through the generations as a result of natural selection, this collection of linked genetic sites can be passed on intact, removing genetic variation that previously existed in the population at these sites. This effectively limits the overall amount of variation in the butterfly population.

The study complements similar findings in other species, including humans and fruit flies, which together offer one possible solution to a long-standing paradox in population genetics. This is the fact that while species with bigger populations should be more genetically diverse – because there is more potential for new mutations to occur – in practice they often only exhibit as much diversity as smaller populations.

For example, in the Cambridge-led study, the researchers found that the genetic diversity of Heliconius butterflies is very similar to that of fruit flies, even though fruit flies are far more numerous. They also estimated that the amount of adaptation within Heliconius butterflies caused by natural selection is probably about half that of fruit flies. In other words, because natural selection affects the fruit flies more, reducing variation, they end up exhibiting roughly the same amount of genetic diversity, even though there are more of them.

The researchers stress that this explanation for variable levels of genetic diversity between different species is still, at the moment, a theory. Not all scientists are convinced that natural selection has this effect; some argue that the variable diversity of species relative to population size may well have other causes.

Understanding more about what these causes are will, however, help to answer even more fundamental scientific questions – such as how and why species vary in the first place, and when they can really be said to have become distinct enough from their ancestors to represent a species in their own right.

Dr Simon Martin, a Research Fellow at St John’s College, Cambridge, who led the study, said: “We will only be able to understand this fully if we can compare results from across different species. Extending our knowledge to butterflies is a step towards explaining these much broader patterns in nature; it’s only by doing this kind of research that we will know whether these ideas are right or not.”

Martin and his colleagues examined a very large data set of 79 whole genome sequences representing 12 related species of Heliconius butterfly. This large-scale data has only become available in recent years, as a result of advances in genome sequencing which have made the process both easier and more affordable.

The study involved scouring the sequences for an apparent pattern of association between particular sites within the genome and low variation. “That acts as a kind of signature,” Martin said. “If you can see that in a genome, then as far as we can tell it is an indication of selection.”

In addition, the researchers compared the number of variations in the parts of the genome where proteins are coded – and therefore may be responsible for adaptations – with the number of variations in other parts of the genome. It is possible to predict what this ratio would be if variations only occurred by chance. The difference between that prediction, and the actual statistics, suggests the extent to which natural selection has shaped species differences.

The study estimated that around 30% of the protein differences between species of Heliconius are adaptations caused by natural selection. In keeping with theories about diversity in the populations of other species, this turned out to be about half the level of adaptive protein evolution seen in fruit flies – a far more numerous species that nonetheless shows similar overall genetic diversity.
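
A standard way to turn this kind of comparison into a number is the McDonald-Kreitman framework, in which an excess of protein-changing differences between species, relative to the within-species ratio, is attributed to adaptive evolution. The sketch below shows that calculation with invented counts; it is not necessarily the exact estimator used in the paper.

```python
def alpha_adaptive(Dn, Ds, Pn, Ps):
    """Estimated proportion of protein (non-synonymous) differences fixed by selection.

    Dn, Ds: non-synonymous / synonymous differences between species.
    Pn, Ps: non-synonymous / synonymous variable sites within a species.
    Under neutrality Dn/Ds should match Pn/Ps; an excess of Dn points to
    adaptive substitutions (McDonald-Kreitman framework).
    """
    return 1.0 - (Ds * Pn) / (Dn * Ps)

# Invented counts for illustration only:
print(alpha_adaptive(Dn=700, Ds=1000, Pn=490, Ps=1000))  # 0.3, i.e. ~30%
```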

Intriguingly, the study effectively suggests that natural selection could limit a species’ ability to adapt to future environmental change by removing linked variations that, despite having no immediate beneficial impact on the species, could become relevant to its survival and capacity to cope with its environment in the future.

“Variation is a kind of raw material and you don’t necessarily use it all at any one time,” Martin explained. “It’s something that could be used to adapt and change in the future.”

“Something that has turned up during the last few years in research of this kind is a phenomenon where we see that a species has adapted, and we discover, when we look for the origin of that adaptation, that the mutation was not actually new. Instead, it was a variation that previously existed in the population. So while we cannot forecast the future, an emerging idea is that mutations that have no effect on survival today may be a source of beneficial variation in the future.”

The study appears in the May 2016 issue of Genetics.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/natural-selection-sculpts-genetic-information-to-limit-diversity

Ageing Affects Test-Taking, Not Language, Study Shows

source: www.cam.ac.uk

The ability to understand language could be much better preserved into old age than previously thought, according to researchers from the University of Cambridge, who found older adults struggle more with test conditions than language processing.

Scientists claim that they are studying language, when really they are studying language plus your motivation to do well, plus your understanding of the instructions, plus your ability to focus, and so on

Karen Campbell

Scientists from the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) scanned participants during testing and found that the areas of the brain responsible for language performed just as well in older adults as in younger ones.

The research, published in the Journal of Neuroscience, suggests that increased neural activation in the frontal brain regions of older adults reflects differences in the way they respond to the demands of the task compared with younger adults, rather than any difference in language processing itself.

“These findings suggest our ability to understand language is remarkably preserved well into old age, and it’s not through some trick of the mind, or reorganisation of the brain,” says co-author Professor Lorraine Tyler, who leads Cam-CAN. “Instead, it’s through the continued functioning of a well-used language processing machine common to all humans.”

Professor Tyler says cognitive neuroscientists attempting to explain how the mind and brain work typically approach the question with tasks designed to measure particular cognitive abilities, such as memory or language. However, it’s rarely as simple as that, she says, and tasks never end up measuring only one thing.

“Scientists claim that they are studying language, when really they are studying language plus your motivation to do well, plus your understanding of the instructions, plus your ability to focus, and so on,” says lead author Dr Karen Campbell, now based at Harvard University. “These poorly defined tasks become even more problematic when it comes to studying the older brain, because older adults sometimes show increased neural activation in frontal brain regions, which is thought to reflect a change in how older brains carry out a given cognitive function. However, this extra activation may simply reflect differences in how young and older adults respond to the demands of the task.”

Campbell and her Cam-CAN colleagues tried to isolate the effect of the testing by scanning 111 participants aged 22-87 using functional magnetic resonance imaging (fMRI) while they either passively listened to sentences or decided if the sentences were grammatical or not.

The researchers found that simply listening to and comprehending language, as we do in everyday life, “lights up” brain networks responsible for hearing and language, whereas performing a cognitive task with the same sentences leads to the additional activation of several task-related networks.
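
Analyses of this kind typically fit a general linear model to each voxel’s time series and then test a contrast between conditions, for example ‘task minus passive listening’. The sketch below shows that idea on a single simulated voxel; the design, signal values and noise are invented and this is not the Cam-CAN analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy single-voxel time series with two block regressors: one for passive
# listening and one for the grammaticality task (design and betas invented).
n_scans = 200
listen = (np.arange(n_scans) // 20) % 2      # alternating 20-scan blocks
task = np.roll(listen, 10)                   # offset blocks for the task
X = np.column_stack([np.ones(n_scans), listen, task])

beta_true = np.array([100.0, 1.0, 2.5])      # the task adds extra signal
y = X @ beta_true + rng.normal(0.0, 1.0, n_scans)

# Least-squares GLM fit, then a contrast asking "task > passive listening".
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
contrast = np.array([0.0, -1.0, 1.0])
print(f"task-minus-listening effect: {contrast @ beta_hat:.2f}")
```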

Age had no effect on the language network itself, but it did affect this network’s ability to “talk with” other task-related networks.

The Cambridge Centre for Ageing and Neuroscience is funded by the Biotechnology and Biological Sciences Research Council and is jointly based at the University of Cambridge and the Medical Research Council Cognition and Brain Sciences Unit.

Reference
Campbell, KL et al. Robust Resilience of the Frontotemporal Syntax System to Aging. Journal of Neuroscience; 11 May 2016; DOI: 10.1523/JNEUROSCI.4561-15.2016


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/ageing-affects-test-taking-not-language-study-shows

Reading The Face Of A Leader

source: www.cam.ac.uk

Women (but not men) with both high and low facial masculinity are perceived as competitive leaders, finds new study co-authored by a Cambridge Judge Business School academic.

This study challenges gender theory that says women with feminine facial characteristics are associated with communal behaviour and nurturing

Jochen Menges

Past studies have shown that, in competitive settings, people prefer both male and female leaders to have masculine facial characteristics – because these are perceived as signalling competitive personality traits.

A new academic study finds, however, that low facial masculinity in women is also linked in people’s minds with competitiveness, and not only with cooperation – suggesting that traits of facial masculinity in men and women are interpreted differently.

“Whereas men in competitive settings benefit from high levels of facial masculinity, women fare well when they either look particularly masculine or when they do not look masculine at all,” concludes the study published in the journal Academy of Management Discoveries.

The practical implications of these findings, says study co-author Jochen Menges, work both ways for women: while there may be less of a disadvantage to some women than previously assumed based on traditional facial-characteristic leadership theories, recruitment in competitive settings “may be biased” against women whose faces simply fit in the middle between masculine-looking and not masculine looking at all.

“This study challenges gender theory that says women with feminine facial characteristics are associated with communal behaviour and nurturing, while men with masculine features are associated with being driven and competitive,” says Menges. “The study finds that it’s much more nuanced – that when women look very feminine people associate competitiveness with them as well.”

More masculine facial characteristics, as shown in digitally altered photos of a man and a woman in the study, include thicker and flatter eyebrows, a squarer jaw and more pronounced cheekbones.

The study – entitled “Reading the face of a leader: Women with low facial masculinity are perceived as competitive” – was co-authored by Cambridge Judge PhD alumnus Raphael Silberzahn of IESE Business School at the University of Navarra in Barcelona, and Jochen Menges, University Lecturer in Organisational Behaviour at University of Cambridge Judge Business School and Professor of Leadership at WHU – Otto Beisheim School of Management in Germany.

The study cites Yahoo’s Marissa Mayer, Hewlett Packard’s Meg Whitman and Facebook’s Sheryl Sandberg – three high-profile women executives – as having three particular things in common: “They are all top-level leaders in highly competitive companies, they are all women, and none of them look particularly masculine.” In fact, the study finds that in S&P 500 companies, “a greater range of facial masculinity is present among women CEOs compared to men CEOs.”

The researchers based their findings on a series of studies involving hundreds of American adult participants, a mixture of men and women.

In one study, participants selected a suitable leader of a company that “has many rivals and competes heavily” from a series of images showing faces of women or men with digitally altered degrees of masculinity, while in another study participants were asked to assign certain competition-themed statements (such as “She wants it her way or you’re out” and “He treats others with respect to a degree, but mostly believes he is right”) to such modified images.

Among the results: For women leaders, more than 50 per cent of study participants associated such statements as “She was feared by those around her” or “There is only one boss, and that is her” with both a low-masculinity and high-masculinity image of the same woman. For men leaders, the statement “Coworkers consider him very driven” was associated by 64 per cent of participants with high-masculinity images compared to 33 per cent for low-masculinity images, while “Doesn’t tolerate people trying to act like they are smarter or wiser than he is” had a 63 per cent link to a high-masculinity image compared to 27 per cent for a low-masculinity image.

“Our findings suggest that there has been a misalignment between past research and the reality,” says Menges, emphasizing that feminine-looking women have a better chance of being seen as leaders than previously thought.

Reference: 

“Reading the face of a leader: Women with low facial masculinity are perceived as competitive” Academy of Management Discoveries, Raphael Silberzahn and Jochen Menges

DOI:10.5465/amd.2014.0070


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/reading-the-face-of-a-leader

First Global Map of Flow Within The Earth’s Mantle Finds The Surface Is Moving Up and Down “like a yo-yo”

source: www.cam.ac.uk

Researchers have compiled the first global set of observations of flow within the Earth’s mantle – the layer between the crust and the core – and found that it is moving much faster than has been predicted.

Although we’re talking about timescales that seem incredibly long to you or me, in geological terms, the Earth’s surface bobs up and down like a yo-yo.

Mark Hoggard

Researchers have compiled the first global set of observations of the movement of the Earth’s mantle, the 3000-kilometre-thick layer of hot silicate rocks between the crust and the core, and have found that it looks very different to predictions made by geologists over the past 30 years.

The team, from the University of Cambridge, used more than 2000 measurements taken from the world’s oceans in order to peer beneath the Earth’s crust and observe the chaotic nature of mantle flow, which forces the surface above it up and down. These movements have a huge influence on the way that the Earth looks today – the circulation causes the formation of mountains, volcanism and other seismic activity in locations that lie in the middle of tectonic plates, such as at Hawaii and in parts of the United States.

They found that the wave-like movements of the mantle are occurring at a rate that is an order of magnitude faster than had been previously predicted. The results, reported in the journal Nature Geoscience, have ramifications across many disciplines including the study of oceanic circulation and past climate change.

“Although we’re talking about timescales that seem incredibly long to you or me, in geological terms, the Earth’s surface bobs up and down like a yo-yo,” said Dr Mark Hoggard of Cambridge’s Department of Earth Sciences, the paper’s lead author. “Over a period of a million years, which is our standard unit of measurement, the movement of the mantle can cause the surface to move up and down by hundreds of metres.”
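
Put as a rate, ‘hundreds of metres per million years’ is well under a millimetre per year, which is why the motion is invisible on human timescales. Below is a quick back-of-the-envelope conversion, using 300 m as an illustrative figure rather than one taken from the paper.

```python
# Convert "hundreds of metres per million years" into an annual rate;
# 300 m is an illustrative figure, not a value from the paper.
uplift_m = 300.0       # vertical movement over one cycle, in metres
period_yr = 1.0e6      # the paper's standard unit of time, in years

rate_mm_per_yr = uplift_m / period_yr * 1000.0
print(f"~{rate_mm_per_yr:.1f} mm per year of surface motion")  # ~0.3 mm/yr
```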

Besides geologists, the movement of the Earth’s mantle is of interest to the oil and gas sector, since these motions also affect the rate at which sediment is shifted around and hydrocarbons are generated.

Most of us are familiar with the concept of plate tectonics, where the movement of the rigid plates on which the continents sit creates earthquakes and volcanoes near their boundaries. The flow of the mantle acts in addition to these plate motions, as convection currents inside the mantle – similar to those at work in a pan of boiling water – push the surface up or down. For example, although the Hawaiian Islands lie in the middle of a tectonic plate, their volcanic activity is due not to the movement of the plates, but instead to the upward flow of the mantle beneath.

“We’ve never been able to accurately measure these movements before – geologists have essentially had to guess what they look like,” said Hoggard. “Over the past three decades, scientists had predicted that the movements caused continental-scale features which moved very slowly, but that’s not the case.”

The inventory of more than 2000 spot observations was determined by analysing seismic surveys of the world’s oceans. By examining variations in the depth of the ocean floor, the researchers were able to construct a global database of the mantle’s movements.

They found that the mantle convects in a chaotic fashion, but with length scales on the order of 1000 kilometres, instead of the 10,000 kilometres that had been predicted.

“These results will have wider reaching implications, such as how we map the circulation of the world’s oceans in the past, which are affected by how quickly the sea floor is moving up and down and blocking the path of water currents,” said Hoggard. “Considering that the surface is moving much faster than we had previously thought, it could also affect things like the stability of the ice caps and help us to understand past climate change.”

Reference:
M.J. Hoggard et al. ‘Global dynamic topography observations reveal limited influence of large-scale mantle flow.’ Nature Geoscience (2016). DOI: 10.1038/ngeo2709


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/first-global-map-of-flow-within-the-earths-mantle-finds-the-surface-is-moving-up-and-down-like-a-yo

Walking and Cycling Good For Health Even In Cities With Higher Levels of Air Pollution

source: www.cam.ac.uk

The health benefits of walking and cycling outweigh the negative effects on health of air pollution, even in cities with high levels of air pollution, according to a study led by researchers from the Centre for Diet and Activity Research (CEDAR) and Medical Research Council Epidemiology Unit at the University of Cambridge. This new evidence strengthens the case for supporting cycling even in polluted cities – an effort that in turn can help reduce vehicle emissions.

Our model indicates that in London health benefits of active travel always outweigh the risk from pollution

Marko Tainio

Regular physical activity reduces the risk of diseases such as diabetes, heart disease, and several cancers. One way for people to increase their levels of physical activity is through ‘active travel’ – for example walking and cycling; however, concern has been raised about the potential risk due to air pollution while walking and cycling in urban environments.

Air pollution is one of the leading environmental risk factors for people’s health. A recent report from the Royal Colleges of Physicians and of Paediatrics and Child Health suggested that it contributes to around 40,000 early deaths a year in the UK. One of the main sources of air pollution in cities is transport and a shift from cars, motorbikes and buses to active travel would help to reduce emissions. However, people who walk or cycle in such environments will inhale more pollution, which could be detrimental to their health.

Previous studies conducted in Europe, the USA and several other developed countries found that the health benefits of active travel are greater than the risks, but these were undertaken in areas of relatively low air pollution, and the applicability of their results to more polluted cities in emerging economies has been uncertain.

Researchers from CEDAR, a partnership between the Universities of Cambridge and East Anglia, and the Medical Research Council, used computer simulations to compare the risks and benefits for different levels of intensity and duration of active travel and of air pollution in different locations around the world, using information from international epidemiological studies and meta-analyses. The study, published in Preventive Medicine, is the first to model the risks and benefits of walking and cycling across a range of air pollution concentrations around the world.

Using this data, the researchers calculated that in practical terms, air pollution risks will not negate the health benefits of active travel in the vast majority of urban areas worldwide. Only 1% of cities in the World Health Organization’s Ambient Air Pollution Database had pollution levels high enough that the risks of air pollution could start to overcome the benefits of physical activity after half an hour of cycling every day.

Dr Marko Tainio from the MRC Epidemiology Unit at the University of Cambridge, who led the study, says: “Our model indicates that in London health benefits of active travel always outweigh the risk from pollution. Even in Delhi, one of the most polluted cities in the world – with pollution levels ten times those in London – people would need to cycle over five hours per week before the pollution risks outweigh the health benefits.

“We should remember, though, that a small minority of workers in the most polluted cities, such as bike messengers, may be exposed to levels of air pollution high enough to cancel out the health benefits of physical activity.”
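
The shape of that argument can be captured in a toy model: the protective effect of physical activity flattens off at high doses, while the harm from inhaled pollution keeps rising with time spent on the road, so at some weekly duration the curves cross. The sketch below uses invented dose-response parameters purely to illustrate the break-even logic; it is not the study’s published model.

```python
import numpy as np

def net_relative_risk(hours_per_week, benefit_per_hour=0.02,
                      harm_per_hour=0.003):
    """Toy risk-benefit model, not the study's dose-response curves.

    Combines a protective effect of activity (which flattens off at high
    doses, here capped at 10 h/week) with a pollution harm that keeps
    rising with exposure time. Values below 1.0 mean a net benefit.
    """
    benefit = np.exp(-benefit_per_hour * np.minimum(hours_per_week, 10))
    harm = np.exp(harm_per_hour * hours_per_week)
    return benefit * harm

for h in [1, 5, 10, 20, 40, 70]:
    print(f"{h:>3} h/week -> net relative risk {net_relative_risk(h):.3f}")
# With these illustrative parameters the benefit dominates at everyday doses,
# and the net risk only climbs back above 1.0 at very long weekly exposures.
```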

Senior author Dr James Woodcock, also from CEDAR, says: “Whilst this research demonstrates the benefits of physical activity in spite of air quality, it is not an argument for inaction in combatting pollution. It provides further support for investment in infrastructure to get people out of their cars and onto their feet or their bikes – which can itself reduce pollution levels at the same time as supporting physical activity.”

The authors caution that their model does not take into account detailed information on conditions within different localities in individual cities, the impact of short-term episodes of increased air pollution, or information on the background physical activity or disease history of individuals. For individuals who are highly active in non-transport settings, for example recreational sports, the marginal health benefits from active travel will be smaller, and vice versa for those who are less active than average in other settings.

The research was undertaken by the Centre for Diet and Activity Research, a UKCRC Public Health Research Centre of Excellence. The work was also supported by the project Physical Activity through Sustainable Transportation Approaches, funded by the European Union.

Reference
Tainio et al. Can air pollution negate the health benefits of cycling and walking? Preventive Medicine; 5 May 2016; DOI: 10.1016/j.ypmed.2016.02.002


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/walking-and-cycling-good-for-health-even-in-cities-with-higher-levels-of-air-pollution

Genetic Variant May Help Explain Why Labradors Are Prone To Obesity

source: www.cam.ac.uk

A genetic variation associated with obesity and appetite in Labrador retrievers – the UK and US’s favourite dog breed – has been identified by scientists at the University of Cambridge. The finding may explain why Labrador retrievers are more likely to become obese than dogs of other breeds.

People who live with Labradors often say they are obsessed by food, and that would fit with what we know about this genetic change

Eleanor Raffan

In developed countries, between one in three and two in three dogs (34-59%) are overweight, a condition associated with reduced lifespan, mobility problems, diabetes, cancer and heart disease, as it is in humans. In fact, the increase in levels of obesity in dogs mirrors that in humans, implicating factors such as reduced exercise and ready access to high-calorie food. However, despite the fact that dog owners control their pets’ diet and exercise, some breeds of dog are more susceptible to obesity than others, suggesting the influence of genetic factors. Labradors are the most common breed of dog in the UK, USA and many other countries worldwide, and the breed is known to be particularly obesity-prone.

In a study published today in the journal Cell Metabolism, an international team led by researchers at the Wellcome Trust-Medical Research Council Institute of Metabolic Science, University of Cambridge, report a study of 310 pet and assistance dog Labradors. Independent veterinary professionals weighed the dogs and assessed their body condition score, and the scientists searched for variants of three candidate obesity-related genes. The team also assessed ‘food motivation’ using a questionnaire in which owners reported their dog’s behavior related to food.

The researchers found that a variant of one gene in particular, known as POMC, was strongly associated with weight, obesity and appetite in Labradors and flat coat retrievers. Around one in four (23%) Labradors is thought to carry at least one copy of the variant. In both breeds, for each copy of the gene carried, the dog was on average 1.9kg heavier, an effect size particularly notable given the extent to which owners, rather than the dogs themselves, control the amount of food and exercise their dogs receive.
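
An additive effect of this kind is usually estimated by regressing body weight on the number of variant copies (0, 1 or 2) each dog carries. The sketch below simulates data with a 1.9 kg per-copy effect, as the study reports, and recovers it with ordinary least squares; the individual values are simulated, not the study’s measurements.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: number of POMC variant copies (0, 1 or 2) per dog and
# body weight simulated with ~1.9 kg added per copy, plus individual noise.
copies = rng.integers(0, 3, size=310)
weight = 30.0 + 1.9 * copies + rng.normal(0.0, 3.0, size=310)

# Ordinary least-squares slope = estimated weight effect per variant copy.
slope, intercept = np.polyfit(copies, weight, deg=1)
print(f"estimated effect: {slope:.2f} kg per copy")
```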

“This is a common genetic variant in Labradors and has a significant effect on those dogs that carry it, so it is likely that this helps explain why Labradors are more prone to being overweight in comparison to other breeds,” explains first author Dr Eleanor Raffan from the University of Cambridge. “However, it’s not a straightforward picture as the variant is even more common among flat coat retrievers, a breed not previously flagged as being prone to obesity.”

The gene affected is known to be important in regulating how the brain recognises hunger and the feeling of being full after a meal. “People who live with Labradors often say they are obsessed by food, and that would fit with what we know about this genetic change,” says Dr Raffan.

Senior co-author Dr Giles Yeo adds: “Labradors make particularly successful working and pet dogs because they are loyal, intelligent and eager to please, but importantly, they are also relatively easy to train. Food is often used as a reward during training, and carrying this variant may make dogs more motivated to work for a titbit.

“But it’s a double-edged sword – carrying the variant may make them more trainable, but it also makes them susceptible to obesity. This is something owners will need to be aware of so they can actively manage their dog’s weight.”

The researchers believe that a better understanding of the mechanisms behind the POMC gene, which is also found in humans, might have implications for the health of both Labradors and humans.

Professor Stephen O’Rahilly, Co-Director of the Wellcome Trust-Medical Research Council Institute of Metabolic Science, says: “Common genetic variants affecting the POMC gene are associated with human body weight and there are even some rare obese people who lack a very similar part of the POMC gene to the one that is missing in the dogs. So further research in these obese Labradors may not only help the wellbeing of companion animals but also have important lessons for human health.”

The research was funded by the Wellcome Trust, the Medical Research Council and the Dogs Trust.

Reference
Raffan, E et al. A deletion in the canine POMC gene is associated with weight and appetite in obesity prone Labrador retriever dogs. Cell Metabolism; 3 May 2016; DOI: 10.1016/j.cmet.2016.04.012


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/genetic-variant-may-help-explain-why-labradors-are-prone-to-obesity

Algae Use Their ‘Tails’ To Gallop and Trot Like Quadrupeds

source: www.cam.ac.uk

Species of single-celled algae use whip-like appendages called flagella to coordinate their movements and achieve a remarkable diversity of swimming gaits.

As physicists our instinct is to seek out generalisations and universal principles, but the world of biology often presents us with many fascinating counterexamples.

Raymond Goldstein

Long before there were fish swimming in the oceans, tiny microorganisms were using long slender appendages called cilia and flagella to navigate their watery habitats. Now, new research reveals that species of single-celled algae coordinate their flagella to achieve a remarkable diversity of swimming gaits.

When it comes to four-legged animals such as cats, horses and deer, or even humans, the concept of a gait is familiar, but what about unicellular green algae with multiple limb-like flagella? The latest discovery, published in the journal Proceedings of the National Academy of Sciences, shows that despite their simplicity, microalgae can coordinate their flagella into leaping, trotting or galloping gaits just as well.

Many gaits are periodic: whether it is the stylish walk of a cat, the graceful gallop of a horse, or the playful leap of a springbok, the key is the order or sequence in which these limbs are activated. When springboks arch their backs and leap, or ‘pronk’, they do so by lifting all four legs simultaneously high into the air, yet when horses trot it is the diagonally opposite legs that move together in time.

In vertebrates, gaits are controlled by central pattern generators, which can be thought of as networks of neural oscillators that coordinate output. Depending on the interaction between these oscillators, specific rhythms are produced, which, mathematically speaking, exhibit certain spatiotemporal symmetries. In other words, the gait doesn’t change when one leg is swapped with another – perhaps at a different point in time, say a quarter-cycle or half-cycle later.

It turns out the same symmetries also characterise the swimming gaits of microalgae, which are far too simple to have neurons. For instance, microalgae with four flagella in various possible configurations can trot, pronk or gallop, depending on the species.
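
A common toy model for such coordination is a pair of coupled phase oscillators, where the sign of the coupling decides whether the pair locks in phase (pronk-like) or in anti-phase (trot-like). The sketch below illustrates that behaviour; it is a generic oscillator model of the central-pattern-generator idea, not the elastic-fibre mechanism the researchers identify in the algae.

```python
import numpy as np

def phase_difference(coupling, steps=20000, dt=1e-3):
    """Two flagella modelled as coupled phase oscillators (a toy CPG-style
    model). Positive coupling pulls the pair into synchrony ('pronk'-like),
    negative coupling into anti-phase ('trot'-like)."""
    theta = np.array([0.0, 2.0])                  # arbitrary initial phases
    omega = 2 * np.pi * np.array([1.0, 1.0])      # identical beat frequencies
    for _ in range(steps):
        dtheta = np.array([
            omega[0] + coupling * np.sin(theta[1] - theta[0]),
            omega[1] + coupling * np.sin(theta[0] - theta[1]),
        ])
        theta += dt * dtheta
    return (theta[1] - theta[0]) % (2 * np.pi)

print(phase_difference(+2.0))   # -> ~0    (in-phase, pronk-like)
print(phase_difference(-2.0))   # -> ~pi   (anti-phase, trot-like)
```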

“When I peered through the microscope and saw that the alga was performing two sets of perfectly synchronous breaststrokes, one directly after the other, I was amazed,” said the paper’s first author Dr Kirsty Wan of the Department of Applied Mathematics and Theoretical Physics (DAMTP) at the University of Cambridge. “I realised immediately that this behaviour could only be due to something inside the cell rather than passive hydrodynamics. Then of course to prove this I had to expand my species collection.”

The researchers determined that it is in fact the networks of elastic fibres which connect the flagella deep within the cell that coordinate these diverse gaits. In the simplest case of Chlamydomonas, which swims a breaststroke with two flagella, absence of a particular fibre between the flagella leads to uncoordinated beating. Furthermore, deliberately preventing the beating of one flagellum in an alga with four flagella has zero effect on the sequence of beating in the remainder.

However, this does not mean that hydrodynamics play no role. In recent work from the same group, it was shown that nearby flagella can be synchronised solely by their mutual interaction through the fluid. There is a distinction between unicellular organisms for which good coordination of a few flagella is essential, and multicellular species or tissues that possess a range of cilia and flagella. In the latter case, hydrodynamic interactions are much more important.

“As physicists our instinct is to seek out generalisations and universal principles, but the world of biology often presents us with many fascinating counterexamples,” said Professor Ray Goldstein, Schlumberger Professor of Complex Physical Systems at DAMTP, and senior author of the paper. “Until now there have been many competing theories regarding flagellar synchronisation, but I think we are finally making sense of how these different organisms make best use of what they have.”

The findings also raise intriguing questions about the evolution of the control of peripheral appendages, which must have arisen in the first instance in these primitive microorganisms.

This research was supported by a Neville Research Fellowship from Magdalene College, and a Senior Investigator Award from the Wellcome Trust.

Reference:
Kirsty Y. Wan and Raymond E. Goldstein. ‘Coordinated beating of algal flagella is mediated by basal coupling.’ PNAS (2016). DOI: 10.1073/pnas.1518527113


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Scientists Double Number of Known Genetic Risk Factors For Endometrial Cancer

Scientists double number of known genetic risk factors for endometrial cancer

source: www.cam.ac.uk

An international collaboration of researchers has identified five new gene regions that increase a woman’s risk of developing endometrial cancer, one of the most common cancers to affect women, taking the number of known gene regions associated with the disease to nine.

Interestingly, several of the gene regions we identified in the study were already known to contribute to the risk of other common cancers

Deborah Thompson

Endometrial cancer affects the lining of the uterus. It is the fourth most commonly diagnosed cancer in UK women, with around 9,000 new cases being diagnosed each year.

Researchers at the University of Cambridge, Oxford University and QIMR Berghofer Medical Research Institute in Brisbane studied the DNA of over 7,000 women with endometrial cancer and 37,000 women without cancer to identify genetic variants that affected a woman’s risk of developing the disease. The results are published today in the journal Nature Genetics.

Dr Deborah Thompson from the Department of Public Health and Primary Care at the University of Cambridge said: “Our findings help us to paint a clearer picture of the genetic causes of endometrial cancer in women, particularly where there is no strong family history of cancer. Prior to this study, we only knew of four regions of the genome in which a common genetic variant increases a woman’s risk of endometrial cancer.

“In this study we have identified another five regions, bringing the total to nine. This finding doubles the number of known risk regions, and therefore makes an important contribution to our knowledge of the genetic drivers of endometrial cancer.

“Interestingly, several of the gene regions we identified in the study were already known to contribute to the risk of other common cancers such as ovarian and prostate.

“Although each individual variant only increases risk by around 10-15%, their real value will be in looking at the total number of such variants inherited by a woman, together with her other risk factors, in order to identify those women at higher risk of endometrial cancer so that they can be regularly checked and be alert to the early signs and symptoms of the disease.”
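
As a rough illustration of that last point, here is a toy calculation; only the ‘around 10-15%’ per-variant increase comes from the article, and the assumption that variants combine multiplicatively is a simplification made here for illustration.

# Toy polygenic-risk illustration; the per-variant figure follows the article's
# "around 10-15%", the multiplicative combination is an illustrative assumption.
per_variant_relative_risk = 1.15
for n_variants in (1, 3, 6, 9):
    combined = per_variant_relative_risk ** n_variants
    print(f"{n_variants} risk variants carried -> ~{combined:.2f}x baseline risk")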

The study also looked at how the identified gene regions might be increasing the risk of cancer, and these findings have implications for the future treatment of endometrial cancer patients.

“As we develop a more comprehensive view of the genetic risk factors for endometrial cancer, we can start to work out which genes could potentially be targeted with new treatments down the track,” said Associate Professor Amanda Spurdle from QIMR Berghofer.

“In particular, we can start looking into whether there are drugs that are already approved and available for use that can be used to target those genes.”

The study was an international collaboration involving researchers from Australia, the United Kingdom, Germany, Belgium, Norway, Sweden, the United States and China. The UK part of the study received funding from Cancer Research UK.

Dr Emma Smith, Cancer Research UK’s science information manager, said: “The discovery of genetic changes that affect women’s risk of developing endometrial – or womb – cancer could help doctors identify women at higher risk, who could benefit from being more closely monitored for signs of the disease.

“It might also provide clues into the faulty molecules that play an important role in womb cancer, leading to potential new treatments. More than a third of womb cancer cases in the UK each year could be prevented, and staying a healthy weight and keeping active are both great ways for women to reduce the risk.”

Reference
Cheng, THT et al. Five endometrial cancer risk loci identified through genome-wide association analysis. Nature Genetics; 2 May 2016; DOI: 10.1038/ng.3562


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Little ANTs: Researchers Build the World’s Tiniest Engine

Little ANTs: researchers build the world’s tiniest engine

source: www.cam.ac.uk

Researchers have built a nano-engine that could form the basis for future applications in nano-robotics, including robots small enough to enter living cells.

Like real ants, they produce large forces for their weight.

Jeremy Baumberg

Researchers have developed the world’s tiniest engine – just a few billionths of a metre in size – which uses light to power itself. The nanoscale engine, developed by researchers at the University of Cambridge, could form the basis of future nano-machines that can navigate in water, sense the environment around them, or even enter living cells to fight disease.

The prototype device is made of tiny charged particles of gold, bound together with temperature-responsive polymers in the form of a gel. When the ‘nano-engine’ is heated to a certain temperature with a laser, it stores large amounts of elastic energy in a fraction of a second, as the polymer coatings expel all the water from the gel and collapse. This has the effect of forcing the gold nanoparticles to bind together into tight clusters. But when the device is cooled, the polymers take on water and expand, and the gold nanoparticles are strongly and quickly pushed apart, like a spring. The results are reported in the journal PNAS.

“It’s like an explosion,” said Dr Tao Ding from Cambridge’s Cavendish Laboratory, and the paper’s first author. “We have hundreds of gold balls flying apart in a millionth of a second when water molecules inflate the polymers around them.”

“We know that light can heat up water to power steam engines,” said study co-author Dr Ventsislav Valev, now based at the University of Bath. “But now we can use light to power a piston engine at the nanoscale.”

Nano-machines have long been a dream of scientists and public alike, but since ways to actually make them move have yet to be developed, they have remained in the realm of science fiction. The new method developed by the Cambridge researchers is incredibly simple, but can be extremely fast and exert large forces.

The forces exerted by these tiny devices are several orders of magnitude larger than those for any other previously produced device, with a force per unit weight nearly a hundred times better than any motor or muscle. According to the researchers, the devices are also bio-compatible, cost-effective to manufacture, fast to respond, and energy efficient.

Professor Jeremy Baumberg from the Cavendish Laboratory, who led the research, has named the devices ‘ANTs’, or actuating nano-transducers. “Like real ants, they produce large forces for their weight. The challenge we now face is how to control that force for nano-machinery applications.”

The research suggests how to turn van der Waals energy – the attraction between atoms and molecules – into the elastic energy of polymers and release it very quickly. “The whole process is like a nano-spring,” said Baumberg. “The smart part here is we make use of the van der Waals attraction of heavy metal particles to set the springs (polymers) and water molecules to release them, which is very reversible and reproducible.”

The team is currently working with Cambridge Enterprise, the University’s commercialisation arm, and several other companies with the aim of commercialising this technology for microfluidics bio-applications.

The research is funded as part of a UK Engineering and Physical Sciences Research Council (EPSRC) investment in the Cambridge NanoPhotonics Centre, as well as the European Research Council (ERC).

Reference:
Tao Ding et al. ‘Light-induced actuating nanotransducers.’ PNAS (2016). DOI: 10.1073/pnas.1524209113


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Winds a Quarter the Speed of Light Spotted Leaving Mysterious Binary Systems

Winds a quarter the speed of light spotted leaving mysterious binary systems

source: www.cam.ac.uk

Astronomers have observed two black holes in nearby galaxies devouring their companion stars at an extremely high rate, and spitting out matter at a quarter the speed of light.

This is the first time we’ve seen winds streaming away from ultra-luminous x-ray sources.

Ciro Pinto

Two black holes in nearby galaxies have been observed devouring their companion stars at a rate exceeding classically understood limits, and in the process, kicking out matter into surrounding space at astonishing speeds of around a quarter the speed of light.

The researchers, from the University of Cambridge, used data from the European Space Agency’s (ESA) XMM-Newton space observatory to reveal for the first time strong winds gusting at very high speeds from two mysterious sources of x-ray radiation. The discovery, published in the journal Nature, confirms that these sources conceal a compact object pulling in matter at extraordinarily high rates.

When observing the Universe at x-ray wavelengths, the celestial sky is dominated by two types of astronomical objects: supermassive black holes, sitting at the centres of large galaxies and ferociously devouring the material around them, and binary systems, consisting of a stellar remnant – a white dwarf, neutron star or black hole – feeding on gas from a companion star.

In both cases, the gas forms a swirling disc around the compact and very dense central object. Friction in the disc causes the gas to heat up and emit light at different wavelengths, with a peak in x-rays.

But an intermediate class of objects was discovered in the 1980s and is still not well understood. Ten to a hundred times brighter than ordinary x-ray binaries, these sources are nevertheless too faint to be linked to supermassive black holes, and in any case, are usually found far from the centre of their host galaxy.

“We think these so-called ‘ultra-luminous x-ray sources’ are special binary systems, sucking up gas at a much higher rate than an ordinary x-ray binary,” said Dr Ciro Pinto from Cambridge’s Institute of Astronomy, the paper’s lead author. “Some of these sources host highly magnetised neutron stars, while others might conceal the long-sought-after intermediate-mass black holes, which have masses around one thousand times the mass of the Sun. But in the majority of cases, the reason for their extreme behaviour is still unclear.”

Pinto and his colleagues collected several days’ worth of observations of three ultra-luminous x-ray sources, all located in nearby galaxies less than 22 million light-years from the Milky Way. The data was obtained over several years with the Reflection Grating Spectrometer on XMM-Newton, which allowed the researchers to identify subtle features in the spectrum of the x-rays from the sources.

In all three sources, the scientists were able to identify x-ray emission from gas in the outer portions of the disc surrounding the central compact object, slowly flowing towards it.

But two of the three sources – known as NGC 1313 X-1 and NGC 5408 X-1 – also show clear signs of x-rays being absorbed by gas that is streaming away from the central source at 70,000 kilometres per second – almost a quarter of the speed of light.

“This is the first time we’ve seen winds streaming away from ultra-luminous x-ray sources,” said Pinto. “And the very high speed of these outflows is telling us something about the nature of the compact objects in these sources, which are frantically devouring matter.”

While the hot gas is pulled inwards by the central object’s gravity, it also shines brightly, and the pressure exerted by the radiation pushes it outwards. This is a balancing act: the greater the mass, the faster it draws the surrounding gas; but this also causes the gas to heat up faster, emitting more light and increasing the pressure that blows the gas away.

There is a theoretical limit to how much matter can be pulled in by an object of a given mass, known as the Eddington limit. The limit was first calculated for stars by astronomer Arthur Eddington, but it can also be applied to compact objects like black holes and neutron stars.
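
For a sense of scale, the standard textbook form of that limit (not quoted in the article) for an object accreting ionised hydrogen can be evaluated in a few lines; the masses chosen below are illustrative.

# Back-of-envelope Eddington luminosity, L_Edd = 4*pi*G*M*m_p*c / sigma_T (SI units).
import math

G, c = 6.674e-11, 2.998e8
m_p, sigma_T = 1.673e-27, 6.652e-29   # proton mass, Thomson cross-section
M_sun = 1.989e30
for M_solar in (1.4, 10, 1000):       # a neutron star, a stellar black hole, an intermediate-mass black hole
    L_edd = 4 * math.pi * G * (M_solar * M_sun) * m_p * c / sigma_T
    print(f"{M_solar:>6} solar masses -> L_Edd ~ {L_edd:.1e} W")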

Eddington’s calculation refers to an ideal case in which both the matter being accreted onto the central object and the radiation being emitted by it do so equally in all directions.

But the sources studied by Pinto and his collaborators are potentially being fed through a disc which has been puffed up due to internal pressures arising from the incredible rates of material passing through it. These thick discs can naturally exceed the Eddington limit and can even trap the radiation in a cone, making these sources appear brighter when we look straight at them. As the thick disc moves material further from the black hole’s gravitational grasp it also gives rise to very high-speed winds like the ones observed by the Cambridge researchers.

“By observing x-ray sources that are radiating beyond the Eddington limit, it is possible to study their accretion process in great detail, investigating by how much the limit can be exceeded and what exactly triggers the outflow of such powerful winds,” said Norbert Schartel, ESA XMM-Newton Project Scientist.

The nature of the compact objects hosted at the core of the two sources observed in this study is, however, still uncertain.

Based on the x-ray brightness, the scientists suspect that these mighty winds are driven from accretion flows onto either neutron stars or black holes, the latter with masses of several to a few dozen times that of the Sun.

To investigate further, the team is still scrutinising the data archive of XMM-Newton, searching for more sources of this type, and is also planning future observations in x-rays as well as at optical and radio wavelengths.

“With a broader sample of sources and multi-wavelength observations, we hope to finally uncover the physical nature of these powerful, peculiar objects,” said Pinto.

Reference:
C. Pinto et al. ‘Resolved atomic lines reveal outflows in two ultraluminous X-ray sources’ Nature (2016). DOI: 10.1038/nature17417.

Adapted from an ESA press release. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Three Potentially Habitable Worlds Found Around Nearby Ultracool Dwarf Star

Three potentially habitable worlds found around nearby ultracool dwarf star

source: www.cam.ac.uk

Three Earth-sized planets have been discovered orbiting a dim and cool star, and may be the best place to search for life beyond the Solar System.

The discovery of a planetary system around such a small star opens up a brand new avenue for research.

Didier Queloz

An international team of astronomers has discovered three planets orbiting an ultracool dwarf star just 40 light years from Earth. These worlds have sizes and temperatures similar to those of Venus and Earth and may be the best targets found so far for the search for life outside the Solar System. They are the first planets ever discovered around such a tiny and dim star. The results are reported in the journal Nature.

Using the TRAPPIST telescope at the European Southern Observatory’s (ESO) La Silla Observatory in Chile, the astronomers observed the star 2MASS J23062928-0502285, now also known as TRAPPIST-1, and located in the Aquarius constellation. They found that this dim and cool star faded slightly at regular intervals, indicating that several objects were transiting, or passing between the star and the Earth. Detailed analysis showed that three planets with similar sizes to the Earth were present.

TRAPPIST-1 is an ultracool dwarf star — much cooler and redder than the Sun and barely larger than Jupiter. Such stars are very common in the Milky Way and very long-lived, but this is the first time that planets have been found around one of them. Despite being so close to the Earth, this star is too dim and too red to be seen with the naked eye or even with a large amateur telescope.
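
A rough calculation shows why such a small star helps; the radii below are approximate reference values rather than figures from the article, and the fractional dimming during a transit is roughly the square of the planet-to-star radius ratio.

# Approximate transit depths; radii are textbook reference values, not from the article.
R_earth, R_jupiter, R_sun = 6.371e6, 6.991e7, 6.957e8   # metres

depth_ultracool = (R_earth / R_jupiter) ** 2   # Earth-sized planet, star barely larger than Jupiter
depth_sunlike = (R_earth / R_sun) ** 2         # the same planet around a Sun-like star
print(f"~{depth_ultracool:.1%} dip for a TRAPPIST-1-like star vs ~{depth_sunlike:.3%} for a Sun-like star")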

“The discovery of a planetary system around such a small star opens up a brand new avenue for research,” said Professor Didier Queloz from the University of Cambridge’s Cavendish Laboratory, the paper’s senior author. “Before this discovery it was not at all clear whether such a small star could host an Earth-sized planet. Nobody had seriously studied it, but now that’s likely to change.”

“Systems around these tiny stars are the only places where we can detect life on an Earth-sized exoplanet with our current technology,” said the paper’s lead author Michaël Gillon, from the University of Liège in Belgium. “So if we want to find life elsewhere in the Universe, this is where we should start to look.”

Astronomers will search for signs of life by studying the effect that the atmosphere of a transiting planet has on the light reaching Earth. For Earth-sized planets orbiting stars similar to our Sun this tiny effect is swamped because of the large size ratio between the planet and the star. Only for the case of faint red ultra-cool dwarf stars — like TRAPPIST-1 — is this effect big enough to be detected.

Follow-up observations with larger telescopes have shown that the planets orbiting TRAPPIST-1 have sizes very similar to Earth. Two of the planets complete an orbit of the star in 1.5 days and 2.4 days respectively, and the third planet has a less well determined orbital period in the range of 4.5 to 73 days.

“With such short orbital periods, the planets are between 20 and 100 times closer to their star than the Earth to the Sun,” said Gillon. “The structure of this planetary system is much more similar in scale to the system of Jupiter’s moons than to that of the Solar System.”
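
Kepler’s third law gives a quick check on that comparison; note that the stellar mass used below (roughly 8% of the Sun’s, typical of an ultracool dwarf) is an assumed value, not one given in the article.

# Approximate orbital distances from Kepler's third law, a^3 = G*M*T^2 / (4*pi^2).
import math

G, M_sun, AU = 6.674e-11, 1.989e30, 1.496e11
M_star = 0.08 * M_sun                     # assumed ultracool-dwarf mass, not from the article
for period_days in (1.5, 2.4):
    T = period_days * 86400
    a = (G * M_star * T ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
    print(f"P = {period_days} d -> a ~ {a / AU:.3f} AU (~{AU / a:.0f}x closer than Earth-Sun)")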

Although they orbit very close to their host dwarf star, the inner two planets only receive four and two times, respectively, the amount of radiation received by the Earth, because their star is much fainter than the Sun. That puts them closer to the star than the habitable zone for this system. The third, outer, planet’s orbit is not yet well known – it probably receives less radiation than the Earth does, but perhaps still enough to lie within the habitable zone.

The next generation of giant telescopes, such as NASA’s James Webb Space Telescope due to launch in 2018, will allow researchers to study the atmospheric composition of these planets and to explore them first for water, and then for traces of biological activity.

“While this is not the first time that planets have been found in the habitable zone of a star, the TRAPPIST-1 system provides humanity with our first opportunity to remotely explore Earth-like environments and empirically determine their suitability for life,” said study co-author Dr Amaury Triaud from Cambridge’s Institute of Astronomy. “Because the system contains many planets, we will even be able to compare the climates of each to one another and to the Earth’s.”

This work opens up a new direction for exoplanet hunting, as around 15% of the stars near to the Sun are ultra-cool dwarf stars, and it also serves to highlight that the search for exoplanets has now entered the realm of potentially habitable cousins of the Earth. The TRAPPIST survey is a prototype for a more ambitious project called SPECULOOS that will be installed at ESO’s Paranal Observatory in Chile.

Reference:
Michaël Gillon et al. ‘Temperate Earth-sized planets transiting a nearby ultracool dwarf star.’ Nature (2016). DOI: 10.1038/nature17448

​Adapted from an ESO press release. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


UK’s Top Student Hackers Compete For Cyber Security

UK’s top student hackers compete for cyber security

source: www.cam.ac.uk

Students from the UK’s top cyber security universities will compete in Cambridge this weekend, in part to address the country’s looming cyber security skills gap.

We have a huge cyber security skills gap looming in the UK, and we need to close it.

Frank Stajano

The best student hackers in the UK will take part in a cyber security competition this weekend, in order to demonstrate and improve their skills both as attackers and defenders in scenarios similar to the TalkTalk hack and the leak of the Panama Papers.

The event, hosted by the University of Cambridge Computer Laboratory in partnership with Facebook, will bring together 10 of the UK’s Academic Centres of Excellence in Cyber Security Research – the first time they have taken part in such an event together. The hacking event will take place on Saturday, 23 April.

Cyber security is considered one of the biggest threats facing our economy and infrastructure today, and talented hackers are being recruited by government and other agencies to fight cyber criminals. This hacking event will showcase the best student hackers in the country.

The students will be working on challenges which require them to exploit some common vulnerabilities – the very type that underpinned recent high-profile hacking incidents.

Each of the 10 universities is sending a team of four students to this ‘Capture the Flag’-themed event. Throughout the afternoon, the hackers will attempt to solve a series of puzzles, gaining points for each one they crack, and will compete in a series of challenges by attempting to hack the other teams.

An example of the type of challenge the hackers may face is to hack into a server and attempt to keep the other teams from getting in for as long as they can. The Panama Papers hack likely involved exploiting vulnerabilities in WordPress and Drupal, and the competitors may be tasked with finding similar holes in other software.

Facebook has chosen to visualise the progress of the game on a board loosely based on the classic game Risk. The goal is to conquer the world, with points awarded for each country that is captured. Each country has a couple of challenges based on different areas of cyber security, and students must be able to extract the ‘flag’ to claim the points for that country.
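
A minimal sketch of how such a flag-based scoreboard might work is below; the country, flag string and point value are invented for illustration and are not details of the actual event.

# Hypothetical Risk-style CTF scoreboard; all names and values are invented.
import hashlib

board = {
    "Atlantis": {"points": 100,
                 "flag_hash": hashlib.sha256(b"flag{example}").hexdigest(),
                 "owner": None},
}

def submit(team, country, candidate_flag):
    """Award the country's points if the submitted flag matches the stored hash."""
    entry = board[country]
    if hashlib.sha256(candidate_flag.encode()).hexdigest() == entry["flag_hash"]:
        entry["owner"] = team          # the submitting team 'captures' the country
        return entry["points"]
    return 0

print(submit("Team A", "Atlantis", "flag{example}"))   # 100
print(submit("Team B", "Atlantis", "flag{wrong}"))     # 0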

In addition to the teams taking part in the event in Cambridge, other students from the participating universities will also be able to take part in the event remotely, in order that additional students can polish their hacking skills.

“We have a huge cyber security skills gap looming in the UK, and we need to close it,” said Dr Frank Stajano of Cambridge’s Computer Laboratory, Head of the Cambridge Academic Centre of Excellence in Cyber Security Research. “Training our students for those challenges closes the gap between theory and practice in cyber security education. With any type of security, you can’t develop a strong defence against these types of attacks if you’re not a good attacker yourself – you need to stay one step ahead of the criminals.”

These hacking events also help highlight the different challenges involved in attack and defence. “Attacking is more difficult in general because there is no guaranteed recipe for finding a vulnerability, but in many ways it’s actually easier,” he said. “If you’re defending something, you have to keep absolutely everything safe all the time, but if you’re attacking, all you’ve got to do is find the one weak point and then you’re in – like finding the one weak point in the Death Star that allowed it to be destroyed. When attackers and defenders run on similar platforms it is also the case that, if you attack your opponents, they may reverse-engineer your attack and reuse it against you.”

In a meeting last year, Prime Minister Cameron and President Obama agreed to strengthen the ties between the UK and the US, and to cooperate on matters of cyber security affecting both countries.

A ‘Cambridge 2 Cambridge’ cyber security competition, held last month at MIT, was one of the outcomes of the meeting between the two leaders, who also expressed a desire that part of this cooperation should include an improvement in cyber security teaching and training for students.

From next year, some of the exercises prepared for these events will be part of the undergraduate teaching programme at Cambridge.

“Our team was able to gel well together, and that feeling of being ‘in the zone’ and working seamlessly together in attacking other teams, scripting our exploits and rushing to patch our services was fantastic,” said computer science undergraduate Daniel Wong, following last month’s Cambridge 2 Cambridge event.

“Maybe somewhat surprisingly for a computer hacking competition, the Cambridge 2 Cambridge event was also an exercise in interpersonal skills, since effectively collaborating with people you have just met under significant time pressure in a generally stressful environment does not come naturally, but I was very fortunate to have had teammates that really made this aspect feel like a walk in the park,” said fellow computer science undergraduate Gábor Szarka, a co-winner of the $15,000 top team prize at the Cambridge 2 Cambridge event.

The Academic Centres of Excellence in Cyber Security Research (ACE-CSR) scheme is sponsored by the Department for Business, Innovation and Skills, the Centre for the Protection of National Infrastructure, Government Communications Headquarters, the Office of Cyber Security and Information Assurance and Research Councils UK.

The 10 universities sending a team to Saturday’s event are: Imperial College London, Queen’s University Belfast, Royal Holloway University of London, University College London, University of Birmingham, University of Cambridge, University of Kent, University of Oxford, University of Southampton, and University of Surrey.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Flexible Hours ‘Controlled By Management’ Cause Stress and Damage Home Lives of Low-Paid Workers

Flexible hours ‘controlled by management’ cause stress and damage home lives of low-paid workers

source: www.cam.ac.uk

Researcher Alex Wood calls on new DWP Minister Stephen Crabb to acknowledge the distinction between flexible scheduling controlled by managers to maximise profit, which damages the lives of the low-paid, and the flexibility enjoyed by high-end professionals who set their own schedules – an issue he says was publicly fudged by Iain Duncan Smith to justify zero-hours contracts.

I had to change hours, or accept another position, or try another store… I felt really sick, it just hit me, it hit all of us…

Colin, worker at the unnamed supermarket

A researcher who embedded himself in several London branches of one of the UK’s largest supermarkets found that management used a combination of ‘flexed-time’ contracts and overtime to control worker shifts to meet times of anticipated demand, while ensuring costs are kept to a minimum.

Workers at the supermarket chain were frequently expected to extend or change shifts with little or no notice, often to the detriment of their home and family lives – causing the majority of workers interviewed to feel negatively about their jobs.

Low wages and lack of guaranteed hours, combined with convoluted contractual terms, weak union presence, and pressure from managers that at times bordered on coercion (“…there are plenty of people out there who need jobs”) meant that many felt they had no choice but to work when ordered, despite the impact on childcare, work-life balance and, in some cases, health – both physical and mental.

Dr Alex Wood, who conducted the research while at Cambridge’s Department of Sociology, has chosen not to name the retailer in the new study, published today in the journal Human Relations. Having spoken with union representatives from across the retail sector, however, Wood believes the practices he encountered are now endemic across major supermarkets in the UK.

The Government’s website describes flexible working as something that “suits an employee’s needs”. However, Wood says there is a critical distinction – one overlooked by the Department of Work and Pensions (DWP) – between workers controlling their own schedules, and management imposing control.

“Control over flexible working enables a better work-life balance. However, such control is the privilege of high-end workers. When low-paid, vulnerable workers experience flexible working time, it is at the whim of managers who alter schedules in order to maximise profits, with little consideration for the work-life balance of employees,” said Wood.

The practice of low core-hour contracts that can be ‘flexed up’ is most notoriously embodied in zero-hours contracts – recently reported to affect over 800,000 British workers. Last year, then DWP Minister Iain Duncan Smith held up a survey claiming to show “most” workers on such contracts find them to be beneficial.

Wood says this is an example of conflating low-end, hourly-paid workers who have schedules dictated by management – those in supermarkets, for example – with highly paid professionals such as consultants who control their own hours of work. While all are technically on zero-hours contracts, their experiences of work are dramatically different.

“It is misleading to claim that flexibility provided by zero-hour contracts is beneficial for ‘most’ workers’ work-life balance, and it is simply implausible to suggest this is the case for low-paid, vulnerable workers who by definition lack the power to control their working time,” said Wood, who contributed evidence to the coalition government’s zero-hours policy review in 2014.

For the study, Wood conducted interviews with a number of workers from across four of the UK retailer’s stores, ranging from check-out operators to online delivery drivers, as well as interviewing union reps and officials. He also conducted two months of “participatory observation”: working as a shelf stacker in one of the larger supermarket stores.

His findings have led Wood to conclude that the problem of precarious contracts goes far beyond just zero-hours, encompassing most management-controlled flexible contracts.

At the time of the research, the UK retailer had a policy that new stores should reserve 20% of all payroll costs for short-term changes in shifts, which requires around 45% of all staff to be on flexible contracts, says Wood, although interviews with union representatives indicated the actual proportion was likely higher.

While contracted for as little as 7.5 core hours, all flexible workers had to provide 48 hours of availability per week at the point of application – with greater availability increasing the chances of being hired.

Officially, ‘flexed’ hours were not to exceed 60% of workers’ core hours. However, despite being contracted for a weekly average of just nine core hours, Wood found that standard flexible workers were working an average of 36-hour weeks.

Management used combinations of ‘overtime’ – additional hours that are voluntary but can be offered on-the-spot – with ‘flexed time’ – additional hours that are compulsory but require 24 hours’ notice – to ensure staffing levels could be manipulated at short notice to meet expected demand.

Both overtime and flexed time were paid at standard rates, keeping payroll costs down, and Wood found distinctions between the two were frequently blurred – disregarding what little contractual protection existed.
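
Putting the article’s own figures together makes the scale of that blurring explicit; the arithmetic below uses only the nine-hour average, the 60% cap and the 36-hour observed average reported in the study.

# Simple arithmetic using the figures quoted above.
core_hours = 9                        # average contracted core hours per week
flex_cap = 0.6 * core_hours           # 'flexed' hours officially capped at 60% of core hours
nominal_ceiling = core_hours + flex_cap
observed_average = 36                 # average hours actually worked per week
print(f"official ceiling: {nominal_ceiling:.1f} h/week, observed: {observed_average} h/week")
print(f"difference made up through 'overtime': {observed_average - nominal_ceiling:.1f} h/week")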

“In reality, the nature of low pay and low hours contracts means these workers can’t afford to turn down hours,” said Wood.

“Whether zero core hours, or seven, or nine – none provide enough to live on. This precarious situation of not having enough hours to make ends meet is heightened by a perception that refusal to work additional hours meant they would not be offered them again in future, something most workers simply couldn’t afford.”

The stress caused by management-controlled flexed time of low hour contracts, and the impact on home and family lives, were frequently raised by the workers that Wood spoke to.

One worker provided what Wood describes as a “characteristic experience”. Sara co-habited with her partner Paul, also employed at the UK retailer. “[W]e’ve set aside Saturday as a day to do something – me, Paul and my son – as a family… She [Sara’s manager] now wants me to work Saturdays… it’s all up in the air.”

Colin, another worker, described the impact of dramatic schedule alterations to his wellbeing: “I had to change hours, or accept another position, or try another store… I felt really sick, it just hit me, it hit all of us…”

Asim, a union rep, made it clear that management bullying occurred: “People have been told, wrongly, that they can be sacked for it if they don’t change their hours.”

Under Duncan Smith, the UK government legislated to ban ‘exclusive’ zero-hours contracts – those that have no guaranteed hours but restrict workers from getting another job – but Wood says this is simply a straw man, and new DWP Minister Stephen Crabb must go much further.

“It’s imperative that Stephen Crabb breaks from his predecessor and recognises the damage which wider manager-controlled flexible scheduling practices, including all zero-hours contracts, do to work-life balance,” Wood said.

“Policies are needed which strengthen low-end workers’ voice. When alterations to schedules are made solely by managers and driven by cost containment, flexibility is only beneficial for the employer not the employees.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


New Cases of Dementia in the UK Fall By 20% Over Two Decades

New cases of dementia in the UK fall by 20% over two decades

source: www.cam.ac.uk

The UK has seen a 20% fall in the incidence of dementia over the past two decades, according to new research from England led by the University of Cambridge, equivalent to an estimated 40,000 fewer new cases of dementia per year than previously predicted. However, the study, published today in Nature Communications, suggests that the dramatic change has been observed mainly in men.

Our evidence shows that the so-called dementia ‘tsunami’ is not an inevitability: we can help turn the tide if we take action now

Carol Brayne

Reports in both the media and from governments have suggested that the world is facing a dementia ‘tsunami’ of ever-increasing numbers, particularly as populations age. However, several recent studies have begun to suggest that the picture is far more complex. Although changing diagnostic methods and criteria are identifying more people as having dementia, societal measures which improve health such as education, early- and mid-life health promotion including smoking reduction and attention to diet and exercise may be driving a reduction in risk in some countries. Prevalence (the proportion of people with dementia) has been reported to have dropped in some European countries but it is incidence (the proportion of people developing dementia in a given time period) that provides by far the most robust evidence of fundamental change in populations.

As part of the Medical Research Council Cognitive Function and Ageing Study (CFAS), researchers at the University of Cambridge, Newcastle University, Nottingham University and the University of East Anglia interviewed a baseline sample of 7,500 people in three regions of the UK (Cambridgeshire, Newcastle and Nottingham) between 1991 and 1994, with repeat interviews two years later to estimate incidence. Twenty years later, a new sample of more than 7,500 people aged 65 and over from the same localities was interviewed, again with a repeat interview after two years. This is the first study anywhere in the world to make a direct comparison of dementia incidence across time in multiple areas using identical methodological approaches.
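
As a simplified illustration of how a two-wave design yields an incidence estimate (all numbers below are invented; the real CFAS analysis also models deaths and drop-out between interviews):

# Simplified two-wave incidence illustration; figures are invented, not CFAS results.
dementia_free_at_baseline = 7000      # people without dementia at the first interview
new_cases_at_followup = 140           # diagnosed at the repeat interview two years later
person_years_at_risk = dementia_free_at_baseline * 2
incidence_per_1000_py = new_cases_at_followup / person_years_at_risk * 1000
print(f"~{incidence_per_1000_py:.1f} new cases per 1,000 person-years")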

The researchers found that dementia incidence across the two decades has dropped by 20% and that this fall is driven by a reduction in incidence among men at all ages. These findings suggest that in the UK there are just under 210,000 new cases per year: 74,000 men and 135,000 women – this is compared to an anticipated 250,000 new cases based on previous levels. Incidence rates are higher in more deprived areas.

Even in the presence of an ageing population, this means that the number of people estimated to develop dementia in any year has remained relatively stable, providing evidence that the occurrence of dementia in whole populations can change. It is not clear why rates among men have declined faster than those among women, though it may be related to the decline in smoking and improvements in vascular health among men.

Professor Carol Brayne, Director of the Cambridge Institute of Public Health, University of Cambridge, says: “Our findings suggest that brain health is improving significantly in the UK across generations, particularly among men, but that deprivation is still putting people at a disadvantage. The UK in earlier eras has seen major societal investments into improving population health and this appears to be helping protect older people from dementia. It is vital that policies take potential long term benefits into account.”

Professor Fiona Matthews from the Institute of Health and Society, Newcastle University and the MRC Biostatistics Unit, Cambridge adds: “Public health measures aimed at reducing people’s risk of developing dementia are vital and potentially more cost effective in the long run than relying on early detection and treating dementia once it is present. Our findings support a public health approach for long term dementia prevention, although clearly this does not reduce the need for alternative approaches for at-risk groups and for those who develop dementia.”

The researchers argue that while influential reports continue to promote future scenarios of huge increases of people with dementia across the globe, their study shows that global attention and investment in reducing the risk of dementia can help prevent such increases.

“While we’ve seen investment in Europe and many other countries, the lack of progress in access to education, malnutrition in childhood and persistent inequalities within and across other countries means that dementia will continue to have a major impact globally,” says Professor Brayne. “Our evidence shows that the so-called dementia ‘tsunami’ is not an inevitability: we can help turn the tide if we take action now.”

Dr Rob Buckle, director of science programmes at the Medical Research Council, which funded the study, added: “It is promising news that dementia rates, especially amongst men, have dropped by such a significant amount over the last twenty years, and testament to the benefits of an increased awareness of a brain-healthy lifestyle. However, the burden of dementia will continue to have significant societal impact given the growing proportion of elderly people within the UK population and it is therefore as important as ever that we continue to search for new ways of preventing and treating the disease. This study does, however, reinforce the importance of long-term, quality studies that create a wealth of data of invaluable resource for researchers.”

Reference
Matthews, FE et al. A two decade comparison of incidence of dementia in individuals aged 65 years and older from three geographical areas of England: results of the Cognitive Function Ageing Study I and II. Nature Communications; 19 April 2016; DOI 10.1038/ncomms11398


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Dr Belinda Quinn Appointed As Chief Executive Officer of the Precision Medicine Catapult

Dr Belinda Quinn appointed as Chief Executive Officer of the Precision Medicine Catapult

Dr Belinda Quinn

source: https://pm.catapult.org.uk

19 April 2016, Cambridge, UK – The Precision Medicine Catapult, the UK’s innovation centre for precision medicine, announces today it has appointed Dr Belinda Quinn as its new Chief Executive Officer.

Belinda has been working as Chief Clinical Officer of the Precision Medicine Catapult over the past six months, formulating and landscaping Precision Medicine opportunities across the UK. She has played a pivotal leadership role in mobilising the seven Centres of Excellence integral to the success of the Precision Medicine Catapult.

Belinda trained as a doctor specialising in neurology before diversifying into IT and clinical leadership roles. She has held executive and transformational change roles across the public and private sectors, including big pharma, global management consulting, the NHS, and data and regulatory organisations, in the UK, Australia and the Middle East.

Richard Barker, Chairman of the Precision Medicine Catapult said: “Belinda brings deep experience and great energy to the task of building the Precision Medicine Catapult and delivering its strategy. With our ambition to create value for the NHS, private sector companies and most importantly for patients through the development of precision medicine, her ability to bring together all these interests will be invaluable.”

Ruth McKernan, Chief Executive of Innovate UK, the Precision Medicine Catapult’s primary funder, said: “The Precision Medicine Catapult will transform the UK’s capability for innovation in this important sector, helping to drive economic growth and deliver significant improvements in health. Belinda has already built a strong reputation with colleagues across the UK and I am confident that the Catapult will be a powerful force in making the UK a leader in precision medicine.”

Belinda Quinn said: “I am delighted to accept the role as CEO of the Precision Medicine Catapult and will continue to build on the momentum and progress we have made in the last year. I am excited by the opportunity to work with the many world-leading experts in research and clinical practice that we have in this country, coupled with industry partners and Innovate UK to help bring forward innovative, sustainable and more cost-effective solutions that will build the UK’s precision medicine industry, benefit patients and improve the effectiveness and efficiency of the healthcare system in the UK.”

Read the full press release

UK Steel Can Survive If It Transforms Itself, Say Researchers

UK steel can survive if it transforms itself, say researchers

source: www.cam.ac.uk

A new report from the University of Cambridge claims that British steel could be saved, if the industry is willing to transform itself.

We will never need more capacity for making steel from iron ore than we have today.

Julian Allwood

The report, by Professor Julian Allwood, argues that in order to survive, the UK steel industry needs to refocus itself on steel recycling and on producing products for end users. He argues that instead of viewing Tata Steel’s UK exit as a catastrophe, it can instead be viewed as an opportunity.

Allwood’s report, A bright future for UK steel: A strategy for innovation and leadership through up-cycling and integration, uses evidence gathered from over six years of applied research by 15 researchers, funded by the UK’s Engineering and Physical Sciences Research Council (EPSRC) and industrial partners spanning the global steel supply chain. It is published online today (15 April).

“Tata Steel is pulling out of the UK, for good reason, and there are few if any willing buyers,” said Allwood, from Cambridge’s Department of Engineering. “Despite the sale of the Scunthorpe plant announced earlier this week, the UK steel industry is in grave jeopardy, and it appears that UK taxpayers must either subsidise a purchase, or accept closure and job losses.

“However, we believe that there is a third option, which would allow a transformation of the UK’s steel industry.”

One option for the UK steel industry is to refocus itself toward recycling steel rather than producing it from scratch using iron ore. The global market for steel recycling is projected to grow at least three-fold in the next 30 years, but despite the fact that more than 90% of steel is recycled, the processes by which recycling happens are out of date. The quality of recycled steel is generally low, due to poor control of its composition.

Because of this, old steel is generally ‘down-cycled’ to the lowest value steel application – reinforcing bar. According to Allwood, the UK’s strengths in materials innovation could be applied to instead ‘up-cycle’ old steel to today’s high-tech compositions.

According to Allwood, today’s global steel industry has more capacity for making steel from iron ore than it will ever need again. On average, products made with steel last 35-40 years, and around 90% of all old steel is collected. It is likely that, despite the current downturn, global demand for steel will continue to grow, but all future growth can be met by recycling our existing stock of steel. “We will never need more capacity for making steel from iron ore than we have today,” said Allwood.

Apart from the issue of recycling, today’s UK steel industry focuses on products such as plates, bars and coils of strip, all of which have low profit margins. “The steel industry fails to capture the value and innovation potential from making final components,” said Allwood. “As a result, more than a quarter of all steel is cut off during fabrication and never enters a product, and most products use at least a third more steel than actually required. The makers of liquid steel could instead connect directly to final customer requirements.”

These two opportunities create the scope for a transformation of the steel industry in the UK, says the report. In response to Tata Steel’s decision, UK taxpayers will have to bear costs. If the existing operations are to be sold, taxpayers must subsidise the purchase without the guarantee of a long term national gain. If the plants are closed, the loss of tax income and payment of benefits will cost taxpayers £300m-£800m per year, depending on knock-on job losses.

Allwood’s strategy requires taxpayers to invest in a transformation, for example through the provision of a long term loan. This would allow the UK to innovate more than any other large player, with the potential of leadership in a global market that is certain to triple in size.

He singles out the example of the Danish government’s Wind Power Programme, initiated in 1976, which provided a range of subsidies and support for Denmark’s nascent wind industry, allowing it to establish a world-leading position in a growing market. Allwood believes a similar initiative by the UK government could mirror this success and transform the steel industry. “Rapid action now to initiate working groups on the materials technologies, business model innovations, financing and management of the proposed transformation could convert this vision to a plan for action before the decision for plant closure or subsidised sale is finalised,” he said. “This is worth taking a real shot on.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Cambridge to Research Future Computing Tech That Could “Ignite A Technology Field”

Cambridge to research future computing tech that could “ignite a technology field”

Source: www.cam.ac.uk

A Cambridge-led project aiming to develop a new architecture for future computing based on superconducting spintronics – technology designed to increase the energy-efficiency of high-performance computers and data storage – has been announced.

Superconducting spintronics offer extraordinary potential because they combine the properties of two traditionally incompatible fields to enable ultra-low power digital electronics

Jason Robinson

A project which aims to establish the UK as an international leader in the development of “superconducting spintronics” – technology that could significantly increase the energy-efficiency of data centres and high-performance computing – has been announced.

Led by researchers at the University of Cambridge, the “Superspin” project aims to develop prototype devices that will pave the way for a new generation of ultra-low power supercomputers, capable of processing vast amounts of data, but at a fraction of the huge energy consumption of comparable facilities at the moment.

As more economic and cultural activity moves online, the data centres which house the servers needed to handle internet traffic are consuming increasing amounts of energy. An estimated three per cent of power generated in Europe is, for example, already used by data centres, which act as repositories for billions of gigabytes of information.

Superconducting spintronics is a new field of scientific investigation that has only emerged in the last few years. Researchers now believe that it could offer a pathway to solving the energy demands posed by high performance computing.

As the name suggests, it combines superconducting materials – which can carry a current without losing energy as heat – with spintronic devices. These are devices which manipulate a feature of electrons known as their “spin”, and are capable of processing large amounts of information very quickly.

Given the energy-efficiency of superconductors, combining the two sounds like a natural marriage, but until recently it was also thought to be completely impossible. Most spintronic devices have magnetic elements, and this magnetism prevents superconductivity, and hence reduces any energy-efficiency benefits.

However, stemming from the discovery of spin-polarised supercurrents at the University of Cambridge in 2010, recent research at Cambridge and other institutions has shown that it is possible to power spintronic devices with a superconductor. The aim of the new £2.7 million project, which is being funded by the Engineering and Physical Sciences Research Council, is to use this as the basis for a new style of computing architecture.

Although work is already underway in several other countries to exploit superconducting spintronics, the Superspin project is unprecedented in terms of its magnitude and scope.

Researchers will explore how the technology could be applied in future computing as a whole, examining fundamental problems such as spin generation and flow, and data storage, while also developing sample devices. According to the project proposal, the work has the potential to establish Britain as a leading centre for this type of research and “ignite a technology field.”

The project will be led by Professor Mark Blamire, Head of the Department of Materials Sciences at the University of Cambridge, and Dr Jason Robinson, University Lecturer in Materials Sciences, Fellow of St John’s College, University of Cambridge, and University Research Fellow of the Royal Society. They will work with partners in the University’s Cavendish Laboratory (Dr Andrew Ferguson) and at Royal Holloway, London (Professor Matthias Eschrig).

Blamire and Robinson’s core vision of the programme is “to generate a paradigm shift in spin electronics, using recent discoveries about how superconductors can be combined with magnetism.” The programme will provide a pathway to making dramatic improvements in computing energy efficiency.

Robinson added: “Many research groups have recognised that superconducting spintronics offer extraordinary potential because they combine the properties of two traditionally incompatible fields to enable ultra-low power digital electronics.”

“However, at the moment, research programmes around the world are individually studying fascinating basic phenomena, rather than looking at developing an overall understanding of what could actually be delivered if all of this was joined up. Our project will aim to establish a closer collaboration between the people doing the basic science, while also developing demonstrator devices that can turn superconducting spintronics into a reality.”

The initial stages of the five-year project will be exploratory, examining different ways in which spin can be transported and magnetism controlled in a superconducting state. By 2021, however, the team hope that they will have manufactured sample logic and memory devices – the basic components that would be needed to develop a new generation of low-energy computing technologies.

The project will also report to an advisory board, comprising representatives from several leading technology firms, to ensure an ongoing exchange between the researchers and industry partners capable of taking its results further.

“The programme provides us with an opportunity to take international leadership of this as a technology, as well as in the basic science of studying and improving the interaction between superconductivity and magnetism,” Blamire said. “Once you have grasped the physics behind the operation of a sample device, scaling up from the sort of models that we are aiming to develop is not, in principle, too taxing.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Graduate Earnings: What You Study and Where Matters – But So Does Parents’ Income

Graduate earnings: what you study and where matters – but so does parents’ income

source: www.cam.ac.uk

First ‘big data’ research approach to graduate earnings reveals significant variations depending on student background, degree subject and university attended.

The research illustrates strongly that, for most graduates, higher education leads to much better earnings than those earned by non-graduates, although students need to realise that their subject choice is important in determining how much of an earnings advantage they will have

Anna Vignoles

Latest research has shown that graduates from richer family backgrounds earn significantly more after graduation than their poorer counterparts, even after completing the same degrees from the same universities.

The finding is one of many from a new study, published today, which looks at the link between earnings and students’ background, degree subject and university.

The research also found that those studying medicine and economics earn far more than those studying other degree subjects, and that there is considerable variation in graduates’ earnings depending on the university attended.

The study was carried out by the Institute for Fiscal Studies and the universities of Cambridge and Harvard, including Professor Anna Vignoles from Cambridge’s Faculty of Education. It is the first time a ‘big data’ approach has been used to look at how graduate earnings vary by institution of study, degree subject and parental income.

The researchers say that many other factors beyond graduate earnings, such as intrinsic interest, will and should drive student choice. However, they write that the research shows the potential value of providing some useful information that might inform students’ choice of degree – particularly to assist those from more disadvantaged backgrounds who might find it harder to navigate the higher education system.

“It would seem important to ensure there is adequate advice and guidance given that graduates’ future earnings are likely to vary depending on the institution and subject they choose, with implications for social mobility,” write the researchers in the study’s executive summary.

The research used anonymised tax data and student loan records for 260,000 students up to ten years after graduation. The dataset includes cohorts of graduates who started university in the period 1998-2011 and whose earnings (or lack of earnings) are then observed over a number of tax years. The paper focuses on the tax year 2012/13.

The study found that those from richer backgrounds (defined as being approximately from the top 20% of households of those applying to higher education in terms of family income) did better in the labour market than the other 80% of students.

The average gap in earnings between students from higher and lower income backgrounds is £8,000 a year for men and £5,300 a year for women, ten years after graduation.

Even after taking account of subject studied and the characteristics of the institution of study, the average student from a higher income background earned about 10% more than other students.

The gap is bigger at the top of the distribution – the 10% highest earning male graduates from richer backgrounds earned about 20% more than the 10% highest earners from relatively poorer backgrounds. The equivalent premium for the 10% highest earning female graduates from richer backgrounds was 14%.
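To make the adjusted comparison concrete, here is a minimal sketch of how such a gap might be estimated from graduate-level data. The file and column names (graduate_earnings.csv, higher_income_bg as a 0/1 flag, and so on) are hypothetical, and the simple fixed-effects regression is a generic way of conditioning on subject and institution, not necessarily the specification used by the IFS team.

```python
# Hypothetical sketch, not the study's actual pipeline: estimate the
# parental-income earnings gap before and after conditioning on subject
# and institution. All file and column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("graduate_earnings.csv")            # one row per graduate, ten years on
df = df[df["earnings"] > 0].copy()                   # keep graduates with positive earnings
df["log_earnings"] = np.log(df["earnings"])

# Raw gap: median earnings by gender and background (higher_income_bg is a 0/1 flag)
print(df.groupby(["gender", "higher_income_bg"])["earnings"].median().unstack())

# Adjusted gap: the coefficient on the background flag, with subject and
# institution fixed effects, approximates the ~10% premium reported above.
model = smf.ols("log_earnings ~ higher_income_bg + C(subject) + C(institution)",
                data=df).fit()
print(model.params["higher_income_bg"])
```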

The study also showed that graduates are much more likely to be in work, and earn much more than non-graduates. Non-graduates are twice as likely to have no earnings as are graduates ten years on (30% against 15% for the cohort who enrolled in higher education in 1999).

Partly as a result of this, half of non-graduate women had earnings below £8,000 a year at around age 30, say the researchers. Only a quarter of female graduates were earning less than this. Half were earning more than £21,000 a year.

Among those with significant earnings (which the researchers define as above £8,000 a year), median earnings for male graduates ten years after graduation were £30,000. For non-graduates of the same age median earnings were £21,000. The equivalent figures for women with significant earnings were £27,000 and £18,000.

“The research illustrates strongly that, for most graduates, higher education leads to much better earnings than those earned by non-graduates, although students need to realise that their subject choice is important in determining how much of an earnings advantage they will have,” said Professor Vignoles.

The researchers also found substantial differences in earnings according to which university was attended, as well as which subject was studied. They say however that this is in large part driven by differences in entry requirements.

For instance, more than 10% of male graduates from LSE, Oxford and Cambridge were earning in excess of £100,000 a year ten years after graduation, with LSE graduates earning the most. LSE was the only institution with more than 10% of its female graduates earning in excess of £100,000 a year ten years on.

Even without focusing on the very top, the researchers say they found a large number of institutions (36 for men and 10 for women) had 10% of their graduates earning more than £60,000 a year ten years on. At the other end of the spectrum, there were some institutions (23 for men and 9 for women) where the median graduate earnings were less than those of the median non-graduate ten years on.
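The institution-level counts quoted above could be reproduced from the same kind of dataset with a straightforward group-by; as before, the field names are illustrative rather than those of the underlying administrative records.

```python
# Hypothetical sketch of the per-institution figures for male graduates.
import pandas as pd

grads = pd.read_csv("graduate_earnings.csv")
men = grads[grads["gender"] == "male"]
non_grad_median = 21_000                     # male non-graduate median reported in the study

by_inst = men.groupby("institution")["earnings"]
top_decile = by_inst.quantile(0.9)           # 10% of each institution's graduates earn above this
medians = by_inst.median()

print("institutions whose top decile exceeds £60,000:", (top_decile > 60_000).sum())
print("institutions whose median graduate earns less than the median non-graduate:",
      (medians < non_grad_median).sum())
```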

However, the researchers say that it is important to put this in context. “Given regional differences in average wages, some very locally focused institutions may struggle to produce graduates whose wages outpace England-wide earnings, which include those living in London, where full-time earnings for males are around 50% higher than in some other regions, such as Northern Ireland,” they write.

In terms of earnings according to subject, medical students were easily the highest earners at the median ten years out, followed by those who studied economics. For men, median earnings for medical graduates were about £50,000 after ten years, and for economics graduates £40,000.

Those studying the creative arts had the lowest earnings, and earned no more on average than non-graduates. However, the researchers say that some of these earnings differences are, of course, attributable to differences in student intake – since students with different levels of prior achievement at A-level take different subject options.

“When we account for different student intakes across subjects, only economics and medicine remain outliers with much higher earnings at the median as compared to their peers in other subjects,” write the researchers.

After allowing for differences in the characteristics of those who take different subjects, male medical graduates earn around £13,000 more at the median than similar engineering and technology graduates; for women the gap is approximately £16,000. Both male and female medical graduates earn around £14,000 more at the median than similar law graduates.

“Earnings vary substantially with university, subject, gender and cohort,” said study co-author Neil Shephard of Harvard University. “This impacts on which parts of the HE sector the UK Government funds through the subsidy inherent within income-contingent student loans. The next step in the research is to quantify that variation in funding, building on today’s paper.”

Reference:
Institute for Fiscal Studies working paper: ‘How English domiciled graduate earnings vary with gender, institution attended, subject and socio-economic background’, Jack Britton, Lorraine Dearden, Neil Shephard and Anna Vignoles.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/graduate-earnings-what-you-study-and-where-matters-but-so-does-parents-income

Predicting Gentrification Through Social Networking Data

Predicting gentrification through social networking data

source: www.cam.ac.uk

Data from location-based social networks may be able to predict when a neighbourhood will go through the process of gentrification, by identifying areas with high social diversity and high deprivation.

We understand that people who diversify their contacts socially and geographically have high social capital, but what about places?

Desislava Hristova

The first network to look at the interconnected nature of people and places in large cities is not only able to quantify the social diversity of a particular place, but can also be used to predict when a neighbourhood will go through the process of gentrification, which is associated with the displacement of residents of a deprived area by an influx of a more affluent population.

The researchers behind the study, led by the University of Cambridge, will present their results today (13 April) at the 25th International World Wide Web Conference in Montréal.

The Cambridge researchers, working with colleagues from the University of Birmingham, Queen Mary University of London, and University College London, used data from approximately 37,000 users and 42,000 venues in London to build a network of Foursquare places and the parallel Twitter social network of visitors, adding up to more than half a million check-ins over a ten-month period. From this data, they were able to quantify the ‘social diversity’ of various neighbourhoods and venues by distinguishing between places that bring together strangers versus those that tend to bring together friends, as well as places that attract diverse individuals as opposed to those which attract regulars.
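The interconnected structure described above can be sketched as two graphs: a social layer of Twitter friendships and a place layer of Foursquare check-ins. The input files and column names below are hypothetical, and the sketch illustrates the data structure rather than the authors’ actual pipeline.

```python
# Minimal sketch of an interconnected geo-social network; all inputs are hypothetical.
import pandas as pd
import networkx as nx

friends = pd.read_csv("twitter_friendships.csv")     # columns: user_a, user_b
checkins = pd.read_csv("foursquare_checkins.csv")    # columns: user, venue

# Social layer: friendships between users
social = nx.Graph()
social.add_edges_from(friends[["user_a", "user_b"]].itertuples(index=False, name=None))

# Place layer: bipartite user-venue graph built from check-ins
places = nx.Graph()
for user, venue in checkins[["user", "venue"]].itertuples(index=False, name=None):
    places.add_edge(("user", user), ("venue", venue))

def visitors(venue):
    """All users who checked in at a venue, read off the bipartite place layer."""
    return {node[1] for node in places.neighbors(("venue", venue)) if node[0] == "user"}
```

Checking whether two visitors of the same venue are also linked in the social layer is what separates places that bring together friends from those that bring together strangers.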

When these social diversity metrics were correlated with wellbeing indicators for various London neighbourhoods, the researchers discovered that signs of gentrification, such as rising housing prices and lower crime rates, were the strongest in deprived areas with high social diversity. These areas had an influx of more affluent and diverse visitors, represented by social media users, and pointed to an overall improvement of their rank, according to the UK Index of Multiple Deprivation.

The UK Index of Multiple Deprivation (IMD) is a statistical exercise conducted by the Department for Communities and Local Government, which measures the relative prosperity of neighbourhoods across England. The researchers compared IMD data for 2010, the year their social and place network data was gathered, with the IMD data for 2015, the most recent report.
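The prediction step then amounts to asking whether the social diversity of a deprived neighbourhood in 2010 is associated with how much its IMD position improved by 2015. A hedged sketch of that comparison, with made-up file and column names, might look as follows (assuming the convention that rank 1 is the most deprived area, so a rising rank number means improvement):

```python
# Hypothetical sketch relating 2010 social diversity to change in IMD rank by 2015;
# file and column names are illustrative, not the study's actual data.
import pandas as pd
from scipy.stats import spearmanr

div = pd.read_csv("neighbourhood_diversity.csv")   # columns: area, diversity
imd = pd.read_csv("imd_ranks.csv")                 # columns: area, rank_2010, rank_2015

data = div.merge(imd, on="area")
# Assuming rank 1 = most deprived, a higher rank in 2015 than in 2010 means improvement
data["improvement"] = data["rank_2015"] - data["rank_2010"]

# Restrict to the most deprived fifth of areas in 2010 and test the association
deprived = data[data["rank_2010"] <= data["rank_2010"].quantile(0.2)]
rho, p = spearmanr(deprived["diversity"], deprived["improvement"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```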

“We’re looking at the social roles and properties of places,” said Desislava Hristova from the University’s Computer Laboratory, and the study’s lead author. “We found that the most socially cohesive and homogenous areas tend to be either very wealthy or very poor, but neighbourhoods with both high social diversity and high deprivation are the ones which are currently undergoing processes of gentrification.”

This aligns with previous research, which has found that tightly-knit communities are more resistant to change, and that resources tend to remain within the community. This suggests that affluent communities remain affluent and poor communities remain poor because they are relatively isolated.

Hristova and her co-authors found that of the 32 London boroughs, the borough of Hackney had the highest social diversity and, in 2010, the second-highest deprivation. By 2015, it had also seen the most improvement on the IMD index, and is now an area undergoing intense gentrification, with house prices rising far above the London average, a fast-decreasing crime rate and a highly diverse population.

In addition to Hackney, the boroughs of Tower Hamlets, Greenwich, Hammersmith and Lambeth also combined high social diversity with high deprivation in 2010, and are now undergoing the process of gentrification, with all of the positive and negative effects that come with it.

The ability to predict the gentrification of neighbourhoods could help local governments and policy-makers improve urban development plans and alleviate the negative effects of gentrification while benefitting from economic growth.

In order to measure the social diversity of a given place or neighbourhood, the researchers defined four distinct measures: brokerage, serendipity, entropy and homogeneity. Brokerage is the ability of a place to connect people who are otherwise disconnected; serendipity is the extent to which a place can induce chance encounters between its visitors; entropy is the extent to which a place is diverse with respect to visits; and homogeneity is the extent to which the visitors to a place are homogenous in their characteristics.
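Two of these measures are easy to sketch in simplified form: entropy over who visits a place, and brokerage as the share of visitor pairs with no existing social tie. These are toy versions of the ideas named above, not the definitions used in the paper.

```python
# Simplified, illustrative versions of two of the four measures;
# the paper's actual formulations may differ.
import math
from itertools import combinations

def place_entropy(visit_counts):
    """Entropy of visits to a venue. visit_counts maps user -> number of check-ins."""
    total = sum(visit_counts.values())
    probs = (count / total for count in visit_counts.values())
    return -sum(p * math.log(p) for p in probs)

def place_brokerage(visitors, social):
    """Share of visitor pairs with no existing tie in the social graph, i.e.
    pairs of strangers the place could newly connect."""
    pairs = list(combinations(visitors, 2))
    if not pairs:
        return 0.0
    return sum(not social.has_edge(u, v) for u, v in pairs) / len(pairs)
```

A venue dominated by a few regulars scores low on entropy; a venue whose visitors rarely know one another scores high on brokerage.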

Within categories of places, the researchers found that some places were more likely places for friends to meet, and some were for more fleeting encounters. For example, in the food category, strangers were more likely to meet at a dumpling restaurant while friends were more likely to meet at a fried chicken restaurant. Similarly, friends were more likely to meet at a B&B, football match or strip club, while strangers were more likely to meet at a motel, art museum or gay bar.

“We understand that people who diversify their contacts socially and geographically have high social capital, but what about places?” said Hristova. “We all have a general notion of the social diversity of places and the people that visit them, but we’ve attempted to formalise this – it could even be used as a specialised local search engine.”

For instance, while there are a number of ways a tourist can find a highly-recommended restaurant in a new city, the social role that a place plays in a city is normally only known by locals through experience. “Whether a place is touristy or quiet, artsy or mainstream could be integrated into mobile system design to help newcomers or tourists feel like locals,” said Hristova.

Reference:
Desislava Hristova et al. ‘Measuring Urban Social Diversity Using Interconnected Geo-Social Networks.’ Paper presented to the International World Wide Web Conference, Montréal, 11-15 April 2016. http://www2016.ca/program-at-a-glance.html


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/predicting-gentrification-through-social-networking-data

Neanderthals May Have Been Infected By Diseases Carried Out of Africa By Humans, Say Researchers

Neanderthals may have been infected by diseases carried out of Africa by humans, say researchers

Source: www.cam.ac.uk

Review of the latest genetic evidence suggests infectious diseases are tens of thousands of years older than previously thought, and that they could jump between species of ‘hominin’. Researchers say that humans migrating out of Africa would have been ‘reservoirs of tropical disease’ – disease that may have sped up Neanderthal extinction.

Humans migrating out of Africa would have been a significant reservoir of tropical diseases

Charlotte Houldcroft

A new study suggests that Neanderthals across Europe may well have been infected with diseases carried out of Africa by waves of anatomically modern humans, or Homo sapiens. As both were species of hominin, it would have been easier for pathogens to jump populations, say researchers. This might have contributed to the demise of Neanderthals.

Researchers from the universities of Cambridge and Oxford Brookes have reviewed the latest evidence gleaned from pathogen genomes and DNA from ancient bones, and concluded that some infectious diseases are likely to be many thousands of years older than previously believed.

There is evidence that our ancestors interbred with Neanderthals and exchanged genes associated with disease. There is also evidence that viruses moved into humans from other hominins while still in Africa. So, the researchers argue, it makes sense to assume that humans could, in turn, pass disease to Neanderthals, and that – if we were mating with them – we probably did.

Dr Charlotte Houldcroft, from Cambridge’s Division of Biological Anthropology, says that many of the infections likely to have passed from humans to Neanderthals – such as tapeworm, tuberculosis, stomach ulcers and types of herpes – are chronic diseases that would have weakened the hunter-gathering Neanderthals, making them less fit and less able to find food, which could have catalysed the extinction of the species.

“Humans migrating out of Africa would have been a significant reservoir of tropical diseases,” says Houldcroft. “For the Neanderthal population of Eurasia, adapted to that geographical infectious disease environment, exposure to new pathogens carried out of Africa may have been catastrophic.”

“However, it is unlikely to have been similar to Columbus bringing disease into America and decimating native populations. It’s more likely that small bands of Neanderthals each had their own infection disasters, weakening the group and tipping the balance against survival,” says Houldcroft.

New techniques developed in the last few years mean researchers can now peer into the distant past of modern disease by unravelling its genetic code, as well as extracting DNA from fossils of some of our earliest ancestors to detect traces of disease.
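As background on how genetic code yields dates at all, a generic molecular-clock back-of-the-envelope calculation (not the method used in this paper, and with purely illustrative numbers) divides the genetic distance between two lineages by twice the substitution rate to estimate how long ago they diverged.

```python
# Generic molecular-clock estimate, purely illustrative: t ≈ d / (2 * mu),
# where d is the genetic distance per site between two lineages and
# mu is the substitution rate per site per year.
d = 1.4e-3    # illustrative pairwise distance (substitutions per site)
mu = 7.0e-9   # illustrative substitution rate (per site per year)

t_years = d / (2 * mu)
print(f"estimated divergence time: {t_years:,.0f} years ago")   # ~100,000 years
```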

In a paper published today in the American Journal of Physical Anthropology, Houldcroft, who also studies modern infections at Great Ormond Street Hospital, and Dr Simon Underdown, a researcher in human evolution from Oxford Brookes University, write that genetic data shows many infectious diseases have been “co-evolving with humans and our ancestors for tens of thousands to millions of years”.

The longstanding view of infectious disease is that it exploded with the dawning of agriculture some 8,000 years ago, as increasingly dense and sedentary human populations coexisted with livestock, creating a perfect storm for disease to spread. The researchers say the latest evidence suggests disease had a much longer “burn in period” that pre-dates agriculture.

In fact, they say that many diseases traditionally thought to be ‘zoonoses’, transferred from herd animals into humans, such as tuberculosis, were actually transmitted into the livestock by humans in the first place.

“We are beginning to see evidence that environmental bacteria were the likely ancestors of many pathogens that caused disease during the advent of agriculture, and that they initially passed from humans into their animals,” says Houldcroft.

“Hunter-gatherers lived in small foraging groups. Neanderthals lived in groups of between 15 and 30 members, for example. So disease would have broken out sporadically, but have been unable to spread very far. Once agriculture came along, these diseases had the perfect conditions to explode, but they were already around.”

There is as yet no hard evidence of infectious disease transmission between humans and Neanderthals; however, considering the overlap in time and geography, and not least the evidence of interbreeding, Houldcroft and Underdown say that it must have occurred.

Neanderthals would have adapted to the diseases of their European environment. There is evidence that humans benefited from receiving genetic components through interbreeding that protected them from some of these: types of bacterial sepsis – blood poisoning occurring from infected wounds – and encephalitis caught from ticks that inhabit Siberian forests.

In turn, the humans, unlike Neanderthals, would have been adapted to African diseases, which they would have brought with them during waves of expansion into Europe and Asia.

The researchers describe Helicobacter pylori, a bacterium that causes stomach ulcers, as a prime candidate for a disease that humans may have passed to Neanderthals. It is estimated to have first infected humans in Africa 88,000 to 116,000 years ago, and arrived in Europe after 52,000 years ago. The most recent evidence suggests Neanderthals died out around 40,000 years ago.

Another candidate is herpes simplex 2, the virus which causes genital herpes. There is evidence preserved in the genome of this disease that suggests it was transmitted to humans in Africa 1.6 million years ago from another, currently unknown hominin species that in turn acquired it from chimpanzees.

“The ‘intermediate’ hominin that bridged the virus between chimps and humans shows that diseases could leap between hominin species. The herpesvirus is transmitted sexually and through saliva. As we now know that humans bred with Neanderthals, and we all carry 2-5% of Neanderthal DNA as a result, it makes sense to assume that, along with bodily fluids, humans and Neanderthals transferred diseases,” says Houldcroft.

Recent theories for the cause of Neanderthal extinction range from climate change to an early human alliance with wolves resulting in domination of the food chain. “It is probable that a combination of factors caused the demise of Neanderthals,” says Houldcroft, “and the evidence is building that spread of disease was an important one.”

Inset image: Dr Charlotte Houldcroft


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/neanderthals-may-have-been-infected-by-diseases-carried-out-of-africa-by-humans-say-researchers

Timber Skyscrapers Could Transform London’s Skyline

Timber skyscrapers could transform London’s skyline

source: www.cam.ac.uk

London’s first timber skyscraper could be a step closer to reality this week after researchers presented Mayor of London Boris Johnson with conceptual plans for an 80-storey, 300m high wooden building integrated within the Barbican.

If London is going to survive it needs to increasingly densify. One way is taller buildings. We believe people have a greater affinity for taller buildings in natural materials rather than steel and concrete towers.

Michael Ramage

Researchers from Cambridge University’s Department of Architecture are working with PLP Architecture and engineers Smith and Wallwork on the future development of tall timber buildings in central London.

The use of timber as a structural material in tall buildings is an area of emerging interest because of its variety of potential benefits; the most obvious is that it is a renewable resource, unlike the concrete and steel used in prevailing construction methods. The research is also investigating other potential benefits, such as reduced costs and improved construction timescales, increased fire resistance, and a significant reduction in the overall weight of buildings.

The conceptual proposals currently being developed would create over 1,000 new residential units in a 1 million sq ft mixed-use tower and mid-rise terraces in central London, integrated within the Barbican.

Dr Michael Ramage, Director of Cambridge’s Centre for Natural Material Innovation, said: “The Barbican was designed in the middle of the last century to bring residential living into the city of London – and it was successful. We’ve put our proposals on the Barbican as a way to imagine what the future of construction could look like in the 21st century.

“If London is going to survive it needs to increasingly densify. One way is taller buildings. We believe people have a greater affinity for taller buildings in natural materials rather than steel and concrete towers. The fundamental premise is that timber and other natural materials are vastly underused and we don’t give them nearly enough credit. Nearly every historic building, from King’s College Chapel to Westminster Hall, has made extensive use of timber.”

Kevin Flanagan, Partner at PLP Architecture, said: “We now live predominantly in cities and so the proposals have been designed to improve our wellbeing in an urban context. Timber buildings have the potential architecturally to create a more pleasing, relaxed, sociable and creative urban experience.

“Our firm is currently designing many of London’s tall buildings, and the use of timber could transform the way we build in this city. We are excited to be working with the University and with Smith and Wallwork on this groundbreaking design- and engineering-based research.”

The tall timber buildings research also looks towards creating new design potential with timber, rather than simply copying the forms of steel and concrete construction. The transition to timber construction may have a wider positive impact on urban environments and built form, offering opportunities to rethink not only the aesthetics of buildings but also the structural methodologies informing their design.

Just as major innovations in steel, glass and concrete revolutionised buildings in the 19th and 20th centuries, creating Joseph Paxton’s Crystal Palace and the Parisian arcades described by Walter Benjamin, innovations in timber construction could lead to entirely new experiences of the city in the 21st century.

The type of wood these new buildings would use is regarded as a ‘crop’. The amount of crop forest in the world is currently expanding. Canada alone could produce more than 15 billion m³ of crop forest in the next 70 years, enough to house around a billion people.
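As a quick sanity check on the figures quoted above, 15 billion cubic metres spread across roughly a billion people implies on the order of 15 m³ of structural timber per person housed; this is only the average implied by the quote, not a design figure from the research.

```python
# Arithmetic implied by the quoted figures (an average, not a design figure)
crop_volume_m3 = 15e9    # projected Canadian crop forest over 70 years
people_housed = 1e9      # people the article says this could house
print(crop_volume_m3 / people_housed, "m³ of timber per person")   # 15.0
```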

At present, the world’s tallest timber building is a 14-storey apartment block in Bergen, Norway. The proposals presented to Johnson included concepts for a timber tower nearly 300m high, which would make it the second tallest building in London after The Shard.

Dr Ramage added: “We’ve designed the architecture and engineering and demonstrated it will stand, but this is at a scale no one has attempted to build before. We are developing a new understanding of primary challenges in structure and construction. There is a lot of work ahead, but we are confident of meeting all the challenges before us.”

Perhaps the most obvious concern for potential residents of homes built primarily from timber is fire risk. However, the team involved in the project said the proposed building would eventually meet or exceed every existing fire regulation currently in place for steel and concrete buildings.

Recent research has also shown that timber buildings can have positive effects on the health of their users and occupants. Some recent studies have also shown that children taught in schools with timber structures may perform better than those taught in buildings made of concrete.

The design for the Barbican is the first in a series of timber skyscrapers developed by Cambridge University in association with globally renowned architects and structural engineers, with funding from the UK’s Engineering and Physical Sciences Research Council.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/timber-skyscrapers-could-transform-londons-skyline