All posts by Admin

Linguistics Study Reveals Our Growing Obsession With Education

Source: www.cam.ac.uk

As children around the country go back to school, a new comparative study of spoken English reveals that we talk about education nearly twice as much as we did twenty years ago.

We talk about education twice as much as we used to.

Claire Dembry

The study, which compares spoken English today with recordings from the 1990s, allows researchers at Cambridge University Press and Lancaster University to examine how the language we use indicates our changing attitudes to education.

They found that the topic of education is far more salient in conversations now, with the word cropping up 42 times per million words, compared with only 26 times per million in the 1990s dataset.

As well as talking about education more, there has also been a noticeable shift in the terms we use to describe it. Twenty years ago, the public used fact-based terms to talk about education, most often describing it as either full-time, or part-time.

Today, however, we’re more likely to use evaluative language about the standards of education and say that it’s good, bad or great. This could be due to the rise in the formal assessments of schools, for example, with the establishment of the Office for Standards in Education, Children’s Services and Skills (OFSTED) in 1992. Indeed, Ofsted itself has made its debut as a verb in recent times, with the arrival of discussions on what it means for a school to be Ofsteded.

Dr Claire Dembry, Senior Language Research Manager at Cambridge University Press said: “It’s fascinating to find out that, not only do we talk about education twice as much as we used to, but also that we are more concerned about the quality. It’s great that we have these data sets to be able to find out these insights; without them we wouldn’t be able to research how the language we use is changing, nor the topics we talk about most.”

The research findings also indicate that we’re now expecting to get more out of our education than we used to. We’ve started talking about qualifications twice as much as we did in the 1990s, GCSEs five times as much and A levels 1.4 times as much.

Meanwhile, use of the word university has tripled. This is perhaps not surprising, as the proportion of young people going to university doubled between 1995 and 2008, going from 20 per cent to almost 40 per cent.

When the original data was collected in the 1990s, university fees had yet to be introduced, and so it is unsurprising that the terms university fees and tuition fees did not appear in the findings. However, the recent data show each of these terms occurring roughly once per million words, as we’ve begun to talk about university in more commercialised terms.

However, while teachers may be happy to hear that education is of growing concern to the British public, it won’t come as good news to them that the adjective underpaid is most closely associated with their job.

These are only the initial findings from the first two million words of the project, named the ‘Spoken British National Corpus 2014,’ which is still seeking recorded submissions.

Professor Tony McEnery, from the ESRC Centre for Corpus Approaches to Social Sciences (CASS) at Lancaster University, said: “We need to gather hundreds, if not thousands, of conversations to create a full spoken corpus so we can continue to analyse the way language has changed over the last 20 years.

“This is an ambitious project and we are calling for people to send us MP3 files of their everyday, informal conversations in exchange for a small payment to help me and my team to delve deeper into spoken language and to shed more light on the way our spoken language changes over time.”

People who wish to submit recordings to the research team should visit: http://languageresearch.cambridge.org/index.php/spoken-british-national-…


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Cities At Risk

source: www.cam.ac.uk

New model says world cities face expected losses of $4.6 trillion in economic output over the next decade as a result of natural or man-made catastrophes.

We believe that it is possible to estimate the cost to a business, city, region or the global economy, from all catastrophic shocks.

Daniel Ralph

Using a new metric, ‘GDP @Risk’, the ‘Catastronomics’ techniques developed by researchers at the University of Cambridge reveal that major threats to the world’s most important cities could reduce economic output by some $4.56 trillion, equivalent to 1.2 per cent of the total GDP forecast to be generated by these cities in the next decade.

The techniques, developed by researchers at the Cambridge Centre for Risk Studies, provide the data and risk analysis for the Lloyd’s City Risk Index 2015-2025, which was launched last week.

“GDP @Risk makes it possible to combine and compare a very wide range of threats, including those that are disruptive and costly, such as market collapse, in addition to destructive and deadly natural catastrophes, and measure their impact on economic output,” said Professor Daniel Ralph, Academic Director of Cambridge Centre for Risk Studies, which is based at the Cambridge Judge Business School. “This 1.2 per cent is the estimated ‘catastrophe burden’ on the world’s economy – without this, the growth of global GDP, currently running at around three per cent a year, would be significantly higher.”

Lloyd’s City Risk Index encompasses 301 of the world’s leading cities, selected by economic, business and political importance. These cities are responsible for over half of global GDP today, and an estimated two-thirds of the world’s economic output by 2025.

The analysis considers 18 different threats from manmade events, such as market crashes and technological catastrophes, to natural disasters to these urban centres. It examines the likelihood and severity of disruption of the output from the city as an economic engine, rather than metrics of physical destruction or repair cost loss – which is the traditional focus of conventional catastrophe models.

The Centre’s analysis reflects the typologies of different economic activities in each city. The GDP growth history, demographics and other data are used to derive GDP projections out to 2025 for each city. GDP @Risk is a long run average: the economic loss caused by ‘all’ catastrophes that might occur in an average decade, baselined against economic performance between 2015 and 2025.
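
The article does not publish the Centre’s model, but the description above amounts to an expected-loss calculation: weight each threat’s loss by its likelihood, sum across threats, and accumulate over the decade’s GDP projection. A minimal sketch in Python, in which every threat probability and loss fraction is invented for illustration:

```python
# Illustrative sketch of a GDP@Risk-style expected-loss calculation.
# Every threat probability and loss fraction below is invented for
# demonstration; these are NOT the Cambridge Centre for Risk Studies' figures.

# Hypothetical threats: (annual probability of the event,
#                        fraction of one year's GDP lost if it occurs)
threats = {
    "market_crash": (0.05, 0.10),
    "earthquake":   (0.01, 0.30),
    "pandemic":     (0.02, 0.15),
    "cyber_attack": (0.10, 0.02),
}

def gdp_at_risk(annual_gdp_projection, threats):
    """Expected GDP loss across all threats over the projection horizon.

    annual_gdp_projection: projected annual GDP figures for one city,
    e.g. ten values covering 2015-2025.
    """
    return sum(probability * loss_fraction * gdp
               for gdp in annual_gdp_projection
               for probability, loss_fraction in threats.values())

# Example: a city producing a flat $500bn a year for a decade.
projection = [500e9] * 10
loss = gdp_at_risk(projection, threats)
print(f"GDP@Risk: ${loss / 1e9:.0f}bn "
      f"({100 * loss / sum(projection):.2f}% of the decade's GDP)")
# -> GDP@Risk: $65bn (1.30% of the decade's GDP)
```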

Professor Ralph added: “A framework to quantify the average damage caused by a Pandora’s box of all ills – a ‘universal’ set of catastrophes – can be used to calibrate the value of investing in resilience. This is what the GDP @Risk metric for 300 World Cities attempts to provide. We believe that it is possible to estimate the cost to a business, city, region or the global economy, from all catastrophic shocks. Such holistic approaches are an antidote to risk management that reacts to threats taken from yesterday’s news headlines. Our simple methodology suggests that between 10 per cent and 25 per cent of GDP @Risk could be recovered, in principle, by improving resilience of all cities.”

Adapted from an article originally published on the Cambridge Judge Business School website.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

International Microfluidics Consortium visits Cambridge Sept 22

Now in its seventh year, and following visits to China, Paris, Carolina, Copenhagen and Boston in the last year, the MF7 Microfluidics Consortium will be in Cambridge on Sept 21 and 22.

The mission of the Consortium is to grow the market for Microfluidics enabled products and services.

On Sept 21 the consortium will be working on opportunities in stratified medicine and also hearing pitches from promising start-ups seeking investment, support and advice. There will also be a site visit to Dolomite Microfluidics in Royston.

On Sept 22 the consortium is organising an Open Day at the Trinity Centre on the Cambridge Science Park. This is an opportunity for a wider selection of microfluidics stakeholders to get to know the consortium and its members, to network, and to see demonstrations and hear presentations about recent work.

For more information and to register, please follow this link http://www.cfbi.com/mf54landingpage.htm

Use of TV, internet and computer games associated with poorer GCSE grades

source: www.cam.ac.uk

Each extra hour per day spent watching TV, using the internet or playing computer games during Year 10 is associated with poorer grades at GCSE at age 16, according to research from the University of Cambridge.

Parents who are concerned about their child’s GCSE grade might consider limiting his or her screen time

Kirsten Corder

In a study published today in the open access International Journal of Behavioral Nutrition and Physical Activity, researchers also found that pupils doing an extra hour of daily homework and reading performed significantly better than their peers. However, the level of physical activity had no effect on academic performance.

The link between physical activity and health is well established, but its link with academic achievement is not yet well understood. Similarly, although greater levels of sedentary behaviour – for example, watching TV or reading – have been linked to poorer physical health, the connection to academic achievement is also unclear.

To look at the relationship between physical activity, sedentary behaviours and academic achievement, a team of researchers led by the Medical Research Council (MRC) Epidemiology Unit at the University of Cambridge studied 845 pupils from secondary schools in Cambridgeshire and Suffolk, measuring levels of activity and sedentary behaviour at age 14.5 years and then comparing this to their performance in their GCSEs the following year. This data was from the ROOTS study, a large longitudinal study assessing health and wellbeing during adolescence led by Professor Ian Goodyer at the Developmental Psychiatry Section, Department of Psychiatry, University of Cambridge.

The researchers measured objective levels of activity and time spent sitting through a combination of heart rate and movement sensing. Additionally, the researchers used self-reported measures to assess screen time (the time spent watching TV, using the internet and playing computer games) and time spent doing homework and reading for pleasure.

The team found that screen time was associated with total GCSE points achieved. Each additional hour per day spent in front of the TV or online at age 14.5 years was associated with 9.3 fewer GCSE points at age 16 years – equivalent to two grades in one subject (for example, from a B to a D) or one grade in each of two subjects. Two extra hours was associated with 18 fewer points at GCSE.

Screen time and time spent reading or doing homework were independently associated with academic performance, suggesting that even if participants do a lot of reading and homework, watching TV or online activity still damages their academic performance.

The researchers found no significant association between moderate to vigorous physical activity and academic performance, though this contradicts a recent study which found a beneficial effect in some academic subjects. However, both studies conclude that engaging in physical activity does not damage a pupil’s academic performance. Given the wider health and social benefits of overall physical activity, the researchers argue that it remains a public health priority both in and out of school.

As well as looking at total screen time, the researchers analysed time spent in different screen activities. Although watching TV, playing computer games or being online were all associated with poorer grades, TV viewing was found to be the most detrimental.

As this was a prospective study – in other words, the researchers followed the pupils over time to determine how different behaviours affected their academic achievement – the researchers believe they can, with some caution, infer that increased screen time led to poorer academic performance.

“Spending more time in front of a screen appears to be linked to a poorer performance at GCSE,” says first author Dr Kirsten Corder from the Centre for Diet and Activity Research (CEDAR) in the MRC Epidemiology Unit at the University of Cambridge. “We only measured this behaviour in Year 10, but this is likely to be a reliable snapshot of participants’ usual behaviour, so we can reasonably suggest that screen time may be damaging to a teenager’s grades. Further research is needed to confirm this effect conclusively, but parents who are concerned about their child’s GCSE grade might consider limiting his or her screen time.”

Unsurprisingly, the researchers found that teenagers who spent their sedentary time doing homework or reading scored better at GCSE: pupils doing an extra hour of daily homework and reading achieved on average 23.1 more GCSE points than their peers. However, pupils doing over four hours of reading or homework a day performed less well than their peers – the number of pupils in this category was relatively low (only 52 participants) and may include participants who are struggling at school, and therefore do a lot of homework but unfortunately perform badly in exams.

Dr Esther van Sluijs, also from CEDAR, adds: “We believe that programmes aimed at reducing screen time could have important benefits for teenagers’ exam grades, as well as their health. It is also encouraging that our results show that greater physical activity does not negatively affect exam results. As physical activity has many other benefits, efforts to promote physical activity throughout the day should still be a public health priority.”

The research was mainly supported by the MRC and the UK Clinical Research Collaboration.

Reference
Corder, K et al. Revising on the run or studying on the sofa: Prospective associations between physical activity, sedentary behaviour, and exam results in British adolescents. International Journal of Behavioral Nutrition and Physical Activity; 4 Sept 2015.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Using Stellar ‘Twins’ To Reach the Outer Limits of the Galaxy

Source: www.cam.ac.uk

A new method of measuring the distances between stars enables astronomers to climb the ‘cosmic ladder’ and understand the processes at work in the outer reaches of the galaxy.

Determining distances is a key problem in astronomy, because unless we know how far away a star or group of stars is, it is impossible to know the size of the galaxy or understand how it formed and evolved

Paula Jofre Pfeil

Astronomers from the University of Cambridge have developed a new, highly accurate method of measuring the distances between stars, which could be used to measure the size of the galaxy, enabling greater understanding of how it evolved.

Using a technique which searches out stellar ‘twins’, the researchers have been able to measure distances between stars with far greater precision than is possible using typical model-dependent methods. The technique could be a valuable complement to the Gaia satellite – which is creating a three-dimensional map of the sky over five years – and could aid in the understanding of fundamental astrophysical processes at work in the furthest reaches of our galaxy. Details of the new technique are published in the Monthly Notices of the Royal Astronomical Society.

“Determining distances is a key problem in astronomy, because unless we know how far away a star or group of stars is, it is impossible to know the size of the galaxy or understand how it formed and evolved,” said Dr Paula Jofre Pfeil of Cambridge’s Institute of Astronomy, the paper’s lead author. “Every time we make an accurate distance measurement, we take another step on the cosmic distance ladder.”

The best way to directly measure a star’s distance is by an effect known as parallax, which is the apparent displacement of an object when viewed along two different lines of sight – for example, if you hold out your hand in front of you and look at it with your left eye closed and then with your right eye closed, your hand will appear to move against the background. The same effect can be used to calculate the distance to stars, by measuring the apparent motion of a nearby star compared to more distant background stars. By measuring the angle of inclination between the two observations, astronomers can use the parallax to determine the distance to a particular star.
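
As a concrete illustration of the parallax relation described above (a standard textbook formula, not something specific to this study): a star’s distance in parsecs is simply the reciprocal of its parallax angle in arcseconds.

```python
def parallax_distance_parsecs(parallax_arcsec):
    """Distance from annual parallax: d (parsecs) = 1 / p (arcseconds).

    One parsec is the distance at which the Earth-Sun baseline (1 AU)
    subtends an angle of one arcsecond.
    """
    return 1.0 / parallax_arcsec

PARSEC_IN_LIGHT_YEARS = 3.2616  # approximate conversion factor

# Example: Proxima Centauri has a parallax of roughly 0.768 arcseconds.
d_pc = parallax_distance_parsecs(0.768)
print(f"{d_pc:.2f} pc = {d_pc * PARSEC_IN_LIGHT_YEARS:.2f} light years")
# -> about 1.30 pc, i.e. roughly 4.25 light years
```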

However, the parallax method can only be applied for stars which are reasonably close to us, since beyond distances of 1600 light years, the angles of inclination are too small to be measured by the Hipparcos satellite, a precursor to Gaia. Consequently, of the 100 billion stars in the Milky Way, we have accurate measurements for just 100,000.

Gaia will be able to measure the angles of inclination with far greater precision than ever before, for stars up to 30,000 light years away. Scientists will soon have precise distance measurements for the one billion stars that Gaia is mapping – but that’s still only one percent of the stars in the Milky Way.

For even more distant stars, astronomers will still need to rely on models which look at a star’s temperature, surface gravity and chemical composition, and use the information from the resulting spectrum, together with an evolutionary model, to infer its intrinsic brightness and to determine its distance. However, these models can be off by as much as 30 percent. “Using a model also means using a number of simplifying assumptions – like for example assuming stars don’t rotate, which of course they do,” said Dr Thomas Mädler, one of the study’s co-authors. “Therefore stellar distances obtained by such indirect methods should be taken with a pinch of salt.”

The Cambridge researchers have developed a novel method to determine distances between stars by relying on stellar ‘twins’: two stars with identical spectra. Using a set of around 600 stars for which high-resolution spectra are available, the researchers found 175 pairs of twins. For each set of twins, a parallax measurement was available for one of the stars.

The researchers found that the relative distances of the twin stars are directly related to the difference in their apparent brightness in the sky, meaning that distances can be accurately measured without having to rely on models. Their method showed just an eight percent difference from known parallax measurements, and the accuracy does not decrease when measuring more distant stars.

“It’s a remarkably simple idea – so simple that it’s hard to believe no one thought of it before,” said Jofre Pfeil. “The further away a star is, the fainter it appears in the sky, and so if two stars have identical spectra, we can use the difference in brightness to calculate the distance.”
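
The paper’s formula is not quoted in the article, but the relation described is the standard inverse-square law of brightness: if two stars are intrinsically identical, a difference of Δm in apparent magnitude corresponds to a factor of 10^(Δm/5) in distance. A minimal sketch under that assumption, with invented example numbers:

```python
def twin_distance_parsecs(known_distance_pc, m_known, m_twin):
    """Distance to a spectral 'twin' from the known distance of its pair.

    Assumes identical spectra imply identical intrinsic luminosity, so the
    distance modulus gives: m_twin - m_known = 5 * log10(d_twin / d_known).
    """
    delta_m = m_twin - m_known
    return known_distance_pc * 10 ** (delta_m / 5.0)

# Hypothetical example: the nearby twin sits at 50 pc (known from parallax)
# with apparent magnitude 8.0; its twin appears at magnitude 13.0,
# i.e. 5 magnitudes (a factor of 100 in flux) fainter.
print(twin_distance_parsecs(50.0, 8.0, 13.0))  # -> 500.0 pc, ten times farther
```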

Since the spectrum of a single star can contain as many as 280,000 data points, comparing entire spectra for different stars would be both time- and data-intensive, so the researchers chose just 400 spectral lines to make their comparisons. These particular lines are those which give the most distinguishing information about the star – similar to comparing photographs of individuals and looking at a single defining characteristic to tell them apart.

The next step for the researchers is to compile a ‘catalogue’ of stars for which accurate distances are available, and then search for twins among other stellar catalogues for which no distances are available. While only looking at stars which have twins restricts the method somewhat, thanks to the new generation of high-powered telescopes, high-resolution spectra are available for millions of stars. With even more powerful telescopes under development, spectra may soon be available for stars which are beyond even the reach of Gaia, so the researchers say their method is a powerful complement to Gaia.

“This method provides a robust way to extend the crucially-important cosmic distance ladder in a new special way,” said Professor Gerry Gilmore, the Principal Investigator for UK involvement in the Gaia mission. “It has the promise to become extremely important as new very large telescopes are under construction, allowing the necessary detailed observations of stars at large distances in galaxies far from our Milky Way, building on our local detailed studies from Gaia.”

The research was funded by the European Research Council.

Reference:
Jofré, P. et al. Climbing the cosmic ladder with stellar twins. Monthly Notices of the Royal Astronomical Society (2015). DOI: 10.1093/mnras/stv1724.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Scientists “Squeeze” Light One Particle At a Time

source: www.cam.ac.uk

A team of scientists have measured a bizarre effect in quantum physics, in which individual particles of light are said to have been “squeezed” – an achievement which at least one textbook had written off as hopeless.

It’s just the same as wanting to look at Pluto in more detail or establishing that pentaquarks are out there. Neither of those things has an obvious application right now, but the point is knowing more than we did before. We do this because we are curious and want to discover new things. That’s the essence of what science is all about.

Mete Atature

A team of scientists has successfully measured particles of light being “squeezed”, in an experiment that had been written off in physics textbooks as impossible to observe.

Squeezing is a strange phenomenon of quantum physics. It creates a very specific form of light which is “low-noise” and is potentially useful in technology designed to pick up faint signals, such as the detection of gravitational waves.

The standard approach to squeezing light involves firing an intense laser beam at a material, usually a non-linear crystal, which produces the desired effect.

For more than 30 years, however, a theory has existed about another possible technique. This involves exciting a single atom with just a tiny amount of light. The theory states that the light scattered by this atom should, similarly, be squeezed.

Unfortunately, although the mathematical basis for this method – known as squeezing of resonance fluorescence – was drawn up in 1981, the experiment to observe it was so difficult that one established quantum physics textbook despairingly concludes: “It seems hopeless to measure it”.

So it has proven – until now. In the journal Nature, a team of physicists report that they have successfully demonstrated the squeezing of individual light particles, or photons, using an artificially constructed atom, known as a semiconductor quantum dot. Thanks to the enhanced optical properties of this system and the technique used to make the measurements, they were able to observe the light as it was scattered, and proved that it had indeed been squeezed.

Professor Mete Atature, from the Cavendish Laboratory, Department of Physics, and a Fellow of St John’s College at the University of Cambridge, led the research. He said: “It’s one of those cases of a fundamental question that theorists came up with, but which, after years of trying, people basically concluded it is impossible to see for real – if it’s there at all.”

“We managed to do it because we now have artificial atoms with optical properties that are superior to natural atoms. That meant we were able to reach the necessary conditions to observe this fundamental property of photons and prove that this odd phenomenon of squeezing really exists at the level of a single photon. It’s a very bizarre effect that goes completely against our senses and expectations about what photons should do.”

Like a lot of quantum physics, the principles behind squeezing light involve some mind-boggling concepts.

It begins with the fact that wherever there are light particles, there are also associated electromagnetic fluctuations. This is a sort of static which scientists refer to as “noise”. Typically, the more intense light gets, the higher the noise. Dim the light, and the noise goes down.

But strangely, at a very fine quantum level, the picture changes. Even in a situation where there is no light, electromagnetic noise still exists. These are called vacuum fluctuations. While classical physics tells us that in the absence of a light source we will be in perfect darkness, quantum mechanics tells us that there is always some of this ambient fluctuation.

“If you look at a flat surface, it seems smooth and flat, but we know that if you really zoom in to a super-fine level, it probably isn’t perfectly smooth at all,” Atature said. “The same thing is happening with vacuum fluctuations. Once you get into the quantum world, you start to get this fine print. It looks like there are zero photons present, but actually there is just a tiny bit more than nothing.”

Importantly, these vacuum fluctuations are always present and provide a base limit to the noise of a light field. Even lasers, the most perfect light source known, carry this level of fluctuating noise.

This is when things get stranger still, however, because, in the right quantum conditions, that base limit of noise can be lowered even further. This lower-than-nothing, or lower-than-vacuum, state is what physicists call squeezing.

In the Cambridge experiment, the researchers achieved this by shining a faint laser beam on to their artificial atom, the quantum dot. This excited the quantum dot and led to the emission of a stream of individual photons. Normally, the noise associated with this photonic activity is greater than that of a vacuum state, but when the dot was only weakly excited, the noise associated with the light field actually dropped, becoming less than the supposed baseline of vacuum fluctuations.

Explaining why this happens involves some highly complex quantum physics. At its core, however, is a rule known as Heisenberg’s uncertainty principle. This states that in any situation in which a particle has two linked properties, only one can be measured and the other must be uncertain.

In the normal world of classical physics, this rule does not apply. If an object is moving, we can measure both its position and momentum, for example, to understand where it is going and how long it is likely to take getting there. The pair of properties – position and momentum – are linked.

In the strange world of quantum physics, however, the situation changes. Heisenberg states that only one part of a pair can ever be measured, and the other must remain uncertain.

In the Cambridge experiment, the researchers used that rule to their advantage, creating a tradeoff between what could be measured, and what could not. By scattering faint laser light from the quantum dot, the noise of part of the electromagnetic field was reduced to an extremely precise and low level, below the standard baseline of vacuum fluctuations. This was done at the expense of making other parts of the electromagnetic field less measurable, meaning that it became possible to create a level of noise that was lower-than-nothing, in keeping with Heisenberg’s uncertainty principle, and hence the laws of quantum physics.
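
In standard quantum-optics notation (the article stays qualitative, and conventions for the quadrature normalisation vary between textbooks), the trade-off can be written for the two field quadratures X₁ and X₂:

```latex
% Standard quadrature uncertainty relation (textbook notation; the paper's
% own conventions may differ). With [X_1, X_2] = i/2:
\Delta X_1 \,\Delta X_2 \;\ge\; \tfrac{1}{4}
% The vacuum saturates the bound symmetrically:
\Delta X_1^2 = \Delta X_2^2 = \tfrac{1}{4}
% Squeezing: one quadrature variance falls below the vacuum level while the
% conjugate one grows, so the product still respects the bound:
\Delta X_1^2 < \tfrac{1}{4} \quad\text{while}\quad \Delta X_2^2 > \tfrac{1}{4}
```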

Plotting the uncertainty with which fluctuations in the electromagnetic field could be measured on a graph creates a shape where the uncertainty of one part has been reduced, while the other has been extended. This creates a squashed-looking, or “squeezed” shape, hence the term, “squeezing” light.

Atature added that the main point of the study was simply to attempt to see this property of single photons, because it had never been seen before. “It’s just the same as wanting to look at Pluto in more detail or establishing that pentaquarks are out there,” he said. “Neither of those things has an obvious application right now, but the point is knowing more than we did before. We do this because we are curious and want to discover new things. That’s the essence of what science is all about.”

Additional image: The left diagram represents electromagnetic activity associated with light at what is technically its lowest possible level. On the right, part of the same field has been reduced to lower than is technically possible, at the expense of making another part of the field less measurable. This effect is called “squeezing” because of the shape it produces.

Reference: 
Schulte, CHH, et al. Quadrature squeezed photons from a two-level system. Nature (2015). DOI: 10.1038/nature14868. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


One Year and 272 Billion Measurements Later, Gaia Team Celebrates First Anniversary of Observations

Source: www.cam.ac.uk

A space mission to create the largest, most-accurate, three-dimensional map of the Milky Way is celebrating its first completed year of observations.

We are moving beyond just seeing to knowing about the galaxy in which we live.

Gerry Gilmore

The Gaia satellite, which orbits the Sun at a distance of 1.5 million km from the Earth, was launched by the European Space Agency in December 2013 with the aim of observing a billion stars and revolutionising our understanding of the Milky Way.

The unique mission is reliant on the work of Cambridge researchers who collect the vast quantities of data transmitted by Gaia to a data processing centre at the university, overseen by a team at the Institute of Astronomy.

Since the start of its observations in August 2014, Gaia has recorded 272 billion positional (or astrometric) measurements and 54.4 billion brightness (or photometric) data points.

Gaia surveys stars and many other astronomical objects as it spins, observing circular swathes of the sky. By repeatedly measuring the positions of the stars with extraordinary accuracy, Gaia can tease out their distances and motions throughout the Milky Way galaxy.

Dr Francesca de Angeli, lead scientist at the Cambridge data centre, said: “The huge Gaia photometric data flow is being processed successfully into scientific information at our processing centre and has already led to many exciting discoveries.”

The Gaia team have spent a busy year processing and analysing data, with the aim of developing enormous public catalogues of the positions, distances, motions and other properties of more than a billion stars. Because of the immense volumes of data and their complex nature, this requires a huge effort from expert scientists and software developers distributed across Europe, combined in Gaia’s Data Processing and Analysis Consortium (DPAC).

“The past twelve months have been very intense, but we are getting to grips with the data, and are looking forward to the next four years of operations,” said Timo Prusti, Gaia project scientist at ESA.

“We are just a year away from Gaia’s first scheduled data release, an intermediate catalogue planned for the summer of 2016. With the first year of data in our hands, we are now halfway to this milestone, and we’re able to present a few preliminary snapshots to show that the spacecraft is working well and that the data processing is on the right track.”

As Gaia has been conducting its repeated scans of the sky to measure the motions of stars, it has also been able to detect whether any of them have changed their brightness, and in doing so, has started to discover some very interesting astronomical objects.

Gaia has detected hundreds of transient sources so far, with a supernova being the very first on August 30, 2014. These detections are routinely shared with the community at large as soon as they are spotted in the form of ‘Science Alerts’, enabling rapid follow-up observations to be made using ground-based telescopes in order to determine their nature.

One transient source was seen undergoing a sudden and dramatic outburst that increased its brightness by a factor of five. It turned out that Gaia had discovered a so-called ‘cataclysmic variable’, a system of two stars in which one, a hot white dwarf, is devouring mass from a normal stellar companion, leading to outbursts of light as the material is swallowed. The system also turned out to be an eclipsing binary, in which the relatively larger normal star passes directly in front of the smaller, but brighter white dwarf, periodically obscuring the latter from view as seen from Earth.

Unusually, both stars in this system seem to have plenty of helium and little hydrogen. Gaia’s discovery data and follow-up observations may help astronomers to understand how the two stars lost their hydrogen.

Gaia has also discovered a multitude of stars whose brightness undergoes more regular changes over time. Many of these discoveries were made between July and August 2014, as Gaia performed many subsequent observations of a few patches of the sky.

Closer to home, Gaia has detected a wealth of asteroids, the small rocky bodies that populate our solar system, mainly between the orbits of Mars and Jupiter. Because they are relatively nearby and orbiting the Sun, asteroids appear to move against the stars in astronomical images, appearing in one snapshot of a given field, but not in images of the same field taken at later times.

Gaia scientists have developed special software to look for these ‘outliers’, matching them with the orbits of known asteroids in order to remove them from the data being used to study stars. But in turn, this information will be used to characterise known asteroids and to discover thousands of new ones.
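
The article does not describe the software itself, but the matching logic it sketches is easy to illustrate: a detection that appears in one snapshot of a field and has no counterpart at the same position in a later snapshot is flagged as a candidate moving object. A toy sketch, with invented data structures and an invented positional tolerance; this is not Gaia’s actual pipeline:

```python
# Toy illustration of flagging 'outliers' (candidate moving objects such as
# asteroids) in repeated scans of the same field. Field names and the
# matching tolerance are invented for illustration only.

MATCH_RADIUS_DEG = 0.001  # positional tolerance for "same source"

def is_same_source(a, b):
    """Crude flat-sky positional match; adequate for a toy example."""
    return (abs(a["ra"] - b["ra"]) < MATCH_RADIUS_DEG
            and abs(a["dec"] - b["dec"]) < MATCH_RADIUS_DEG)

def flag_outliers(epoch1, epoch2):
    """Return epoch-1 detections with no counterpart in epoch 2.

    A source present in one snapshot but missing from a later snapshot of
    the same field has apparently moved -- a candidate asteroid.
    """
    return [d for d in epoch1
            if not any(is_same_source(d, e) for e in epoch2)]

# Example: two snapshots of the same field taken at different times.
scan_1 = [{"ra": 10.0000, "dec": -5.0000},   # star
          {"ra": 10.0100, "dec": -5.0050}]   # asteroid (will move)
scan_2 = [{"ra": 10.0000, "dec": -5.0000},   # star, unmoved
          {"ra": 10.0140, "dec": -5.0080}]   # asteroid at a new position

print(flag_outliers(scan_1, scan_2))
# -> [{'ra': 10.01, 'dec': -5.005}] : the moving object
```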

Gerry Gilmore, Professor of Experimental Philosophy, and the Gaia UK Principal Investigator, said: “The early science from Gaia is already supporting major education activity involving UK school children and amateur astronomers across Europe and has established the huge discovery potential of Gaia’s data.

“We are entering a new era of big-data astrophysics, with a revolution in our knowledge of what we see in the sky. We are moving beyond just seeing to knowing about the galaxy in which we live.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Bavarian Fire Brigades Choose Sepura

Two prestigious contracts have been awarded to Sepura’s long-standing German partner, Selectric GmbH, for the supply of more than 6000 TETRA radios to fire brigades in the regions of Straubing and Passau.

These contracts – comprising a combination of the market-leading STP9000, STP8X Intrinsically-Safe ATEX/IECEx hand-portables and SRG3900 mobile radios, plus accompanying accessories – build on previous successes in Bavaria and bring the total number of Sepura TETRA radios in frame contracts for German Public Safety users to over 350,000.

Hendrik Pieper, Managing Director for Selectric commented, “Sepura and Selectric’s joint knowledge of the challenges facing mission-critical communications in Germany – and our expertise in tackling them – have, once again, been clearly validated by this achievement.”

Hooman Safaie, Regional Director for Sepura added, “This new success demonstrates the formidable strength of our partnership with Selectric. It also reflects our commitment to the German public safety market – the largest TETRA market in the world – and our determination to maintain market leadership in the country.”

Access To Startup Skills Threatened By U.K. Visa Review

Source: Tech Crunch

The U.K.-based startup founders and investors who penned an open letter backing the Conservatives at the General Election in May are now confronted with the prospect of a Tory government looking for ways to make it harder for businesses to recruit and bring non-EU workers to the U.K. — owing to a political push to reduce net migration.

Soon after winning the election this May, Prime Minister David Cameron made a speech on migration, outlining the new government’s forthcoming Immigration Bill — which he said would include reforms to domestic immigration and labour market rules aimed at reducing the demand for skilled migrant labour.

Given that the U.K. is a member of the European Union, the U.K. government can’t pull any policy levers to reduce migration from the EU (although Cameron is also hoping to renegotiate migration rules with the EU). Which leaves non-EU migration bearing the brunt of the government’s planned migration squeeze. Startups of course rely on filling vacancies by bringing skills from abroad — given it may be the only way to obtain relevant expertise when you’re working in such nascent areas.

The Home Office has commissioned the Migration Advisory Committee to undertake two reviews of the U.K.’s current tier 2 skilled visa requirements (a main route used by startups to fill vacancies; there is also the tier 1 entrepreneur visa for founders). The MAC conducted a review of tier 2 visa salary thresholds last month, but has yet to report its findings to the Home Office – although a spokesman for the department told TechCrunch the government would look to implement any changes it deems necessary based on that review this autumn.

The larger scale tier 2 visa review is still taking evidence, with a closing date for submissions of the end of September. The MAC is then likely to report in full by the year’s end — so further changes to the tier 2 visa, based on the government’s assessment of the review, may be incoming by the start of next year.

According to the MAC’s call for evidence, the proposed changes being considered by the government include some potentially radical measures — such as: significantly raising the salary threshold; restricting which job roles are eligible; and removing the right for dependents of visa holders to be able to work in the U.K.

Other measures apparently on the table include putting time-limits on shortage occupations in order to encourage domestic businesses to invest in upskilling; and imposing a skills levy on employers who hire from outside the European Economic Area in order to fund domestic apprenticeships.

The pro-startup organisation Coadec, which advocates for policies that support the U.K.’s digital economy, kicked off a campaign last week to raise awareness of the MAC’s review and feed its call for submissions with startups’ views. It is asking U.K. startups to complete a survey on the tier 2 visa process and the changes the government is considering. Coadec will then compile responses into its own report to submit to the MAC.

“The current situation is that the only way non-EU workers can get into the UK really is through the tier 2 visa, because the post-study work visa has been scrapped, the high skilled migrant program has been scrapped,” Coadec executive director Guy Levin tells TechCrunch. “You can still come in as an entrepreneur through tier 1 or an investor, or if you’re an exceptional talent through tier 1, but tier 2’s the main visa route for non-EU workers to come into the country.

“You have to have a definite job offer, you need to have a degree level qualification, and the role needs to be advertised in the U.K. for 28 days first before it can be offered internationally. There has to be a minimum salary threshold, which is set for new entrants at the 10th percentile and for experienced hires at the 25th percentile… so for developers the 25th percentile is about £31,000. And the company itself needs to go through a process of getting accredited by the Home Office as a sponsor.”

Levin notes there were some 15,500 people entering the U.K. last year via the tier 2 general route — so excluding intracompany transfers (a route which does not apply to startups). A further breakdown of that by job type puts “Programmers and software development professionals” as the third most popular occupation under the ‘resident labour test market route’ (i.e. rather than the shortages occupation route) — with 2,618 devs entering the U.K. via that route in the year ending March 2015.

“It’s not enormous numbers but it’s significant. And that’s just for that particular job title. There may be others under other job titles, like data scientist or product managers,” says Levin.

“The system is fairly sensible, as it stands,” he adds. “Some bits of it are annoying, like the 28 day test. And thankfully that’s waived for shortage occupations… Which means you get to fast-track some bits of that… And at the start of the year some digital roles were put on that, so that’s fantastic and a good win for the sector.”

But Levin says the “worry” now for U.K. startups is that the Conservatives’ political imperative to find ways to reduce migration to the U.K. could result in policies that are actively harmful to the digital economy — given the options currently being considered by the government would limit founders’ ability to hire the skilled talent they need.

Levin says Coadec’s own migration survey has garnered around 100 responses thus far, with around 40 per cent saying they currently employ people hired via the tier 2 visa route. “The majority don’t… and several of the respondents said it’s already too complicated and expensive for us to go through that process,” he notes.

Speaking to TechCrunch about the government’s migration consultation, DueDil CEO and founder Damian Kimmelman, himself an American who relocated to the U.K. to build a data startup (one which has attracted some $22 million in funding thus far, according to CrunchBase), argues that “populist politics” could pose a threat to the U.K.’s digital economy if the government ends up scrapping what he says has been a relatively liberal migration policy thus far. Approximately 10 per cent of DueDil’s staff are employed on tier 2 visas.

One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere.

“One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere. One of the problems building a company that’s scaling and building it in the U.K. is there are not a lot of people that have scaled businesses, and have the experience of scaling large tech businesses. You can only find that outside of the U.K. All of the large companies that scaled got bought out. And this is an unfortunate fact about the talent pool – but one of the ways the U.K. has effectively been able to solve this is by really having quite liberal immigration policies,” he tells TechCrunch.

Broadly speaking, Kimmelman said any of the proposed changes being consulted on by the MAC could have “a serious impact” on DueDil’s ability to grow.

“Restricting what roles are eligible seems ludicrous. We are working in a very transformative economy. All of the types of roles are new types of roles every six months… Government can’t really police that. That’s sort of self defeating,” he adds. “If you restrict the rights of dependents you pretty much nullify the ability to bring in great talent. I don’t know anybody who’s going to move their family [if they can’t work here]… It’s already quite difficult hiring from the U.S. because the quality of life in the U.S. in a lot of cities is much greater than it is in London.”

He’s less concerned about the prospect of being required to increase the salary requirement for individuals hired under the tier 2 visa — although Coadec’s Levin points out that some startups, likely earlier stage, might choose to compensate a hire with equity rather than a large salary to reduce their burn rate. So a higher salary requirement could make life harder for other types of U.K. startups.

Kimmelman was actually one of the signatories of the aforementioned open letter backing the Conservative Party at the General Election. Signatories of that letter asserted the Tory-led government —

…has enthusiastically supported startups, job-makers and innovators and the need to build a British culture of entrepreneurialism to rival America’s. Elsewhere in the world people are envious at how much support startups get in the UK. This Conservative-led government has given us wholehearted support and we are confident that it would continue to do so. It would be bad for jobs, bad for growth, and bad for innovation to change course.

So is he disappointed that the new Conservative government is consulting on measures that, if implemented, could limit U.K.-based startup businesses’ access to digital skills? “I wouldn’t read too much into this just yet because they haven’t made any decisions,” says Kimmelman. “But if they do enact any of these policies I think it would be really harmful to the community.”

“They have a lot of constituents other than just the tech community that they’re working for. So I hope that they don’t do anything that’s rash. But I’ve been very impressed by the way that they’ve handled things thus far and so I think I need to give them the benefit of the doubt,” he adds.

Levin says responses to Coadec’s survey so far suggest U.K. startups’ main priority is that the government keeps the overseas talent pipeline flowing – with less concern over cost increases, such as if the government applies a skills levy to fund apprenticeship programs.

But how the government squares the circle of an ideological commitment to reducing net migration with keeping skills-hungry digital businesses on side remains to be seen.

“The radical option of really restricting [migration] to genuine shortages is scary — because we just don’t know what that would look like,” adds Levin. “It could be that that would be the best answer for the tech sector because we might be able to make a case that there are genuine shortages and so we’d be fine. But there’s an uncertainty about what the future would look like — so at the moment we’re going to focus on making a positive case on why skilled migration is vital for the digital economy.”

The prior Tory-led U.K. coalition government introduced a cap on tier 2 visas back in 2011 — of just over 20,000 per year — which is applied as a monthly limit. That monthly cap was exceeded for the first time in June, with a swathe of visa applications turned down as a result. That’s something Levin says shows the current visa system is “creaking at the seams” — even before any further restrictions are factored in.

“Thirteen hundred applicants in June were turned down because they’d hit the cap,” he says, noting that when the cap is hit the Home Office uses salary level to choose between applicants. “So the salary thresholds jump up from the 25th percentile… which means the lower paid end of people lose out, which would probably disproportionately affect startups.”

‘Pill on a String’ Could Help Spot Early Signs of Cancer of the Gullet

Source: www.cam.ac.uk

A ‘pill on a string’ developed by researchers at the University of Cambridge could help doctors detect oesophageal cancer – cancer of the gullet – at an early stage, helping them overcome the problem of wide variation between biopsies, suggests research published today in the journal Nature Genetics.

If you’re taking a biopsy, this relies on your hitting the right spot. Using the Cytosponge appears to remove some of this game of chance

Rebecca Fitzgerald

The ‘Cytosponge’ sits within a pill which, when swallowed, dissolves to reveal a sponge that scrapes off cells when withdrawn up the gullet. It allows doctors to collect cells from all along the gullet, whereas standard biopsies take individual point samples.

Oesophageal cancer is often preceded by Barrett’s oesophagus, a condition in which cells within the lining of the oesophagus begin to change shape and can grow abnormally. The cellular changes are caused by acid and bile reflux – when the stomach juices come back up the gullet. Between one and five people in every 100 with Barrett’s oesophagus go on to develop oesophageal cancer in their lifetime, a form of cancer that can be difficult to treat, particularly if not caught early enough.

At present, Barrett’s oesophagus and oesophageal cancer are diagnosed using biopsies, which look for signs of dysplasia, the proliferation of abnormal cancer cells. This is a subjective process, requiring a trained scientist to identify abnormalities. Understanding how oesophageal cancer develops and the genetic mutations involved could help doctors catch the disease earlier, offering better treatment options for the patient.

An alternative way of spotting very early signs of oesophageal cancer would be to look for important genetic changes. However, researchers from the University of Cambridge have shown that variations in mutations across the oesophagus mean that standard biopsies may miss cells with important mutations. A sample was more likely to pick up key mutations if taken using the Cytosponge, developed by Professor Rebecca Fitzgerald at the Medical Research Council Cancer Unit at the University of Cambridge.

“The trouble with Barrett’s oesophagus is that it looks bland and might span over 10cm,” explains Professor Fitzgerald. “We created a map of mutations in a patient with the condition and found that within this stretch, there is a great deal of variation amongst cells. Some might carry an important mutation, but many will not. If you’re taking a biopsy, this relies on your hitting the right spot. Using the Cytosponge appears to remove some of this game of chance.”

Professor Fitzgerald and colleagues carried out whole genome sequencing to analyse paired Barrett’s oesophagus and oesophageal cancer samples taken at one point in time from 23 patients, as well as 73 samples taken over a three-year period from one patient with Barrett’s oesophagus.

The researchers found patterns of mutations in the genome – where one ‘letter’ of DNA might change to another, for example from a C to a T – that provided a ‘fingerprint’ of the causes of the cancer. Similar work has been done previously in lung cancer, where it was shown that cigarettes leave fingerprints in an individual’s DNA. The Cambridge team found fingerprints which they believe are likely to be due to the damage caused to the lining of the oesophagus by stomach acid splashing onto its walls; the same fingerprints could be seen in both Barrett’s oesophagus and oesophageal cancer, suggesting that these changes occur very early in the process.
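
The ‘fingerprint’ idea lends itself to a simple illustration: tally which single-letter substitutions occur between a reference sequence and a sample, then compare the resulting spectrum. The sketch below is a toy version of that counting step with made-up sequences; real signature analysis also records the flanking bases and works at whole-genome scale:

```python
from collections import Counter

def substitution_spectrum(reference, sample):
    """Count single-base substitutions between two aligned sequences.

    Toy version of the first step of mutational-signature analysis:
    real analyses also record the bases flanking each change.
    """
    assert len(reference) == len(sample), "sequences must be aligned"
    return Counter(f"{r}>{s}"
                   for r, s in zip(reference, sample) if r != s)

# Made-up aligned sequences, for illustration only.
ref    = "ACGTCCGATCGGACT"
tumour = "ATGTCCGATTGGATT"

print(substitution_spectrum(ref, tumour))
# -> Counter({'C>T': 3}) : an excess of C-to-T changes is the kind of
# pattern described in the article as a mutational 'fingerprint'
```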

Even in areas of Barrett’s oesophagus without cancer, the researchers found a large number of mutations in the tissue – on average 12,000 per person (compared with an average of 18,000 mutations within the cancer). Many of these are likely to have been ‘bystanders’: genetic mutations that occurred along the way but were not actually implicated in cancer.

The researchers found that there appeared to be a tipping point, where a patient would go from having lots of individual mutations, but no cancer, to a situation where large pieces of genetic information were being transferred not just between genes but between chromosomes.

Co-author Dr Caryn Ross-Innes adds: “We know very little about how you go from pre-cancer to cancer – and this is particularly the case in oesophageal cancer. Barrett’s oesophagus and the cancer share many mutations, but we are now a step closer to understanding which are the important mutations that tip the condition over into a potentially deadly form of cancer.”

The research was funded by the Medical Research Council and Cancer Research UK. The Cytosponge was trialled in patients at the NIHR Clinical Investigation Ward at the Cambridge Clinical Research Facility.

Reference
Ross-Innes, CS et al. Whole-genome sequencing provides new insights into the clonal architecture of Barrett’s esophagus and esophageal adenocarcinoma. Nature Genetics; 20 July 2015


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Cambridge TV Goes Live

Whilst Cambridge TV is broadcast on Freeview channel 8 and Virgin cable channel 159 in the Cambridge area, the target audience is the global “top 20%” of viewers through our on-demand Internet presence.

Whilst we will have some of the usual local TV content, our key programmes are about Cambridge business and Cambridge academic endeavour that is of interest to a global audience.

Our ambition is to create a must-see series of programmes that will inform and entertain the world about the latest amazing things that are being developed and studied in and around Cambridge.

The tag-line “Watch us, Get better” is fully in the “Reith” tradition, and acknowledges that, as a new under-funded start-up, our production values will improve over time, though I am confident that the content will shine through regardless.

With a licence to broadcast 24 hours a day for 10 years, there is plenty of time for us to take material from many sources and give it an airing. Whether it is an advert, a corporate video, a training video or a full programme, subject to our quality and suitability criteria we can show it!

Monoclonal Antibodies: the Invisible Allies That Changed the Face of Medicine

source: www.cam.ac.uk

Forty years ago, two researchers at the Medical Research Council’s Laboratory of Molecular Biology in Cambridge developed a new technology that was to win them the Nobel Prize – and is now found in six out of ten of the world’s bestselling drugs. Dr Lara Marks from the Department of History and Philosophy of Science discusses the importance of ‘monoclonal antibodies’.

They are tiny magic bullets that are quietly shaping the lives of millions of patients around the world. Produced in the lab, invisible to the naked eye, relatively few people are aware of these molecules’ existence or where they came from. Yet monoclonal antibodies are contained in six out of ten of the world’s bestselling drugs, helping to treat everything from cancer to heart disease to asthma.

Known as Mabs for short, these molecules are derived from the millions of antibodies the immune system continually makes to fight foreign invaders such as bacteria and viruses. The technique for producing them was first published 40 years ago. It was developed by César Milstein, an Argentinian émigré, and Georges Köhler, a German post-doctoral researcher. They were based at the UK Medical Research Council’s Laboratory of Molecular Biology in Cambridge.

Harnessing the power of the immune system

Milstein and Köhler wanted to investigate how the immune system can produce so many different types of antibodies, each capable of specifically targeting one of a near-infinite number of foreign substances that invade the body. This had puzzled scientists ever since the late 19th century, but an answer had proved elusive. Isolating and purifying single antibodies with known targets, out of the billions made by the body, was a challenge.

The two scientists finally solved this problem by immunising a mouse against a particular foreign substance and then fusing antibodies taken from its spleen with a cell associated with myeloma, a cancer that develops in the bone marrow. Their method created a hybrid cell that secreted Mabs. Such cells could be grown indefinitely, in the abdominal cavity of mice or in tissue culture, producing endless quantities of identical antibodies specific to a chosen target. Mabs can be tailored to combat a wide range of conditions.

When Milstein and Köhler first publicised their technique, relatively few people understood its significance. Editors of Nature missed its importance, asking the two scientists to cut short their article outlining the new technique, and staff at the British National Research Development Corporation likewise declined to patent the work after Milstein submitted it for consideration. Within a short period, however, the technique was being adopted by scientists around the world, and less than ten years later Milstein and Köhler were Nobel laureates.

A transformation in therapeutic medicine

In the years that have passed since 1975, Mab drugs have radically reshaped medicine and spawned a whole new industry. It is predicted that 70 Mab products will have reached the worldwide market by 2020, with combined sales of nearly $125bn (£81bn).

 

An artist’s rendering of anti-cancer antibodies. ENERGY.GOV

 

Key to the success of Mab drugs are the dramatic changes they have brought to the treatment of cancer, helping in many cases to shift it away from being a terminal disease. Mabs can very specifically target cancer cells while avoiding healthy cells, and can also be used to harness the body’s own immune system to fight cancer. Overall, Mab drugs cause fewer debilitating side-effects than more conventional chemotherapy or radiotherapy. Mabs have also radically altered the treatment of inflammatory and autoimmune disorders like rheumatoid arthritis and multiple sclerosis, moving away from merely relieving symptoms to targeting and disrupting their cause.

Aside from cancer and autoimmune disorders, Mabs are being used to treat over 50 other major diseases. Applications include treatment for heart disease, allergic conditions such as asthma, and prevention of organ rejection after transplants. Mabs are also under investigation for the treatment of disorders of the central nervous system such as Alzheimer’s disease, metabolic diseases like diabetes, and the prevention of migraines. More recently they were explored as a means to combat Ebola, the viral disease that ravaged West Africa in 2014.

Fast and accurate diagnosis

Mabs have enabled faster and more accurate clinical diagnostic testing, opening up the means to detect numerous diseases that were previously impossible to identify until their advanced stages. They have paved the way in personalised medicine, where patients are matched with the most suitable drug. Mabs are intrinsic components in over-the-counter pregnancy tests, are key to spotting a heart attack, and help to screen blood for infectious diseases like hepatitis B and AIDS. They are also used on a routine basis in hospitals to type blood and tissue, a process vital to ensuring safe blood transfusion and organ transplants.

 

Monoclonal antibodies can be used to rapidly diagnose disease and determine blood type. U.S. Navy/Jeremy L. Grisham

 

Mabs are also invaluable to many other aspects of everyday life. For example they are vital to agriculture, helping to identify viruses in animal livestock or plants, and to the food industry in the prevention of the spread of salmonella. In addition they are instrumental in the efforts to curb environmental pollution.

Quietly triumphant

Yet Mabs remain hidden from public view. This is partly because the history of the technology has often been overshadowed by the groundbreaking and controversial American development of genetic engineering in 1973, which revolutionised the manufacturing and production of natural products such as insulin, and inspired the foundation of Genentech, one of the world’s first biotechnology companies.

Looking back, the oversight is not surprising. Mabs did not transform medicine overnight or with any major fanfare, and the scientists who made the discovery did not seek fame. Instead, Mabs quietly slipped unobserved into everyday healthcare practice.

An Argentinian and a German came together in a British laboratory and changed the face of medicine forever; their story deserves to be told.

Lara Marks is at the University of Cambridge.

This article was originally published on The Conversation. Read the original article.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Cambridge Venture – The Silk Stories

Local Girl Alice Hewkin is launching ‘The Silk Stories’, a webshop selling luxurious silk pyjamas at an affordable price.

Alice studied at Hills Road Sixth Form College before winning a place at the Royal School of Speech and Drama to study for a BA in Acting.

Today, as a graduate, she is building her career, with roles on Sky TV, Channel 4 and the BBC in the last year.

Alice tells Connected Cambridge: “Being an actress I understand the importance of always looking your best and feeling good. Wearing glamorous 100% silk pyjamas helps me relax and feel calm as well as getting a great night’s sleep so I’m ready for whatever the next day brings.”

This was the jumping-off point for The Silk Stories brand. Importing top-of-the-range 100% pure silk pyjamas from near her birthplace in China, Alice Hewkin, founder of The Silk Stories, now has her website live at www.thesilkstories.com. You can follow progress on Twitter @thesilkstories

 

On the Origin of (Robot) Species

On the origin of (robot) species

source: www.cam.ac.uk

Researchers have observed the process of evolution by natural selection at work in robots, by constructing a ‘mother’ robot that can design, build and test its own ‘children’, and then use the results to improve the performance of the next generation, without relying on computer simulation or human intervention.

We want to see robots that are capable of innovation and creativity

Fumiya Iida

Researchers led by the University of Cambridge have built a mother robot that can independently build its own children and test which one does best, and then use the results to inform the design of the next generation, so that preferential traits are passed down from one generation to the next.

Without any human intervention or computer simulation beyond the initial command to build a robot capable of movement, the mother created children constructed of between one and five plastic cubes with a small motor inside.

In each of five separate experiments, the mother designed, built and tested generations of ten children, using the information gathered from one generation to inform the design of the next. The results, reported in the open access journal PLOS One, show that preferential traits were passed down through generations, so that the ‘fittest’ individuals in the last generation performed a set task twice as quickly as the fittest individuals in the first generation.

“Natural selection is basically reproduction, assessment, reproduction, assessment and so on,” said lead researcher Dr Fumiya Iida of Cambridge’s Department of Engineering, who worked in collaboration with researchers at ETH Zurich. “That’s essentially what this robot is doing – we can actually watch the improvement and diversification of the species.”

For each robot child, there is a unique ‘genome’ made up of a combination of between one and five different genes, which contains all of the information about the child’s shape, construction and motor commands. As in nature, evolution in robots takes place through ‘mutation’, where components of one gene are modified or single genes are added or deleted, and ‘crossover’, where a new genome is formed by merging genes from two individuals.

In order for the mother to determine which children were the fittest, each child was tested on how far it travelled from its starting position in a given amount of time. The most successful individuals in each generation remained unchanged in the next generation in order to preserve their abilities, while mutation and crossover were introduced in the less successful children.
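For readers who want a concrete picture of the loop just described, here is a minimal sketch in Python of this kind of evolutionary cycle: genomes of one to five genes, ‘mutation’ and ‘crossover’ operators, fitness scored by how far a child travels, and the fittest individuals carried over unchanged. It is an illustration only, not the researchers’ code; the gene encoding and the fitness function are hypothetical stand-ins for building and timing a physical robot.

    import random

    GENE_POOL = range(16)  # hypothetical encodings of cube placement / motor commands

    def random_genome():
        # each child robot is described by one to five genes
        return [random.choice(GENE_POOL) for _ in range(random.randint(1, 5))]

    def mutate(genome):
        # modify a component of one gene, or add or delete a single gene
        g = genome[:]
        op = random.choice(["modify", "add", "delete"])
        if op == "modify":
            g[random.randrange(len(g))] = random.choice(GENE_POOL)
        elif op == "add" and len(g) < 5:
            g.insert(random.randrange(len(g) + 1), random.choice(GENE_POOL))
        elif op == "delete" and len(g) > 1:
            del g[random.randrange(len(g))]
        return g

    def crossover(a, b):
        # form a new genome by merging genes from two individuals
        return (a[:random.randint(1, len(a))] + b[random.randint(0, len(b)):])[:5]

    def fitness(genome):
        # stand-in for "build the child and measure how far it travels";
        # in the study each build-and-test cycle took about ten minutes
        return sum(genome) + random.random()

    def evolve(generations=10, pop_size=10, n_elite=3):
        population = [random_genome() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=fitness, reverse=True)
            elite = ranked[:n_elite]  # the most successful survive unchanged
            rest = []
            while len(elite) + len(rest) < pop_size:
                if random.random() < 0.5:
                    rest.append(mutate(random.choice(ranked)))
                else:
                    rest.append(crossover(*random.sample(ranked, 2)))
            population = elite + rest
        return max(population, key=fitness)

    print(evolve())

Even this toy version shows the design choice reported above: keeping the best designs intact while concentrating variation in the weaker ones lets performance climb without losing what already works.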

The researchers found that design variations emerged and performance improved over time: the fastest individuals in the last generation moved at an average speed that was more than twice the average speed of the fastest individuals in the first generation. This increase in performance was not only due to the fine-tuning of design parameters, but also because the mother was able to invent new shapes and gait patterns for the children over time, including some designs that a human designer would not have been able to build.

“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery,” said Iida. “We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customisation, but we want to see robots that are capable of innovation and creativity.”

In nature, organisms are able to adapt their physical characteristics to their environment over time. These adaptations allow biological organisms to survive in a wide variety of different environments – allowing animals to make the move from living in the water to living on land, for instance.

But machines are not adaptable in the same way. They are essentially stuck in one shape for their entire ‘lives’, and it’s uncertain whether changing their shape would make them more adaptable to changing environments.

Evolutionary robotics is a growing field which allows for the creation of autonomous robots without human intervention. Most work in this field is done using computer simulation. Although computer simulations allow researchers to test thousands or even millions of possible solutions, this often results in a ‘reality gap’ – a mismatch between simulated and real-world behaviour.

While using a computer simulation to study artificial evolution generates thousands, or even millions, of possibilities in a short amount of time, the researchers found that having the robot generate its own possibilities, without any computer simulation, resulted in more successful children. The disadvantage is that it takes time: each child took the robot about 10 minutes to design, build and test. According to Iida, in future they might use a computer simulation to pre-select the most promising candidates, and use real-world models for actual testing.

Iida’s research looks at how robotics can be improved by taking inspiration from nature, whether that’s learning about intelligence, or finding ways to improve robotic locomotion. A robot requires between ten and 100 times more energy than an animal to do the same thing. Iida’s lab is filled with a wide array of hopping robots, which may take their inspiration from grasshoppers, humans or even dinosaurs. One of his group’s developments, the ‘Chairless Chair’, is a wearable device that allows users to lock their knee joints and ‘sit’ anywhere, without the need for a chair.

“It’s still a long way to go before we’ll have robots that look, act and think like us,” said Iida. “But what we do have are a lot of enabling technologies that will help us import some aspects of biology to the engineering world.”

Reference:
Brodbeck, L. et al. “Morphological Evolution of Physical Robots through Model-Free Phenotype Development” PLOS One (2015). DOI: 10.1371/journal.pone.0128444


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Young Minds Think Alike – and Older People Are More Distractible

Young minds think alike – and older people are more distractible

source: www.cam.ac.uk

‘Bang! You’re Dead’, a 1961 episode of Alfred Hitchcock Presents, continues to surprise – but not just with the twist in its tale. Scientists at the University of Cambridge have used the programme to show that young people respond in a similar way to events, but as we age our thought patterns diverge.

Older adults end up attending to a more diverse range of stimuli and so are more likely to understand and interpret everyday events in different ways than younger people

Karen Campbell

The study, published today in the journal Neurobiology of Aging, also found that older people tended to be more easily distracted than younger adults.

Age is believed to change the way our brains respond and how their networks interact, but studies looking at these changes tend to use very artificial experiments, with basic stimuli. To try to understand how we respond to complex, life-like stimuli, researchers at the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) showed 218 subjects aged 18-88 an edited version of an episode from the Hitchcock TV series while using functional magnetic resonance imaging (fMRI) to measure their brain activity.

The researchers found a surprising degree of similarity in the thought patterns amongst the younger subjects – their brains tended to ‘light up’ in similar ways and at similar points in the programme. However, in older subjects, this similarity tended to disappear and their thought processes became more idiosyncratic, suggesting that they were responding differently to what they were watching and were possibly more distracted.
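A common way to quantify this kind of similarity in movie-watching fMRI studies is inter-subject correlation: for a given brain region, each person’s activity time course is correlated with every other person’s, and a high average correlation means the group is responding in lockstep. The sketch below, in Python with NumPy, shows the basic computation on made-up data; the array shapes and the method itself are illustrative assumptions rather than details taken from this study’s pipeline.

    import numpy as np

    # hypothetical data: one brain region's activity, sampled at 200 time
    # points during the film, for 20 subjects (one row per subject)
    rng = np.random.default_rng(0)
    timecourses = rng.standard_normal((20, 200))

    # correlate every subject's time course with every other subject's
    corr = np.corrcoef(timecourses)  # 20 x 20 matrix of pairwise correlations

    # average the off-diagonal entries (the diagonal is each subject
    # correlated with themselves, which is always 1)
    n = corr.shape[0]
    isc = (corr.sum() - n) / (n * (n - 1))
    print(f"mean inter-subject correlation: {isc:.3f}")

On real data, a high score in the younger group and a lower, more scattered score in the older group would correspond to the pattern of shared versus idiosyncratic responding described here.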

The greatest differences were seen in the ‘higher order’ regions at the front of the brain, which are responsible for controlling attention (the superior frontal lobe and the intraparietal sulcus) and language processing (the bilateral middle temporal gyrus and left inferior frontal gyrus).

The findings suggest that our ability to respond to everyday events in the environment differs with age, possibly due to altered patterns of attention.

Dr Karen Campbell from the Department of Psychology, first author on the study, says: “As we age, our ability to control the focus of attention tends to decline, and we end up attending to more ‘distracting’ information than younger adults. As a result, older adults end up attending to a more diverse range of stimuli and so are more likely to understand and interpret everyday events in different ways than younger people.”

In order to encourage audiences to respond to movies and TV programmes in the same way as everyone else, and hence have a ‘shared experience’, directors and cinematographers use a variety of techniques to draw attention to the focal item in each shot. When the stimulus is less engaging – for example, when one character is talking at length or the action is slow – people show less overlap in their neural patterns of activity, suggesting that a stimulus needs to be sufficiently captivating in order to drive attention. However, capturing attention is not sufficient when watching a film; the brain needs to maintain attention, or at the very least to limit attention to the information that is most relevant to the plot.

Dr Campbell and colleagues argue that the variety in brain patterns seen amongst older people reflects a difference in their ability to control their attention, as attentional capture by stimuli in the environment is known to be relatively preserved with age. This supports previous research which shows that older adults respond to and better remember materials with emotional content.

“We know that regions at the front of the brain are responsible for maintaining our attention, and these are the areas that see the greatest structural changes as we age, and it is these changes that we believe are being reflected in our study,” she adds. “There may well be benefits to this distractibility. Attending to lots of different information could help with our creativity, for example.”

Cam-CAN is supported by the Biotechnology and Biological Sciences Research Council (BBSRC).

Reference
Campbell, K et al. Idiosyncratic responding during movie-watching predicted by age differences in attentional control. Neurobiology of Aging; 6 Aug 2015.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Robots Learn to Evolve and Improve

Robots learn to evolve and improve

Video: Researchers are developing robots that can learn from previous work

Engineers have developed a robotic system that can evolve and improve its performance.

A robot arm builds “babies” that get progressively better at moving without any human intervention.

The ultimate aim of the research project is to develop robots that adapt to their surroundings.

The work by teams in Cambridge and Zurich has been published in the journal PLOS One.

It seems like a plot from a science fiction film: a robot that builds other robots – each one better than the previous generation. But that is what researchers in Cambridge and Zurich have done.

But those concerned about machines taking over the world shouldn’t worry, at least not yet.

At this stage the “baby robots” consist of plastic cubes with a motor inside. These are put together by a “mother” robot arm which glues them together in different configurations.

Although the set-up is simple, the system itself is ingenious.

The mother robot assesses how far its babies are able to move, and with no human intervention, improves the design so that the next one it builds can move further.

The mother robot built ten generations of children. The final version moved twice the distance of the first before its power ran out.

According to Dr Fumiya Iida of Cambridge University, who led the research with colleagues at ETH Zurich, one aim is to gain new insights into how living things evolve.

“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery,” he told BBC News.

“We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customisation, but we want to see robots that are capable of innovation and creativity.”

Another aim is to develop robots that can improve and adapt to new situations, according to Andre Rosendo – who also worked on the project.

“You can imagine cars being built in factories and the robot looking for defects in the car and fixing them by itself,” he said.

“And robots used in agriculture could try out slightly different ways of harvesting crops to see if they can improve yield.”

Dr Iida told me that he came into robotics because he was disappointed that the robots he saw in real life were not as good as the ones he saw in science fiction films such as Star Wars and Star Trek.

His aim was to change that and his approach was to draw lessons from the natural world to improve the efficiency and flexibility of traditional robotic systems.

As to whether we’d ever see robots like those in the sci-fi films that inspired him, he said: “We’re not there yet, but sure, why not, maybe in about 30 years.”


Predators Might Not Be Dazzled By Stripes

Predators might not be dazzled by stripes

Source: www.cam.ac.uk

 

New research using computer games suggests that stripes might not offer the ‘motion dazzle’ protection thought to have evolved in animals such as zebra, and which consequently inspired ship camouflage during both World Wars.

Motion may just be one aspect in a larger picture. Different orientations of stripe patterning may have evolved for different purposes

Anna Hughes

Stripes might not offer protection for animals living in groups, such as zebra, as previously thought, according to research published today in the journal Frontiers in Zoology.

Humans playing a computer game captured striped targets more easily than uniform grey targets when multiple targets were present. The finding runs counter to assumptions that stripes evolved to make it difficult to capture animals moving in a group.

“We found that when targets are presented individually, horizontally striped targets are more easily captured than targets with vertical or diagonal stripes. Surprisingly, we also found no benefit of stripes when multiple targets were presented at once, despite the prediction that stripes should be particularly effective in a group scenario,” said Anna Hughes, a researcher in the Sensory Evolution and Ecology group in the Department of Physiology, Development and Neuroscience.

“This could be due to how different stripe orientations interact with motion perception, where an incorrect reading of a target’s speed helps the predator to catch its prey.”

Stripes, zigzags and high contrast markings make animals highly conspicuous, which you might think would make them more visible to a predator. Researchers have wondered if movement is important in explaining why these patterns have evolved: striking patterns may confuse predators and reduce the chance of attack or capture. In a concept termed ‘motion dazzle’, high contrast patterns cause predators to misperceive the speed and direction of the moving animal, and it has been suggested that motion dazzle might be strongest in groups, such as a herd of zebra.

‘Motion dazzle’ is a reference to a type of camouflage used on ships in World Wars One and Two, where ships were patterned in geometric shapes in contrasting colours. Rather than concealing ships, this dazzle camouflage was believed to make it difficult to estimate a target’s range, speed and heading.


HMS Argus (1917) wearing dazzle camouflage.

A total of 60 human participants played a game to test whether stripes influenced their perception of moving targets. They performed a touch screen task in which they attempted to ‘catch’ moving targets – both when only one target was present on screen and when there were several targets present at once.

When single targets were present, horizontally striped targets were easier to capture than any other target, including those of uniform colour or with vertical or diagonal stripes. However, when multiple targets were present, all striped targets, irrespective of orientation, were captured more easily than uniform grey targets.

“Motion may just be one aspect in a larger picture. Different orientations of stripe patterning may have evolved for different purposes. The evolution of pattern types is complex, for which there isn’t one over-ruling factor, but a multitude of possibilities,” said Hughes.

“More work is needed to establish the value and ecological relevance of ‘motion dazzle’. Now we need to consider whether color, stripe width and spatial patterning, and a predator’s visual system could be important factors for animals to avoid capture.”

Anna Hughes has written a blog post on this research for the journal publisher BioMed Central. The above story is adapted from a BioMed Central press release.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Here’s Looking At You: Research Shows Jackdaws Can Recognise Individual Human Faces

Here’s looking at you: research shows jackdaws can recognise individual human faces

source: www.cam.ac.uk

When you’re prey, being able to spot and assess the threat posed by potential predators is of life-or-death importance. In a paper published today in Animal Behaviour, researchers from the University of Cambridge’s Department of Psychology show that wild jackdaws recognise individual human faces, and may be able to tell whether or not predators are looking directly at them.

The fact that they learn to recognise individual faces so quickly provides great evidence of the flexible cognitive abilities of these birds

Gabrielle Davidson

Researchers Alex Thornton, now at the University of Exeter, and Gabrielle Davidson carried out the study with the wild jackdaw population in Madingley village on the outskirts of Cambridge. They found that the jackdaws were able to distinguish between two masks worn by the same researcher, and only responded defensively to the one they had previously seen accessing their nest box.

Over three consecutive days Davidson approached the nest boxes wearing one of the masks and took chicks out to weigh them. She also simply walked past the nest boxes wearing the other mask. Following this she spent four days sitting near the nest boxes wearing each of the masks to see how the jackdaws would respond.

The researchers found that the jackdaws were quicker to return to their nest when they saw the mask that they had previously seen approaching and removing chicks to be weighed, than when they saw the mask that had simply walked by.

They also tended to be quicker to go inside the nest box when Davidson, wearing the mask, was looking directly at them rather than looking down at the ground.

“The fact that they learn to recognise individual facial features or hair patterns so quickly, and to a lesser extent which direction people are looking in, provides great evidence of the flexible cognitive abilities of these birds,” says Davidson. “It also suggests that being able to recognise individual predators and the levels of threat they pose may be more important for guarding chicks than responding to the direction of the predator’s gaze.”

“Using the masks was important to make sure that the birds were not responding to my face, which they may have already seen approaching their nest boxes and weighing chicks in the past,” she adds.

Previous studies have found that crows, magpies and mockingbirds are similarly able to recognise individual people. However, most studies have involved birds in busier urban areas where they are likely to come into more frequent contact with humans.

Jackdaws are the only corvids in the UK that use nest boxes, so they provide a rare opportunity for researchers to study how birds respond to humans in the wild. Researchers at Cambridge have been studying the Madingley jackdaws since 2010.

“It would be fascinating to directly compare how these birds respond to humans in urban and rural areas to see whether the amount of human contact they experience has an impact on how they respond to people,” says Davidson.

“It would also be interesting to investigate whether jackdaws are similarly able to recognise individuals of other predator species – although this would be a lot harder to test.”

The study was enabled by funding from the Zoology Balfour Fund, the Cambridge Philosophical Society, the British Ecological Society, and a BBSRC David Phillips Research Fellowship.

Inset images: Mask (Elsa Loissel).

Reference:

Davidson, GL et al. “Wild jackdaws, Corvus monedula, recognize individual humans and may respond to gaze direction with defensive behaviour” Animal Behaviour 108 (October 2015): 17-24.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Alan Turing Institute Up and Running

Alan Turing Institute up and running

source: www.cam.ac.uk

The national institute for the development and use of advanced mathematics, computer science, algorithms and ‘Big Data’ has announced its first director, and will start research activities in the autumn.

The Alan Turing Institute has set off on a speedy course to secure new lasting partnerships and bring together expertise from across the UK that will help secure our place as a world leader in areas like Big Data, computer science and advanced mathematics.

Jo Johnson

The Alan Turing Institute has marked its first few days of operations with the announcement of its new director, the confirmation of £10 million of research funding from Lloyd’s Register Foundation, a research partnership with GCHQ, a collaboration with Cray Inc and EPSRC, and its first research activities.

The Institute will promote the development and use of advanced mathematics, computer science, algorithms and big data for human benefit. The University of Cambridge is one of the Institute’s founding partners, along with the universities of Edinburgh, Oxford, UCL, Warwick and the Engineering and Physical Sciences Research Council (EPSRC). As of 22 July, the Institute, which will be based at the British Library in London, is now fully constituted and has begun operations.

Jo Johnson, Minister for Universities and Science, said: “The Alan Turing Institute has set off on a speedy course to secure new lasting partnerships and bring together expertise from across the UK that will help secure our place as a world leader in areas like Big Data, computer science and advanced mathematics.”

The Institute has also announced that it:

  • has appointed Professor Andrew Blake, who will join the Institute in October, as its first Director;
  • has accepted a formally approved offer of £10 million of research funding from the board of the Lloyd’s Register Foundation;
  • will work with GCHQ on open access and commercial data-analysis methods;
  • is to collaborate with Cray Inc. and EPSRC to exploit a next-generation analytics capability on ARCHER, the UK’s largest supercomputer for scientific research;
  • is issuing its first call for expressions of interest from research fellows;
  • will commence research work this autumn with a series of data summits for commerce, industry and the physical and social sciences, and scoping workshops for data and social scientists, to inform and shape the Institute’s research agenda.

Andrew Blake is currently a Microsoft Distinguished Scientist and Laboratory Director of Microsoft Research UK. He is an Honorary Professor in Information Engineering at Cambridge, a Fellow of Clare Hall and a leading researcher in computer vision. He studied Mathematics and Electrical Sciences at Trinity College, and after a year as a Kennedy Scholar at MIT and time in the electronics industry, he completed a PhD in Artificial Intelligence at the University of Edinburgh in 1983.

“I am very excited to be chosen for this unique opportunity to lead The Alan Turing Institute,” said Blake. “The vision of bringing together the mathematical and computer scientists from the country’s top universities to develop the new discipline of data science, through an independent institute with strategic links to commerce and industry, is very compelling. The institute has a societally important mission and ambitious research goals. We will go all out to achieve them.”

“The enthusiasm and commitment of the founding partners have enabled the Institute to make rapid progress,” said Howard Covington, chair of The Alan Turing Institute. “We will now turn to building the Institute’s research activities. We are delighted to welcome Andrew Blake as our new director and to begin strategic relationships with the Lloyd’s Register Foundation and GCHQ. Our cooperation with Cray Inc. is one of several relationships with major infrastructure and service providers that will be agreed over the coming months. We are also in discussions with a number of industrial and commercial firms who we expect to become strategic partners in due course and are highly encouraged by the breadth of interest in working with the Institute.”

Professor Philip Nelson, Chief Executive of the Engineering and Physical Sciences Research Council (EPSRC) added: “I am delighted to see The Alan Turing Institute up and running. The teams from EPSRC and the founding universities have shown outstanding collaboration in bringing together five of our world-class academic institutions. We look forward to the Institute becoming an internationally leading player in data science.”

“Getting the most out of big data requires new methods to handle large quantities of information and the clever use of algorithms to distil meaningful knowledge out of such volumes,” said Professor John Aston of Cambridge’s Department of Pure Mathematics and Mathematical Statistics, who is the university’s representative on the Alan Turing Institute Board of Directors. “Research in this area could revolutionise our ability to compare, cross-reference and analyse data in ways that have previously been beyond the bounds of human or computer analysis.”

The Alan Turing Institute is a joint venture between the universities of Cambridge, Edinburgh, Oxford and Warwick, UCL and EPSRC. The Institute will attract the best data scientists and mathematicians from the UK and across the globe to break new boundaries in how we use big data in a fast-moving, competitive world.

The Institute is being funded over five years with £42 million from the UK government. The university partners are contributing £5 million each, totalling £25 million. In addition, the Institute will seek to partner with other business and government bodies. The creation of the Institute has been coordinated by the EPSRC which invests in research and postgraduate training across the UK.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


‘Brain Training’ App May Improve Memory and Daily Functioning in Schizophrenia

‘Brain training’ app may improve memory and daily functioning in schizophrenia

source: www.cam.ac.uk

A ‘brain training’ iPad game developed and tested by researchers at the University of Cambridge may improve the memory of patients with schizophrenia, helping them in their daily lives at work and living independently, according to research published today.

This proof-of-concept study is important because it demonstrates that the memory game can help where drugs have so far failed

Barbara Sahakian

Schizophrenia is a long-term mental health condition that causes a range of psychological symptoms, ranging from changes in behaviour through to hallucinations and delusions. Psychotic symptoms are reasonably well treated by current medications; however, patients are still left with debilitating cognitive impairments, including in their memory, and so are frequently unable to return to university or work.

There are as yet no licensed pharmaceutical treatments to improve cognitive function for people with schizophrenia. However, there is increasing evidence that computer-assisted training and rehabilitation can help people with schizophrenia overcome some of their symptoms, leading to better outcomes in daily functioning and in their lives.

Schizophrenia is estimated to cost £13.1 billion per year in total in the UK, so even small improvements in cognitive functions could help patients make the transition to independent living and working and could therefore substantially reduce direct and indirect costs, besides improving the wellbeing and health of patients.

In a study published today in the Philosophical Transactions of the Royal Society B, a team of researchers led by Professor Barbara Sahakian from the Department of Psychiatry at Cambridge describe how they developed and tested Wizard, an iPad game aimed at improving an individual’s episodic memory. Episodic memory is the type of memory required when you have to remember where you parked your car in a multi-storey car park after going shopping for several hours, or where you left your keys at home several hours ago. It is one of the facets of cognitive functioning affected in patients with schizophrenia.

The game, Wizard, was the result of a nine-month collaboration between psychologists, neuroscientists, a professional game-developer and people with schizophrenia. It was intended to be fun, attention-grabbing, motivating and easy to understand, whilst at the same time improving the player’s episodic memory. The memory task was woven into a narrative in which the player was allowed to choose their own character and name; the game rewarded progress with additional in-game activities to provide the user with a sense of progression independent of the cognitive training process.

The researchers assigned twenty-two participants, who had been given a diagnosis of schizophrenia, to either the cognitive training group or a control group at random. Participants in the training group played the memory game for a total of eight hours over a four-week period; participants in the control group continued their treatment as usual. At the end of the four weeks, the researchers tested all participants’ episodic memory using the Cambridge Neuropsychological Test Automated Battery (CANTAB) PAL, as well as their level of enjoyment and motivation, and their score on the Global Assessment of Functioning (GAF) scale, which doctors use to rate the social, occupational, and psychological functioning of adults.

Professor Sahakian and colleagues found that the patients who had played the memory game made significantly fewer errors and needed significantly fewer attempts to remember the location of different patterns in the CANTAB PAL test relative to the control group. In addition, patients in the cognitive training group saw an increase in their score on the GAF scale.

Participants in the cognitive training group indicated that they enjoyed the game and were motivated to continue playing across the eight hours of cognitive training. In fact, the researchers found that those who were most motivated also performed best at the game. This is important, as lack of motivation is another common facet of schizophrenia.

Professor Sahakian says: “We need a way of treating the cognitive symptoms of schizophrenia, such as problems with episodic memory, but slow progress is being made towards developing a drug treatment. So this proof-of-concept study is important because it demonstrates that the memory game can help where drugs have so far failed. Because the game is interesting, even those patients with a general lack of motivation are spurred on to continue the training.”

Professor Peter Jones adds: “These are promising results and suggest that there may be the potential to use game apps to not only improve a patient’s episodic memory, but also their functioning in activities of daily living. We will need to carry out further studies with larger sample sizes to confirm the current findings, but we hope that, used in conjunction with medication and current psychological therapies, this could help people with schizophrenia minimise the impact of their illness on everyday life.”

It is not clear exactly how the app also improved the patients’ daily functioning, but the researchers suggest it may be because improvements in memory had a direct impact on global functions, or because the cognitive training had an indirect impact on functionality by improving general motivation and restoring self-esteem. Indeed, both explanations may have played a role in the impact of training on functional outcome.

In April 2015, Professor Sahakian and colleagues began a collaboration with the team behind the popular brain training app Peak to produce scientifically-tested cognitive training modules. The collaboration has resulted in the launch today of the Cambridge University & Peak Advanced Training Plan, a memory game available within Peak’s iOS app, designed to train visual and episodic memory while promoting learning.

The training module is based on the Wizard memory game, developed by Professor Sahakian and colleague Tom Piercy at the Department of Psychiatry at the University of Cambridge. Rights to the Wizard game were licensed to Peak by Cambridge Enterprise, the University’s commercialisation company.

“This new app will allow the Wizard memory game to become widely available, inexpensively. State-of-the-art neuroscience at the University of Cambridge, combined with the innovative approach at Peak, will help bring the games industry to a new level and promote the benefits of cognitive enhancement,” says Professor Sahakian.

Reference
Sahakian, BJ et al. The impact of neuroscience on society: Cognitive enhancement in neuropsychiatric disorders and in healthy people. Phil. Trans. R. Soc. B; 3 Aug 2015

Home page image: Brain Power by Allan Ajifo


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


The Magna Carta of Scientific Maps

The Magna Carta of scientific maps

source: www.cam.ac.uk

One of the most important maps of the UK ever made – described as the ‘Magna Carta of geology’ – is to go on permanent public display in Cambridge after being restored to its former glory.

This is the world’s earliest geological map.

Ken McNamara

William Smith’s 1815 Geological Map of England and Wales, which measures 8.5ft x 6ft, demonstrated for the first time the geology of the UK and was the culmination of years of work by Smith, who was shunned by the scientific community for many years and ended up in debtors’ prison.

Today, exactly 200 years since its first publication, a copy of Smith’s map – rediscovered after more than a century in a museum box – will go on public display at the Sedgwick Museum of Earth Sciences. Aside from a copy held at The Geological Society in London, the Cambridge map is believed to be the only such map on public display anywhere in the world.

The iconic map, which is still used as the basis of geological maps to this day, had a profound influence on the science of geology, inspiring a generation of naturalists and fledgling geologists to establish geology as a coherent, robust and important science. The map was so large that, for practicality’s sake, it was often sold in 15 separate sheets, either loose or in a leather travelling case.

Museum Director Ken McNamara said: “This is the world’s earliest geological map. Smith was working from a position of no knowledge when he began. Nobody had ever attempted this before and it’s really quite staggering what this one man achieved over ten or fifteen years, travelling up and down the country as a canal surveyor.

“It’s incredibly accurate, even now in 2015. If you compare it with the current geological map of Great Britain today, there are amazing similarities. The British Geological Survey still uses the same colour scheme that Smith devised. Chalk is green, limestone is yellow, and it’s still done like that to this day.”

“This started geology as a modern science. It’s like the Magna Carta of geology, the beginnings of geology as a modern science and that’s why it’s so important.”

Smith’s map proudly announced itself to the world as: “A DELINEATION of the STRATA of ENGLAND and WALES with part of SCOTLAND; exhibiting the COLLIERIES and MINES; the MARSHES and FEN LANDS ORIGINALLY OVERFLOWED BY THE SEA; and the VARIETIES of Soil according to the Variations in the Substrata; ILLUSTRATED by the MOST DESCRIPTIVE NAMES”.

How many of Smith’s great maps still exist is unclear. Around 70 are thought to remain worldwide. The Sedgwick Museum of Earth Sciences at the University of Cambridge, the oldest geological museum in the world, is lucky enough to have three copies.

For many years the museum knew that it possessed two of Smith’s great maps: one a set of 15 sheets bound together as a book; the other, beautifully preserved, nestles in its leather travelling case. Two years ago, in May 2013, a third copy was rediscovered in the collection. Found folded in a box with some other early geological maps, staff believe it had not seen the light of day since Queen Victoria was on the throne.

Despite its decades hidden from view, the hand-coloured map had been exposed to harsh light for many years before being packed away. The colours were faded, the paper was stained, and it bore faecal deposits from long-dead spiders and flies.

The map was then conserved by experts at Duxford, near Cambridge. Nineteenth-century dirt and grime was carefully removed, then the original, faded watercolour paint was given a protective coating and subtly restored to enhance the colour of the rock formations. Only 400 copies were ever produced, over a period of at least four years. During that time, Smith continued his geological research and continually made new discoveries, adapting and amending each new edition as he went along. Each individual map took seven or eight days to colour.

McNamara said: “Smith suffered many deprivations in his life. He became a bankrupt and ended up in debtors’ prison for a while. Perhaps, almost as galling, he was largely ignored by the geological establishment. However, he gained his due recognition from the Geological Society of London later in life when, in 1831, he was the first person to receive the society’s most prestigious medal, the Wollaston Medal.

“Appropriately, given the hanging of his map in the Sedgwick Museum, it was Adam Sedgwick who presented Smith with his medal. We are, we think, the only museum, library or art gallery in the world to have one of Smith’s legendary maps on public display – and we want as many people as possible to come and see this enormous, iconic and beautiful map for themselves.”


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Cancer Patients Lose Faith In Healthcare System if Referred Late By GP

Cancer patients lose faith in healthcare system if referred late by GP

source: www.cam.ac.uk

If it takes more than three trips to the GP to be referred for cancer tests, patients are more likely to be dissatisfied with their overall care, eroding confidence in the doctors and nurses who go on to treat and monitor them, according to new research.

This research shows that first impressions go a long way in determining how cancer patients view their experience of cancer treatment

Georgios Lyratzopoulos

The results are based on further analysis of survey data from more than 70,000 cancer patients, by Cancer Research UK scientists at the University of Cambridge and University College London, published today in the European Journal of Cancer Care.

Of the nearly 60,000 survey respondents diagnosed through their GP, almost a quarter (23 per cent) had been seen three or more times before being referred for cancer tests.

Four in ten (39 per cent) of those who had experienced referral delays were dissatisfied with the support they received from their GP compared to just under three in ten (28 per cent) of those referred after one or two GP visits.

Overall, patients who had seen their GP three or more times before being referred were more likely to report negative experiences across 10 of 12 different aspects of their care. For example, 18 per cent of these patients were dissatisfied with the way they were told they had cancer, compared to 14 per cent among those who were referred more quickly.

Four in ten expressed dissatisfaction with how hospital staff and their GP had worked with each other to provide the best possible care, compared to one in three among those referred promptly.

Dissatisfaction with the overall care received was even higher among the just under one in ten (9 per cent) patients who saw their GP five or more times before being referred.

Study author Dr Georgios Lyratzopoulos, from the Department of Public Health and Primary Care at the University of Cambridge, said: “This research shows that first impressions go a long way in determining how cancer patients view their experience of cancer treatment. A negative experience of diagnosis can trigger loss of confidence in their care throughout the cancer journey.

“When they occur, diagnostic delays are largely due to cancer symptoms being extremely hard to distinguish from other diseases, combined with a lack of accurate and easy-to-use tests. New diagnostic tools to help doctors decide which patients need referring are vital to improve the care experience for even more cancer patients.”

Dr Richard Roope, Cancer Research UK’s GP expert, said: “It’s vital we now step up efforts to ensure potential cancer symptoms can be investigated promptly, such as through the new NICE referral guidelines launched last month to give GPs more freedom to quickly refer patients with worrying symptoms. This will hopefully contribute to improving the patient experience, one of the six strategic priorities recommended by the UK’s Cancer Task Force last week.”

Reference

Mendonca, S.C. et al. “Pre-referral general practitioner consultations and subsequent experience of cancer care: evidence from the English Cancer Patient Experience Survey” European Journal of Cancer Care (2015)

Adapted from a press release by Cancer Research UK.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Cambridge Researchers and Pharma In Innovative New Consortium to Develop and Study Early Stage Drugs

Cambridge researchers and pharma in innovative new consortium to develop and study early stage drugs

source: www.cam.ac.uk

An innovative new Consortium will act as a ‘match-making’ service between pharmaceutical companies and researchers in Cambridge with the aim of developing and studying precision medicines for some of the most globally devastating diseases.

We believe this form of partnership is a model for how academic institutions and industry can work together to deliver better medicines

Tony Kouzarides

The Therapeutics Consortium, announced today, will connect the intellectual know-how of several large academic institutions with the drug-developing potential of the pharmaceutical industry, to deliver better drugs to the clinic.

From early 2018, the Consortium will form a major constituent of the new Milner Therapeutics Institute, which has been made possible through a £5 million donation from Jonathan Milner and will be located in a new building at the Cambridge Biomedical Campus, the centrepiece of the largest biotech cluster outside the United States.

The Consortium will connect academic and clinical researchers at the University of Cambridge, the Babraham Institute and the Wellcome Trust Sanger Institute with pharmaceutical companies Astex Pharmaceuticals, AstraZeneca and GlaxoSmithKline (GSK). It will provide researchers with the potential to access novel therapeutic agents (including small molecules and antibodies) across the entire portfolio of drugs being developed by each of the companies, in order to investigate their mechanism, efficacy and potential. The terms of the Consortium allow for fast and easy access to these agents and information.

Each industry partner within the Therapeutics Consortium has committed funding to spend on collaborative projects and will collectively fund an executive manager to oversee the academic/industry interactions. Collaborative projects are expected to lead to joint publications, supporting a culture of more open innovation.

Professor Tony Kouzarides from the University of Cambridge, who will head the Therapeutics Consortium and the Milner Institute, is currently deputy director at the Gurdon Institute. He says: “The Milner Institute will act as a ‘match-making’ service through the Therapeutics Consortium, connecting the world-leading research potential of the University of Cambridge and partner institutions with the drug development expertise and resources of the pharmaceutical industry. We hope many more pharmaceutical companies will join our consortium and believe this form of partnership is a model for how academic institutions and industry can work together to deliver better medicines.”

Dr Harren Jhoti, President and CEO of Cambridge-based company Astex Pharmaceuticals, now part of Japan’s Otsuka Group, said: “As a company that was founded right here in Cambridge we are delighted to support this new Consortium working together with leading Cambridge academic and clinical researchers to help us to research and develop ever better treatments for patients.”

Mene Pangalos, Executive Vice President, Innovative Medicines & Early Development at AstraZeneca said: “We are pleased to be part of this exciting new consortium that brings together world-leading science and technology into a dedicated multi-disciplinary institute focused on translational research.  The proximity of the Institute to our new R&D centre and global headquarters in Cambridge will ensure our scientists can work closely with those at the Milner Institute.”

Professor Michael Wakelam, Director of the Babraham Institute, said: “The Institute’s participation in the Therapeutics Consortium provides yet one more channel by which our excellence in basic biological research is built upon in partnership with industry-based collaborators. We know from experience that bringing together the best academics and the best pharmacological research is both efficient and enlightening and we look forward to making joint progress.”

Dr Rab Prinjha, Head of GSK’s Epigenetics Discovery Performance Unit, said: “Late-stage attrition is too high – very few investigational medicines entering human trials eventually become an approved treatment.  As an industry, we must improve our success rate by understanding our molecules and targets better.  This innovative institute which builds on GSK’s very successful collaboration with the Gurdon Institute and close links with many groups across Cambridge, aims to increase our knowledge of basic biological mechanisms to help us bring the right investigational medicines into human trials and ultimately to patients.”

The Consortium will initially operate from the Wellcome Trust/Cancer Research UK Gurdon Institute, but will move into the Milner Institute in early 2018.

The Milner Therapeutics Institute

One of the major aims of the Institute will be to help understand how drugs work and to push forward new ideas and technologies to improve the development of novel therapies. A major, but not exclusive, focus of the Institute will be cancer.

It is envisaged that the Milner Institute will be equipped with core facilities, such as high-throughput screening of small molecules against cell lines, organoids (‘mini organs’) and tumour biopsies, as well as bioinformatics support to help scientists deal with large datasets. Its facilities will be available to researchers working on collaborative projects within the Therapeutics Consortium and, capacity permitting, to other scientists and clinicians within the Cambridge community.

In addition, the Milner Institute will have space for senior and junior scientists to set up independent research groups. There will also be associated faculty positions, which will be taken up by scientists in different departments, whose research and expertise will benefit from a close association with the Milner Institute.

The Milner Institute will be housed within the new Capella building, alongside the relocated Wellcome Trust/MRC Cambridge Stem Cell Institute, a new Centre for Blood & Leukaemia Research, and a new Centre for Immunology & Immunotherapeutics.

Jonathan Milner, whose donation has made the Milner Therapeutics Institute possible, is a former member of Tony Kouzarides’ research group and an experienced entrepreneur. In 1998, together with Professor David Cleevely, they founded the leading biotechnology company Abcam, which has gone on to employ over 800 people and supply products to 64% of researchers globally.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Oracle Bones and Unseen Beauty: Wonders of Priceless Chinese Collection Now Online

Oracle bones and unseen beauty: wonders of priceless Chinese collection now online

source: www.cam.ac.uk

A banknote from 1380 that threatens decapitation, a set of 17th-century prints so delicate they had never been opened, and 3000-year-old ‘oracle bones’ are now freely available for the world to view on the Cambridge Digital Library.

This is the earliest and finest example of multi-colour printing anywhere in the world.

Charles Aylmer

The treasures of Cambridge University Library’s Chinese collections are the latest addition to the Digital Library website (http://cudl.lib.cam.ac.uk/collections/chinese) which already hosts the works of Charles Darwin, Isaac Newton and Siegfried Sassoon, as well as unique collections on the Board of Longitude and the Royal Commonwealth Society.

The oracle bones (ox shoulder blades and turtle shells) are one of the Library’s most important collections and are the earliest surviving examples of Chinese writing anywhere in the world. They are the oldest documents owned by the Library and record questions whose answers were sought by divination at the court of the royal house of Shang, which ruled central China between the 16th and 11th centuries BCE (http://bit.ly/1RJkZEG).

As the earliest known specimens of the Chinese script, the oracle bone inscriptions are of fundamental importance for Chinese palaeography and our understanding of ancient Chinese society. The bones record information on a wide range of matters including warfare, agriculture, hunting and medical problems, as well as genealogical, meteorological and astronomical data, such as the earliest records of eclipses and comets.

Never before displayed, three of the 800 oracle bones held in the Library can now be viewed in exquisite detail, alongside a 17th-century book which has been described as ‘perhaps the most beautiful set of prints ever made’ (http://bit.ly/1fMfAf3). Estimated to be worth millions on the open market, the ‘Manual of Calligraphy and Painting’ was made in 1633 by the Ten Bamboo Studio in Nanjing.

Charles Aylmer, Head of the Chinese Department at Cambridge University Library, said: “This is the earliest and finest example of multi-colour printing anywhere in the world, comprising 138 paintings and sketches with associated texts by fifty different artists and calligraphers. Although reprinted many times, complete sets of early editions in the original binding are extremely rare.

“The binding is so fragile, and the manual so delicate, that until it was digitized, we have never been able to let anyone look through it or study it – despite its undoubted importance to scholars.”

Other highlights of the digitisation include one of the world’s earliest printed books (http://bit.ly/1HRsK0k), a Buddhist text dated between 1127 and 1175. The translator (Xuanzang) was famed for the 17-year pilgrimage to India he undertook to collect religious texts and bring them back to China.

‘The Manual of Famine Relief’ has also been digitised. This 19th-century manuscript contains instructions for the distribution of emergency rations to famine victims and includes practical advice about foraging for natural substitutes to normal foodstuffs in the event of an emergency.

Elsewhere, a 14th-century banknote (http://bit.ly/1O8QJwB) is one of the more unusual additions to the Chinese Collections. Paper currency first appeared in China during the 7th century, and was in wide circulation by the 11th century, 500 years before its first use in Europe.

By the 12th century the central government had realised the benefits of banknotes for purposes of tax collection and financial administration, and by the late 13th century had printed and issued a national paper currency – accounts of it reached Europe through the writings of Marco Polo and others.

The Library’s banknote, printed on mulberry paper from a cast metal plate, was first issued in 1380. The denomination of the banknote (one thousand cash) is shown by a picture of ten strings of copper cash (10 x 100 = 1000), flanked by a text in seal script which reads: ‘Great Ming Paper Currency; Circulating Throughout the World’. The text underneath threatens forgers with decapitation and promises that anyone denouncing or apprehending them will receive not only a reward of 25 ounces of silver but also all the miscreant’s property.

Huw Jones, part of the digitisation team at Cambridge University Library, said: “The very high quality of the digital images has already led to important discoveries about the material – we have seen where red pigment was used to colour inscriptions on the oracle bones, and seals formerly invisible have been deciphered on several items. We look forward to new insights now that the collection has a truly global audience, and we are already working with an ornithological expert to identify the birds in the Manual of Calligraphy and Painting.”

Cambridge University Library acquired its first Chinese book in 1632 as part of the collection of the Duke of Buckingham, but the first substantial holdings of Chinese books came with the donation of 4,304 volumes by Sir Thomas Wade (1818–1895), first Professor of Chinese in the University from 1888 until his death.

The Chinese collections at Cambridge University Library now number about half a million individual titles, including monographs, reprinted materials, archival documents, epigraphical rubbings and 200,000 Chinese e-books (donated by Premier Wen Jiabao in 2009).


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/oracle-bones-and-unseen-beauty-wonders-of-priceless-chinese-collection-now-online

‘Pill On a String’ Could Help Spot Early Signs of Cancer of the Gullet

‘Pill on a string’ could help spot early signs of cancer of the gullet

A ‘pill on a string’ developed by researchers at the University of Cambridge could help doctors detect oesophageal cancer – cancer of the gullet – at an early stage, helping them overcome the problem of wide variation between biopsies, suggests research published today in the journal Nature Genetics.

If you’re taking a biopsy, this relies on your hitting the right spot. Using the Cytosponge appears to remove some of this game of chance

Rebecca Fitzgerald

The ‘Cytosponge’ sits within a pill which, when swallowed, dissolves to reveal a sponge that scrapes off cells when withdrawn up the gullet. It allows doctors to collect cells from all along the gullet, whereas standard biopsies take individual point samples.

Oesophageal cancer is often preceded by Barrett’s oesophagus, a condition in which cells within the lining of the oesophagus begin to change shape and can grow abnormally. The cellular changes are caused by acid and bile reflux – when the stomach juices come back up the gullet. Between one and five people in every 100 with Barrett’s oesophagus go on to develop oesophageal cancer in their lifetime, a form of cancer that can be difficult to treat, particularly if not caught early enough.

At present, Barrett’s oesophagus and oesophageal cancer are diagnosed using biopsies, which look for signs of dysplasia, the proliferation of abnormal cancer cells. This is a subjective process, requiring a trained scientist to identify abnormalities. Understanding how oesophageal cancer develops and the genetic mutations involved could help doctors catch the disease earlier, offering better treatment options for the patient.

An alternative way of spotting very early signs of oesophageal cancer would be to look for important genetic changes. However, researchers from the University of Cambridge have shown that variations in mutations across the oesophagus mean that standard biopsies may miss cells with important mutations. A sample was more likely to pick up key mutations if taken using the Cytosponge, developed by Professor Rebecca Fitzgerald at the Medical Research Council Cancer Unit at the University of Cambridge.

“The trouble with Barrett’s oesophagus is that it looks bland and might span over 10cm,” explains Professor Fitzgerald. “We created a map of mutations in a patient with the condition and found that within this stretch, there is a great deal of variation amongst cells. Some might carry an important mutation, but many will not. If you’re taking a biopsy, this relies on your hitting the right spot. Using the Cytosponge appears to remove some of this game of chance.”
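
To see why a single biopsy can miss a mutated patch while a device that samples the whole segment will not, here is a minimal Monte Carlo sketch. All the lengths are invented for illustration (only the 10cm segment size echoes Professor Fitzgerald’s example); this is not the study’s methodology.

```python
# Hypothetical illustration: chance of sampling a mutated patch with a point
# biopsy vs. a sample taken along the full length of a Barrett's segment.
import random

SEGMENT_CM = 10.0   # length of the Barrett's segment (invented)
PATCH_CM = 1.5      # extent of the patch carrying the key mutation (invented)
BIOPSY_CM = 0.2     # region covered by a single point biopsy (invented)

def point_biopsy_hits(trials: int = 100_000) -> float:
    """Fraction of trials in which a randomly placed biopsy overlaps the patch."""
    hits = 0
    for _ in range(trials):
        patch_start = random.uniform(0, SEGMENT_CM - PATCH_CM)
        biopsy_start = random.uniform(0, SEGMENT_CM - BIOPSY_CM)
        # Two intervals overlap if each starts before the other ends.
        if (biopsy_start < patch_start + PATCH_CM
                and patch_start < biopsy_start + BIOPSY_CM):
            hits += 1
    return hits / trials

print(f"Point biopsy hit rate: {point_biopsy_hits():.2f}")
print("Whole-segment sample hit rate: 1.00 (covers the full length by design)")
```

With these toy numbers a point biopsy lands on the mutated patch in only about one in six trials, whereas a sample taken along the whole gullet covers the patch by construction – the “game of chance” the Cytosponge is designed to remove.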

Professor Fitzgerald and colleagues carried out whole genome sequencing to analyse paired Barrett’s oesophagus and oesophageal cancer samples taken at one point in time from 23 patients, as well as 73 samples taken over a three-year period from one patient with Barrett’s oesophagus.

The researchers found patterns of mutations in the genome – where one ‘letter’ of DNA might change to another, for example from a C to a T – that provided a ‘fingerprint’ of the causes of the cancer. Similar work has been done previously in lung cancer, where it was shown that cigarettes leave fingerprints in an individual’s DNA. The Cambridge team found fingerprints which they believe are likely to be due to the damage caused to the lining of the oesophagus by stomach acid splashing onto its walls; the same fingerprints could be seen in both Barrett’s oesophagus and oesophageal cancer, suggesting that these changes occur very early in the process.
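
As an illustration of what such a ‘fingerprint’ is, the sketch below tallies single-letter changes into the six substitution classes conventionally reported in mutational-signature analyses (real analyses also account for the surrounding sequence context). The mutation calls here are invented, not data from the paper.

```python
# Hypothetical illustration: tally single-base substitutions into the six
# standard pyrimidine-centred classes used in mutational-signature work.
from collections import Counter

COMPLEMENT = {"A": "T", "C": "G", "G": "C", "T": "A"}

def substitution_class(ref: str, alt: str) -> str:
    """Fold a ref>alt change onto the pyrimidine strand, e.g. G>A becomes C>T."""
    if ref in ("A", "G"):  # purine reference: report the complementary-strand change
        ref, alt = COMPLEMENT[ref], COMPLEMENT[alt]
    return f"{ref}>{alt}"

# Invented mutation calls (reference base, observed base), illustration only.
calls = [("C", "T"), ("G", "A"), ("T", "G"), ("A", "C"), ("C", "A")]

fingerprint = Counter(substitution_class(ref, alt) for ref, alt in calls)
print(fingerprint)  # Counter({'C>T': 2, 'T>G': 2, 'C>A': 1})
```

Comparing such tallies between pre-cancerous and tumour samples is, in outline, how a shared fingerprint of the kind described above can be recognised.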

Even in areas of Barrett’s oesophagus without cancer, the researchers found a large number of mutations in their tissue – on average 12,000 per person (compared to an average of 18,000 mutations within the cancer). Many of these are likely to have been ‘bystanders’, genetic mutations that occurred along the way but that were not actually implicated in cancer.

The researchers found that there appeared to be a tipping point, where a patient would go from having lots of individual mutations, but no cancer, to a situation where large pieces of genetic information were being transferred not just between genes but between chromosomes.

Co-author Dr Caryn Ross-Innes adds: “We know very little about how you go from pre-cancer to cancer – and this is particularly the case in oesophageal cancer. Barrett’s oesophagus and the cancer share many mutations, but we are now a step closer to understanding which are the important mutations that tip the condition over into a potentially deadly form of cancer.”

The research was funded by the Medical Research Council and Cancer Research UK. The Cytosponge was trialled in patients at the NIHR Clinical Investigation Ward at the Cambridge Clinical Research Facility.

Reference
Ross-Innes, CS et al. Whole-genome sequencing provides new insights into the clonal architecture of Barrett’s esophagus and esophageal adenocarcinoma. Nature Genetics; 20 July 2015


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

– See more at: http://www.cam.ac.uk/research/news/pill-on-a-string-could-help-spot-early-signs-of-cancer-of-the-gullet