All posts by Admin

Access To Startup Skills Threatened By U.K. Visa Review

Source: TechCrunch

The U.K.-based startup founders and investors who penned an open letter backing the Conservatives at the General Election in May are now confronted with the prospect of a Tory government looking for ways to make it harder for businesses to recruit and bring non-EU workers to the U.K. — owing to a political push to reduce net migration.

Soon after winning the election this May, Prime Minister David Cameron made a speech on migration, outlining the new government’s forthcoming Immigration Bill — which he said would include reforms to domestic immigration and labour market rules aimed at reducing the demand for skilled migrant labour.

Given that the U.K. is a member of the European Union, the U.K. government can’t pull any policy levers to reduce migration from the EU (although Cameron is also hoping to renegotiate migration rules with the EU). Which leaves non-EU migration bearing the brunt of the government’s planned migration squeeze. Startups of course rely on filling vacancies by bringing skills from abroad — given it may be the only way to obtain relevant expertise when you’re working in such nascent areas.

The Home Office has commissioned the Migration Advisory Committee (MAC) to undertake two reviews of the U.K.’s current tier 2 skilled visa requirements (a main route used by startups to fill vacancies; there is also the tier 1 entrepreneur visa for founders). The MAC conducted a review of tier 2 visa salary thresholds last month but has yet to report its findings to the Home Office — although a spokesman for the department told TechCrunch the government would look to implement any changes it deems necessary, based on that review, this autumn.

The larger scale tier 2 visa review is still taking evidence, with a closing date for submissions of the end of September. The MAC is then likely to report in full by the year’s end — so further changes to the tier 2 visa, based on the government’s assessment of the review, may be incoming by the start of next year.

According to the MAC’s call for evidence, the proposed changes being considered by the government include some potentially radical measures — such as: significantly raising the salary threshold; restricting which job roles are eligible; and removing the right for dependents of visa holders to be able to work in the U.K.

Other measures apparently on the table include putting time-limits on shortage occupations in order to encourage domestic businesses to invest in upskilling; and imposing a skills levy on employers who hire from outside the European Economic Area in order to fund domestic apprenticeships.

The pro-startup organisation Coadec, which advocates for policies that support the U.K.’s digital economy, kicked off a campaign last week to raise awareness of the MAC’s review and to feed startups’ views into its call for submissions. It is asking U.K. startups to complete a survey on the tier 2 visa process and the changes the government is considering. Coadec will then compile the responses into its own report to submit to the MAC.

“The current situation is that the only way non-EU workers can get into the UK really is through the tier 2 visa, because the post-study work visa has been scrapped, the high skilled migrant program has been scrapped,” Coadec executive director Guy Levin tells TechCrunch. “You can still come in as an entrepreneur or an investor through tier 1, or if you’re an exceptional talent through tier 1, but tier 2’s the main visa route for non-EU workers to come into the country.

“You have to have a definite job offer, you need to have a degree level qualification, and the role needs to be advertised in the U.K. for 28 days first before it can be offered internationally. There has to be a minimum salary threshold, which is set for new entrants at the 10th percentile and for experienced hires at the 25th percentile… so for developers the 25th percentile is about £31,000. And the company itself needs to go through a process of getting accredited by the Home Office as a sponsor.”

Levin notes there were some 15,500 people entering the U.K. last year via the tier 2 general route — that is, excluding intra-company transfers (a route which does not apply to startups). A further breakdown by job type puts “Programmers and software development professionals” as the third most popular occupation under the ‘resident labour market test route’ (i.e. rather than the shortage occupation route) — with 2,618 devs entering the U.K. via that route in the year ending March 2015.

“It’s not enormous numbers but it’s significant. And that’s just for that particular job title. There may be others under other job titles, like data scientist or product manager,” says Levin.

“The system is fairly sensible, as it stands,” he adds. “Some bits of it are annoying, like the 28 day test. And thankfully that’s waived for shortage occupations… Which means you get to fast-track some bits of that… And at the start of the year some digital roles were put on that, so that’s fantastic and a good win for the sector.”

But Levin says the “worry” now for U.K. startups is that the Conservatives’ political imperative to find ways to reduce migration to the U.K. could result in policies that are actively harmful to the digital economy — given the options currently being considered by the government would limit founders’ ability to hire the skilled talent they need.

Levin says Coadec’s own migration survey has garnered around 100 responses thus far, with around 40 per cent saying they currently employ people hired via the tier 2 visa route. “The majority don’t… and several of the respondents said it’s already too complicated and expensive for us to go through that process,” he notes.

Speaking to TechCrunch about the government’s migration consultation, DueDil CEO and founder Damian Kimmelman, himself an American who relocated to the U.K. to build a data startup (one which has attracted some $22 million in funding thus far, according to CrunchBase), argues that “populist politics” could pose a threat to the U.K.’s digital economy if the government ends up scrapping what he says has been a relatively liberal migration policy thus far. Approximately 10 per cent of DueDil’s staff are employed on tier 2 visas.

One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere.

“One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere. One of the problems building a company that’s scaling and building it in the U.K. is there are not a lot of people that have the experience of scaling large tech businesses. You can only find that outside of the U.K. All of the large companies that scaled got bought out. And this is an unfortunate fact about the talent pool — but one of the ways the U.K. has effectively been able to solve this is by really having quite liberal immigration policies,” he tells TechCrunch.

Broadly speaking, Kimmelman said any of the proposed changes being consulted on by the MAC could have “a serious impact” on DueDil’s ability to grow.

“Restricting what roles are eligible seems ludicrous. We are working in a very transformative economy. All of the types of roles are new types of roles every six months… Government can’t really police that. That’s sort of self defeating,” he adds. “If you restrict the rights of dependents you pretty much nullify the ability to bring in great talent. I don’t know anybody who’s going to move their family [if they can’t work here]… It’s already quite difficult hiring from the U.S. because the quality of life in the U.S. in a lot of cities is much greater than it is in London.”

He’s less concerned about the prospect of being required to increase the salary requirement for individuals hired under the tier 2 visa — although Coadec’s Levin points out that some startups, likely earlier stage, might choose to compensate a hire with equity rather than a large salary to reduce their burn rate. So a higher salary requirement could make life harder for other types of U.K. startups.

Kimmelman was actually one of the signatories of the aforementioned open letter backing the Conservative Party at the General Election. Signatories of that letter asserted the Tory-led government —

…has enthusiastically supported startups, job-makers and innovators and the need to build a British culture of entrepreneurialism to rival America’s. Elsewhere in the world people are envious at how much support startups get in the UK. This Conservative-led government has given us wholehearted support and we are confident that it would continue to do so. It would be bad for jobs, bad for growth, and bad for innovation to change course.

So is he disappointed that the new Conservative government is consulting on measures that, if implemented, could limit U.K.-based startup businesses’ access to digital skills? “I wouldn’t read too much into this just yet because they haven’t made any decisions,” says Kimmelman. “But if they do enact any of these policies I think it would be really harmful to the community.”

“They have a lot of constituents other than just the tech community that they’re working for. So I hope that they don’t do anything that’s rash. But I’ve been very impressed by the way that they’ve handled things thus far and so I think I need to give them the benefit of the doubt,” he adds.

Levin says responses to Coadec’s survey so far suggest U.K. startups’ main priority is that the government keeps the overseas talent pipeline flowing — with less concern over cost increases, such as if the government applies a skills levy to fund apprenticeship programs.

But how the government squares the circle of an ideological commitment to reducing net migration with keeping skills-hungry digital businesses on side remains to be seen.

“The radical option of really restricting [migration] to genuine shortages is scary — because we just don’t know what that would look like,” adds Levin. “It could be that that would be the best answer for the tech sector because we might be able to make a case that there are genuine shortages and so we’d be fine. But there’s an uncertainty about what the future would look like — so at the moment we’re going to focus on making a positive case on why skilled migration is vital for the digital economy.”

The prior Tory-led U.K. coalition government introduced a cap on tier 2 visas back in 2011 — of just over 20,000 per year — which is applied as a monthly limit. That monthly cap was exceeded for the first time in June, with a swathe of visa applications turned down as a result. That’s something Levin says shows the current visa system is “creaking at the seams” — even before any further restrictions are factored in.

“Thirteen hundred applicants in June were turned down because they’d hit the cap,” he says, noting that when the cap is hit the Home Office uses salary level to choose between applicants. “So the salary thresholds jump up from the 25th percentile… which means the lower paid end of people lose out, which would probably disproportionately affect startups.”

London Tube Strike Produced Net Economic Benefit

Source: www.cam.ac.uk

New analysis of the London Tube strike in February 2014 finds that it enabled a sizeable fraction of commuters to find better routes to work, and actually produced a net economic benefit.

For the small fraction of commuters who found a better route, when multiplied over a longer period of time, the benefit to them actually outweighs the inconvenience suffered by many more

Shaun Larcom

Analysis of the London Tube strike in February 2014 has found that despite the inconvenience to tens of thousands of people, the strike actually produced a net economic benefit, due to the number of people who found more efficient ways to get to work.

The researchers, from the University of Cambridge and the University of Oxford, examined 20 days’ worth of anonymised Oyster card data, containing more than 200 million data points, in order to see how individual Tube journeys changed during the strike. Since this particular strike only resulted in a partial closure of the Tube network and not all commuters were affected by the strike, a direct comparison was possible. The data enabled the researchers to see whether people chose to go back to their normal commute once the strike was over, or if they found a more efficient route and decided to switch.

The researchers found that of the regular commuters affected by the strike, either because certain stations were closed or because travel times were considerably different, a significant fraction – about one in 20 – decided to stick with their new route once the strike was over.

While the proportion of individuals who ended up changing their routes may sound small, the researchers found that the strike actually ended up producing a net economic benefit. By performing a cost-benefit analysis of the amount of time saved by those who changed their daily commute, the researchers found that the amount of time saved in the longer term actually outweighed the time lost by commuters during the strike. An Oxford working paper of their findings is published online today.
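The cost-benefit logic described here (a one-off loss for everyone disrupted during the strike, weighed against ongoing daily savings for the small fraction who switched routes) can be sketched in a few lines. The function and every number below are hypothetical illustrations, not figures from the study:

```python
# Illustrative sketch of the study's cost-benefit reasoning.
# All parameter values are made up for illustration only.

def net_benefit_minutes(n_commuters, strike_days, minutes_lost_per_day,
                        switch_fraction, minutes_saved_per_day, horizon_days):
    """One-off strike losses vs. ongoing savings for those who switched routes."""
    # Everyone affected loses time during the strike itself...
    cost = n_commuters * strike_days * minutes_lost_per_day
    # ...but the switchers keep saving time on every later commuting day.
    benefit = (n_commuters * switch_fraction
               * minutes_saved_per_day * horizon_days)
    return benefit - cost  # positive => net economic benefit

# With a 1-in-20 switch rate, modest daily savings can outweigh the disruption:
print(net_benefit_minutes(100_000, 2, 20, 0.05, 5, 250))
```

Even a small per-switcher saving, multiplied over a long horizon, can dominate a large but one-off disruption, which is the paper’s central point.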

The London Tube map itself may have been a reason why many commuters did not find their optimal journey before the strike. In many parts of London, the actual distances between stations are distorted on the iconic map. By digitising the Tube map and comparing it to the actual distances between stations, the researchers found that those commuters living in, or travelling to, parts of London where distortion is greatest were more likely to have learned from the strike and found a more efficient route.

Additionally, since different Tube lines travel at different speeds, those commuters who had been travelling on one of the slower lines were also more likely to switch routes once the strike was over.

“One of the things we’re looking at is whether consumers usually make the best decision, but it’s never been empirically tested using a large consumer dataset such as this one,” said co-author Dr Ferdinand Rauch from Oxford’s Department of Economics. “Our findings illustrate that people might get stuck with suboptimal decisions because they don’t experiment enough.”

According to the authors, being forced to alter a routine, whether that’s due to a Tube strike or government regulation, can often lead to net benefits, as people or corporations are forced to innovate. In economics, this is known as the Porter hypothesis.

“For the small fraction of commuters who found a better route, when multiplied over a longer period of time, the benefit to them actually outweighs the inconvenience suffered by many more,” said co-author Dr Shaun Larcom of Cambridge’s Department of Land Economy. “The net gains came from the disruption itself.”

“Given that a significant fraction of commuters on the London underground failed to find their optimal route until they were forced to experiment, perhaps we should not be too frustrated that we can’t always get what we want, or that others sometimes take decisions for us,” said co-author Dr Tim Willems, also from Oxford’s Department of Economics. “If we behave anything like London commuters and experiment too little, hitting such constraints may very well be to our long-term advantage.”

Reference:
Larcom, Shaun, Ferdinand Rauch and Tim Willems (2015), “The Benefits of Forced Experimentation: Striking Evidence from the London Underground Network”, University of Oxford Working Paper. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Old Drug Performs New Tricks

Source: www.cam.ac.uk

Patients with the most dangerous type of high blood pressure will be able to receive far more effective treatment after Cambridge-led research reveals the powers of a “wonder drug” that has lain under the noses of doctors for 50 years.

Spironolactone, one of a range of drugs given according to doctors’ preference to patients with resistant hypertension (high blood pressure that doesn’t respond to a standard drug treatment), is in fact “outstandingly superior” to the alternatives, researchers have found. They recommend it should now be the first choice for such patients, and say that – for most – this well-known but under-valued drug will bring their condition fully under control.

The discovery could have a profound impact globally, since hypertension, a major contributor to stroke and heart disease, is so common, affecting as many as one in three adults in some countries. It challenges what the authors describe as “a growing perception” that severe hypertension was beyond the control of existing drug treatments, and gives more clues into what causes the condition.

The latest research, published today in the Lancet to coincide with their presentation to the British Hypertension Society, emerged from the PATHWAY-2 trial, part of the PATHWAY programme of trials in hypertension funded by the British Heart Foundation and led by Professor Morris Brown, professor of clinical pharmacology at Cambridge University and a Fellow of Gonville & Caius College.

The findings are drawn from what authors Brown and Professor Bryan Williams of University College London describe as an experimental “shoot-out” between three different drugs used by doctors for years to treat patients when the standard initial cocktail of three hypertension drugs does not work.

Spironolactone ‘slugged it out’ against a betablocker (bisoprolol) and an alpha-blocker (doxazosin) in the trial, which took six years and involved 314 patients in 14 different centres.

The patients all suffered from resistant high blood pressure that had not responded to the standard treatment for hypertension: a combination of three drugs (an ACE-inhibitor (or angiotensin-receptor blocker), a calcium channel blocker and a thiazide-type diuretic). They continued with this basic combination throughout, but each of the three trial drugs – and a placebo – was added one at a time, in random order, for 12 weeks each.

In what is known as a “double blind” trial, neither the patients nor the researchers knew which patient was taking which drug when. In a pioneering step, the study also used patients’ own blood pressure readings taken at home to minimise so-called “white coat syndrome”, in which the stress of being in a clinic causes blood pressure to rise artificially.

Once the resulting data had been analysed, it emerged that almost three quarters of patients in the trial saw a major improvement in blood pressure on spironolactone, with almost 60% hitting a particularly stringent measure of blood pressure control. Of the three drugs trialled, spironolactone was the best at lowering blood pressure in 60% of patients, whereas bisoprolol and doxazosin were the best drug in only 17% and 18% respectively.

“Spironolactone annihilated the opposition,” said Brown. “Most patients came right down to normal blood pressure while taking it.”

He added: “This is an old drug which has been around for a couple of generations that has resurfaced and is almost a wonder drug for this group of patients. In future it will stimulate us to look for these patients at a much earlier stage so we can treat and maybe even cure them before resistant hypertension occurs.”

Doctors appear to have been wary of giving patients spironolactone because it raises the level of potassium in the body. But the study revealed the rise to be only marginal and not dangerous.

The causes of resistant hypertension are still poorly understood, but one theory is that the condition could be the result of sodium retention: too much salt in the body.

Spironolactone is a diuretic and helps the body get rid of salt. In the trial, it worked even better than average on patients who, tests showed, had high salt levels. Brown said the findings appeared to confirm that, in most patients with resistant hypertension, excessive salt was the problem, probably caused by too much of the adrenal hormone aldosterone.

Instead of seeing the treatment-resistant form of high blood pressure as simply the result of having the condition for a long time or of poor treatment, it should be regarded as a different sub-group of hypertension which would need different investigations and treatments, Brown said.



New Study Shows Artificial Pancreas Works for Length of Entire School Term

Source: www.cam.ac.uk

Technology assisting people with type 1 diabetes edges closer to perfection.

The data clearly demonstrate the benefits of the artificial pancreas when used over several months

Roman Hovorka

An artificial pancreas given to children and adults with type 1 diabetes going about their daily lives has been proven to work for 12 weeks – meaning the technology, developed at the University of Cambridge, can now offer a whole school term of extra freedom for children with the condition.

Artificial pancreas trials for people at home, work and school have previously been limited to short periods of time. But a study, published today in the New England Journal of Medicine, saw the technology safely provide three whole months of use, bringing us closer to the day when the wearable, smartphone-like device can be made available to patients.

The lives of the 400,000 people in the UK with type 1 diabetes currently involve a relentless balancing act: controlling their blood glucose levels by finger-prick blood tests and taking insulin via injections or a pump. The artificial pancreas, by contrast, achieves tight blood glucose control automatically.

This latest Cambridge study showed the artificial pancreas significantly improved control of blood glucose levels among participants – lessening their risk of hypoglycaemia. Known as ‘having a hypo,’ hypoglycaemia is a drop in blood glucose levels that can be highly dangerous and is what people with type 1 diabetes hate most.

Susan Walls is mother to Daniel Walls, a 12-year-old with type 1 diabetes who has taken part in the trial. She said: “Daniel goes back to school this month after the summer holidays – so it’s a perfect time to hear this wonderful news that the artificial pancreas is proving reliable, offering a whole school term of support.

“The artificial pancreas could change my son’s life, and the lives of so many others. Daniel has absolutely no hypoglycaemia awareness at night. His blood glucose levels could be very low and he wouldn’t wake up. The artificial pancreas could give me the peace of mind that I’ve been missing.”

“The data clearly demonstrate the benefits of the artificial pancreas when used over several months,” said Dr Roman Hovorka, Director of Research at the University’s Metabolic Research Laboratories, who developed the artificial pancreas. “We have seen improved glucose control and reduced risk of unwanted low glucose levels.”

The Cambridge study is being funded by JDRF, the type 1 diabetes charity. Karen Addington, Chief Executive of JDRF, said: “JDRF launched its goal of perfecting the artificial pancreas in 2006. These results today show that we are thrillingly close to what will be a breakthrough in medical science.”

Reference:
Thabit, Hood et al. ‘Free-living Home Use of an Artificial Beta Cell in Type 1 Diabetes.’ New England Journal of Medicine (2015). DOI: 10.1056/NEJMoa1509351

Adapted from a JDRF press release.



 

Refugee Camp Entrepreneurship

Source: www.cam.ac.uk

Entrepreneurship initiatives can fill the ‘institutional void’ of long-term refugee camps, according to new research.

The rules of the game concerning temporary institutions do not reflect the reality of life in the camp

Marlen de la Chaux

At a time of global focus on refugee issues due to the war in Syria and other displacement, new research from the University of Cambridge calls for policymakers to foster entrepreneurship at refugee camps to help fill an ‘institutional void’ that leads to despair, boredom and crime.

Although most refugee camps are initially set up to provide supposedly “temporary” safety for people uprooted by war, natural disaster or other events, the reality is that many forcibly displaced people spend 20 years or more in exile.

A paper from Cambridge researchers calls on governments and other policymakers to promote refugee-camp entrepreneurship in order to provide an economic and psychological boost to displaced people.

“Refugee camp entrepreneurs reduce aid dependency and in so doing help to give life meaning for, and confer dignity on, the entrepreneurs,” says the paper, by Marlen de la Chaux, a PhD student at the Cambridge Judge Business School, and Helen Haugh, Senior Lecturer in Community Enterprise at Cambridge Judge. Marlen is a Gates Cambridge Scholar.

While such camps are created on the assumption they will be temporary in response to a passing emergency, such displacement is in fact often protracted – so “the rules of the game concerning temporary institutions do not reflect the reality of life in the camp,” the paper says.

The paper – entitled “Entrepreneurship and Innovation: How Institutional Voids Shape Economic Opportunities in Refugee Camps” – was presented by the authors at this summer’s 75th Annual Meeting of the Academy of Management in Vancouver, Canada.

The researchers identify three institutional barriers to refugee camp entrepreneurship: a lack of functioning markets, inefficient legal and political systems, and poor infrastructure.

This presents a number of opportunities for policymakers to boost entrepreneurship, including urban planning techniques to design useful infrastructure, since long-term refugee camps tend to resemble small cities rather than transient settlements.

In addition, cash-based aid programmes and partnerships between refugee camp organisers and micro-lending institutions can provide seed capital to refugee camp ventures; innovation hubs such as those recently established in Nairobi, Kenya, can help provide access to business advice and seed capital; and the host country can create employment opportunities within the refugee camp by outsourcing some tasks to refugees.

“As the number of forcibly displaced increases, the urgency to find solutions to redress the negative aspects of life in a refugee camp for those in protracted exile also rises,” the paper says. Enlightened policies to boost refugee-camp entrepreneurship “may also make a positive contribution to the economy of the host country and in so doing help to reduce the local resentment experienced by those living in camps.”

Adapted from an article originally published on the Cambridge Judge Business School website.




Neural Circuit in the Cricket Brain Detects the Rhythm Of the Right Mating Call

Source: www.cam.ac.uk

Delay mechanism within elegant brain circuit consisting of just five neurons means female crickets can automatically detect chirps of males from same species. Scientists say this example of simple neural circuitry could be “fundamental” for other types of information processing in much larger brains.

That’s the beauty of nature, it comes up with the most simple and elegant ways of dealing with and processing information

Berthold Hedwig

Scientists have identified an ingeniously elegant brain circuit consisting of just five nerve cells that allows female crickets to automatically identify the chirps of males from the same species through the rhythmic pulses hidden within the mating call.

The circuit uses a time delay mechanism to match the gaps between pulses in a species-specific chirp – gaps of just a few milliseconds. The circuit delays a pulse by exactly the between-pulse gap, so that, if the delayed pulse coincides with the next pulse coming in, the same-species signal is confirmed.

It is one of the first times that a brain circuit of identified individual neurons which recognises an acoustic rhythm has been characterised. The results are reported today (11 September) in the journal Science Advances.

Using tiny electrodes, scientists from Cambridge University’s Department of Zoology explored the brain of female crickets for individual auditory neurons responding to digitally-manipulated cricket chirps (even a relatively simple organism such as a cricket still has a brain containing up to a million neurons).

Once located, the nerve cells were stained with fluorescent dye. By monitoring how each neuron responded to the sound pulses of the cricket chirps, scientists were able to work out the sequence the neurons fired in, enabling them to unpick the time delay logic of the circuit.

Sound processing starts in hearing organs, but the temporal, rhythmic features of sound signals – vital to all acoustic communication from birdsong to spoken language – are processed in the central auditory system of the brain.

Scientists say that the simple, time-coded neural network discovered in the brain of crickets may be an example of fundamental neural circuitry that identifies sound rhythms and patterns, and could be the basis for “complex and elaborate neuronal systems” in vertebrates.

“Compared to our complex language, crickets only have a few songs which they have to recognise and process, so, by looking at their much simpler brain, we aim to understand how neurons process sound signals,” said senior author Dr Berthold Hedwig.

As in Morse code, each cricket chirp contains several pulses, interspersed with gaps of a few milliseconds. It is the varying length of the gaps between pulses that gives each species its unique rhythm.

It is this ‘Morse code’ that gets read by the five-neuron circuit in the female brain.

Crickets’ ears are located on their front legs. On hearing a sound like a chirp, nerve cells respond and carry the information to the thoracic segment, and on to the brain.

Once there, the auditory circuit splits and sends the information into two branches:

One branch (consisting of two neurons) acts as a delay line, holding up the processing of the signal by the same amount of time as the interval between pulses – a mechanism specific to a cricket species’ chirp. The other branch sends the signal straight through to a ‘coincidence detector’ neuron.

When a second pulse comes in, it too is split, and part of the signal goes straight through to the coincidence detector. If the second pulse and the delayed signal from the first pulse coincide within the detector neuron, the circuit has matched the pulse time-code of the species’ chirp, and a final output neuron fires when the female hears the correct sound pattern.

“Once the circuit has a second pulse, it can define the rhythm. The first pulse is initial excitation; the second pulse is then superimposed with the delayed part of the first. The output neuron only produces a strong response if the pulses collide at the coincidence detector, meaning the timing is locked in, and the mating call is a species match,” said Hedwig.
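The delay-and-coincidence mechanism described above can be sketched as a tiny computational model. This is an illustration only: the function name, pulse timings and the coincidence tolerance are assumptions for the sketch, not values from the paper.

```python
# Illustrative model of the delay-line / coincidence-detector circuit.
# Pulse times, the species gap and the tolerance are hypothetical.

def chirp_matches(pulse_times_ms, species_gap_ms, tolerance_ms=1.0):
    """Delay each pulse by the species-specific gap; if the delayed copy
    coincides with the next incoming pulse, the rhythm matches."""
    for earlier, later in zip(pulse_times_ms, pulse_times_ms[1:]):
        delayed = earlier + species_gap_ms       # delay-line branch
        if abs(delayed - later) > tolerance_ms:  # coincidence detector
            return False                         # no coincidence: reject chirp
    # As in the circuit, a second pulse is needed before the rhythm is defined.
    return len(pulse_times_ms) >= 2

print(chirp_matches([0, 20, 40, 60], species_gap_ms=20))   # True
print(chirp_matches([0, 12, 30, 41], species_gap_ms=20))   # False
```

The first call mimics a same-species chirp whose pulses arrive exactly one gap apart; the second mimics a foreign rhythm, where the delayed copy of each pulse misses the next one and the detector stays silent.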

“With hindsight, I would say it’s impossible to make the circuitry any simpler – it’s the minimum number of elements that are required to do the processing. That’s the beauty of nature, it comes up with the most simple and elegant ways of dealing with and processing information,” he said.

To find the most effective sound pattern, the scientists digitally manipulated the natural pulse patterns and played the various patterns to female crickets mounted atop a trackball inside an acoustic chamber containing precisely located speakers.

If a particular rhythm of pulses triggered the female to set off in the direction of that speaker, the trackball recorded reaction times and direction.

Once they had honed the pulse patterns, the team played them to female crickets in modified mini-chambers with opened-up heads and brains exposed for the experiments.

Microelectrodes allowed them to record from the key auditory neurons (“it takes a couple of hours to find the right neuron in a cricket brain”), tag and dye them, and piece together the neural circuitry that reads the rhythmic pulses occurring at intervals of a few milliseconds in male cricket chirps.

Added Hedwig: “Through this series of experiments we have identified a delay mechanism within a neuronal circuit for auditory processing – something that was first hypothesised over 25 years ago. This time delay circuitry could be quite fundamental as an example for other types of neuronal processing in other, perhaps much larger, brains as well.”

The research was funded by the Biotechnology and Biological Sciences Research Council (BBSRC).

Reference:
Stefan Schöneich, Konstantinos Kostarakos, Berthold Hedwig. An auditory feature detection circuit for sound pattern recognition. Science Advances (2015). DOI: 10.1126/sciadv.1500325


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Facebook Data Suggests People From Higher Social Class Have Fewer International Friends


source: www.cam.ac.uk

New study using Facebook network data, including a dataset of over 57 billion friendships, shows correlation between higher social class and fewer international friendships. Researchers say results support ideas of ‘restricting social class’ among wealthy, but show that lower social classes are taking advantage of increased social capital beyond national borders.

The findings point to the possibility that the wealthy stay more in their own social bubble, but this is unlikely to be ultimately beneficial

Aleksandr Spectre

A new study conducted in collaboration with Facebook using anonymised data from the social networking site shows a correlation between people’s social and financial status, and the levels of internationalism in their friendship networks – with those from higher social classes around the world having fewer friends outside of their own country.

Despite the fact that, arguably, people from higher social classes should be better positioned to travel and meet people from different countries, researchers found that, when it comes to friendship networks, people from those groups had lower levels of internationalism and made more friends domestically than abroad.

Researchers say that their results are in line with what’s known as the ‘restricting social class’ hypothesis: that high-social class individuals have greater resources, and therefore depend less on others – with the wealthy tending to be less socially engaged, particularly with those from groups other than their own, as a result.

The research team, from the Prosociality and Well-Being Lab in the University of Cambridge’s Department of Psychology, conducted two studies – one local and one global, with the global study using a dataset of billions of Facebook friendships – and the results from both supported the idea of restricting social class.

However, the researchers say the fact that those of lower social status tend to have more international connections demonstrates how low-social class people “may actually stand to benefit most from a highly international and globalised social world”.

“The findings point to the possibility that the wealthy stay more in their own social bubble, but this is unlikely to be ultimately beneficial. If you are not engaging internationally then you will miss out on that international resource – that flow of new ideas and information,” said co-author Dr Aleksandr Spectre, who heads up the lab.

“The results could also be highlighting a mechanism of how the modern era might facilitate a closing of the inequality gap, as those from lower social classes take advantage of platforms like Facebook to increase their social capital beyond national borders,” he said.

For the first study, the ‘local’, the team recruited 857 people in the United States and asked them to self-report their perceived social status (from working to upper class on a numerical scale), as well as an objective indicator in the form of annual household income. The volunteers also provided researchers access to their Facebook networks.

The results from the first study indicated that low-social class people have nearly 50% more international friends than high-social class people.

For the second study, the ‘global’, the team approached Facebook directly, which provided data on every friendship formed over the network in every country in the world at the national aggregate level for 2011. All data was anonymous. The dataset included over 57 billion friendships.

The research team quantified social class on a national level based on each country’s economic standing by using gross domestic product (GDP) per capita data for 2011 as published by the World Bank.

After controlling for as many variables as they were able, the researchers again found a negative correlation between social class – this time on a national level – and the percentage of Facebook friends from other countries. For people from low-social class countries, 35% of their friendships on average were international, compared with an average of 28% in high-social class countries.

The findings from the two studies provide support for the restricting social class hypothesis on both a local and a global level, say the researchers. The results are contained in a new paper, published in the journal Personality and Individual Differences.

“Previous research by others has highlighted the value of developing weak ties to people in distant social circles, because they offer access to resources not likely to be found in one’s immediate circle. I find it encouraging that low-social class people tend to have greater access to these resources on account of having more international friendships,” said co-author Maurice Yearwood.

“From a methodological perspective, this combination of micro and macro starts to build a very interesting initial story. These are just correlations at the moment, but it’s a fascinating start for this type of research going forward,” Yearwood said.

Spectre says that the high levels of Facebook usage and sheer size of the network makes it a “pretty good proxy for your social environment”. “The vast majority of Facebook friendships are ones where people have met in person and engaged with each other, a lot of the properties you find in Facebook friendship networks will strongly mirror everyday life,” he said.

“We are entering an era with big data and social media where we can start to ask really big questions and gain answers to them in a way we just couldn’t do before. I think this research is a good example of that, I don’t know how we could even have attempted this work 10 years ago,” Spectre said.

The latest work is the first output of ongoing research collaborations between Spectre’s lab in Cambridge and Facebook, a company he commends for its “scientific spirit”.  “Having the opportunity to work with companies like Facebook, Twitter, Microsoft and Google should be something that’s hugely exciting to the academic community,” he said.

Reference:
Yearwood, M. H., Cuddy, A., Lamba, N., Youyou, W., van der Lowe, I., Piff, P., Gronin, C., Fleming, P., Simon-Thomas, E., Keltner, D., & Spectre, A. (2015). On wealth and the diversity of friendships: High social class people around the world have fewer international friends. Personality and Individual Differences, 87, 224-229. DOI: 10.1016/j.paid.2015.07.040



Linguistics Study Reveals Our Growing Obsession With Education


Source: www.cam.ac.uk

As children around the country go back to school, a new comparative study of spoken English reveals that we talk about education nearly twice as much as we did twenty years ago.

We talk about education twice as much as we used to.

Claire Dembry

The study, which compares spoken English today with recordings from the 1990s, allows researchers at Cambridge University Press and Lancaster University to examine how the language we use indicates our changing attitudes to education.

They found that the topic of education is far more salient in conversations now, with the word cropping up 42 times per million words, compared with only 26 times per million in the 1990s dataset.
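The “times per million words” figures quoted here are a standard corpus-linguistics normalisation, which lets corpora of different sizes be compared directly. A minimal sketch (the function and the sample count are illustrative, not the study’s data pipeline):

```python
def per_million(occurrences, corpus_size_words):
    # Normalise a raw frequency count to a rate per million words,
    # so samples of different sizes can be compared.
    return occurrences / corpus_size_words * 1_000_000

# e.g. 84 occurrences in a two-million-word sample -> 42 per million
print(per_million(84, 2_000_000))  # 42.0
```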

As well as talking about education more, there has also been a noticeable shift in the terms we use to describe it. Twenty years ago, the public used fact-based terms to talk about education, most often describing it as either full-time, or part-time.

Today, however, we’re more likely to use evaluative language about the standards of education and say that it’s good, bad or great. This could be due to the rise in the formal assessments of schools, for example, with the establishment of the Office for Standards in Education, Children’s Services and Skills (Ofsted) in 1992. Indeed, Ofsted itself has made its debut as a verb in recent times, with the arrival of discussions on what it means for a school to be Ofsteded.

Dr Claire Dembry, Senior Language Research Manager at Cambridge University Press said: “It’s fascinating to find out that, not only do we talk about education twice as much as we used to, but also that we are more concerned about the quality. It’s great that we have these data sets to be able to find out these insights; without them we wouldn’t be able to research how the language we use is changing, nor the topics we talk about most.”

The research findings also indicate that we’re now expecting to get more out of our education than we used to. We’ve started talking about qualifications twice as much as we did in the 1990s, GCSEs five times as much and A levels 1.4 times as much.

Meanwhile, use of the word university has tripled. This is perhaps not surprising, as the proportion of young people going to university doubled between 1995 and 2008, going from 20 per cent to almost 40 per cent.

When the original data was collected in the 1990s, university fees had yet to be introduced, and so it is unsurprising that the terms university fees and tuition fees did not appear in the findings. However the recent data shows these terms to each occur roughly once per million words, as we’ve begun to talk about university in more commercialised terms.

However, while teachers may be happy to hear that education is of growing concern to the British public, it won’t come as good news to them that the adjective underpaid is most closely associated with their job.

These are only the initial findings from the first two million words of the project, named the ‘Spoken British National Corpus 2014,’ which is still seeking recorded submissions.

Professor Tony McEnery, from the ESRC Centre for Corpus Approaches to Social Sciences (CASS) at Lancaster University, said: “We need to gather hundreds, if not thousands, of conversations to create a full spoken corpus so we can continue to analyse the way language has changed over the last 20 years.

“This is an ambitious project and we are calling for people to send us MP3 files of their everyday, informal conversations in exchange for a small payment to help me and my team to delve deeper into spoken language and to shed more light on the way our spoken language changes over time.”

People who wish to submit recordings to the research team should visit: http://languageresearch.cambridge.org/index.php/spoken-british-national-…



Cities At Risk


source: www.cam.ac.uk

New model says world cities face expected losses of $4.6 trillion in economic output over the next decade as a result of natural or man-made catastrophes.

We believe that it is possible to estimate the cost to a business, city, region or the global economy, from all catastrophic shocks.

Daniel Ralph

Using a new metric, ‘GDP @Risk’, the ‘Catastronomics’ techniques developed by researchers at the University of Cambridge reveal that major threats to the world’s most important cities could reduce economic output by some $4.56 trillion, equivalent to 1.2 per cent of the total GDP forecast to be generated by these cities in the next decade.
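As a back-of-envelope check on those figures (illustrative arithmetic only, not the Centre’s methodology): if $4.56 trillion represents 1.2 per cent of the cities’ forecast output, the implied total forecast for the decade is roughly $380 trillion.

```python
gdp_at_risk = 4.56         # $ trillion, expected losses over the decade
share_of_forecast = 0.012  # 1.2 per cent of forecast output

# Implied total GDP forecast for the cities over the decade
implied_total = gdp_at_risk / share_of_forecast
print(round(implied_total))  # 380 ($ trillion)
```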

The techniques, developed by researchers at the Cambridge Centre for Risk Studies, provide the data and risk analysis for the Lloyd’s City Risk Index 2015-2025, which was launched last week.

“GDP @Risk makes it possible to combine and compare a very wide range of threats, including those that are disruptive and costly, such as market collapse, in addition to destructive and deadly natural catastrophes, and measure their impact on economic output,” said Professor Daniel Ralph, Academic Director of Cambridge Centre for Risk Studies, which is based at the Cambridge Judge Business School. “This 1.2 per cent is the estimated ‘catastrophe burden’ on the world’s economy – without this, the growth of global GDP, currently running at around three per cent a year, would be significantly higher.”

Lloyd’s City Risk Index encompasses 301 of the world’s leading cities, selected by economic, business and political importance. These cities are responsible for over half of global GDP today, and an estimated two-thirds of the world’s economic output by 2025.

The analysis considers 18 different threats to these urban centres, from man-made events, such as market crashes and technological catastrophes, to natural disasters. It examines the likelihood and severity of disruption to the city’s output as an economic engine, rather than metrics of physical destruction or repair costs – the traditional focus of conventional catastrophe models.

The Centre’s analysis reflects the typologies of different economic activities in each city. The GDP growth history, demographics and other data are used to derive GDP projections out to 2025 for each city. GDP @Risk is a long run average: the economic loss caused by ‘all’ catastrophes that might occur in an average decade, baselined against economic performance between 2015 and 2025.

Professor Ralph added: “A framework to quantify the average damage caused by a Pandora’s box of all ills – a ‘universal’ set of catastrophes – can be used to calibrate the value of investing in resilience. This is what the GDP @Risk metric for 300 World Cities attempts to provide. We believe that it is possible to estimate the cost to a business, city, region or the global economy, from all catastrophic shocks. Such holistic approaches are an antidote to risk management that reacts to threats taken from yesterday’s news headlines. Our simple methodology suggests that between 10 per cent and 25 per cent of GDP @Risk could be recovered, in principle, by improving resilience of all cities.”

Adapted from an article originally published on the Cambridge Judge Business School website.



International Microfluidics Consortium Visits Cambridge Sept 22

Now in its seventh year, and following visits to China, Paris, Carolina, Copenhagen and Boston in the last year, the MF7 Microfluidics Consortium will be in Cambridge on Sept 21 and 22.

The mission of the Consortium is to grow the market for Microfluidics enabled products and services.

On Sept 21 the consortium will be working on opportunities in stratified medicine and hearing pitches from promising start-ups seeking investment, support and advice. There will also be a site visit to Dolomite Microfluidics in Royston.

On Sept 22 the consortium is organising an Open Day at the Trinity Centre on the Cambridge Science Park. This is an opportunity for a wider selection of microfluidics stakeholders to get to know the consortium and its members, to network, and to see demonstrations and hear presentations about recent work.

For more information and to register, please follow this link http://www.cfbi.com/mf54landingpage.htm

Use Of TV, Internet And Computer Games Associated With Poorer GCSE Grades

source: www.cam.ac.uk

Each extra hour per day spent watching TV, using the internet or playing computer games during Year 10 is associated with poorer grades at GCSE at age 16, according to research from the University of Cambridge.

Parents who are concerned about their child’s GCSE grade might consider limiting his or her screen time

Kirsten Corder

In a study published today in the open access International Journal of Behavioral Nutrition and Physical Activity, researchers also found that pupils doing an extra hour of daily homework and reading performed significantly better than their peers. However, the level of physical activity had no effect on academic performance.

The link between physical activity and health is well established, but its link with academic achievement is not yet well understood. Similarly, although greater levels of sedentary behaviour – for example, watching TV or reading – have been linked to poorer physical health, the connection to academic achievement is also unclear.

To look at the relationship between physical activity, sedentary behaviours and academic achievement, a team of researchers led by the Medical Research Council (MRC) Epidemiology Unit at the University of Cambridge studied 845 pupils from secondary schools in Cambridgeshire and Suffolk, measuring levels of activity and sedentary behaviour at age 14.5 years and then comparing this to their performance in their GCSEs the following year. This data was from the ROOTS study, a large longitudinal study assessing health and wellbeing during adolescence led by Professor Ian Goodyer at the Developmental Psychiatry Section, Department of Psychiatry, University of Cambridge.

The researchers measured objective levels of activity and time spent sitting, through a combination of heart rate and movement sensing. Additionally the researchers used self-reported measures to assess screen time (the time spent watching TV, using the internet and playing computer games) and time spent doing homework, and reading for pleasure.

The team found that screen time was associated with total GCSE points achieved. Each additional hour per day of time spent in front of the TV or online at age 14.5 years was associated with 9.3 fewer GCSE points at age 16 years – the equivalent of two grades in one subject (for example, from a B to a D) or one grade in each of two subjects. Two extra hours was associated with 18 fewer points at GCSE.
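Treating the reported association as linear, as the article does, gives simple arithmetic. This is a sketch only: the figure is a correlation, not a causal rate, and the function name is ours.

```python
POINTS_PER_HOUR = 9.3  # reported association per extra daily hour of screen time

def expected_point_change(extra_screen_hours):
    # Linear extrapolation of the reported association
    return -POINTS_PER_HOUR * extra_screen_hours

print(expected_point_change(1))  # -9.3
print(expected_point_change(2))  # -18.6 (reported as roughly 18 fewer points)
```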

Screen time and time spent reading or doing homework were independently associated with academic performance, suggesting that even for pupils who do a lot of reading and homework, time spent watching TV or online is still linked with poorer academic performance.

The researchers found no significant association between moderate to vigorous physical activity and academic performance, though this contradicts a recent study which found a beneficial effect in some academic subjects. However, both studies conclude that engaging in physical activity does not damage a pupil’s academic performance. Given the wider health and social benefits of overall physical activity, the researchers argue that it remains a public health priority both in and out of school.

As well as looking at total screen time, the researchers analysed time spent in different screen activities. Although watching TV, playing computer games or being online were all associated with poorer grades, TV viewing was found to be the most detrimental.

As this was a prospective study – in other words, the researchers followed the pupils over time to determine how different behaviours affected their academic achievement – the researchers believe they can, with some caution, infer that increased screen time led to poorer academic performance.

“Spending more time in front of a screen appears to be linked to a poorer performance at GCSE,” says first author Dr Kirsten Corder from the Centre for Diet and Activity Research (CEDAR) in the MRC Epidemiology Unit at the University of Cambridge. “We only measured this behaviour in Year 10, but this is likely to be a reliable snapshot of participants’ usual behaviour, so we can reasonably suggest that screen time may be damaging to a teenager’s grades. Further research is needed to confirm this effect conclusively, but parents who are concerned about their child’s GCSE grade might consider limiting his or her screen time.”

Unsurprisingly, the researchers found that teenagers who spent their sedentary time doing homework or reading scored better at GCSE: pupils doing an extra hour of daily homework and reading achieved on average 23.1 more GCSE points than their peers. However, pupils doing over four hours of reading or homework a day performed less well than their peers – the number of pupils in this category was relatively low (only 52 participants) and may include participants who are struggling at school, and therefore do a lot of homework but unfortunately perform badly in exams.

Dr Esther van Sluijs, also from CEDAR, adds: “We believe that programmes aimed at reducing screen time could have important benefits for teenagers’ exam grades, as well as their health. It is also encouraging that our results show that greater physical activity does not negatively affect exam results. As physical activity has many other benefits, efforts to promote physical activity throughout the day should still be a public health priority.”

The research was mainly supported by the MRC and the UK Clinical Research Collaboration.

Reference
Corder, K et al. Revising on the run or studying on the sofa: Prospective associations between physical activity, sedentary behaviour, and exam results in British adolescents. International Journal of Behavioral Nutrition and Physical Activity; 4 Sept 2015.



Using Stellar ‘Twins’ To Reach the Outer Limits of the Galaxy

source: www.cam.ac.uk

A new method of measuring the distances between stars enables astronomers to climb the ‘cosmic ladder’ and understand the processes at work in the outer reaches of the galaxy.

Determining distances is a key problem in astronomy, because unless we know how far away a star or group of stars is, it is impossible to know the size of the galaxy or understand how it formed and evolved

Paula Jofre Pfeil

Astronomers from the University of Cambridge have developed a new, highly accurate method of measuring the distances between stars, which could be used to measure the size of the galaxy, enabling greater understanding of how it evolved.

Using a technique which searches out stellar ‘twins’, the researchers have been able to measure distances between stars with far greater precision than is possible using typical model-dependent methods. The technique could be a valuable complement to the Gaia satellite – which is creating a three-dimensional map of the sky over five years – and could aid in the understanding of fundamental astrophysical processes at work in the furthest reaches of our galaxy. Details of the new technique are published in the Monthly Notices of the Royal Astronomical Society.

“Determining distances is a key problem in astronomy, because unless we know how far away a star or group of stars is, it is impossible to know the size of the galaxy or understand how it formed and evolved,” said Dr Paula Jofre Pfeil of Cambridge’s Institute of Astronomy, the paper’s lead author. “Every time we make an accurate distance measurement, we take another step on the cosmic distance ladder.”

The best way to directly measure a star’s distance is by an effect known as parallax, which is the apparent displacement of an object when viewed along two different lines of sight – for example, if you hold out your hand in front of you and look at it with your left eye closed and then with your right eye closed, your hand will appear to move against the background. The same effect can be used to calculate the distance to stars, by measuring the apparent motion of a nearby star compared to more distant background stars. By measuring the angle of inclination between the two observations, astronomers can use the parallax to determine the distance to a particular star.
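In the standard formulation (not spelled out in the article), the distance in parsecs is simply the reciprocal of the parallax angle in arcseconds. A minimal sketch, with the conversion to light years; the example star and its parallax value are ours, not the article’s:

```python
PARSEC_IN_LIGHT_YEARS = 3.26  # approximate conversion factor

def distance_light_years(parallax_arcsec):
    # d [parsec] = 1 / p [arcsec], then convert to light years
    return (1.0 / parallax_arcsec) * PARSEC_IN_LIGHT_YEARS

# Proxima Centauri's parallax is roughly 0.77 arcsec:
print(round(distance_light_years(0.77), 1))  # about 4.2 light years
```

The smaller the angle, the larger the distance, which is why the method fails for faraway stars: their parallax angles become too small to measure.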

However, the parallax method can only be applied for stars which are reasonably close to us, since beyond distances of 1600 light years, the angles of inclination are too small to be measured by the Hipparcos satellite, a precursor to Gaia. Consequently, of the 100 billion stars in the Milky Way, we have accurate measurements for just 100,000.

Gaia will be able to measure the angles of inclination with far greater precision than ever before, for stars up to 30,000 light years away. Scientists will soon have precise distance measurements for the one billion stars that Gaia is mapping – but that’s still only one percent of the stars in the Milky Way.

For even more distant stars, astronomers will still need to rely on models which look at a star’s temperature, surface gravity and chemical composition, and use the information from the resulting spectrum, together with an evolutionary model, to infer its intrinsic brightness and to determine its distance. However, these models can be off by as much as 30 percent. “Using a model also means using a number of simplifying assumptions – like for example assuming stars don’t rotate, which of course they do,” said Dr Thomas Mädler, one of the study’s co-authors. “Therefore stellar distances obtained by such indirect methods should be taken with a pinch of salt.”

The Cambridge researchers have developed a novel method to determine distances between stars by relying on stellar ‘twins’: two stars with identical spectra. Using a set of around 600 stars for which high-resolution spectra are available, the researchers found 175 pairs of twins. For each set of twins, a parallax measurement was available for one of the stars.

The researchers found that the difference of the distances of the twin stars is directly related to the difference in their apparent brightness in the sky, meaning that distances can be accurately measured without having to rely on models. Their method showed just an eight percent difference with known parallax measurements, and the accuracy does not decrease when measuring more distant stars.

“It’s a remarkably simple idea – so simple that it’s hard to believe no one thought of it before,” said Jofre Pfeil. “The further away a star is, the fainter it appears in the sky, and so if two stars have identical spectra, we can use the difference in brightness to calculate the distance.”
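The core of the idea is the inverse-square law: identical spectra imply identical intrinsic brightness, so the ratio of apparent fluxes fixes the ratio of distances. The sketch below is our simplification of that relation, not the paper’s full analysis (which works with selected spectral lines and parallax calibrations):

```python
import math

def twin_distance(known_distance, known_flux, twin_flux):
    # For stars of equal luminosity, flux falls off as 1/d^2,
    # so d_twin = d_known * sqrt(F_known / F_twin).
    return known_distance * math.sqrt(known_flux / twin_flux)

# A twin appearing four times fainter is twice as far away:
print(twin_distance(100.0, known_flux=4.0, twin_flux=1.0))  # 200.0
```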

Since the spectrum used for a single star contains as many as 280,000 data points, comparing entire spectra for different stars would be both time- and data-consuming, so the researchers chose just 400 spectral lines to make their comparisons. These particular lines are those which give the most distinguishing information about the star – similar to comparing photographs of individuals and looking at a single defining characteristic to tell them apart.

The next step for the researchers is to compile a ‘catalogue’ of stars for which accurate distances are available, and then search for twins among other stellar catalogues for which no distances are available. While only looking at stars which have twins restricts the method somewhat, thanks to the new generation of high-powered telescopes, high-resolution spectra are available for millions of stars. With even more powerful telescopes under development, spectra may soon be available for stars which are beyond even the reach of Gaia, so the researchers say their method is a powerful complement to Gaia.

“This method provides a robust way to extend the crucially-important cosmic distance ladder in a new special way,” said Professor Gerry Gilmore, the Principal Investigator for UK involvement in the Gaia mission. “It has the promise to become extremely important as new very large telescopes are under construction, allowing the necessary detailed observations of stars at large distances in galaxies far from our Milky Way, building on our local detailed studies from Gaia.”

The research was funded by the European Research Council.

Reference:
Jofré, P. et al. Climbing the cosmic ladder with stellar twins. Monthly Notices of the Royal Astronomical Society (2015). DOI: 10.1093/mnras/stv1724.



Scientists “Squeeze” Light One Particle At a Time

source: www.cam.ac.uk

A team of scientists has measured a bizarre effect in quantum physics, in which individual particles of light are said to have been “squeezed” – an achievement which at least one textbook had written off as hopeless.

It’s just the same as wanting to look at Pluto in more detail or establishing that pentaquarks are out there. Neither of those things has an obvious application right now, but the point is knowing more than we did before. We do this because we are curious and want to discover new things. That’s the essence of what science is all about.

Mete Atature

A team of scientists has successfully measured particles of light being “squeezed”, in an experiment that had been written off in physics textbooks as impossible to observe.

Squeezing is a strange phenomenon of quantum physics. It creates a very specific form of light which is “low-noise” and is potentially useful in technology designed to pick up faint signals, such as the detection of gravitational waves.

The standard approach to squeezing light involves firing an intense laser beam at a material, usually a non-linear crystal, which produces the desired effect.

For more than 30 years, however, a theory has existed about another possible technique. This involves exciting a single atom with just a tiny amount of light. The theory states that the light scattered by this atom should, similarly, be squeezed.

Unfortunately, although the mathematical basis for this method – known as squeezing of resonance fluorescence – was drawn up in 1981, the experiment to observe it was so difficult that one established quantum physics textbook despairingly concludes: “It seems hopeless to measure it”.

So it has proven – until now. In the journal Nature, a team of physicists report that they have successfully demonstrated the squeezing of individual light particles, or photons, using an artificially constructed atom, known as a semiconductor quantum dot. Thanks to the enhanced optical properties of this system and the technique used to make the measurements, they were able to observe the light as it was scattered, and proved that it had indeed been squeezed.

Professor Mete Atature, from the Cavendish Laboratory, Department of Physics, and a Fellow of St John’s College at the University of Cambridge, led the research. He said: “It’s one of those cases of a fundamental question that theorists came up with, but which, after years of trying, people basically concluded it is impossible to see for real – if it’s there at all.”

“We managed to do it because we now have artificial atoms with optical properties that are superior to natural atoms. That meant we were able to reach the necessary conditions to observe this fundamental property of photons and prove that this odd phenomenon of squeezing really exists at the level of a single photon. It’s a very bizarre effect that goes completely against our senses and expectations about what photons should do.”

Like a lot of quantum physics, the principles behind squeezing light involve some mind-boggling concepts.

It begins with the fact that wherever there are light particles, there are also associated electromagnetic fluctuations. This is a sort of static which scientists refer to as “noise”. Typically, the more intense light gets, the higher the noise. Dim the light, and the noise goes down.

But strangely, at a very fine quantum level, the picture changes. Even in a situation where there is no light, electromagnetic noise still exists. These are called vacuum fluctuations. While classical physics tells us that in the absence of a light source we will be in perfect darkness, quantum mechanics tells us that there is always some of this ambient fluctuation.

“If you look at a flat surface, it seems smooth and flat, but we know that if you really zoom in to a super-fine level, it probably isn’t perfectly smooth at all,” Atature said. “The same thing is happening with vacuum fluctuations. Once you get into the quantum world, you start to get this fine print. It looks like there are zero photons present, but actually there is just a tiny bit more than nothing.”

Importantly, these vacuum fluctuations are always present and provide a base limit to the noise of a light field. Even lasers, the most perfect light source known, carry this level of fluctuating noise.

This is when things get stranger still, however, because, in the right quantum conditions, that base limit of noise can be lowered even further. This lower-than-nothing, or lower-than-vacuum, state is what physicists call squeezing.

In the Cambridge experiment, the researchers achieved this by shining a faint laser beam on to their artificial atom, the quantum dot. This excited the quantum dot and led to the emission of a stream of individual photons. Normally, the noise associated with this photonic activity is greater than that of the vacuum state; but when the dot was excited only weakly, the noise of the light field actually dropped below the supposed baseline of vacuum fluctuations.

Explaining why this happens involves some highly complex quantum physics. At its core, however, is a rule known as Heisenberg’s uncertainty principle. This states that in any situation where a particle has two linked properties, the more precisely one is measured, the more uncertain the other must become.

In the normal world of classical physics, this rule does not apply. If an object is moving, we can measure both its position and momentum, for example, to understand where it is going and how long it is likely to take getting there. The pair of properties – position and momentum – are linked.

In the strange world of quantum physics, however, the situation changes. Heisenberg’s principle means the two members of such a pair can never both be pinned down at once: the more precisely one is measured, the more uncertain the other must remain.

In the Cambridge experiment, the researchers used that rule to their advantage, creating a tradeoff between what could be measured, and what could not. By scattering faint laser light from the quantum dot, the noise of part of the electromagnetic field was reduced to an extremely precise and low level, below the standard baseline of vacuum fluctuations. This was done at the expense of making other parts of the electromagnetic field less measurable, meaning that it became possible to create a level of noise that was lower-than-nothing, in keeping with Heisenberg’s uncertainty principle, and hence the laws of quantum physics.

Plotting the uncertainty with which fluctuations in the electromagnetic field could be measured on a graph creates a shape where the uncertainty of one part has been reduced, while the other has been extended. This creates a squashed-looking, or “squeezed” shape, hence the term, “squeezing” light.
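
The tradeoff can be put in rough numbers. The sketch below assumes the common textbook convention in which each quadrature of the unsqueezed vacuum has standard deviation 1/2, so the Heisenberg bound is a product of at least 1/4; it is an illustration of the principle, not the Nature paper’s analysis.

```python
import math

def quadrature_sigmas(r):
    """Standard deviations of the two field quadratures for an ideal
    squeezed vacuum with squeezing parameter r, in the convention where
    the unsqueezed vacuum has sigma1 = sigma2 = 1/2."""
    return math.exp(-r) / 2.0, math.exp(r) / 2.0

vac1, vac2 = quadrature_sigmas(0.0)   # vacuum baseline: both equal 1/2
sq1, sq2 = quadrature_sigmas(0.5)     # a squeezed state

assert sq1 < vac1                     # one quadrature dips below the vacuum floor...
assert sq2 > vac2                     # ...while the conjugate quadrature grows
assert abs(sq1 * sq2 - 0.25) < 1e-12  # the Heisenberg bound is saturated, not broken
```

The final assertion is the whole story in one line: squeezing never beats the uncertainty principle, it only redistributes the noise between the two quadratures.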

Atature added that the main point of the study was simply to attempt to see this property of single photons, because it had never been seen before. “It’s just the same as wanting to look at Pluto in more detail or establishing that pentaquarks are out there,” he said. “Neither of those things has an obvious application right now, but the point is knowing more than we did before. We do this because we are curious and want to discover new things. That’s the essence of what science is all about.”

Additional image: The left diagram represents electromagnetic activity associated with light at what is technically its lowest possible level. On the right, part of the same field has been reduced to lower than is technically possible, at the expense of making another part of the field less measurable. This effect is called “squeezing” because of the shape it produces.

Reference: 
Schulte, CHH, et al. Quadrature squeezed photons from a two-level system. Nature (2015). DOI: 10.1038/nature14868. 


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


One Year and 272 Billion Measurements Later, Gaia Team Celebrates First Anniversary of Observations


Source: www.cam.ac.uk

A space mission to create the largest, most-accurate, three-dimensional map of the Milky Way is celebrating its first completed year of observations.

We are moving beyond just seeing to knowing about the galaxy in which we live.

Gerry Gilmore

The Gaia satellite, which orbits the Sun at a distance of 1.5 million km from Earth, was launched by the European Space Agency in December 2013 with the aim of observing a billion stars and revolutionising our understanding of the Milky Way.

The unique mission is reliant on the work of Cambridge researchers who collect the vast quantities of data transmitted by Gaia to a data processing centre at the university, overseen by a team at the Institute of Astronomy.

Since the start of its observations in August 2014, Gaia has recorded 272 billion positional (or astrometric) measurements and 54.4 billion brightness (or photometric) data points.

Gaia surveys stars and many other astronomical objects as it spins, observing circular swathes of the sky. By repeatedly measuring the positions of the stars with extraordinary accuracy, Gaia can tease out their distances and motions throughout the Milky Way galaxy.
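
The geometry behind those distances is annual parallax: as the Earth (and Gaia) circle the Sun, a nearby star’s apparent position shifts slightly against the background, and the size of that shift fixes its distance. A minimal sketch using the standard parsec definition follows; Gaia’s actual pipeline fits position, parallax and proper motion jointly, so this is only the core formula.

```python
def distance_parsecs(parallax_arcsec):
    """Distance from annual parallax, via the parsec definition:
    d [parsecs] = 1 / p [arcseconds]."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

# A star showing 0.01 arcseconds of parallax lies about 100 parsecs away;
# Gaia's microarcsecond-class precision resolves far smaller shifts,
# and hence far greater distances.
d = distance_parsecs(0.01)
```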

Dr Francesca de Angeli, lead scientist at the Cambridge data centre, said: “The huge Gaia photometric data flow is being processed successfully into scientific information at our processing centre and has already led to many exciting discoveries.”

The Gaia team have spent a busy year processing and analysing data, with the aim of developing enormous public catalogues of the positions, distances, motions and other properties of more than a billion stars. Because of the immense volumes of data and their complex nature, this requires a huge effort from expert scientists and software developers distributed across Europe, combined in Gaia’s Data Processing and Analysis Consortium (DPAC).

“The past twelve months have been very intense, but we are getting to grips with the data, and are looking forward to the next four years of operations,” said Timo Prusti, Gaia project scientist at ESA.

“We are just a year away from Gaia’s first scheduled data release, an intermediate catalogue planned for the summer of 2016. With the first year of data in our hands, we are now halfway to this milestone, and we’re able to present a few preliminary snapshots to show that the spacecraft is working well and that the data processing is on the right track.”

As Gaia has been conducting its repeated scans of the sky to measure the motions of stars, it has also been able to detect whether any of them have changed their brightness, and in doing so, has started to discover some very interesting astronomical objects.

Gaia has detected hundreds of transient sources so far, with a supernova being the very first on August 30, 2014. These detections are routinely shared with the community at large as soon as they are spotted in the form of ‘Science Alerts’, enabling rapid follow-up observations to be made using ground-based telescopes in order to determine their nature.

One transient source was seen undergoing a sudden and dramatic outburst that increased its brightness by a factor of five. It turned out that Gaia had discovered a so-called ‘cataclysmic variable’, a system of two stars in which one, a hot white dwarf, is devouring mass from a normal stellar companion, leading to outbursts of light as the material is swallowed. The system also turned out to be an eclipsing binary, in which the relatively larger normal star passes directly in front of the smaller, but brighter white dwarf, periodically obscuring the latter from view as seen from Earth.

Unusually, both stars in this system seem to have plenty of helium and little hydrogen. Gaia’s discovery data and follow-up observations may help astronomers to understand how the two stars lost their hydrogen.

Gaia has also discovered a multitude of stars whose brightness undergoes more regular changes over time. Many of these discoveries were made between July and August 2014, as Gaia performed many subsequent observations of a few patches of the sky.

Closer to home, Gaia has detected a wealth of asteroids, the small rocky bodies that populate our solar system, mainly between the orbits of Mars and Jupiter. Because they are relatively nearby and orbiting the Sun, asteroids appear to move against the stars in astronomical images, appearing in one snapshot of a given field, but not in images of the same field taken at later times.

Gaia scientists have developed special software to look for these ‘outliers’, matching them with the orbits of known asteroids in order to remove them from the data being used to study stars. But in turn, this information will be used to characterise known asteroids and to discover thousands of new ones.

Gerry Gilmore, Professor of Experimental Philosophy, and the Gaia UK Principal Investigator, said: “The early science from Gaia is already supporting major education activity involving UK school children and amateur astronomers across Europe and has established the huge discovery potential of Gaia’s data.

“We are entering a new era of big-data astrophysics, with a revolution in our knowledge of what we see in the sky. We are moving beyond just seeing to knowing about the galaxy in which we live.”



Bavarian Fire Brigades Choose Sepura


Two prestigious contracts have been awarded to Sepura’s long-standing German partner, Selectric GmbH, for the supply of more than 6000 TETRA radios to fire brigades in the regions of Straubing and Passau.

These contracts – comprising a combination of the market-leading STP9000 and STP8X Intrinsically-Safe ATEX/IECEx hand-portables and SRG3900 mobile radios, plus accompanying accessories – build on previous successes in Bavaria and bring the total number of Sepura TETRA radios in frame contracts for German Public Safety users to over 350,000.

Hendrik Pieper, Managing Director for Selectric commented, “Sepura and Selectric’s joint knowledge of the challenges facing mission-critical communications in Germany – and our expertise in tackling them – have, once again, been clearly validated by this achievement.”

Hooman Safaie, Regional Director for Sepura added, “This new success demonstrates the formidable strength of our partnership with Selectric. It also reflects our commitment to the German public safety market – the largest TETRA market in the world – and our determination to maintain market leadership in the country.”

Access To Startup Skills Threatened By U.K. Visa Review



Source: TechCrunch

The U.K.-based startup founders and investors who penned an open letter backing the Conservatives at the General Election in May are now confronted with the prospect of a Tory government looking for ways to make it harder for businesses to recruit and bring non-EU workers to the U.K. — owing to a political push to reduce net migration.

Soon after winning the election this May, Prime Minister David Cameron made a speech on migration, outlining the new government’s forthcoming Immigration Bill — which he said would include reforms to domestic immigration and labour market rules aimed at reducing the demand for skilled migrant labour.

Given that the U.K. is a member of the European Union, the U.K. government can’t pull any policy levers to reduce migration from the EU (although Cameron is also hoping to renegotiate migration rules with the EU). Which leaves non-EU migration bearing the brunt of the government’s planned migration squeeze. Startups of course rely on filling vacancies by bringing skills from abroad — given it may be the only way to obtain relevant expertise when you’re working in such nascent areas.

The Home Office has commissioned the Migration Advisory Committee to undertake two reviews of the U.K.’s current tier 2 skilled visa requirements (a main route used by startups to fill vacancies; there is also the tier 1 entrepreneur visa for founders). A review of tier 2 visa salary thresholds was conducted by MAC last month, but has yet to report its findings to the Home Office — although a spokesman for the department told TechCrunch the government would look to implement any changes it deems necessary based on that review this autumn.

The larger scale tier 2 visa review is still taking evidence, with a closing date for submissions of the end of September. The MAC is then likely to report in full by the year’s end — so further changes to the tier 2 visa, based on the government’s assessment of the review, may be incoming by the start of next year.

According to the MAC’s call for evidence, the proposed changes being considered by the government include some potentially radical measures — such as: significantly raising the salary threshold; restricting which job roles are eligible; and removing the right for dependents of visa holders to be able to work in the U.K.

Other measures apparently on the table include putting time-limits on shortage occupations in order to encourage domestic businesses to invest in upskilling; and imposing a skills levy on employers who hire from outside the European Economic Area in order to fund domestic apprenticeships.

The pro-startup organisation Coadec, which advocates for policies that support the U.K.’s digital economy, kicked off a campaign last week to raise awareness of the MAC’s review and feed its call for submissions with startups’ views. It is asking U.K. startups to complete a survey on the tier 2 visa process and the changes the government is considering; Coadec will then compile the responses into its own report to submit to the MAC.

“The current situation is that the only way non-EU workers can get into the UK really is through the tier 2 visa, because the post-study work visa has been scrapped, the high skilled migrant program has been scrapped,” Coadec executive director Guy Levin tells TechCrunch. “You can still come in as an entrepreneur through tier 1 or an investor, or if you’re an exceptional talent through tier 1, but tier 2’s the main visa route for non-EU workers to come into the country.

“You have to have a definite job offer, you need to have a degree level qualification, and the role needs to be advertised in the U.K. for 28 days first before it can be offered internationally. There has to be a minimum salary threshold, which is set for new entrants at the 10th percentile and for experienced hires at the 25th percentile… so for developers the 25th percentile is about £31,000. And the company itself needs to go through a process of getting accredited by the Home Office as a sponsor.”
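
The criteria Levin lists amount to a checklist, sketched below purely for illustration. The £31,000 figure is the article’s example of the 25th-percentile threshold for experienced developers; real thresholds are set by the Home Office, vary by occupation, and change over time, so none of this is application guidance.

```python
# Hypothetical figure for illustration only (the article's example for
# experienced developer hires at the 25th percentile).
EXPERIENCED_DEV_THRESHOLD_GBP = 31_000

def meets_tier2_criteria(has_job_offer, has_degree, days_advertised_in_uk,
                         salary_gbp, employer_is_licensed_sponsor):
    """Rough sketch of the tier 2 checks Levin describes; not legal guidance."""
    return bool(has_job_offer
                and has_degree
                and days_advertised_in_uk >= 28   # resident labour market test
                and salary_gbp >= EXPERIENCED_DEV_THRESHOLD_GBP
                and employer_is_licensed_sponsor) # Home Office sponsor accreditation

# A 40k GBP offer still fails if the role was only advertised in the U.K.
# for 14 days rather than the required 28.
result = meets_tier2_criteria(True, True, 14, 40_000, True)  # False
```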

Levin notes there were some 15,500 people entering the U.K. last year via the tier 2 general route — that is, excluding intracompany transfers (a route which does not apply to startups). A further breakdown by job type puts “Programmers and software development professionals” as the third most popular occupation under the ‘resident labour market test’ route (i.e. rather than the shortage occupation route) — with 2,618 devs entering the U.K. via that route in the year ending March 2015.

“It’s not enormous numbers but it’s significant. And that’s just for that particular job title. There may be others under other job titles, like data scientist or product managers,” says Levin.

“The system is fairly sensible, as it stands,” he adds. “Some bits of it are annoying, like the 28 day test. And thankfully that’s waived for shortage occupations… Which means you get to fast-track some bits of that… And at the start of the year some digital roles were put on that, so that’s fantastic and a good win for the sector.”

But Levin says the “worry” now for U.K. startups is that the Conservatives’ political imperative to find ways to reduce migration to the U.K. could result in policies that are actively harmful to the digital economy — given the options currently being considered by the government would limit founders’ ability to hire the skilled talent they need.

Levin says Coadec’s own migration survey has garnered around 100 responses thus far, with around 40 per cent saying they currently employ people hired via the tier 2 visa route. “The majority don’t… and several of the respondents said it’s already too complicated and expensive for us to go through that process,” he notes.

Speaking to TechCrunch about the government’s migration consultation, DueDil CEO and founder Damian Kimmelman, himself an American who relocated to the U.K. to build a data startup (one which has attracted some $22 million in funding thus far, according to CrunchBase), argues that “populist politics” could pose a threat to the U.K.’s digital economy if the government ends up scrapping what he says has been a relatively liberal migration policy thus far. Approximately 10 per cent of DueDil’s staff are employed on tier 2 visas.

One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere.

“One of the reasons why I’m an American building a business in the U.K. is because of the really great ability to hire from anywhere. One of the problems building a company that’s scaling and building it in the U.K. is there are a not a lot of people that have scaled businesses, and have the experience of scaling large tech businesses. You can only find that outside of the U.K. All of the large companies that scaled got bought out. And this is an unfortunate fact about the talent pool — but one of the ways the U.K. has effectively been able to solve this is by really having quite liberal immigration policies,” he tells TechCrunch.

Broadly speaking, Kimmelman said any of the proposed changes being consulted on by the MAC could have “a serious impact” on DueDil’s ability to grow.

“Restricting what roles are eligible seems ludicrous. We are working in a very transformative economy. All of the types of roles are new types of roles every six months… Government can’t really police that. That’s sort of self-defeating,” he adds. “If you restrict the rights of dependents you pretty much nullify the ability to bring in great talent. I don’t know anybody who’s going to move their family [if they can’t work here]… It’s already quite difficult hiring from the U.S. because the quality of life in the U.S. in a lot of cities is much greater than it is in London.”

He’s less concerned about the prospect of being required to increase the salary requirement for individuals hired under the tier 2 visa — although Coadec’s Levin points out that some startups, likely earlier stage, might choose to compensate a hire with equity rather than a large salary to reduce their burn rate. So a higher salary requirement could make life harder for other types of U.K. startups.

Kimmelman was actually one of the signatories of the aforementioned open letter backing the Conservative Party at the General Election. Signatories of that letter asserted the Tory-led government —

…has enthusiastically supported startups, job-makers and innovators and the need to build a British culture of entrepreneurialism to rival America’s. Elsewhere in the world people are envious at how much support startups get in the UK. This Conservative-led government has given us wholehearted support and we are confident that it would continue to do so. It would be bad for jobs, bad for growth, and bad for innovation to change course.

So is he disappointed that the new Conservative government is consulting on measures that, if implemented, could limit U.K.-based startup businesses’ access to digital skills? “I wouldn’t read too much into this just yet because they haven’t made any decisions,” says Kimmelman. “But if they do enact any of these policies I think it would be really harmful to the community.”

“They have a lot of constituents other than just the tech community that they’re working for. So I hope that they don’t do anything that’s rash. But I’ve been very impressed by the way that they’ve handled things thus far and so I think I need to give them the benefit of the doubt,” he adds.

Levin says responses to Coadec’s survey so far suggest U.K. startups’ main priority is that the government keeps the overseas talent pipeline flowing — with less concern over cost increases, such as the government applying a skills levy to fund apprenticeship programs.

But how the government squares the circle of an ideological commitment to reducing net migration with keeping skills-hungry digital businesses on side remains to be seen.

“The radical option of really restricting [migration] to genuine shortages is scary — because we just don’t know what that would look like,” adds Levin. “It could be that that would be the best answer for the tech sector because we might be able to make a case that there are genuine shortages and so we’d be fine. But there’s an uncertainty about what the future would look like — so at the moment we’re going to focus on making a positive case on why skilled migration is vital for the digital economy.”

The prior Tory-led U.K. coalition government introduced a cap on tier 2 visas back in 2011 — of just over 20,000 per year — which is applied as a monthly limit. That monthly cap was exceeded for the first time in June, with a swathe of visa applications turned down as a result. That’s something Levin says shows the current visa system is “creaking at the seams” — even before any further restrictions are factored in.

“Thirteen hundred applicants in June were turned down because they’d hit the cap,” he says, noting that when the cap is hit the Home Office uses salary level to choose between applicants. “So the salary thresholds jump up from the 25th percentile… which means the lower paid end of people lose out, which would probably disproportionately affect startups.”

‘Pill on a String’ Could Help Spot Early Signs of Cancer of the Gullet


Source: www.cam.ac.uk

A ‘pill on a string’ developed by researchers at the University of Cambridge could help doctors detect oesophageal cancer – cancer of the gullet – at an early stage, helping them overcome the problem of wide variation between biopsies, suggests research published today in the journal Nature Genetics.

If you’re taking a biopsy, this relies on your hitting the right spot. Using the Cytosponge appears to remove some of this game of chance

Rebecca Fitzgerald

The ‘Cytosponge’ sits within a pill which, when swallowed, dissolves to reveal a sponge that scrapes off cells when withdrawn up the gullet. It allows doctors to collect cells from all along the gullet, whereas standard biopsies take individual point samples.

Oesophageal cancer is often preceded by Barrett’s oesophagus, a condition in which cells within the lining of the oesophagus begin to change shape and can grow abnormally. The cellular changes are caused by acid and bile reflux – when the stomach juices come back up the gullet. Between one and five people in every 100 with Barrett’s oesophagus go on to develop oesophageal cancer in their lifetime, a form of cancer that can be difficult to treat, particularly if not caught early enough.

At present, Barrett’s oesophagus and oesophageal cancer are diagnosed using biopsies, which look for signs of dysplasia, the proliferation of abnormal cancer cells. This is a subjective process, requiring a trained scientist to identify abnormalities. Understanding how oesophageal cancer develops and the genetic mutations involved could help doctors catch the disease earlier, offering better treatment options for the patient.

An alternative way of spotting very early signs of oesophageal cancer would be to look for important genetic changes. However, researchers from the University of Cambridge have shown that variations in mutations across the oesophagus mean that standard biopsies may miss cells with important mutations. A sample was more likely to pick up key mutations if taken using the Cytosponge, developed by Professor Rebecca Fitzgerald at the Medical Research Council Cancer Unit at the University of Cambridge.

“The trouble with Barrett’s oesophagus is that it looks bland and might span over 10cm,” explains Professor Fitzgerald. “We created a map of mutations in a patient with the condition and found that within this stretch, there is a great deal of variation amongst cells. Some might carry an important mutation, but many will not. If you’re taking a biopsy, this relies on your hitting the right spot. Using the Cytosponge appears to remove some of this game of chance.”

Professor Fitzgerald and colleagues carried out whole genome sequencing to analyse paired Barrett’s oesophagus and oesophageal cancer samples taken at one point in time from 23 patients, as well as 73 samples taken over a three-year period from one patient with Barrett’s oesophagus.

The researchers found patterns of mutations in the genome – where one ‘letter’ of DNA might change to another, for example from a C to a T – that provided a ‘fingerprint’ of the causes of the cancer. Similar work has been done previously in lung cancer, where it was shown that cigarettes leave fingerprints in an individual’s DNA. The Cambridge team found fingerprints which they believe are likely to be due to the damage caused to the lining of the oesophagus by stomach acid splashing onto its walls; the same fingerprints could be seen in both Barrett’s oesophagus and oesophageal cancer, suggesting that these changes occur very early in the process.
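
The core of the ‘fingerprint’ idea is simply tallying which letter-to-letter changes occur. The toy sketch below shows only that counting step; real mutational-signature analysis, as in the lung cancer work, also accounts for the flanking bases around each change and normalises the counts.

```python
from collections import Counter

def substitution_spectrum(reference, sample):
    """Tally single-letter DNA changes (e.g. 'C>T') between a reference
    sequence and an aligned sample of the same length."""
    if len(reference) != len(sample):
        raise ValueError("sequences must align position by position")
    return Counter(f"{r}>{s}" for r, s in zip(reference, sample) if r != s)

# A spectrum dominated by one substitution class (here C>T) is the kind
# of pattern that can point to a specific damage process.
spectrum = substitution_spectrum("ACCTGCA", "ATCTGTA")  # Counter({'C>T': 2})
```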

Even in areas of Barrett’s oesophagus without cancer, the researchers found a large number of mutations in their tissue – on average 12,000 per person (compared to an average of 18,000 mutations within the cancer). Many of these are likely to have been ‘bystanders’, genetic mutations that occurred along the way but that were not actually implicated in cancer.

The researchers found that there appeared to be a tipping point, where a patient would go from having lots of individual mutations, but no cancer, to a situation where large pieces of genetic information were being transferred not just between genes but between chromosomes.

Co-author Dr Caryn Ross-Innes adds: “We know very little about how you go from pre-cancer to cancer – and this is particularly the case in oesophageal cancer. Barrett’s oesophagus and the cancer share many mutations, but we are now a step closer to understanding which are the important mutations that tip the condition over into a potentially deadly form of cancer.”

The research was funded by the Medical Research Council and Cancer Research UK. The Cytosponge was trialled in patients at the NIHR Clinical Investigation Ward at the Cambridge Clinical Research Facility.

Reference
Ross-Innes, CS et al. Whole-genome sequencing provides new insights into the clonal architecture of Barrett’s esophagus and esophageal adenocarcinoma. Nature Genetics; 20 July 2015



Cambridge TV Goes Live


Whilst Cambridge TV is broadcast on Freeview channel 8 and Virgin cable channel 159 in the Cambridge area, the target audience is the global “top 20%” of viewers, reached through our on-demand Internet presence.

Whilst we will have some of the usual local TV content, our key programmes are about Cambridge business and Cambridge academic endeavour that is of interest to a global audience.

Our ambition is to create a must-see series of programmes that will inform and entertain the world about the latest amazing things that are being developed and studied in and around Cambridge.

The tag-line “Watch us, Get better” is fully in the “Reith” tradition, and acknowledges that, as a new under-funded start-up, our production values will improve over time, though I am confident that the content will shine through regardless.

With a licence to broadcast 24 hours a day for 10 years, there is plenty of time for us to take material from many sources and give it an airing. Whether it is an advert, a corporate video, a training video or a full programme, subject to our quality and suitability criteria, we can show it!

Monoclonal Antibodies: the Invisible Allies That Changed the Face of Medicine


Source: www.cam.ac.uk

Forty years ago, two researchers at the Medical Research Council’s Laboratory of Molecular Biology in Cambridge developed a new technology that was to win them the Nobel Prize – and is now found in six out of ten of the world’s bestselling drugs. Dr Lara Marks from the Department of History and Philosophy of Science discusses the importance of ‘monoclonal antibodies’.

They are tiny magic bullets that are quietly shaping the lives of millions of patients around the world. Produced in the lab, invisible to the naked eye, relatively few people are aware of these molecules’ existence or where they came from. Yet monoclonal antibodies are contained in six out of ten of the world’s bestselling drugs, helping to treat everything from cancer to heart disease to asthma.

Known as Mabs for short, these molecules are derived from the millions of antibodies the immune system continually makes to fight foreign invaders such as bacteria and viruses. The technique for producing them was first published 40 years ago. It was developed by César Milstein, an Argentinian émigré, and Georges Köhler, a German post-doctoral researcher. They were based at the UK Medical Research Council’s Laboratory of Molecular Biology in Cambridge.

Harnessing the power of the immune system

Milstein and Köhler wanted to investigate how the immune system can produce so many different types of antibodies, each capable of specifically targeting one of a near-infinite number of foreign substances that invade the body. This had puzzled scientists ever since the late 19th century, but an answer had proved elusive. Isolating and purifying single antibodies with known targets, out of the billions made by the body, was a challenge.

The two scientists finally solved this problem by immunising a mouse against a particular foreign substance and then fusing antibodies taken from its spleen with a cell associated with myeloma, a cancer that develops in the bone marrow. Their method created a hybrid cell that secreted Mabs. Such cells could be grown indefinitely, in the abdominal cavity of mice or in tissue culture, producing endless quantities of identical antibodies specific to a chosen target. Mabs can be tailored to combat a wide range of conditions.

When Milstein and Köhler first publicised their technique, relatively few people understood its significance. Editors of Nature missed its importance, asking the two scientists to cut short their article outlining the new technique; as did staff at the British National Research Development Corporation, who declined to patent the work after Milstein submitted it for consideration. Within a short period, however, the technique was being adopted by scientists around the world, and less than ten years later Milstein and Köhler were Nobel laureates.

A transformation in therapeutic medicine

In the years that have passed since 1975, Mab drugs have radically reshaped medicine and spawned a whole new industry. It is predicted that 70 Mab products will have reached the worldwide market by 2020, with combined sales of nearly $125bn (£81bn).

 

An artist’s rendering of anti-cancer antibodies. ENERGY.GOV

 

Key to the success of Mab drugs are the dramatic changes they have brought to the treatment of cancer, helping in many cases to shift it away from being a terminal disease. Mabs can very specifically target cancer cells while avoiding healthy cells, and can also be used to harness the body’s own immune system to fight cancer. Overall, Mab drugs cause fewer debilitating side-effects than more conventional chemotherapy or radiotherapy. Mabs have also radically altered the treatment of inflammatory and autoimmune disorders like rheumatoid arthritis and multiple sclerosis, moving away from merely relieving symptoms to targeting and disrupting their cause.

Aside from cancer and autoimmune disorders, Mabs are being used to treat over 50 other major diseases. Applications include treatment for heart disease, allergic conditions such as asthma, and prevention of organ rejection after transplants. Mabs are also under investigation for the treatment of central nervous disorders such as Alzheimer’s disease, metabolic diseases like diabetes, and the prevention of migraines. More recently they were explored as a means to combat Ebola, the virus disease that ravaged West Africa in 2014.

Fast and accurate diagnosis

Mabs have enabled faster and more accurate clinical diagnostic testing, opening up the means to detect numerous diseases that were previously impossible to identify until their advanced stages. They have paved the way in personalised medicine, where patients are matched with the most suitable drug. Mabs are intrinsic components in over-the-counter pregnancy tests, are key to spotting a heart attack, and help to screen blood for infectious diseases like hepatitis B and AIDS. They are also used on a routine basis in hospitals to type blood and tissue, a process vital to ensuring safe blood transfusion and organ transplants.

 

Monoclonal antibodies can be used to rapidly diagnose disease and determine blood type. U.S. Navy/Jeremy L. Grisham

 

Mabs are also invaluable to many other aspects of everyday life. For example they are vital to agriculture, helping to identify viruses in animal livestock or plants, and to the food industry in the prevention of the spread of salmonella. In addition they are instrumental in the efforts to curb environmental pollution.

Quietly triumphant

Yet Mabs remain hidden from public view. This is partly because the history of the technology has often been overshadowed by the groundbreaking and controversial American development of genetic engineering in 1973, which revolutionised the manufacturing and production of natural products such as insulin, and inspired the foundation of Genentech, one of the world’s first biotechnology companies.

Looking back, the oversight is not surprising. Mabs did not transform medicine overnight or with any major fanfare, and the scientists who made the discovery did not seek fame. Instead, Mabs quietly slipped unobserved into everyday healthcare practice.

An Argentinian and a German came together in a British laboratory and changed the face of medicine forever; their story deserves to be told.

Lara Marks is at the University of Cambridge.

This article was originally published on The Conversation. Read the original article.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.

 

Cambridge Venture – The Silk Stories

Local Girl Alice Hewkin is launching ‘The Silk Stories’, a webshop selling luxurious silk pyjamas at an affordable price.

Alice studied at Hills Road Sixth Form College before winning a place at the Royal School of Speech and Drama to study for a BA in Acting.

Today, as a graduate, she is building her career, with roles on Sky TV, Channel 4 and the BBC in the last year.

Alice tells Connected Cambridge: “Being an actress I understand the importance of always looking your best and feeling good. Wearing glamorous 100% silk pyjamas helps me relax and feel calm, as well as getting a great night’s sleep, so I’m ready for whatever the next day brings.”

This was the jumping-off point for The Silk Stories brand. Importing top-of-the-range 100% pure silk pyjamas from near her birthplace in China, Alice, founder of The Silk Stories, now has her website live at www.thesilkstories.com. You can follow progress on Twitter @thesilkstories

 

On the Origin of (Robot) Species


source: www.cam.ac.uk

Researchers have observed the process of evolution by natural selection at work in robots, by constructing a ‘mother’ robot that can design, build and test its own ‘children’, and then use the results to improve the performance of the next generation, without relying on computer simulation or human intervention.

We want to see robots that are capable of innovation and creativity

Fumiya Iida

Researchers led by the University of Cambridge have built a mother robot that can independently build its own children and test which one does best; and then use the results to inform the design of the next generation, so that preferential traits are passed down from one generation to the next.

Without any human intervention or computer simulation beyond the initial command to build a robot capable of movement, the mother created children constructed of between one and five plastic cubes with a small motor inside.

In each of five separate experiments, the mother designed, built and tested generations of ten children, using the information gathered from one generation to inform the design of the next. The results, reported in the open access journal PLOS One, found that preferential traits were passed down through generations, so that the ‘fittest’ individuals in the last generation performed a set task twice as quickly as the fittest individuals in the first generation.

“Natural selection is basically reproduction, assessment, reproduction, assessment and so on,” said lead researcher Dr Fumiya Iida of Cambridge’s Department of Engineering, who worked in collaboration with researchers at ETH Zurich. “That’s essentially what this robot is doing – we can actually watch the improvement and diversification of the species.”

For each robot child, there is a unique ‘genome’ made up of a combination of between one and five different genes, which contains all of the information about the child’s shape, construction and motor commands. As in nature, evolution in robots takes place through ‘mutation’, where components of one gene are modified or single genes are added or deleted, and ‘crossover’, where a new genome is formed by merging genes from two individuals.

In order for the mother to determine which children were the fittest, each child was tested on how far it travelled from its starting position in a given amount of time. The most successful individuals in each generation remained unchanged in the next generation in order to preserve their abilities, while mutation and crossover were introduced in the less successful children.
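The selection scheme described above — elitism for the fittest children, mutation and crossover for the rest — can be sketched in a few dozen lines of Python. This is an illustrative toy, not the authors’ actual code: the genome encoding, parameter names and population sizes are invented for the sketch, with only the gene count (one to five), population of ten, and the mutate/crossover/elitism operators taken from the article.

```python
import random

GENES = 5          # maximum cubes (genes) per child
POP_SIZE = 10      # children per generation
ELITE = 3          # fittest individuals carried over unchanged

def random_gene():
    # A gene stands in for one cube: an attachment position and a motor command.
    return {"position": random.uniform(-1, 1), "motor": random.uniform(0, 1)}

def random_genome():
    return [random_gene() for _ in range(random.randint(1, GENES))]

def mutate(genome):
    # Mutation: modify a gene's components, or add/delete a single gene.
    g = [dict(gene) for gene in genome]
    op = random.choice(["modify", "add", "delete"])
    if op == "modify":
        random.choice(g)["motor"] = random.uniform(0, 1)
    elif op == "add" and len(g) < GENES:
        g.append(random_gene())
    elif op == "delete" and len(g) > 1:
        g.pop(random.randrange(len(g)))
    return g

def crossover(a, b):
    # Crossover: merge genes from two parent genomes into a new one.
    cut_a = random.randint(1, len(a))
    cut_b = random.randint(0, len(b))
    return (a[:cut_a] + b[:cut_b])[:GENES]

def evolve(fitness, generations=10):
    """fitness(genome) -> score; here it stands in for the physical
    test of how far a built child travels in a fixed time."""
    pop = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        elite = ranked[:ELITE]          # preserved unchanged
        # Less successful children are replaced via crossover + mutation.
        rest = [mutate(crossover(random.choice(elite), random.choice(ranked)))
                for _ in range(POP_SIZE - ELITE)]
        pop = elite + rest
    return max(pop, key=fitness)
```

In the real experiment the fitness function is a physical measurement (the mother builds each child and times its movement), which is why each evaluation took around ten minutes rather than microseconds.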

The researchers found that design variations emerged and performance improved over time: the fastest individuals in the last generation moved at an average speed that was more than twice the average speed of the fastest individuals in the first generation. This increase in performance was not only due to the fine-tuning of design parameters, but also because the mother was able to invent new shapes and gait patterns for the children over time, including some designs that a human designer would not have been able to build.

“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery,” said Iida. “We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customisation, but we want to see robots that are capable of innovation and creativity.”

In nature, organisms are able to adapt their physical characteristics to their environment over time. These adaptations allow biological organisms to survive in a wide variety of different environments – allowing animals to make the move from living in the water to living on land, for instance.

But machines are not adaptable in the same way. They are essentially stuck in one shape for their entire ‘lives’, and it’s uncertain whether changing their shape would make them more adaptable to changing environments.

Evolutionary robotics is a growing field which allows for the creation of autonomous robots without human intervention. Most work in this field is done using computer simulation. Although computer simulations allow researchers to test thousands or even millions of possible solutions, this often results in a ‘reality gap’ – a mismatch between simulated and real-world behaviour.

While using a computer simulation to study artificial evolution generates thousands, or even millions, of possibilities in a short amount of time, the researchers found that having the robot generate its own possibilities, without any computer simulation, resulted in more successful children. The disadvantage is that it takes time: each child took the robot about 10 minutes to design, build and test. According to Iida, in future they might use a computer simulation to pre-select the most promising candidates, and use real-world models for actual testing.

Iida’s research looks at how robotics can be improved by taking inspiration from nature, whether that’s learning about intelligence, or finding ways to improve robotic locomotion. A robot requires between ten and 100 times more energy than an animal to do the same thing. Iida’s lab is filled with a wide array of hopping robots, which may take their inspiration from grasshoppers, humans or even dinosaurs. One of his group’s developments, the ‘Chairless Chair’, is a wearable device that allows users to lock their knee joints and ‘sit’ anywhere, without the need for a chair.

“It’s still a long way to go before we’ll have robots that look, act and think like us,” said Iida. “But what we do have are a lot of enabling technologies that will help us import some aspects of biology to the engineering world.”

Reference:
Brodbeck, L. et al. “Morphological Evolution of Physical Robots through Model-Free Phenotype Development” PLOS One (2015). DOI: 10.1371/journal.pone.0128444


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Young Minds Think Alike – and Older People Are More Distractible


source: www.cam.ac.uk

‘Bang! You’re Dead’, a 1961 episode of Alfred Hitchcock Presents, continues to surprise – but not just with the twist in its tale. Scientists at the University of Cambridge have used the programme to show that young people respond in a similar way to events, but as we age our thought patterns diverge.

Older adults end up attending to a more diverse range of stimuli and so are more likely to understand and interpret everyday events in different ways than younger people

Karen Campbell

The study, published today in the journal Neurobiology of Aging, also found that older people tended to be more easily distracted than younger adults.

Age is believed to change the way our brains respond and how their networks interact, but studies looking at these changes tend to use very artificial experiments with basic stimuli. To try to understand how we respond to complex, life-like stimuli, researchers at the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) showed 218 subjects aged 18-88 an edited version of an episode from the Hitchcock TV series while using functional magnetic resonance imaging (fMRI) to measure their brain activity.

The researchers found a surprising degree of similarity in the thought patterns amongst the younger subjects – their brains tended to ‘light up’ in similar ways and at similar points in the programme. However, in older subjects, this similarity tended to disappear and their thought processes became more idiosyncratic, suggesting that they were responding differently to what they were watching and were possibly more distracted.
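One standard way to quantify this kind of across-subject similarity is intersubject correlation: correlate each subject’s activity time course for a brain region with every other subject’s, and average. The sketch below is illustrative only — the function name and toy data are invented here, and the Cam-CAN analysis pipeline is certainly more involved — but it shows the core measurement: shared responses yield high mean correlation, idiosyncratic responses yield correlation near zero.

```python
import numpy as np

def intersubject_correlation(timeseries):
    """Mean pairwise Pearson correlation across subjects.

    timeseries: array of shape (n_subjects, n_timepoints) — one brain
    region's activity for each subject over the course of the film.
    """
    ts = np.asarray(timeseries, dtype=float)
    # Correlation matrix between all subjects' time courses (rows).
    corr = np.corrcoef(ts)
    # Average the off-diagonal entries, counting each pair once.
    iu = np.triu_indices(ts.shape[0], k=1)
    return corr[iu].mean()

# Toy demonstration: "younger" subjects share a common response with a
# little noise; "older" subjects respond independently of one another.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
young = np.stack([shared + 0.1 * rng.standard_normal(200) for _ in range(10)])
old = rng.standard_normal((10, 200))
print(intersubject_correlation(young) > intersubject_correlation(old))  # True
```

The study’s finding corresponds to the first case for younger viewers and a drift toward the second, more idiosyncratic case with age.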

The greatest differences were seen in the ‘higher order’ regions at the front of the brain, which are responsible for controlling attention (the superior frontal lobe and the intraparietal sulcus) and language processing (the bilateral middle temporal gyrus and left inferior frontal gyrus).

The findings suggest that our ability to respond to everyday events in the environment differs with age, possibly due to altered patterns of attention.

Dr Karen Campbell from the Department of Psychology, first author on the study, says: “As we age, our ability to control the focus of attention tends to decline, and we end up attending to more ‘distracting’ information than younger adults. As a result, older adults end up attending to a more diverse range of stimuli and so are more likely to understand and interpret everyday events in different ways than younger people.”

In order to encourage audiences to respond to movies and TV programmes in the same way as everyone else, and hence have a ‘shared experience’, directors and cinematographers use a variety of techniques to draw attention to the focal item in each shot. When the stimulus is less engaging, for example when one character is talking at length or the action is slow, people show less overlap in their neural patterns of activity, suggesting that a stimulus needs to be sufficiently captivating in order to drive attention. However, capturing attention is not sufficient when watching a film; the brain needs to maintain attention, or at the very least to limit attention to the information that is most relevant to the plot.

Dr Campbell and colleagues argue that the variety in brain patterns seen amongst older people reflects a difference in their ability to control their attention, as attentional capture by stimuli in the environment is known to be relatively preserved with age. This supports previous research which shows that older adults respond to and better remember materials with emotional content.

“We know that regions at the front of the brain are responsible for maintaining our attention, and these are the areas that see the greatest structural changes as we age; it is these changes that we believe are being reflected in our study,” she adds. “There may well be benefits to this distractibility. Attending to lots of different information could help with our creativity, for example.”

Cam-CAN is supported by the Biotechnology and Biological Sciences Research Council (BBSRC).

Reference
Campbell, K et al. Idiosyncratic responding during movie-watching predicted by age differences in attentional control. Neurobiology of Aging; 6 Aug 2015.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Robots Learn to Evolve and Improve

Robots learn to evolve and improve

Video caption: Researchers are developing robots that can learn from previous work

Engineers have developed a robotic system that can evolve and improve its performance.

A robot arm builds “babies” that get progressively better at moving without any human intervention.

The ultimate aim of the research project is to develop robots that adapt to their surroundings.

The work by teams in Cambridge and Zurich has been published in the journal PLOS One.

It seems like a plot from a science fiction film: a robot that builds other robots – each one better than the previous generation. But that is what researchers in Cambridge and Zurich have done.

But those concerned about machines taking over the world shouldn’t worry, at least not yet.

At this stage the “baby robots” consist of plastic cubes with a motor inside. These are put together by a “mother” robot arm which glues them together in different configurations.

Although the set-up is simple, the system itself is ingenious.

The mother robot assesses how far its babies are able to move, and with no human intervention, improves the design so that the next one it builds can move further.

The mother robot built ten generations of children. The final version moved twice the distance of the first before its power ran out.

According to Dr Fumiya Iida of Cambridge University, who led the research with colleagues at ETH Zurich, one aim is to gain new insights into how living things evolve.

“One of the big questions in biology is how intelligence came about – we’re using robotics to explore this mystery,” he told BBC News.

“We think of robots as performing repetitive tasks, and they’re typically designed for mass production instead of mass customisation, but we want to see robots that are capable of innovation and creativity.”

Another aim is to develop robots that can improve and adapt to new situations, according to Andre Rosendo – who also worked on the project.

“You can imagine cars being built in factories and the robot looking for defects in the car and fixing them by itself,” he said.

“And robots used in agriculture could try out slightly different ways of harvesting crops to see if they can improve yield.”

Dr Iida told me that he came into robotics because he was disappointed that the robots he saw in real life were not as good as the ones he saw in science fiction films such as Star Wars and Star Trek.

His aim was to change that and his approach was to draw lessons from the natural world to improve the efficiency and flexibility of traditional robotic systems.

As to whether we’d ever see robots like those in the sci-fi films that inspired him, he said: “We’re not there yet, but sure, why not, maybe in about 30 years.”

Follow Pallab on Twitter

Predators Might Not Be Dazzled By Stripes


Source: www.cam.ac.uk

 

New research using computer games suggests that stripes might not offer the ‘motion dazzle’ protection thought to have evolved in animals such as zebra, and which consequently inspired ship camouflage during both World Wars.

Motion may just be one aspect in a larger picture. Different orientations of stripe patterning may have evolved for different purposes

Anna Hughes

Stripes might not offer protection for animals living in groups, such as zebra, as previously thought, according to research published today in the journal Frontiers in Zoology.

Humans playing a computer game captured striped targets more easily than uniform grey targets when multiple targets were present. The finding runs counter to assumptions that stripes evolved to make it difficult to capture animals moving in a group.

“We found that when targets are presented individually, horizontally striped targets are more easily captured than targets with vertical or diagonal stripes. Surprisingly, we also found no benefit of stripes when multiple targets were presented at once, despite the prediction that stripes should be particularly effective in a group scenario,” said Anna Hughes, a researcher in the Sensory Evolution and Ecology group in the Department of Physiology, Development and Neuroscience.

“This could be due to how different stripe orientations interact with motion perception, where an incorrect reading of a target’s speed helps the predator to catch its prey.”

Stripes, zigzags and high-contrast markings make animals highly conspicuous, which you might think would make them more visible to a predator. Researchers have wondered whether movement is important in explaining why these patterns have evolved: striking patterns may confuse predators and reduce the chance of attack or capture, a concept termed ‘motion dazzle’, in which high-contrast patterns cause predators to misperceive the speed and direction of the moving animal. It has been suggested that motion dazzle might be strongest in groups, such as a herd of zebra.

‘Motion dazzle’ is a reference to a type of camouflage used on ships in World Wars One and Two, in which ships were painted in geometric shapes of contrasting colours. Rather than concealing ships, this dazzle camouflage was believed to make it difficult to estimate a target’s range, speed and heading.


HMS Argus (1917) wearing dazzle camouflage.

A total of 60 human participants played a game to test whether stripes influenced their perception of moving targets. They performed a touch screen task in which they attempted to ‘catch’ moving targets – both when only one target was present on screen and when there were several targets present at once.

When single targets were present, horizontally striped targets were easier to capture than any other target, including targets of uniform colour and targets with vertical or diagonal stripes. However, when multiple targets were present, all striped targets, irrespective of orientation, were captured more easily than uniform grey targets.

“Motion may just be one aspect in a larger picture. Different orientations of stripe patterning may have evolved for different purposes. The evolution of pattern types is complex, for which there isn’t one over-ruling factor, but a multitude of possibilities,” said Hughes.

“More work is needed to establish the value and ecological relevance of ‘motion dazzle’. Now we need to consider whether colour, stripe width and spatial patterning, and a predator’s visual system could be important factors for animals to avoid capture.”

Anna Hughes has written a blog post on this research for the journal publisher BioMed Central. The above story is adapted from a BioMed Central press release.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.


Here’s Looking At You: Research Shows Jackdaws Can Recognise Individual Human Faces


source: www.cam.ac.uk

When you’re prey, being able to spot and assess the threat posed by potential predators is of life-or-death importance. In a paper published today in Animal Behaviour, researchers from the University of Cambridge’s Department of Psychology show that wild jackdaws recognise individual human faces, and may be able to tell whether or not predators are looking directly at them.

The fact that they learn to recognise individual faces so quickly provides great evidence of the flexible cognitive abilities of these birds

Gabrielle Davidson

Researchers Alex Thornton, now at the University of Exeter, and Gabrielle Davidson carried out the study with the wild jackdaw population in Madingley village on the outskirts of Cambridge. They found that the jackdaws were able to distinguish between two masks worn by the same researcher, and only responded defensively to the one they had previously seen accessing their nest box.

Over three consecutive days Davidson approached the nest boxes wearing one of the masks and took chicks out to weigh them. She also simply walked past the nest boxes wearing the other mask. Following this she spent four days sitting near the nest boxes wearing each of the masks to see how the jackdaws would respond.

The researchers found that the jackdaws were quicker to return to their nest when they saw the mask that they had previously seen approaching and removing chicks to be weighed, than when they saw the mask that had simply walked by.

They also tended to be quicker to go inside the nest box when Davidson, wearing the mask, was looking directly at them rather than looking down at the ground.

“The fact that they learn to recognise individual facial features or hair patterns so quickly, and to a lesser extent which direction people are looking in, provides great evidence of the flexible cognitive abilities of these birds,” says Davidson. “It also suggests that being able to recognise individual predators and the levels of threat they pose may be more important for guarding chicks than responding to the direction of the predator’s gaze.”

“Using the masks was important to make sure that the birds were not responding to my face, which they may have already seen approaching their nest boxes and weighing chicks in the past,” she adds.

Previous studies have found that crows, magpies and mockingbirds are similarly able to recognise individual people. However, most studies have involved birds in busier urban areas where they are likely to come into more frequent contact with humans.

Jackdaws are the only corvids in the UK that use nest boxes so they provide a rare opportunity for researchers to study how birds respond to humans in the wild. Researchers at Cambridge have been studying the Madingley jackdaws since 2010.

“It would be fascinating to directly compare how these birds respond to humans in urban and rural areas to see whether the amount of human contact they experience has an impact on how they respond to people,” says Davidson.

“It would also be interesting to investigate whether jackdaws are similarly able to recognise individuals of other predator species – although this would be a lot harder to test.”

The study was enabled by funding from the Zoology Balfour Fund, the Cambridge Philosophical Society, the British Ecological Society, and a BBSRC David Phillips Research Fellowship.

Inset images: Mask (Elsa Loissel).

Reference:

Davidson, GL et al. “Wild jackdaws, Corvus monedula, recognize individual humans and may respond to gaze direction with defensive behaviour.” Animal Behaviour 108 (October 2015): 17-24.


Creative Commons License
The text in this work is licensed under a Creative Commons Attribution 4.0 International License. For image use please see separate credits above.
