Let the robots do the tedious work
From squeezing pipettes to discovering conservation laws, machines are set to play an ever greater role in scientific research. An inquiry into the current state of automation in science.
To be able to carry out any experiment you like without leaving your desk, without ever picking up a test tube or looking down a microscope: that is the vision of the company Emerald Cloud Laboratory, which lets biologists and chemists design experiments, adjust and monitor instrument settings, and then analyse data with a few clicks of a mouse. The idea extends the concept of cloud computing beyond simple data storage and processing to the operation of real experiments over the Internet: scientists free themselves from dreary bench work and use the extra time on their hands to devise better experiments.
Centrifuging in the cloud
Emerald provides this service from a warehouse on the outskirts of San Francisco. There, on rows of parallel benches, liquid-handling robots, automated incubators, centrifuges and other assorted machines manipulate samples according to a precise set of instructions sent by each user through a dedicated web-based interface. With the equipment operating more or less autonomously around the clock, results can arrive back on a researcher’s computer within 24 hours of an experiment being requested.
There are still very few companies that provide such a service. The first to do so was a firm called Transcriptic, set up in 2012 and based in a warehouse a few miles down the road from Emerald. But already there are scientists who are enthusiastic converts. Among them is Justin Siegel, a synthetic biologist at the University of California in the US, who says that his research students can test more, and bolder, hypotheses than they could if doing the experiments themselves, and that even first-year undergraduates and school students can get involved. “They can work on designing experiments and not worry about having ‘good hands’ to implement them”, he says.
In fact, Emerald risks becoming a victim of its own success. Having started offering its cloud service last October, it currently has a list of several hundred labs waiting for time on its robots. But company co-founder Brian Frezza is confident the backlog can be whittled down and that within about a year Emerald will offer all of the hundred or so standard experiments used in the life sciences. Currently its robots can run about 40. “At that point we want to be profitable”, he says.
Away from the assembly line
Researchers already have a lot of experience with robots. For years, pharmaceutical companies have been using them to carry out repetitive, time-consuming tasks in early-stage drug development, while biotechnology firms rely on them for manipulating DNA – a growing demand that has been met by instrument manufacturers such as Tecan, based near Zurich. “Most of what humans can do in the lab can now be done by machines”, says Ross King, a biologist and computer scientist at the University of Manchester in the UK.
Perhaps the archetype of automated science is DNA sequencing, the process of determining the order of genetic base pairs. This used to require very labour-intensive work that only a few labs could perform. Nowadays it is carried out automatically by machines that read genetic material millions of times over. Those machines are located at centralised facilities, making it unusual now for labs to do their own sequencing, according to Siegel.
What Emerald does is “fundamentally different”, says Frezza. Rather than carrying out “one experiment perhaps a million times, like a car factory”, he explains, “we do a million different experiments at once”. But because robots are less efficient at carrying out a series of steps one after another than at running many versions of the same process simultaneously, the company’s devices are, on average, slower and more expensive than human workers. As such, Emerald is not trying to compete on price with existing contract research organisations, which use a mixture of robotic and human labour.
More reproducible by robots
However, the great virtue of robots, says Frezza, is reproducibility, or, as he puts it, the fact that they “always pipette in exactly the same way”. Taking advantage of this ability, he says, has meant developing an instruction set that scientists can use to specify without ambiguity exactly what steps a robot needs to execute when performing an experiment. He thinks that after several years of working on the problem he and his colleagues have now developed a robust set of commands, but adds that they still need to make the interface more user-friendly. “What tends to get people’s backs up is the idea that they are writing code”, he says.
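By way of illustration only (the commands below are invented for this article and are not Emerald’s actual instruction set), an unambiguous protocol might look like a small Python data structure in which every volume, temperature, speed and duration is stated explicitly rather than left to a technician’s judgement:

```python
# Hypothetical sketch of a machine-readable protocol; the operations and
# parameters are illustrative, not Emerald's real command set.
from dataclasses import dataclass


@dataclass
class Step:
    operation: str    # e.g. "transfer", "incubate", "centrifuge"
    parameters: dict  # every quantity spelled out, nothing left implicit


protocol = [
    Step("transfer",   {"source": "reagent_A", "dest": "plate_1:A1",
                        "volume_uL": 50, "rate_uL_per_s": 5}),
    Step("incubate",   {"target": "plate_1", "temperature_C": 37,
                        "duration_min": 30, "shaking_rpm": 200}),
    Step("centrifuge", {"target": "plate_1", "speed_g": 2000,
                        "duration_min": 10, "temperature_C": 4}),
]

for step in protocol:
    # A real system would dispatch each step to the matching instrument;
    # here we simply print the fully specified instructions.
    print(step.operation, step.parameters)
```

Because nothing is left to interpretation, two runs of the same protocol should, in principle, be carried out identically.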
Richard Whitby of the University of Southampton in the UK agrees that reproducibility is very important. He says that humans’ versatility is a great asset when it comes to carrying out the complex reactions in his field, organic chemistry, but that scientific papers often don’t spell out that complexity in full – neglecting to specify, for example, how quickly reagents should be introduced. Without knowing the value of every parameter in a reaction, he notes, it is difficult to quantify the effect of tuning particular variables to improve it.
Whitby is leading a British project called Dial-a-Molecule that ultimately aims to build a machine capable of synthesising any organic compound on demand, just as biologists today can order specific strands of DNA through the post. He is under no illusion as to how difficult this will be, pointing out that such a machine would need to carry out tens of thousands of reactions as opposed to the four used by DNA synthesisers. “We have given ourselves a 30 or 40-year timescale”, he says.
Automated hypothesis testing
Even more ambitious is the vision of King and his colleagues at Manchester, who aim to “automate the full cycle of scientific research”. Like the cloud-based companies, they use commercially made robots, but hook them up to artificial-intelligence systems. The idea is that a robot, once educated about a particular subject using logic and probability theory, will by itself formulate hypotheses to explain observations, then devise and run experiments to test those hypotheses, before generating new hypotheses and repeating the cycle many times in an attempt to learn something new about the world.
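In outline, that closed loop can be sketched in a few lines of code. The toy below is my own illustration, not the software behind Adam or Eve (which reasons with far richer logic): it holds a set of rival hypotheses about which gene encodes a particular enzyme, picks an experiment, simulates the result, and discards whatever the data refute.

```python
# Toy closed-loop "robot scientist": an illustrative sketch only.
# Hypotheses name candidate genes; each "experiment" is a simulated knockout.
import random

GENES = [f"gene_{i}" for i in range(1, 9)]
TRUE_GENE = random.choice(GENES)   # the hidden answer the robot must find


def run_experiment(gene):
    """Simulate a knockout: growth fails only if the key gene is removed."""
    return "no_growth" if gene == TRUE_GENE else "growth"


hypotheses = list(GENES)           # start by suspecting every candidate gene
cycle = 0
while len(hypotheses) > 1:
    cycle += 1
    candidate = hypotheses[0]              # devise the next experiment
    outcome = run_experiment(candidate)    # the robot performs it
    if outcome == "no_growth":
        hypotheses = [candidate]           # hypothesis confirmed
    else:
        hypotheses.remove(candidate)       # hypothesis refuted, move on
    print(f"cycle {cycle}: knocked out {candidate} -> {outcome}")

print("conclusion: the enzyme is encoded by", hypotheses[0])
```

The real systems choose experiments that are expected to be the most informative or the cheapest, rather than simply working down a list.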
King reckons the approach is bearing fruit, and that machines can make science both more efficient and more precise. He embarked on the research while at Aberystwyth University in the UK, assembling a robot called Adam that in 2008 successfully identified several previously unknown genes that encode enzymes in yeast. He has since developed the one-million-dollar Eve, which has gone on to discover the anti-malarial mechanism of an everyday compound known as triclosan – potentially easing the substance’s approval as a pharmaceutical.
Beyond biochemistry, robots are also being used increasingly in materials science. Last year, engineers at the US Air Force Research Laboratory in Ohio reported results from an artificially intelligent robot that had carried out research on carbon nanotubes – cylindrical molecules of carbon that are strong, light, and very good conductors of heat and electricity. The machine carried out more than 600 experiments on its own, varying conditions to try to speed up the growth of nanotubes. In doing so, it confirmed theoretical predictions of the maximum growth rate.
No paradigm-shifters yet
Some researchers are even trying to automate advances in physics, although they aren’t using robots as such. Hod Lipson at Columbia University in the US and his colleagues have developed an algorithm that generates equations at random and then uses an evolutionary process to select those that best match experimental data. In 2009 they reported using this approach to model the behaviour of chaotic double pendula, yielding what they describe as physically meaningful conservation laws. Two years later they followed that up, using data on yeast metabolism to derive equations describing energy production from sugar breakdown.
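This strategy is a form of symbolic regression. The stripped-down sketch below (a toy of my own devising, not the group’s actual software) shows the basic idea: random candidate formulas are scored against data, the best-fitting ones survive, and mutated copies of the survivors fill the next generation.

```python
# Minimal symbolic-regression sketch: evolve random formulas to fit data.
# The hidden law here is y = 3*x + 2; the algorithm must rediscover it.
import math
import random

random.seed(0)
data = [(x, 3 * x + 2) for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]]
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}


def random_expr(depth=2):
    """Build a random expression tree over x, constants and binary operators."""
    if depth == 0 or random.random() < 0.3:
        return "x" if random.random() < 0.5 else round(random.uniform(-5, 5), 1)
    op = random.choice(list(OPS))
    return (op, random_expr(depth - 1), random_expr(depth - 1))


def evaluate(expr, x):
    if expr == "x":
        return x
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    return OPS[op](evaluate(left, x), evaluate(right, x))


def fitness(expr):
    """Mean squared error against the data (lower is better)."""
    total = 0.0
    for x, y in data:
        diff = evaluate(expr, x) - y
        total += diff * diff
    return total / len(data) if math.isfinite(total) else float("inf")


def mutate(expr):
    """Replace a random subtree with a freshly generated one."""
    if random.random() < 0.3 or not isinstance(expr, tuple):
        return random_expr()
    op, left, right = expr
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))


population = [random_expr() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    survivors = population[:50]   # keep the best-fitting formulas
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print("best formula found:", best, "error:", fitness(best))
```

With more operators (trigonometric functions, derivatives and so on) and much larger populations, the same selection principle can be applied to genuinely noisy experimental measurements.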
Not everyone is convinced. The American physicists Philip Anderson and Elihu Abrahams wrote a letter to the journal Science in 2009 accusing both King’s and Lipson’s groups of being “seriously mistaken about the nature of the scientific enterprise”. They argued that even if machines contribute to what the philosopher Thomas Kuhn called “normal science”, they could never transform science by discovering new physical laws – maintaining that in Lipson’s research on pendula motion the “relevant physical law and variables are known in advance”.
King acknowledges the limitations of machines, pointing out that even if a robot successfully carries out experiments, it doesn’t know why it’s doing so. He adds that he and his colleagues wanted to include Adam and Eve as authors on their papers, but were told they couldn’t because the robots wouldn’t be able to give their informed consent. Nevertheless, he believes that intelligent robots are set to become more commonplace in science, thanks to the ever-increasing power of computers, improved algorithms and more advanced robotics. “They are getting better, while humans remain the same”, he says. “I don’t see any reason why these trends aren’t going to continue”.
Based in Rome, Edwin Cartlidge writes for Science and Nature.
- Machine learning: The development of algorithms capable of learning how to solve given problems by themselves (recognition, classification, prediction, translation, etc.)
- Supervised learning: The training of algorithms with pre-labelled material (e.g., data pairs linking an object’s properties to a certain category). The algorithm builds a model capable of categorising new, unlabelled objects (see the toy example after this list).
- Unsupervised learning: The employment of algorithms to look for structure in data but without initial training.
- Reinforcement learning: The rewarding of algorithms for generating results. The algorithm aims to maximise its rewards. A classic application of this kind of AI is computer chess.
- Neural network: A computer model based on the interconnection of a large number of artificial neurons to mimic the architecture of the brain. The network analyses an object so as to recombine its properties into increasingly complex and abstract representations, which can then be used to classify it, for example. Neural networks learn by testing new combinations.
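To make the supervised-learning entry concrete, here is a deliberately small example with invented measurements: a model is trained on labelled points and then asked to categorise new, unlabelled ones.

```python
# Tiny supervised-learning illustration with made-up data: learn labels from
# examples, then classify new points. (A sketch for this glossary, nothing more.)

# Training data: (feature_1, feature_2) measurements paired with a known label.
training = [
    ((1.0, 1.2), "small"), ((0.8, 1.0), "small"), ((1.3, 0.9), "small"),
    ((4.0, 4.5), "large"), ((4.2, 3.8), "large"), ((3.9, 4.1), "large"),
]


def train(examples):
    """Build a model: the average (centroid) of the points in each category."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}


def predict(model, point):
    """Assign the label whose centroid lies closest to the new point."""
    px, py = point
    return min(model, key=lambda lab: (model[lab][0] - px) ** 2 + (model[lab][1] - py) ** 2)


model = train(training)
print(predict(model, (1.1, 1.1)))   # expected: "small"
print(predict(model, (4.1, 4.0)))   # expected: "large"
```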