This question can’t be answered without thinking more holistically about the role of AI in innovation more generally, something I presented in a keynote at ‘AI for Innovation’, a recent conference for corporate Innovation Managers hosted by the Centre for Doctoral Training for Innovation in Data Intensive Science at Liverpool University1.
There, I explored some key questions under the title “Who will steer the future of innovation – humans or AI?”:
- Will AI be a total or partial (ultimate) solution to endless innovation?
- Is there a risk that AI will diminish our human capabilities – and in turn our overall ability to innovate as a human race?
- What are the early examples of where AI can help to speed up and de-risk the process of product development?
Once you layer on the further uncertainty over when AI might be capable enough to play a full role in automating multiple lab processes, a lot of open questions remain. For example, what immediate steps do we need to consider to ‘future proof’ the lab machines being developed now? This is a subject close to our hearts at TTP, as we work with our clients to develop custom machines that address today’s challenges in lab processing, but with one eye on what it means to future proof them for an ‘AI future’.
In summary, there’s enough at stake here to make us think carefully about how we choose to employ AI in labs, and in particular how we automate various parts of the process – as illustrated by the thoughts below.
Will AI immediately conquer all aspects of a highly successful lab, or is there still a place for a ‘scientist in the loop’?
At least in the short term, we should view AI as a ‘human amplifier’, working collaboratively with human scientists in ‘closed loop systems’. For example, a recent study by the University of Cambridge, King’s College London and Goldsmiths found that the combination of AI and scientists successfully identified six promising new drug pairs for the treatment of breast cancer2.
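To make the idea of a ‘closed loop system’ concrete, here is a minimal Python sketch of the pattern: a model proposes candidate experiments, a scientist reviews them, and the results feed back to steer the next round. The names and toy data are illustrative assumptions, not the method of the cited study.

```python
import random

def model_propose(history, k=3):
    # Stand-in for a real model: propose untested candidate drug pairs.
    pairs = [("drug_a", "drug_b"), ("drug_a", "drug_c"), ("drug_b", "drug_d"),
             ("drug_c", "drug_d"), ("drug_b", "drug_c")]
    tested = {pair for pair, _ in history}
    untested = [p for p in pairs if p not in tested]
    return random.sample(untested, min(k, len(untested)))

def scientist_review(candidates):
    # Human-in-the-loop step: in practice a scientist filters out proposals
    # that are unsafe, implausible or uninteresting; here we accept them all.
    return candidates

history = []                                      # (pair, promising?) outcomes
for _ in range(2):                                # two rounds of the loop
    for pair in scientist_review(model_propose(history)):
        outcome = random.random() > 0.5           # stand-in for the wet-lab result
        history.append((pair, outcome))           # results close the loop
print(f"{sum(ok for _, ok in history)} promising pairs from {len(history)} experiments")
```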
However, in the long run, as AI becomes more capable (through agentic AI or otherwise), what are the likely respective roles for humans and AI? AI is currently good at:
- Processing data
- Automating repetitive tasks
- Knowledge retrieval
- Creation (e.g. of written words and images)
But generative AI also hallucinates – and can struggle when not in possession of all the data (and context) it needs. Humans, by contrast, are good at:
- Emotional intelligence
- Critical thinking
- Situational awareness
- Cultural context
But humans have finite effort and stamina, are limited in speed, and can suffer cognitive overload in the face of too much data.
However, whilst these attributes are relatively clear, no-one really knows yet how to combine them optimally, or how this might change as AI gets more capable. To make matters more complicated, the vast majority of labs are not being razed to the ground and rebuilt from scratch, but are having to deal with multiple machines, often from different vendors.
Is there anything we can learn from the past that might help us steer the optimal path?
Is this a calculator moment, a satnav moment or even an Excel moment? And whatever it is, what’s the likely impact?
In reality, all of these tools were viewed with some suspicion when introduced, but all of them have made it into everyday use.
However, it’s also useful to look at them through the lens of what we might have lost as a result.
Do we care if we can’t add up in our heads as well as we used to? Probably not, when we all have a mobile phone at our fingertips.
Do we care if we can’t read a map or find our way quite as well as we used to, particularly when we also get the added value of not only avoiding wrong turns, but also seeing traffic or other issues ahead that might interrupt our journey?
Probably not, as we are now much more likely to arrive at an unfamiliar destination successfully, with a heads-up on any likely delays along the way.
But at least some of us strive to retain the ability (and often the will) to override the satnav in ‘edge cases’ it might not have encountered before.
Perhaps this latter analogy is the most pertinent, and it has some good parallels with the lab automation dilemma:
- Scientists often don’t know precisely where they’re going, e.g. what they will discover at the end of the process
- They would love advance warning of any roadblocks, e.g. a machine that is close to breaking down
- Satnav is highly reliant on the accuracy and richness of the data it bases its decisions on – and the curation of that data often involves human input. Do current labs offer that breadth, richness and accuracy of data, and indeed context in the form of metadata?
We should also be rightly nervous of losing the power of critical thinking. If we always blindly follow the satnav, we will eventually lose the ability to use our ‘situational awareness’ to override its single-minded instructions when needed, especially when we have access to data via our senses which the satnav simply doesn’t have.
What are the key factors we therefore need to balance?
The area of critical thinking, and its value as a skill, deserves specific attention – is this a muscle that will remain strong in innovators for ever, or is there a danger it might wither and die?
A recent paper from Microsoft3 suggests that the use of AI in knowledge workflows shifts workers’ perception of critical thinking: from information gathering to information verification, from problem-solving to AI response integration, and from task execution to task stewardship.
However, it also found that, among less confident scientists, AI use can lead to longer-term over-reliance on the tools and inhibit critical engagement with work.
Clearly this means that as AI tools become more heavily used, scientists need to be trained to get the best out of them. The tools themselves also need to be designed to facilitate this and to minimise the risk of a longer-term decline in critical thinking.
In addition, what is the value of serendipity (enabled by situational awareness), and what is the risk of losing it through over-automation? For example, would we have penicillin today if it hadn’t been for Alexander Fleming’s chance observation of a petri dish that had been left too long in his lab?
Can 100% automation ever replace this?
Is there something valuable in scientists remaining sufficiently involved in the process, so that they don’t end up with isolated islands of knowledge in their heads but without a sufficient view of the whole picture to join those islands together in the face of particularly challenging problems?
However, despite these virtues, human scientists are not infallible, inexhaustible, or indeed beyond hankering for a better future.
Such a future might be one where they are mainly valued for the positive attributes above, but don’t need to work out of hours, do boring repetitive tasks, or be blocked from an exciting experiment at a crucial stage by the failure of a critical machine.
Therefore, AI would appear to have a role in using its (at least current) strengths to amplify human scientists, but there remains an open question of how this is best achieved – especially in the context of the majority of today’s labs, with their myriad different machines.
So, what are the principles we should follow in the short term as we move towards increased automation?
In the absence of clarity on the optimal roles for humans and AI in the labs of the future (and on how quickly AI will improve), any new machines introduced need to be flexible enough to be compatible with evolving, mixed roles for humans and machines.
For the lab as a whole
AI/automation takes on the repetitive/boring jobs in the lab, such as ordering reagents, setting up reactions, and running routine purifications, and simplifies the decision tree for a lab scientist, leaving the human to do the specialist work best suited to them. For example, scientists might be able to interact with AI models using their scientific language and drawings (such as drawing out a chemical synthesis route).
We call this ‘AI lab ready’, which has a number of different aspects.
1. Dual human/AI control
- An obvious help to human scientists would be to simplify the act of controlling a machine, i.e. describing an experiment in natural language rather than learning a complex protocol
- To provide future compatibility with AI control of the lab, another step forward would be the provision of an AI-friendly interface, such as an API, to ease AI control, e.g. via a future AI agent
- Along the same lines, an accompanying digital twin for every machine would allow any controlling AI to simulate the machine’s operation and even benefit from synthetic data as part of that simulation – a minimal sketch of both ideas follows this list
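As a thought experiment, here is a minimal Python sketch of what such a dual control surface might look like: a single structured entry point that either a natural-language front end or an AI agent could drive, plus a digital twin for rehearsal. All names here (LabMachine, Step, DigitalTwin, run_protocol) are hypothetical illustrations, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str                         # e.g. "dispense", "incubate"
    params: dict = field(default_factory=dict)

class LabMachine:
    """One structured entry point, usable by a human front end or an AI agent."""
    def run_protocol(self, steps):
        log = []
        for step in steps:
            # On real hardware this would drive actuators; here we just record it.
            log.append(f"executed {step.action} with {step.params}")
        return log

class DigitalTwin(LabMachine):
    """Simulated counterpart: lets a controlling AI rehearse a protocol and
    generate synthetic run data before touching the real machine."""
    def run_protocol(self, steps):
        return [f"simulated {s.action} with {s.params}" for s in steps]

# A natural-language layer or an AI agent would ultimately emit the same steps:
protocol = [Step("dispense", {"volume_ul": 50, "well": "A1"}),
            Step("incubate", {"temp_c": 37, "minutes": 30})]
print(DigitalTwin().run_protocol(protocol))     # rehearse first
print(LabMachine().run_protocol(protocol))      # then run for real
```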
2. Dual human/AI interpretable data
In the chemical synthesis/optimisation setting, the right set of sensors and automation would allow us to capture additional, accurate information beyond what a typical chemist notes down in their lab book (such as exact temperature, humidity and reaction duration), free of human bias. This would enable better prediction of reaction outcomes by future AI models, as sketched below.
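As an illustration, a record like the following would be readable both by a scientist and by a model. It is a minimal sketch; the field names and values are assumptions rather than any standard schema.

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class ReactionRecord:
    reaction_id: str
    procedure_text: str        # human-readable, as a chemist would write it
    temperature_c: float       # sensor-logged, not estimated
    humidity_pct: float
    duration_min: float
    instrument_id: str         # metadata giving a model the context it needs
    operator: str              # the human or agent that initiated the run
    yield_pct: Optional[float] = None   # filled in once the outcome is measured

rec = ReactionRecord("RXN-0042", "Suzuki coupling of aryl bromide with boronic acid",
                     temperature_c=82.4, humidity_pct=41.0, duration_min=95.0,
                     instrument_id="reactor-3", operator="j.smith", yield_pct=78.5)
print(json.dumps(asdict(rec), indent=2))   # equally readable by scientist and model
```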
3. Flexibility in AI execution capacity
Careful decisions will be needed from a systems engineering perspective to architect the compute power associated with the lab as a whole and with the individual machines, and to distribute it between the edge and the cloud – not least in labs integrating multiple machines from different vendors, each on a potentially different timeline for automation. A toy sketch of such a placement decision follows.
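To picture the trade-off, here is a toy Python sketch of an edge-versus-cloud placement rule: latency-critical, machine-local tasks stay at the edge, while slower, data-hungry tasks go to the cloud. The task names and latency budgets are invented for illustration.

```python
EDGE_LATENCY_BUDGET_MS = 100   # tasks needing faster responses stay on the machine

tasks = {
    # task name: (latency requirement in ms, needs fleet-wide data?)
    "safety_interlock":    (10,    False),
    "closed_loop_dosing":  (50,    False),
    "protocol_planning":   (5000,  True),
    "cross_run_analytics": (60000, True),
}

def place(latency_ms, needs_fleet_data):
    # Hard real-time, machine-local tasks run at the edge; slower,
    # data-hungry tasks can live in the cloud.
    if latency_ms <= EDGE_LATENCY_BUDGET_MS and not needs_fleet_data:
        return "edge"
    return "cloud"

for name, (latency, fleet) in tasks.items():
    print(f"{name}: {place(latency, fleet)}")
```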
4. Capable of predictable continuous operation
The lab of the future will have the ability to detect (and ideally predict) when things go wrong in the system, perform simple troubleshooting tasks, and potentially self-heal where possible – for instance, flagging sensor drift before a machine fails, as sketched below.
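As one small example of what ‘predict when things go wrong’ could mean in practice, here is a minimal sketch that flags drift in a sensor reading against a rolling baseline. The thresholds, sensor values and pump example are illustrative assumptions.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    def __init__(self, window=50, z_threshold=3.0):
        self.readings = deque(maxlen=window)   # rolling baseline of recent readings
        self.z_threshold = z_threshold

    def update(self, value):
        """Return True if the new reading deviates sharply from the baseline."""
        if len(self.readings) >= 10:
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                self.readings.append(value)
                return True   # candidate fault: schedule a check before failure
        self.readings.append(value)
        return False

monitor = DriftMonitor()
for pressure in [1.01, 0.99, 1.00, 1.02, 0.98, 1.00, 1.01, 0.99, 1.00, 1.02, 1.35]:
    if monitor.update(pressure):
        print(f"drift detected at {pressure} bar: raise a maintenance alert")
```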
Finally, lab automation is not the only field considering the respective roles of human beings and AI in future settings. At TTP we are seeing this across many industries, and openness to cross-fertilisation between them will pay dividends. In summary, every sector we work with is wrestling with similar questions: where and how to start their AI journey, how to maximise efficiency and accelerate time to market, what guardrails are needed to avoid the common pitfalls, and, most importantly, how to ensure the role of expert scientists is protected and strengthened long into the future.
At TTP, we don’t just speculate about the AI-enabled lab of the future; we help build it today. Our engineers and scientists work alongside R&D leaders to design, prototype and deliver machines that not only solve today’s bottlenecks but are architected with the flexibility to integrate tomorrow’s AI capabilities. Whether you are rethinking your lab workflows holistically or introducing a single next-generation instrument, we ensure your investment is AI lab ready, safeguarding both near-term productivity and long-term innovation.
If you are exploring how to future proof your lab, we would be delighted to share what we have learned from building next-generation systems with some of the world’s leading R&D organisations. Talk to us about your automation challenges and ambitions, and let’s design solutions that not only deliver immediate impact but also ensure your lab is ready to thrive in an AI future.