Blending Biology and AI: Dr. Markus Gershater on the Future of Life Sciences

by Andrii Buvailo, PhD | Interview



In the dynamic field of life sciences, biological research and AI are converging to alter our perspective on life at its most basic level. Dr. Markus Gershater, Co-founder and Chief Science Officer of Synthace, sheds light on the challenges and opportunities this union presents. Synthace, a UK-based company, offers a no-code platform for designing and executing experiments and for producing and analyzing the structured data they generate.

AI's potential to revolutionize our approach to biological systems also highlights the need for a change in our scientific methods and thinking. As past technological shifts like electrification have taught us, simply adopting new technology isn't the end game. The true value emerges when technology is paired with new approaches and perspectives. In this interview, Dr. Gershater discusses a future where AI becomes an integral part of biology, not just an adjunct.

Andrii: Dr. Gershater, you've got your feet in both biochemistry and synthetic biology while navigating a fast-paced tech world. In your view, what's the most exciting promise that AI holds for biotech?

Markus: The promise is that, quite simply, AI will give us insights into biology that are currently impossible and that we can't yet begin to imagine. Also exciting, but secondary to this, is how it will prompt changes in the way we work. I say this because my underlying belief is that, right now, AI and biological research don't yet fit together properly.

AI is a technology that fundamentally demands change from the people who want to use it, so for AI to have a fundamental impact on biology, we really have to change the way we approach the process of science in the first place. It seems to me that organizations and teams will have to adopt new mindsets, new processes, and new tooling.

Some companies today already exhibit many of the characteristics required of organizations looking to the future, in how they think about the way we gather data about biological systems. Think of companies like Recursion and Insitro, which have built whole automated platforms around this. Fully digitized, these platforms are built to systematically create a greater understanding of biological systems.

They give us a glimpse of what the future may look like: the routine generation of high-quality, large, varied, multidimensional data, in the full context of rich metadata. Data that provides the foundation for AI, and a step change in our ability to understand and work with biological systems.

 

Andrii: Of course, every silver lining has a cloud. What do you see as the biggest challenges in bringing AI into the world of bioengineering? How can the industry, Synthace included, best tackle these hurdles?

Markus: We recently ran some research that found a staggering 43% of R&D decision-makers have low confidence in the quality of their experiment data. This is concerning because it doesn't just demand that we improve our means of recording experiment data; it also demands that we perform experiments that generate higher-quality data in the first place. It follows that to understand this data correctly we also need a high level of granularity about how it was created: metadata about experimentation should be collected automatically as much as possible.
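To make that concrete, here is a minimal sketch, in Python, of what pairing a readout with automatically captured metadata might look like. The field names and values are hypothetical and do not reflect Synthace's actual data model; the point is simply that the experimental context travels with the measurement.

```python
# Hypothetical sketch: a single measurement stored together with the
# metadata describing how it was generated, so downstream analysis and
# AI/ML tools can interpret it without guessing at the context.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ExperimentRecord:
    readout: float            # e.g. a fluorescence intensity
    factors: dict             # conditions that were varied, e.g. pH, temperature
    instrument: str           # which device produced the measurement
    protocol_version: str     # exact version of the protocol that was run
    executed_by: str          # person or automation run that executed it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ExperimentRecord(
    readout=1532.7,
    factors={"pH": 7.4, "temp_C": 37, "substrate_mM": 2.0},
    instrument="plate_reader_01",
    protocol_version="v2.3",
    executed_by="automated_run_118",
)
print(json.dumps(asdict(record), indent=2))   # structured, machine-readable provenance
```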

In the context of AI, this is a problem. The scope of possible uses for AI in biotech is massive; it can be applied in myriad ways across every aspect of the value chain. Saying "we need to use AI" is like saying "we need to use electricity": obvious and useless unless you talk specifics. Much more meaningful is "we need to apply large language models to improve the user interfaces for our complex equipment and methodologies," or "we should use active learning to optimize the development of assays for early discovery."
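As a rough illustration of that second example, here is a hedged sketch of an active-learning loop for assay optimization: fit a surrogate model to the conditions measured so far, then choose the next condition where the model is most uncertain. The objective function below is a stand-in for a real wet-lab measurement, and the setup is purely illustrative rather than any specific Synthace workflow.

```python
# Illustrative active-learning loop: a Gaussian-process surrogate is refit
# after each round, and the next experiment is chosen where the model's
# predictive uncertainty is highest.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_assay(x):
    # Placeholder for a real plate-based measurement of assay performance.
    return np.sin(3 * x) + 0.1 * np.random.randn()

candidates = np.linspace(0, 2, 200).reshape(-1, 1)   # conditions we could test
X = np.array([[0.1], [1.0], [1.9]])                  # initial measured conditions
y = np.array([run_assay(x[0]) for x in X])

for _ in range(5):                                   # five rounds of design, run, learn
    model = GaussianProcessRegressor().fit(X, y)
    _, std = model.predict(candidates, return_std=True)
    next_x = candidates[np.argmax(std)]              # most informative next experiment
    X = np.vstack([X, next_x])
    y = np.append(y, run_assay(next_x[0]))

model = GaussianProcessRegressor().fit(X, y)         # final refit on all data
best = candidates[np.argmax(model.predict(candidates))]
print(f"Most promising condition so far: {best[0]:.2f}")
```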

"We need to use AI" is in danger of being a kind of empty call to arms, with no acknowledgment of all the change that will be needed to make the touted revolution come about. In the second industrial revolution, electricity was insufficient by itself to increase productivity. People needed to first realize that it offered a way of changing how they worked. Factories no longer had to be arranged around massive drive-shafts powered by steam engines. Instead, they could be arranged into production lines. It was the combination of new technology (electrification) and new ways of working (production lines and the division of labor) that enabled the step change in productivity.

For Synthace, our focus is firmly on the experiment itself. How can we gather, generate, and structure high-quality data, and export it into systems that can do far more with it than the frankly limited and limiting data available today allows? To continue the above analogy, how can we adapt the factory floor to make the best use of electricity?

 

Andrii: Speaking of challenges, there's no denying that the complexity of biological systems makes for a dizzying amount of data. What's your take on the best approach to handle this data overload, and where does AI come into the picture?

Markus: Biology's complexity emerges from the interactions of its simpler components, giving rise to unique properties and behaviors. These emergent features can't be reliably predicted from individual components, necessitating a comprehensive and interconnected dataset for a deeper understanding of biological systems.

Much of the big data produced in biology comes from multi-omic studies: highly detailed molecular snapshots of a system. But apart from genomic data, all of these readouts are highly dynamic: they change over time and in response to a multitude of stimuli. To truly understand a biological system, we must understand its dynamics as any number of factors change. We can't just measure a lot of things; we have to measure them in the context of this multifactorial landscape, systematically running experiments that map the space and allow AI to "see" what is going on.
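For a sense of what systematically mapping that space can mean in practice, here is a minimal sketch of a full-factorial design: every combination of a handful of factors, each swept across several levels. The factors and levels are hypothetical, chosen only to illustrate the idea.

```python
# Hypothetical full-factorial design: enumerate every combination of the
# chosen factor levels so the resulting data maps the landscape rather
# than a single slice through it.
from itertools import product

factors = {
    "pH": [6.5, 7.0, 7.5],
    "temp_C": [30, 37, 42],
    "inducer_uM": [0, 10, 100],
    "timepoint_h": [4, 24],
}

design = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(f"{len(design)} runs")   # 3 * 3 * 3 * 2 = 54 combinations
print(design[0])               # {'pH': 6.5, 'temp_C': 30, 'inducer_uM': 0, 'timepoint_h': 4}
```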

Just sequencing something isn't enough; we must also look at how it works, interacts, and reacts to different stimuli. It's clear that one-dimensional data alone won't take us far in comprehending the intricacies of biological processes. Ideally, we'd have large, varied, dynamic, high-quality data enriched with as much experimental context as possible, so that future, as-yet-unimagined AI-driven analyses can make as much use of today's data as possible.

 

Andrii: Finally, the idea that AI might change our whole understanding of the universe is a bit of a head-spinner. Can you delve a bit deeper into that concept? How might AI transform the way we interact with everything from biological systems to the wider world around us?

Markus: The buzz around AI/ML is remarkably strong and, without a doubt, it will be transformational in bringing new insight to biology. But as I've said, we have yet to see the full realization of its potential. The work of biology and the data and metadata it produces are difficult to represent in code and difficult to digitize. If we can't do that, AI/ML remains a pipe dream, the preserve of "big tech." The volume and quality of the data we can provide to those artificial intelligence and machine learning tools determine the likelihood of uncovering anything interesting.

Is there a way to enable and control the entire experiment lifecycle from end to end? Is there a way to enable multifactorial experimentation, sophisticated automation, and AI/ML with a single unifying standard? Is there a way to elevate the scientist so they can spend more time on what matters most, applying more of their individual talents to today’s most difficult problems with the full power of modern computing?

If we can adapt in the right ways to the possibilities created by these tools, we may begin to map entire biological landscapes overnight, using the resulting data and metadata to predict future outcomes. There will likely come a time this decade when AI can predict the best possible experiment design before we even step into the lab. Should this come to pass, the upshot will be scientific breakthroughs that defy belief by today's standards.
