We're fortunate to speak with Dr. James Field, founder and CEO of LabGenius, a leading machine learning (ML)-driven protein engineering company headquartered in London. This pioneering startup is using artificial intelligence (AI) and robotics to accelerate the discovery of next-generation therapeutic antibodies for the treatment of diseases like cancer. With traditional methods proving slow and unreliable, LabGenius offers a novel approach that could revolutionize the way we discover and develop treatments.
In our conversation, we explore the challenges of traditional antibody discovery and how LabGenius navigates them using mathematical models. We also discuss the concept of 'robot scientists' and the unique challenges and rewards that this paradigm presents.
We delve into the recently released research demonstrating the company's ability to expedite the discovery of highly targeted molecules that have the potential to mitigate toxic side effects associated with existing immunotherapies. The conversation will also cover the importance of unique, high-quality datasets and their application in antibody discovery.
Lastly, we'll touch on the future of AI and robotics in healthcare, both in the context of LabGenius' goals and in the wider industry. We will examine Dr. Field's role in briefing policymakers on AI-enabled drug discovery, and explore how LabGenius plans to use its significant funding to further its mission.
Andrii: Could you expand on the "cognition barrier" you mention, and elaborate on how LabGenius is using mathematical models to understand molecular responses to diseases? What differentiates your approach from conventional methods?
James: Humanity’s incredible success (at least reproductively) can largely be attributed to our capacity to hypothesise and invent. But even this superpower has limits. Concretely, hypothesis-driven innovation requires the inventor to understand the system that they’re working with at an appropriate level of abstraction. Now, here's the rub. There are many arenas in which hypothesis-driven innovation performs poorly because the human brain simply isn't wired to grapple with the complexity of the underlying system.
For example, consider biological systems. Earth’s flora and fauna provide living proof of what can be built with biology, and at the same time highlight the sheer inadequacy of human-led hypothesis-driven innovation within this domain. This shouldn’t come as a surprise. After all, at no point in our evolutionary history has an intuition for manipulating organic matter at the nanoscale conferred a selective advantage!
To address some of humanity’s greatest challenges, it’s clear that we need to break through the ‘cognition barrier’ presented by hypothesis-driven innovation. In the absence of a suitable nootropic, this means that we must develop new forms of innovation that do not require a human to understand the underlying system.
This concept is not new. For decades, scientists, engineers and technologists have dreamt of building ‘robot scientists’ capable of autonomously discovering new knowledge, technologies, and sophisticated real-world products. For protein engineers, that dream is now a real possibility.
The rapid pace of technological development across the fields of synthetic biology, robotic automation, and ML has given us access to the tools required to create a smart, robotic platform capable of intelligently discovering novel therapeutic proteins.
At LabGenius, we’ve spent several years building a platform capable of generating hypotheses, testing them in the lab and then using the resulting data to iteratively refine its understanding of how a molecule’s design determines its performance.
With this approach, we have been able to overcome some major antibody engineering challenges. For example, traditional antibody engineering involves the sequential optimization of different molecular properties through rational design. Under that paradigm, improving one property can inadvertently worsen others, and these trade-offs inevitably lead to costly failures or sub-optimal outcomes for patients. In contrast, with our approach, we can now efficiently co-optimise antibodies across multiple features (e.g. potency, efficacy, selectivity, and developability).
Andrii: The results of LabGenius' research indicate a remarkable improvement over clinical benchmarks. Could you discuss the specific methodology employed in accelerating the discovery of these targeted molecules, and how they might alleviate the toxic side effects associated with existing immunotherapies?
James: T-cell engagers (TCEs) are a type of engineered antibody that redirect the immune system's T-cells to recognize and kill cancer cells.
Despite showing real promise, the progression of solid tumour-targeting TCEs through clinical trials has been plagued by issues with dose-limiting ‘on-target, off-tumour’ toxicity. This occurs when healthy cells expressing a tumour-associated antigen (TAA) get unintentionally targeted, which can cause toxic side effects for patients.
In a recent platform demonstration project, we used our ML-driven discovery platform to systematically identify novel TCEs that had strong killing selectivity, were highly potent and also had good developability profiles. The top-performing molecules demonstrated ≥10,000-fold killing selectivity, corresponding to a >400-fold improvement over a relevant clinical benchmark, Runimotamab (a TCE currently in phase I clinical trials) [ref].
The specific form of ML that we used to efficiently navigate design space and find high-performing TCEs is a form of active learning, called Multi-Objective Bayesian Optimization (MOBO).
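To make the idea concrete, here is a toy sketch of how MOBO-style active learning can navigate a design space. Everything in it is an illustrative assumption rather than LabGenius's actual method: the two-dimensional "design space", the `assay` function standing in for lab measurements, the crude nearest-neighbour surrogate used in place of a Gaussian process, and the random-weight (ParEGO-style) scalarised acquisition.

```python
import random
import math

# Hypothetical design space: each "antibody" is a point in [0,1]^2; the two
# hidden objectives stand in for assay readouts (e.g. potency, selectivity).
def assay(x):
    potency = 1 - (x[0] - 0.7) ** 2 - 0.1 * x[1]
    selectivity = 1 - (x[1] - 0.6) ** 2 - 0.1 * x[0]
    return (potency, selectivity)

def dominates(a, b):
    # a Pareto-dominates b if it is >= in every objective and > in at least one.
    return all(ai >= bi for ai, bi in zip(a, b)) and any(ai > bi for ai, bi in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def surrogate(x, observed):
    # Crude 1-nearest-neighbour surrogate standing in for a Gaussian process:
    # predict each objective from the closest tested design, with an optimistic
    # bonus that grows with distance (a UCB-style exploration term).
    xt, yt = min(observed, key=lambda o: math.dist(x, o[0]))
    d = math.dist(x, xt)
    return [y + 0.5 * d for y in yt]

random.seed(0)
designs = [(random.random(), random.random()) for _ in range(4)]
observed = [(x, assay(x)) for x in designs]  # initial "wet-lab" data

for _ in range(5):
    # Each round: propose candidates, score them under a randomly weighted
    # Tchebycheff scalarisation of the surrogate, and "run the assay" on the best.
    w = random.random()
    candidates = [(random.random(), random.random()) for _ in range(200)]
    best = max(candidates, key=lambda c: min(w * surrogate(c, observed)[0],
                                             (1 - w) * surrogate(c, observed)[1]))
    observed.append((best, assay(best)))

front = pareto_front([y for _, y in observed])
print(len(observed), "designs tested; Pareto front size:", len(front))
```

The essential shape matches the loop James describes: a surrogate model proposes promising designs, the lab measures them, and the new data refines the model, so each round of experiments is spent where it is most informative across all objectives at once.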
We're using this capability to develop our own pipeline of highly selective TCEs for the treatment of solid tumours.
Andrii: A key aspect of LabGenius' work is the generation of unique, high-quality datasets. How do you guarantee data quality, and how are these datasets being applied in the discovery of therapeutic antibodies?
James: At the start of a traditional antibody discovery campaign, molecules are evaluated in the lab at high throughput using simple in vitro assays. The most promising molecules are then taken forward for characterization in lower throughput disease-relevant cell-based assays.
As previously mentioned, the issue with this approach is that sequential optimization is inherently inefficient.
At LabGenius, we address this challenge by co-optimizing for performance in disease-relevant cell-based assays right from the start of the discovery process. Our focus on functional assays with high predictive validity gives us the best chance of finding molecules that will perform well in the clinic.
Our ability to accurately predict how novel antibody designs will perform in disease-relevant cell-based assays is underpinned by our computational models. Unsurprisingly, the performance of these predictive models is highly sensitive to the quality and quantity of the data used to train them.
In response, we have developed a set of highly optimised and automated experimental workflows that reliably produce data for complex cell-based assays at the quality and throughput required for machine learning. We call this ‘ML-grade’ data.
Andrii: You've spoken at world events and have a prominent role in briefing policymakers on AI-enabled drug discovery. Could you elaborate on the key aspects of these briefings, especially in terms of policy-making challenges and regulatory hurdles?
James: From a scientific and technical standpoint, it’s clear that the moment for AI-enabled drug discovery is now. Yet from a societal perspective, it will likely be many years before the full benefit of this technological revolution is felt by patients.
Within Europe, the UK has a particularly strong and vibrant biotech ecosystem and is well-positioned to take advantage of the AI revolution but there is no room for complacency. Without proactive and sustained engagement with policymakers, we are unlikely to see the timely translation of technological potential into real and meaningful value for patients.
In recent discussions with policymakers, my message has been that we must continue to strike the right balance between risk and benefit: businesses need to be able to make progress on hard scientific problems, while regulation remains essential to protect patients and manage any risks.
Rather than taking a broad brush approach to AI regulation, the UK government is pursuing a ‘common-sense, outcomes-oriented approach’. This pragmatic positioning is, in my mind, the right one.
Let’s not forget that any AI-discovered drug is still subject to all the stringent regulatory approvals required by bodies like the FDA, EMA and MHRA. With this in mind, we must steer clear of unnecessary layers of complex regulation and instead focus on fostering alignment between the various institutions already involved in the drug discovery process.
Andrii: LabGenius has raised significant capital from investors. How is the company planning to utilize these resources to advance its mission further? Are there any specific projects or areas of research that will be prioritized?
James: Antibody-based immunotherapies, including TCEs, are transforming the way we treat diseases like cancer. At LabGenius, we see this as an extraordinary opportunity to create treatments that no longer need to be as painful as the disease itself. To make this a reality, we’re focused on applying our platform’s technology to generate our own pipeline of highly selective immune cell engagers.