In contrast to the early days of drug discovery, when ideas for new medicines arose primarily through serendipitous trial and error, or mere luck, modern drug design is a far more deliberate process -- owing to a deeper understanding of disease mechanisms and the underlying biology, the advent of combinatorial chemistry, high-throughput screening, and many other experimental techniques, and, of course, advances in computational methods.
In this Special Perspective, our fourth in an ongoing series, we present MatchMaker™, a novel deep proteome screening technology that we have developed and validated over the past two years to identify drug–target interactions (DTIs). MatchMaker builds on Cyclica’s passion for combining protein, chemistry, and genomic data, augmented with high-performance computing and cloud-supported algorithm development.
Over the last five years, the interest of pharmaceutical professionals in machine learning (ML) and artificial intelligence (AI) has measurably increased -- while only one “AI-related” research collaboration involving “big pharma” made the news in 2013, the number of such announcements grew to 21 in 2017 alone, involving top pharma players such as GSK, Sanofi, AbbVie, and Genentech.
Since August Kekulé’s proposal of the tetravalency of carbon and his more famous realization that benzene is a cyclic molecule -- a snake biting its tail -- molecular structure has been the leading consideration in the design of new molecules as drugs or performance materials. For the former, it is said that 70% of drug design is based on molecular shape, with the remainder attributed to electrostatic or non-bonded interactions.
Structural chemistry began around 1860 with these dual assignments by Kekulé, but it wasn’t until roughly a hundred years later, with Allinger’s initial force-field approaches, that the first classical molecular mechanics (MM) models became available for computer-assisted prediction of molecular structure. These models themselves rest on principles derived by Robert Hooke, a contemporary of Isaac Newton, in the mid-17th century, with additional layers contributed by van der Waals in the 19th century, among others.
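To give a flavor of what these classical terms look like, the toy sketch below combines a Hooke’s-law harmonic bond-stretch term with a 12-6 Lennard-Jones term approximating van der Waals interactions. The parameter values (equilibrium length, force constant, well depth) are hypothetical placeholders chosen for illustration, not those of Allinger’s MM force fields or any published parameter set, which contain many more terms (angles, torsions, electrostatics) fitted to experiment.

```python
def bond_stretch_energy(r, r0=1.54, k=300.0):
    """Hooke's-law harmonic bond term: E = k * (r - r0)^2.
    r0 ~ 1.54 angstrom (a typical C-C bond); k is a hypothetical
    force constant, not taken from any real force field."""
    return k * (r - r0) ** 2

def lennard_jones_energy(r, epsilon=0.1, sigma=3.4):
    """12-6 Lennard-Jones term approximating van der Waals forces:
    E = 4*eps*((sigma/r)^12 - (sigma/r)^6). Parameters are illustrative."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# At the equilibrium bond length the stretch energy is zero;
# compressing or stretching the bond raises it, like a spring.
print(bond_stretch_energy(1.54))      # 0.0
print(bond_stretch_energy(1.60) > 0)  # True

# The LJ potential reaches its minimum of -epsilon at r = 2^(1/6) * sigma.
r_min = 2 ** (1 / 6) * 3.4
print(abs(lennard_jones_energy(r_min) + 0.1) < 1e-12)  # True
```

A real MM energy function sums such terms over every bond and atom pair in the molecule; minimizing that sum with respect to the atomic coordinates is what yields a predicted structure.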
(Last updated 08.10.2018)
The type of artificial intelligence (AI) that worries some of the greatest minds, such as Elon Musk and Stephen Hawking, is called “artificial general intelligence” -- AI that can “think” much as humans do, and that could quickly evolve into a dangerous “superintelligence”. There is a notion that it might be invented within the coming decades, but today we are definitely not there yet. The AI making headlines these days is “narrow artificial intelligence”, a limited type of machine “intelligence” able to solve only a specific task or group of tasks. It cannot go beyond the specifics of the problem it was designed for, so it is unlikely to hurt anyone anytime soon. Yet it already delivers meaningful practical results on those narrow tasks, such as natural language processing, image recognition, controlling self-driving cars, and helping develop new drugs more efficiently. With the ability to find hidden, unintuitive patterns in vast amounts of data in ways no human can, AI holds considerable promise to transform many industries, including pharma and biotech.