Joseph Pareti

Contributor in Emerging Technologies

I like Machine Learning (ML) applications for bioscience and engineering, and ML is what I have been doing since 2018. Before that, I worked for many years at various companies as an R&D engineer, application engineer, CAE & HPC consultant, and pre-sales solutions architect. My interests include learning and coding on one side, and evaluating how to found or co-found a business on the other.


Disclaimer: All opinions, ideas, and thoughts expressed and posted by Contributors on the BiopharmaTrend.com platform are their own personal points of view, and represent neither the Contributor's employers nor BiopharmaTrend.com.

Posts by this author

Understanding the Innovations From LSTM, to an Encoder/Decoder Model, to Transformers; and Their Impact on Health Science


AstraZeneca has achieved outstanding results in drug design by applying large language models to SMILES representations of molecules; but what does it take to understand how this is possible?

In this report, I describe my work on an LSTM-based encoder/decoder model and on transformers. I would like to show that these technologies are related, that they can be learned starting from a simple case, and that they are relevant not only for NLP but also for health science.
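The idea behind applying language models to chemistry is that a SMILES string can be treated like a sentence: a tokenizer splits it into atom, bond, and ring-closure symbols that a model consumes the way it would consume words. A minimal sketch of such a tokenizer (the regex and the aspirin example are my own illustration, not AstraZeneca's actual pipeline):

```python
import re

# Illustrative SMILES tokenizer: each atom symbol, bracket group, bond
# character, or ring-closure digit becomes one token for a language model.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|Si|[BCNOPSFIbcnops]|[=#/\\\-+()@]|%\d{2}|\d)"
)

def tokenize(smiles: str) -> list:
    tokens = SMILES_TOKEN.findall(smiles)
    # Sanity check: tokenization must not drop any characters.
    assert "".join(tokens) == smiles, "tokenizer dropped characters"
    return tokens

aspirin = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin
print(tokenize(aspirin))
```

Once tokenized, a molecule is just a sequence of symbols, so the same encoder/decoder and transformer machinery developed for NLP applies directly.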

Applying AI and HPC to Drug Design: a Survey of R&D Work


The motivation for this report

In 2018, while at the Supercomputing conference in Dallas, I had a couple of key encounters that have profoundly influenced my work ever since:

FIRST. At Nvidia, I saw a presentation about ANI-1, a deep learning model that approximates molecular energies. Accurate energies are a precondition for calculating dynamic and chemical properties, and the result is a dramatic reduction in time to solution at the same or better accuracy than the exact numerical method: five (yes, 5) orders of magnitude faster time-to-solution was demonstrated on a 54-atom molecule compared with Density Functional Theory (DFT). The neural network approximates DFT output data. But molecules vary in size while the input to the network must be of constant size, so the authors extended Behler-Parrinello symmetry functions and created special fixed-size vectors that describe the input to the network.
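The fixed-size-input trick can be sketched with a Behler-Parrinello-style radial symmetry function: each atom's environment is summed into one value per parameter setting, so the descriptor length depends on the number of parameters, not on the number of atoms. The cutoff, eta values, and toy geometries below are my own illustrative choices, not ANI-1's published parameters:

```python
import numpy as np

def cutoff(r, r_c=6.0):
    """Smooth cutoff function: neighbor contributions vanish beyond r_c."""
    return np.where(r < r_c, 0.5 * (np.cos(np.pi * r / r_c) + 1.0), 0.0)

def g2_descriptor(coords, i, etas=(0.5, 1.0, 2.0), r_s=1.0, r_c=6.0):
    """Radial descriptor for atom i: one value per eta, so the output
    length is len(etas) regardless of how many atoms the molecule has."""
    coords = np.asarray(coords, dtype=float)
    d = np.linalg.norm(coords - coords[i], axis=1)
    d = np.delete(d, i)  # exclude the atom's distance to itself
    fc = cutoff(d, r_c)
    return np.array([np.sum(np.exp(-eta * (d - r_s) ** 2) * fc) for eta in etas])

# Toy geometries of different sizes (coordinates in angstroms, illustrative)
water = [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]
methane = [[0.0, 0.0, 0.0], [1.09, 0.0, 0.0], [-0.36, 1.03, 0.0],
           [-0.36, -0.51, 0.89], [-0.36, -0.51, -0.89]]

print(g2_descriptor(water, 0).shape)    # (3,)
print(g2_descriptor(methane, 0).shape)  # (3,) — same length despite more atoms
```

Because every atom maps to the same descriptor length, a single neural network can be trained across molecules of arbitrary size.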
