Novel Approaches to Drug Development – 2015 EPPIC Annual Conference
The drug development panel discussed how technological advances in big data, machine learning, and cloud computing, when paired with combination therapies (for instance, anti-cancer plus immuno-oncology agents), next-generation sequencing technologies, advanced companion diagnostics, and a relentless focus on patient outcomes, may open new frontiers in cost-effective drug development.
The drug development panel was moderated by Dr. Suneel Gupta, Chief Scientific Officer at Impax Pharmaceuticals. Gupta has over 25 years of experience in pharmaceutical R&D, specifically around drug delivery technologies. He shared the story of how Impax developed RYTARY, an extended-release oral capsule formulation for the treatment of Parkinson’s disease, taking it from bench to launch in 3.5 years and with $100M. It is Impax’s first internally developed branded drug approved for commercialization, and it was achieved with “relentless execution,” said Gupta. His advice to entrepreneurs was to focus on the patient, not the technology; to focus on what the product does, not what it is made of. He also advised keeping the focus on the effect and going for a big effect size, to get the drug approved faster.
Brandon Allgood, CTO at Numerate, began by talking about the limitations of existing computational methods, which so far have been strictly dependent on either high-resolution crystal structure data or very clean SAR screening data (QSAR). These models did not work well where computing is most needed: with disparate data, with emerging targets, in high-content, low-throughput biology, and for multi-target optimization, said Allgood. However, drawing on a vast number of technological advances in cloud computing, big data, and machine learning algorithms, Numerate has overcome major challenges in drug discovery, said Allgood.
Numerate has created a powerful drug design platform that can rapidly deliver novel leads against targets, without the need for a crystal structure and with very limited SAR data. It can be used with just some ligand data, which makes it well suited to emerging targets, said Allgood. Numerate’s machine learning algorithms can integrate small amounts of public data from patents, literature reviews, and similar sources, make accurate predictions from them, and rank candidate designs much the way Google ranks search results or Netflix ranks recommendations. The platform can handle noise and bias, and it applies tolerance windows to address inter-lab measurement variance. Numerate has built 2,100 off-target models and also has advanced ADME models, said Allgood. Speaking of challenges in this area, Allgood noted that public data is noisy and biased, while private data stays private. He suggested the following changes: 1) to spur innovation, big pharma should be encouraged to release data so others can apply machine learning algorithms to more of it; 2) standards should be put in place so that machine learning lab validations can be standardized; 3) while genomics has received the bulk of funding, investment should also go behind research on small-molecule drugs; “they still have a future,” said Allgood.
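For readers curious what this kind of ligand-based approach looks like in practice, here is a minimal, illustrative sketch using open-source tools (RDKit fingerprints and a scikit-learn random forest). The SMILES strings, activity labels, and virtual library below are hypothetical placeholders; this is not Numerate's proprietary platform, only the general pattern of training on limited ligand data and ranking an untested library.

```python
# Illustrative ligand-based activity ranking: train on a few labeled compounds,
# then rank an untested virtual library by predicted probability of activity.
# All SMILES and labels below are hypothetical placeholders, not real SAR data.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles, n_bits=2048):
    """Morgan (circular) fingerprint as a fixed-length bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
    return np.array(list(fp), dtype=np.int8)

# Hypothetical training set, e.g. the "small amounts of public data" case:
train_smiles = ["CC(=O)Nc1ccc(O)cc1", "c1ccccc1O", "CCN(CC)CC", "CCO"]
train_labels = [1, 0, 0, 0]  # 1 = active against the target, 0 = inactive

X = np.array([featurize(s) for s in train_smiles])
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X, train_labels)

# Rank a hypothetical virtual library, search-engine style:
library = ["CC(=O)Oc1ccccc1C(=O)O", "CCCCO", "c1ccc2[nH]ccc2c1"]
scores = model.predict_proba(np.array([featurize(s) for s in library]))[:, 1]
for smi, p in sorted(zip(library, scores), key=lambda t: -t[1]):
    print(f"{p:.3f}  {smi}")
```

A production system would train on far more data and fold in the noise and bias handling Allgood described; the point here is only the shape of the workflow.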
Dirk Brockstedt, SVP of R&D at Aduro BioTech, began by saying that a new era in cancer immunotherapy has arrived, marked by the approvals of Provenge, Yervoy, Opdivo, and Keytruda. The opportunity now exists to rethink the biology of cancer treatment and look for combinations of anti-cancer and immuno-oncology agents that can shift survival curves to the right, toward cure, for an increasing proportion of cancers, said Brockstedt. The key to developing innovative therapies is to target the immune system, not the cancer cell. We also need to develop new clinical endpoints, design new trials with new statistical methods, and consider novel regulatory paths for accelerated approval of combination therapies. When only a subset of patients responds well, we need to apply novel technologies and methods for patient identification and stratification, said Brockstedt.
Brockstedt talked about Aduro Biotech’s novel approach to tackling the disease with Listeria bacteria. Here is my previous blog on Aduro’s approach http://bit.ly/JqDJ3K and here is a link to Scott Pelley’s recently aired 60 Minutes segment on the use of poliovirus for treatment of glioblastoma http://tinyurl.com/pkspcmz . These potential therapies are at an early stage, and it remains to be seen how successful the genetic engineering will be in rendering them useful as cancer drugs.
Eric Peters, Group Leader of Companion Diagnostics at Genentech, discussed the challenge of expediting drug discovery and development through the use of next-generation sequencing technology. Currently the cost to bring a new drug to patients (including failures) is around $2.6B, and the success rate in drug development is only about 12%. At the same time, knowledge of disease heterogeneity is evolving rapidly; we now speak not of lung cancer but of lung cancers, said Peters. There is a need for large-scale biomarker and phenotype datasets, and access to high-quality data from multiple sources is the most essential element. Patients’ access to the complete range of testing and comprehensive diagnostics will play a big role and will become a standard of care in the future, said Peters.
Here are some additional blog links for your convenience:
“EPPICon 2015 Keynote by Vivek Wadhwa” http://bit.ly/1abPwr5
“EPPICon 2015 Digital Health Panel Preview” http://bit.ly/1EQtd5y
“EPPICon 2015 Keynote by Kim Bush on Tackling Global Health at Gates Foundation” http://bit.ly/18SV1cx
Feel free to browse my blog for past EPPIC conferences and other articles.
Computational Biology Applied to Liver Cells in vitro – High Throughput Screen for Drug Toxicity
Posted by Darshana V. Nadkarni, Ph.D. in Biotech - Medical Device - Life Science - Healthcare on January 29, 2013
Dr. Mike Bowles, previously a founder of Com21 and iBEAM Broadcasting (both of which went on to huge IPOs) and currently co-founder of Biomatica, talked about applying computational biology to investigate drug toxicity effects earlier in the drug development process. It is an understatement to say that drug development is very expensive, often costing billions of dollars and years of research. A primary challenge is the determination of long-term drug toxicity side effects. If we can develop and deploy efficient technologies for early prediction of adverse side effects, the costs of drug development can be noticeably reduced, said Bowles.
However, toxicity studies often take place relatively late in the process. During the first year of research, the focus is on identifying and validating target molecules from over 5,000 compounds; toxicity is not studied until much later in development. Liver damage is one of the worst potential side effects of drugs, taken alone or with other medicines. “We need a paradigm shift,” said Bowles, to include toxicity studies earlier in the development process. But animal studies are also time consuming, and they leave many uncertainties about human risk potential. Often, by the time the animal data is in, too much has been invested and it is costly to cancel a compound, so there is an incentive to go on rather than eliminate compounds with riskier profiles.
Biomatica addresses this challenge by replacing liver toxicology studies on live rats with machine-learned models of liver damage that can be run on rat (or human) liver cells grown in culture. They have built models using microarray data from hepatocytes to predict animal and human toxicity. Toxicity does not occur through just one pathway; it is a diffuse problem. But a microarray can capture all the changes going on inside a cell at any point in time. Microarray data collected from rat livers, or from rat or human hepatocytes grown in culture, is used to identify earlier the compounds that should be eliminated. Early-stage testing costs are similar for live rats and microarrays, but at later stages the live-rat costs add up: for a 36-rat study at a later stage, microarray testing comes to about $20,000, while live-rat testing runs as high as $113,400. The early results indicate very good prediction accuracy, said Bowles. The talk generated a lot of interest and was followed by Q&A.
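The approach Bowles described amounts to supervised learning on gene expression profiles: each hepatocyte microarray becomes a feature vector, and the label is whether the compound proved hepatotoxic. Below is a minimal sketch of that pattern; Biomatica's actual models, features, and data are not public, so the data here are random synthetic stand-ins.

```python
# Minimal sketch of a microarray-based hepatotoxicity classifier. The data are
# random synthetic stand-ins for normalized expression values measured from
# compound-treated hepatocyte cultures; real data would carry a genuine signal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_compounds, n_probes = 120, 5000       # one expression profile per compound
X = rng.normal(size=(n_compounds, n_probes))   # synthetic expression matrix
y = rng.integers(0, 2, size=n_compounds)       # 1 = hepatotoxic, 0 = benign

# L1-regularized logistic regression suits the many-probes, few-samples regime
# and yields a sparse set of informative probes (a "toxicity signature").
clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
# With random labels the AUC hovers near 0.5; real predictive data drives it up.
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```

Cross-validated AUC, as computed above, is the kind of metric that would back up a claim of "very good prediction accuracy" on held-out compounds.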
Bits, Bytes and Biology: A new paradigm for designing therapies
Posted by Darshana V. Nadkarni, Ph.D. in Biotech - Medical Device - Life Science - Healthcare on October 4, 2012
Pradeep Fernandes, co-founder and President of Cellworks Group (www.cellworksgroup.com), spoke at a www.bio2devicegroup.org event about a new, de-risked, and innovative approach to developing therapies with an enhanced probability of success.
This new approach, based on integrating biology and computing, enables a paradigm that can dramatically reduce the cost of developing new therapies. Currently about $100B is spent globally each year on pharmaceutical R&D, and the cost of developing each new drug is estimated at between $1B and $4B, depending on how the math is done. While published research and data are exploding, the overload of information makes it increasingly challenging to use that information meaningfully. The pharmaceutical industry is pouring more and more money into R&D with a decreasing probability of success. The problem is that despite technological advancement, the drug development process remains fundamentally unchanged, and drugs are validated very late, in clinical trials. Real understanding of a drug is only possible during clinicals, and even then the underlying mechanism of action is frequently unclear.
Informatics can bridge the gap and improve outcomes, through both consumer-oriented information, which includes monitoring of patients, drugs, and trends, and development-oriented information, which includes genomics, proteomics, biology, and so on. Cellworks Group is focused on development-oriented informatics, which has traditionally aimed at leveraging information technology and software algorithms to manage large data sets, extract information from them, and visualize the data. Cellworks’ engineering model goes further, from analyzing and extracting information to predicting it through abstraction modeling, simulation, and synthesis. This is contrary to the traditional model, where a biologist begins in the lab, applies the drug, and confirms the initial hypothesis only if the expected effect appears, which frequently does not happen. In the Cellworks model, each interaction within the cell is captured in an equation, analyzed, and understood. The result is a predictive computational disease model, observed mathematically at the cellular level, that integrates the insights of thousands of scientists, research data, experimental protocols, and clinical trends. Fernandes shared several ongoing collaborations and validations in oncology, rheumatoid arthritis, and anti-infectives.
Essentially, this is a process for finding innovative new therapies that begins with very explicit assumptions. It is based on a functional representation of biology using mathematics: each interaction is represented by differential equations. It emulates human disease physiology computationally and integrates that with an understanding of biological efficacy and toxicity. The process enables prediction of clinical outcomes as well as novel, non-obvious insights, and it is many times faster than wet-lab approaches. It is about time these new approaches were explored, so that the drug development process gets an overhaul, rather than small, incremental enhancements, to make it more cost-effective.
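To make the differential-equation idea concrete, here is a toy sketch of representing one cellular interaction with mass-action equations and simulating it. The pathway, species, and rate constants are invented for illustration and do not represent Cellworks' actual disease models.

```python
# Toy illustration of simulating cellular interactions with differential
# equations. Hypothetical pathway: enzyme E produces a disease-driving signal S,
# and drug D binds E reversibly, reducing signal output. All rate constants and
# species are invented for illustration, not taken from any real model.
import numpy as np
from scipy.integrate import solve_ivp

k_cat, k_on, k_off, k_deg = 1.0, 5.0, 0.5, 0.2   # hypothetical rate constants

def rhs(t, y):
    E, D, ED, S = y                      # free enzyme, free drug, complex, signal
    bind = k_on * E * D - k_off * ED     # net mass-action drug binding
    dE  = -bind
    dD  = -bind
    dED =  bind
    dS  = k_cat * E - k_deg * S          # only free enzyme produces signal
    return [dE, dD, dED, dS]

y0 = [1.0, 2.0, 0.0, 0.0]                # initial concentrations (arbitrary units)
sol = solve_ivp(rhs, (0.0, 20.0), y0, dense_output=True)

for ti in np.linspace(0.0, 20.0, 5):
    E, D, ED, S = sol.sol(ti)
    print(f"t={ti:5.1f}  free enzyme={E:.3f}  signal={S:.3f}")
```

A whole-cell model of the kind described would couple thousands of such equations, but the simulation loop is the same in spirit: write down each interaction, integrate, and read off the predicted effect of an intervention before any wet-lab work.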