This presentation explores the differences between pharmaceuticals and pesticides in terms of target interaction, economics, and quantities. It covers screening input quality, lead characteristics, physical properties, and the data-analysis methods needed to identify promising agrochemical leads. Through rigorous filters, library design, and systematic analysis, the aim is to make lead discovery and development more effective.
What have we learnt about using HT screening as a source of agrochemical leads? John Delaney
Pharmaceuticals v Pesticides • Both interact with the same sorts of target – e.g. enzyme, receptor • Different economics • Pharmaceuticals – what price a life or a quality of life? • Pesticides – what price a bushel of wheat? • Quantities also differ somewhat…
The short answer… • The quality of the chemical input to a screen matters • Proper analysis of the screen results matters • Logistics/cycle times matter
What do we mean by input quality? • For the purposes of this talk I won’t cover sample integrity, important though that is • The guiding principle is … “If this compound were to hit on my screen, would I consider it a lead worth working on?” • If the answer is no, why did you screen it in the first place?
What do we look for in a lead (beyond potency) ? • A lead is a hit that is … • Novel • Distinct • Interesting
Novel • Is this compound similar to something I already know a lot about? • There are no prizes for re-discovering a well worked area of chemistry • We look for compounds that are dissimilar to anything in our corporate database – this assumes that we know everything there is to know about our corporate database!
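The novelty test can be made concrete as a nearest-neighbour similarity check against the existing collection. The sketch below is illustrative only, assuming RDKit fingerprints and a Tanimoto cutoff of 0.7; neither the toolkit nor the cutoff comes from the talk.

```python
# Illustrative novelty check (not the actual corporate procedure):
# flag a hit as 'novel' if its nearest neighbour in the existing
# collection falls below a Tanimoto cutoff. The cutoff and the
# fingerprints are assumptions for the sake of the example.
from rdkit import Chem, DataStructs

def is_novel(hit_smiles, collection_smiles, cutoff=0.7):
    hit_fp = Chem.RDKFingerprint(Chem.MolFromSmiles(hit_smiles))
    coll_fps = [Chem.RDKFingerprint(Chem.MolFromSmiles(s)) for s in collection_smiles]
    nearest = max(DataStructs.BulkTanimotoSimilarity(hit_fp, coll_fps))
    return nearest < cutoff  # nothing in-house is very similar, so worth a look

# Example: an indole hit tested against a tiny 'corporate' set
print(is_novel("c1ccc2[nH]ccc2c1", ["c1ccccc1", "CCO", "c1ccncc1"]))
```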
Distinct • Are the compounds in the collection you’re testing different from each other? • A bunch of similar hits might only constitute one lead area • Every slot taken by a close analogue is a slot that could have been used to try a different area of chemistry
Interesting • Easy (and worthwhile) to define ‘uninteresting’ • Non-specific, toxic crap – e.g. organo-mercurics, acid chlorides, nitro-phenols • Compounds with poor physical properties • Harder to define ‘interesting’ – what makes a compound ‘agchem-like’ ?
Interesting = Right physical properties? • Bioavailability – a combination of potency, stability and mobility • All three affected by the physical properties of the molecule • We know that certain combinations of phys props severely compromise mobility • We know that the presence of certain chemical groups can affect stability
Physical properties of agrochemicals • Not so very different from Lipinski’s ‘rule of five’ • MWT between 200 and 500 • clogP < 4 • Basic pKa < 9 (big difference from pharma) • H-bond donors (OH, NH) < 3 • Mobility, like Lipinski’s rules, refers to passive transport only (Colin Tice, Pest Manag Sci 57:3-16 (2001))
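As a purely illustrative sketch (not part of the original talk), the rules above can be coded as a simple pass/fail filter. RDKit is used as a stand-in toolkit and the function name is hypothetical; basic pKa is left out because it would need an external predictor.

```python
# Rough 'agchem-like' property filter using the thresholds quoted above.
# RDKit is an example toolkit; the basic pKa < 9 rule is not checked here
# because RDKit has no built-in pKa model.
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def passes_agchem_filter(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    mwt = Descriptors.MolWt(mol)
    clogp = Crippen.MolLogP(mol)        # calculated logP as a clogP proxy
    donors = Lipinski.NumHDonors(mol)   # OH and NH donors
    return 200 <= mwt <= 500 and clogp < 4 and donors < 3

# Example: a plausible lead-like ester and a nitro-phenol
for smi in ["CCOC(=O)c1ccc(NC(C)=O)cc1", "O=[N+]([O-])c1ccc(O)cc1"]:
    print(smi, passes_agchem_filter(smi))
```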
How do we ensure that our input is good? • Apply rigorous filters to compounds we buy in • Design decent properties directly into our own libraries • Encouraging signs that this is working – more leads from the same number of hits • We can cope with collections offered in a variety of formats – individual compounds, plates or whole collections
Analysis • Analysis of hits traditionally done ‘by hand’ • This becomes difficult as the number of screens and the number of compounds fed through them rise • Automation and standardisation part of the answer
Standardisation • Is each assay a unique case? Really? • Recording and storing data in a standard form greatly eases the task of developing analysis tools • Expect some up-front grief…
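To make ‘a standard form’ concrete, here is a minimal Python sketch of a fixed record for a single screening result. Every field name is a hypothetical example, not the schema actually used.

```python
# Hypothetical standard record for one screening result. A fixed schema
# like this is what lets analysis tools be written once and reused.
from dataclasses import dataclass
from datetime import date

@dataclass
class ScreenResult:
    compound_id: str        # corporate registry identifier
    screen_id: str          # which assay or target
    run_date: date
    raw_value: float        # e.g. % inhibition at the test concentration
    units: str              # keep units explicit, never implied
    concentration_uM: float

example = ScreenResult("CMPD-000123", "enzyme-screen-01", date(2002, 6, 1),
                       raw_value=87.5, units="% inhibition", concentration_uM=10.0)
print(example)
```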
How do we analyse a bunch of hits? • Grouping similar structures together • Pulling relevant data from other sources • Turning raw biology into breakpoints • Flexible display of structures, activities and physical properties
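One way to picture the ‘raw biology into breakpoints’ step is as simple binning of a raw readout into coarse activity classes. The pandas sketch below is illustrative, with made-up thresholds, labels and column names.

```python
# Bin a raw readout (here % inhibition) into coarse activity classes.
# Thresholds and labels are illustrative, not the talk's actual breakpoints.
import pandas as pd

hits = pd.DataFrame({
    "compound_id": ["A-001", "A-002", "A-003", "A-004"],
    "pct_inhibition": [12.0, 55.0, 81.0, 97.0],
})
hits["activity_class"] = pd.cut(
    hits["pct_inhibition"],
    bins=[-float("inf"), 30, 70, 90, float("inf")],
    labels=["inactive", "weak", "moderate", "strong"],
)
print(hits)
```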
Clustering • We tend to use Daylight substructural fingerprints as our molecular descriptor, the Tanimoto coefficient as our measure of similarity, and modified Jarvis-Patrick non-hierarchical cluster analysis to group compounds – since you ask… • Unashamedly chemistry driven! • Groupings tend to chime with chemists’ intuition
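For readers without access to the Daylight toolkit, here is a hedged sketch of the same idea using open-source stand-ins: RDKit path fingerprints in place of Daylight fingerprints, and Butina clustering (another Tanimoto-driven, non-hierarchical method) in place of modified Jarvis-Patrick.

```python
# Group hits by 2D similarity. RDKit path fingerprints and Butina
# clustering stand in for the Daylight fingerprints and modified
# Jarvis-Patrick analysis mentioned above; the 0.4 distance threshold
# is illustrative.
from rdkit import Chem, DataStructs
from rdkit.ML.Cluster import Butina

smiles = ["c1ccccc1O", "c1ccccc1N", "CCCCCCCC", "CCCCCCC", "c1ccc2ccccc2c1"]
fps = [Chem.RDKFingerprint(Chem.MolFromSmiles(s)) for s in smiles]

# Flat lower-triangle distance list (1 - Tanimoto), the form Butina expects.
dists = []
for i in range(1, len(fps)):
    sims = DataStructs.BulkTanimotoSimilarity(fps[i], fps[:i])
    dists.extend(1.0 - s for s in sims)

clusters = Butina.ClusterData(dists, len(fps), distThresh=0.4, isDistData=True)
print(clusters)  # tuples of compound indices, one tuple per cluster
```

The point of the sketch is the shape of the workflow (descriptor, similarity measure, non-hierarchical grouping), not the particular algorithm chosen.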
Excel as a tool for data analysis • Ubiquitous – this is the way our chemists do most of their data analysis • May not be the best tool for doing this kind of work, but… • Its shortcomings can be addressed through programming effort • Take a general purpose tool and make it specific
The D1 batch system • Batch screening of compounds on in-vitro targets • A framework for collating data, analysis and driving analogue acquisition • Excel based – familiar to chemists • Incorporates clustering and data visualisation using AVS • Keeps track of what was done when and why
Cycle times • Cycle times can be surprisingly long • Often leads from the first stage of screening need ‘amplifying’ • Rapid follow-up with analogues is key • We would like library design to be an iterative process – delays in getting results compromise the effectiveness of this • Long cycle times also delay seeing the effects of changes to compound selection procedures
Analogue ordering • Potential for chaos here • Centralise the actual ordering process • ‘Supersearch’ searches a hierarchy of databases and automatically eliminates duplicate compounds • Automatic annotation of database – “why was this compound ordered from Maybridge?” • Ordering done by adding an MFCD number to a spreadsheet
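The ‘Supersearch’ behaviour can be pictured as walking a hierarchy of databases in priority order, skipping anything already found, and recording why each compound was ordered. The sketch below is entirely hypothetical: database names, record layout and the matching predicate are placeholders, and a real system would key on an MFCD or registry number.

```python
# Hypothetical sketch of the 'Supersearch' idea: search databases in
# priority order, eliminate duplicates, and annotate why each compound
# was ordered. All names and structures here are placeholders.
def supersearch(matches, databases):
    """databases: list of (name, {compound_id: record}) in priority order."""
    seen, orders = set(), []
    for db_name, contents in databases:
        for compound_id, record in contents.items():
            if compound_id in seen:
                continue  # duplicate of a compound found higher up the hierarchy
            if matches(record):
                seen.add(compound_id)
                orders.append({"compound_id": compound_id,
                               "source": db_name,
                               "reason": f"analogue search hit in {db_name}"})
    return orders

corporate = ("corporate", {"MFCD0000001": {"smiles": "c1ccccc1O"}})
supplier = ("Maybridge", {"MFCD0000001": {"smiles": "c1ccccc1O"},
                          "MFCD0000002": {"smiles": "c1ccccc1N"}})
# A real predicate would be a substructure or similarity search.
print(supersearch(lambda record: True, [corporate, supplier]))
```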
Conclusions • Good chemical input doesn’t just happen, you’ve got to work at it • Analysis can be made easier and faster – automate where possible, but consider the people doing the analysis • Cycle times are still a worry – some progress with making analogue ordering easier • And remember…