This article explores the challenges posed by uncertainty and noise in approximate modelling. It surveys the main sources of uncertainty, including inputs, external factors, and model outputs, along with problems of validation and descriptor choice. It then reviews robust methods for probabilistic modelling and for analysing sensitivity to noise, and proposes promising approaches such as consensus among multiple models and non-dimensional transforms. Finally, it stresses the need for better data, more accurate models, and improved computational methods to strengthen empirical results.
State of the Art
• Sources of uncertainty
  • Uncertainty in inputs
  • Uncertainty in external factors
  • Uncertainty in model output
  • Uncertainty in constraints
• Sources of noise in models
  • Experimental noise
  • Lack of coverage of models
  • Inaccurate/incomplete validation
  • Choice/availability of descriptors
• How do we deal with this?
  • Probabilistic modelling
  • Robustness techniques – sensitivity to noise (see the sketch after this list)
  • Normal distributions
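As a concrete illustration of a sensitivity-to-noise analysis, the minimal Python sketch below propagates assumed Gaussian input noise through a placeholder model by Monte Carlo sampling. The model f, the nominal input, and the noise level sigma are all illustrative assumptions, not part of the original slides.

```python
# Sketch: Monte Carlo sensitivity of a model's output to Gaussian
# input noise. Model f and noise level sigma are illustrative only.
import numpy as np

def f(x):
    # Placeholder empirical model: any trained predictor could go here.
    return 2.0 * x[0] + 0.5 * x[1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([1.0, 3.0])   # nominal input
sigma = 0.1                 # assumed input noise (standard deviation)

# Propagate input noise through the model and summarise the spread.
samples = x0 + sigma * rng.standard_normal((10_000, 2))
outputs = np.apply_along_axis(f, 1, samples)
print(f"mean output: {outputs.mean():.3f}, std: {outputs.std():.3f}")
```

The output standard deviation gives a first, rough measure of how strongly the model amplifies input noise around a given operating point.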
Problems
• Don't fully understand the form of the probability distributions
• Prior distributions
  • No data!
• Descriptors typically have low information content (illustrated below)
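One crude way to quantify "low information content" is the Shannon entropy of a descriptor's empirical distribution. The sketch below is a hypothetical illustration, with invented descriptor values; it is not a method proposed in the slides.

```python
# Sketch: Shannon entropy as a rough measure of a descriptor's
# information content. Descriptor values here are hypothetical.
import numpy as np

def entropy_bits(values):
    # Entropy of the empirical distribution of a discrete descriptor.
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A near-constant binary descriptor carries little information...
print(entropy_bits([0, 0, 0, 0, 0, 0, 0, 1]))  # ~0.54 bits
# ...compared with a balanced one.
print(entropy_bits([0, 1, 0, 1, 0, 1, 0, 1]))  # 1.0 bit
```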
Promising Approaches
• Multiple models based on different approaches – consensus (see the sketch after this list)
  • But need multiple sets of training data
  • Global vs local models
• Non-dimensional transforms (Buckingham π theorem) to reduce noise in input data
  • E.g. pKi vs Ki
  • But are there other approaches?
• Distribution fitting to data (when/if available)
• Better models: accuracy and transferability
  • E.g. quantum mechanical descriptors
  • Capture the underlying physical model
  • Estimate the inaccuracy of current models
• More data
  • Directly comparable data – where from?
• Use computationally expensive calculations as input to empirical methods – but accuracy is still limited
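The sketch below illustrates two of the points above under stated assumptions: the pKi = -log10(Ki in mol/L) transform applied to hypothetical Ki values, and a simple consensus taken as the mean of predictions from stand-in models. All numeric values and model names are invented for illustration; in practice each model would be trained independently, as the slide notes.

```python
# Sketch: log-scale transform of Ki plus a consensus over several
# independently built models. All data and models are stand-ins.
import numpy as np

def to_pki(ki_molar):
    # pKi = -log10(Ki): compresses the large dynamic range of Ki and
    # tends to make experimental noise more uniform across the scale.
    return -np.log10(ki_molar)

ki = np.array([1e-9, 5e-8, 2e-6])   # hypothetical Ki values (M)
pki = to_pki(ki)                    # -> [9.0, 7.3, 5.7]

# Consensus: average the predictions of models built with
# different approaches (here just hard-coded example outputs).
predictions = {
    "model_a": np.array([8.9, 7.1, 5.9]),
    "model_b": np.array([9.2, 7.4, 5.5]),
    "model_c": np.array([8.8, 7.5, 5.8]),
}
consensus = np.mean(list(predictions.values()), axis=0)
spread = np.std(list(predictions.values()), axis=0)
print(consensus, spread)
```

The per-compound spread across models gives a rough, data-free indication of prediction uncertainty, tying back to the "uncertainty in model output" item in the State of the Art list.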