This presentation by Prof. Dr. Lambert Schomaker explores the nuances of Bayesian theory applied to continuous probability density functions (PDFs). It contrasts traditional discrete PDF applications with the complexities of real-valued features and n-dimensional data. The discussion addresses the rise in popularity of Bayesian methods, especially in light of advances in computing power and data availability. It also discusses practical implications for areas such as speech recognition and insurance client evaluation, emphasizing the importance of data volume and sampling for reliable probability estimation.
Bayes and continuous PDFs prof. dr. Lambert Schomaker Kunstmatige Intelligentie / RuG
discrete vs continuous • Bayes' theory is usually introduced on the basis of discrete PDFs (alarm? true/false) • … in a set-theoretic framework • but: numbers along a dimension can also be considered as points in a set: {x ∈ ℝ}
Bayes revisited • P(C|x) = P(x|C) P(C) / P(x), where:
• C is a "class" of observations
• x is an observed scalar feature
• P(C) is the prior probability of finding that class
• P(x) is the prior (marginal) probability of the observable value of x, regardless of class
• P(x|C) is the likelihood: the probability of finding x in case of class C
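As a minimal illustration of the rule above (with made-up numbers, not values from the slides), the posterior follows directly from an assumed prior, likelihood, and evidence:

```python
# Minimal sketch of Bayes' rule P(C|x) = P(x|C) P(C) / P(x)
# with hypothetical numbers for a single class C and observation x.
p_x_given_c = 0.30   # assumed likelihood of observing x under class C
p_c = 0.25           # assumed prior probability of class C
p_x = 0.10           # assumed marginal (prior) probability of the value x

p_c_given_x = p_x_given_c * p_c / p_x
print(f"P(C|x) = {p_c_given_x:.2f}")   # -> 0.75
```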
Bayes & continuous PDFs • P(C|x) = P(x|C) P(C) / P(x) where C is a “class” of observations x is an observed scalar feature • If x is a real number: P(x|C) is the probability density function (PDF) or histogram of feature values observed for class C P(x) is the PDF of x “at all” (all possible classes)
Example: temperature classification. [Figure: class-conditional likelihoods P(x|C), P(x|N), P(x|W), P(x|H) for the classes Cold, Normal, Warm, Hot, together with the overall likelihood P(x) of the x values.]
Bayes: probability "blow up". [Figure: posterior probabilities P(C|x), P(N|x), P(W|x), P(H|x) for the classes Cold, Normal, Warm, Hot, derived from the class-conditional likelihoods P(x|C), P(x|N), P(x|W), P(x|H).]
Even with an irregular PDF shape in P(x|C), the Bayesian output P(C|x) = P(x|C) P(C) / P(x) has a nice plateau.
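A small sketch of this effect (my own illustration, not code from the presentation): estimate P(x|C) and P(x) from histograms of synthetic samples and divide; even though the class-conditional histogram is noisy and irregular, the posterior is comparatively flat where that class dominates.

```python
import numpy as np

# Sketch: histogram-based Bayes for one class C against the remaining
# classes, using assumed synthetic feature values.
rng = np.random.default_rng(0)
x_c = rng.normal(20.0, 2.0, 5000)      # feature values observed for class C
x_rest = rng.normal(30.0, 2.0, 5000)   # feature values for all other classes combined

bins = np.linspace(10, 40, 101)
p_x_given_c, _ = np.histogram(x_c, bins=bins, density=True)
p_x_given_rest, _ = np.histogram(x_rest, bins=bins, density=True)

p_c, p_rest = 0.5, 0.5                               # assumed priors
p_x = p_c * p_x_given_c + p_rest * p_x_given_rest    # evidence P(x) per bin

with np.errstate(divide="ignore", invalid="ignore"):
    p_c_given_x = np.where(p_x > 0, p_c * p_x_given_c / p_x, np.nan)

# Around x = 20 the posterior P(C|x) forms a plateau close to 1,
# even though the estimated histogram P(x|C) itself is irregular.
centers = 0.5 * (bins[:-1] + bins[1:])
print(np.round(p_c_given_x[(centers > 17) & (centers < 23)], 3))
```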
Puzzle • So if Bayes is optimal and can be used for continuous data too, why has it become popular so late, i.e., much later than neural networks?
Why Bayes has become popular so late… • Note: the example was 1-dimensional • A PDF (histogram) with 100 bins for one dimension will cost 10,000 bins for two dimensions, etc. • Ncells = Nbins^ndims
Why Bayes has become popular so late… • Ncells = Nbins^ndims • Yes… but you could use n-dimensional theoretical distributions (Gauss, Weibull, etc.) instead of empirically measured PDFs…
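To make the Ncells = Nbins^ndims growth concrete, a short computation (my own illustration) shows how quickly a joint histogram becomes unfillable:

```python
# Number of histogram cells needed for a joint PDF estimate,
# assuming 100 bins per feature dimension.
n_bins = 100
for n_dims in (1, 2, 3, 5, 10):
    print(n_dims, "dims ->", n_bins ** n_dims, "cells")
# 1 dim -> 100 cells, 2 dims -> 10,000 cells, 10 dims -> 10^20 cells:
# far more cells than any realistic labeled data set can populate.
```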
Why Bayes has become popular so late… • … use theoretical distributions instead of empirically measured PDFs… • still, the dimensionality is a problem: • 20 samples needed to estimate a 1-dim. Gaussian PDF • 400 samples needed to estimate a 2-dim. Gaussian, etc. • massive amounts of labeled data are needed to estimate probabilities reliably!
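The sketch below (my own illustration, not from the slides) shows the same point empirically: with a fixed number of labeled samples, the error of an estimated Gaussian grows with the dimensionality.

```python
import numpy as np

# With a fixed sample size, estimating a Gaussian gets worse as the
# number of dimensions grows (error measured on the covariance matrix).
rng = np.random.default_rng(1)
n_samples = 20  # roughly adequate for 1 dimension, far too few for many

for n_dims in (1, 2, 5, 10):
    true_cov = np.eye(n_dims)
    x = rng.multivariate_normal(np.zeros(n_dims), true_cov, size=n_samples)
    est_cov = np.atleast_2d(np.cov(x, rowvar=False))
    err = np.linalg.norm(est_cov - true_cov)   # Frobenius norm of the error
    print(f"{n_dims:2d} dims: covariance estimation error = {err:.2f}")
```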
Labeled (ground-truthed) data. Example: client evaluation in insurance. Each row is one client, described by seven measured feature values and a class label:

0.1   0.54  0.53  0.874  8.455  0.001  -0.111  risk
0.2   0.59  0.01  0.974  8.40   0.002  -0.315  risk
0.11  0.4   0.3   0.432  7.455  0.013  -0.222  safe
0.2   0.64  0.13  0.774  8.123  0.001  -0.415  risk
0.1   0.17  0.59  0.813  9.451  0.021  -0.319  risk
0.8   0.43  0.55  0.874  8.852  0.011  -0.227  safe
0.1   0.78  0.63  0.870  8.115  0.002  -0.254  risk
…
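Given such a labeled table, one conventional way to apply Bayes without an n-dimensional histogram is a per-class Gaussian model with independent (diagonal) features. The sketch below is my own illustration using only the few rows shown above; with so little data the estimates are of course unreliable, which is exactly the point of the previous slide. The "new client" feature vector is hypothetical.

```python
import numpy as np

# Labeled rows from the slide: seven features plus a "risk"/"safe" label.
data = np.array([
    [0.10, 0.54, 0.53, 0.874, 8.455, 0.001, -0.111],
    [0.20, 0.59, 0.01, 0.974, 8.400, 0.002, -0.315],
    [0.11, 0.40, 0.30, 0.432, 7.455, 0.013, -0.222],
    [0.20, 0.64, 0.13, 0.774, 8.123, 0.001, -0.415],
    [0.10, 0.17, 0.59, 0.813, 9.451, 0.021, -0.319],
    [0.80, 0.43, 0.55, 0.874, 8.852, 0.011, -0.227],
    [0.10, 0.78, 0.63, 0.870, 8.115, 0.002, -0.254],
])
labels = np.array(["risk", "risk", "safe", "risk", "risk", "safe", "risk"])

def log_posteriors(x):
    """Unnormalised log P(C|x) under a per-class diagonal Gaussian model."""
    scores = {}
    for c in ("risk", "safe"):
        rows = data[labels == c]
        prior = len(rows) / len(data)          # P(C) estimated from label counts
        mu = rows.mean(axis=0)
        var = rows.var(axis=0) + 1e-6          # small floor for numerical stability
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores[c] = np.log(prior) + log_lik    # log P(x|C) + log P(C)
    return scores

new_client = np.array([0.15, 0.55, 0.40, 0.85, 8.3, 0.002, -0.30])  # hypothetical
print(log_posteriors(new_client))
```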
Success of speech recognition • massive amounts of data • increased computing power • cheap computer memory • allowed for the use of Bayes in hidden Markov models (HMMs) for speech recognition • similarly (but slower): application of Bayes in script recognition
Global Structure:
• year
• title
• date
• date and number of entry (Rappt)
• redundant lines between paragraphs
• jargon words: Notificatie, Besluit fiat
• imprint with page number
• XML model
Local probabilistic structure: P("Novb 16 is a date" | "sticks out to the left" & "is left of 'Rappt'") ?
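Such local conditional probabilities can, in principle, be estimated by counting in ground-truthed layout data. The sketch below is a hypothetical illustration: the feature names, booleans, and counts are invented for the example and are not taken from the presentation.

```python
# Hypothetical sketch: estimate P(is_date | sticks_out_left & left_of_rappt)
# by counting co-occurrences in labeled layout elements.
# Each element: (sticks_out_left, left_of_rappt, is_date) as booleans.
labeled_elements = [
    (True,  True,  True),
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (True,  True,  False),
    (False, False, False),
]

matching = [e for e in labeled_elements if e[0] and e[1]]
p_date_given_context = sum(e[2] for e in matching) / len(matching)
print(f"P(date | sticks out left & left of 'Rappt') = {p_date_given_context:.2f}")
```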