Bayes, Price and Laplace: what did they say about the brain?
Tea – April 16, 2013

Presentation Transcript


  1. Bayes, Price and Laplace: what did they say about the brain? Tea – April 16, 2013

  2. The question for the day: Who was the first to say that people reason probabilistically? The candidates: Bayes, Price and Laplace

  3. Philosophical Transactions of the Royal Society of London, 53:370–418 (1763). Communicated by Richard Price after Bayes's death. Intro: Price. Math: Bayes. Interpretation: Price.

  4. What did Bayes do? It’s really hard to tell. “If there be two subsequent events be determined every day, and each day the probability of the 2d is b/N and the probability of both P/N, and I am to receive N if both of the events happen the 1st day on which the 2d does; I say, according to these conditions, the probability of my obtaining N is P/b.”

  5. What did Bayes do? My translation: p(x,y) = P/N and p(y) = b/N, so p(x|y) = (P/N) / (b/N) = P/b, which rearranges to p(x,y) = p(x|y) p(y). Bayes' theorem!

  6. What did Bayes do? It’s really hard to tell. “If there be two subsequent events be determined every day, and each day the probability of the 2d is b/N [this is p(y)] and the probability of both P/N [p(x,y)], and I am to receive N if both of the events happen the 1st day on which the 2d does; I say, according to these conditions, the probability of my obtaining N is P/b [p(x|y)].”
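Bayes's wager is easy to check numerically. A minimal sketch (the numbers N, b, P are illustrative choices of mine, with the chances nested so that "both" implies "the 2d"): play days until the 2nd event occurs, win N if the 1st event also happened that day, and compare the win rate with P/b.

    import random

    random.seed(0)
    N, b, P = 10, 5, 2   # of N equally likely chances each day: the 2d event occupies b, both events P

    def wager():
        """Play days until the 2d event happens; 'win' if both events happened on that day."""
        while True:
            r = random.randrange(N)   # one day's outcome, uniform over the N chances
            if r < b:                 # the 2d event happened (chances 0..b-1)
                return r < P          # both happened (chances 0..P-1, a subset of the b)

    trials = 100_000
    wins = sum(wager() for _ in range(trials))
    print(wins / trials, "vs P/b =", P / b)   # win rate ≈ 0.4 = P/b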

  7. Bayes also did latent variable modelling. He ultimately asked the question: If you observe p heads out of p+q coin flips, what’s the probability that the true probability of heads is between a and b? But he didn’t ask it that way.

  8. Instead, he considered the following problem (the slide shows a unit square with a vertical line at horizontal position x, p balls to its left and q to its right). Throw a ball; assume it has equal probability of landing anywhere in the square (flat prior). You don't observe its horizontal position x, but you want to infer it. Throw lots more balls; somebody tells you that p of them landed to the left of the first ball and q to the right. Compute p(a < x < b), the probability that the first ball landed in any particular interval on the x-axis.
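The problem is easy to simulate by brute force, which makes the latent-variable reading vivid. A minimal sketch (the counts p, q and the interval (a, b) are illustrative choices of mine): throw the first ball with a flat prior, throw p+q more, keep only the runs whose left/right counts match the report, and tally how often the first ball sits in (a, b).

    import random

    random.seed(0)
    p, q = 6, 4          # reported: p balls left of the first ball, q to its right
    a, b = 0.4, 0.8      # interval we ask about

    kept = hits = 0
    for _ in range(500_000):
        x = random.random()                                # first ball: uniform on [0, 1]
        left = sum(random.random() < x for _ in range(p + q))
        if left == p:                                      # condition on the reported counts
            kept += 1
            hits += a < x < b
    print(hits / kept)   # ≈ 0.85: the Beta(p+1, q+1) posterior mass on (a, b)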

  9. p(a < x < b) = (1/Z) ∫_a^b x^p (1 − x)^q dx

He even got the normalizer right:

1/Z = (p+q+1) × [coefficient of y^p z^q in the expansion of (y+z)^(p+q)]
    = (p+q+1) (p+q)! / (p! q!)
    = (p+q+1)! / (p! q!)

I don't think Bayes knew this, but I'm not sure.
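The closed form is easy to check against numerical quadrature (a sketch; modern tools standing in for Bayes's coefficient-counting argument, with the same illustrative p and q as above):

    from math import factorial
    from scipy.integrate import quad

    p, q = 6, 4
    Z_closed = factorial(p) * factorial(q) / factorial(p + q + 1)   # p! q! / (p+q+1)!
    Z_numeric, _ = quad(lambda x: x**p * (1 - x)**q, 0, 1)          # integral of x^p (1-x)^q over [0, 1]
    print(Z_closed, Z_numeric)   # both ≈ 1/2310 ≈ 0.000433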

  10. p(a < x < b) = (1/Z) ∫_a^b x^p (1 − x)^q dx
      = (1/Z) ∫_a^b x^p (1 − qx + q(q−1)x²/2 − …) dx
      = (1/Z) [ x^(p+1)/(p+1) − q·x^(p+2)/(p+2) + q(q−1)·x^(p+3)/(2(p+3)) − … ] evaluated from a to b
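Bayes's term-by-term integration can be reproduced directly; for integer q the binomial series terminates, so the antiderivative below is exact (a sketch with my variable names, reusing the p, q, a, b from the simulation above):

    from math import comb, factorial

    p, q = 6, 4
    a, b = 0.4, 0.8
    Z = factorial(p) * factorial(q) / factorial(p + q + 1)

    def F(x):
        # antiderivative of x^p (1-x)^q: expand (1-x)^q = sum_k C(q,k)(-x)^k, integrate each power
        return sum((-1)**k * comb(q, k) * x**(p + k + 1) / (p + k + 1) for k in range(q + 1))

    print((F(b) - F(a)) / Z)   # ≈ 0.85, matching the Monte Carlo estimate above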

  11. p(a < x < b) = (1/Z) ∫_a^b x^p (1 − x)^q dx. He also worked out an approximation for large p and q, with a = p/(p+q) − Δ and b = p/(p+q) + Δ. But I couldn't understand it.
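Whatever Bayes's approximation was, the modern stand-in is Gaussian: for large p and q, the Beta(p+1, q+1) posterior is roughly Normal with mean p/(p+q) and variance pq/(p+q)³. A sketch under that assumption (the numbers and the half-width Δ are mine), not a reconstruction of Bayes's argument:

    from scipy.stats import beta, norm

    p, q = 600, 400
    m = p / (p + q)                       # approximate posterior mean
    s = (p * q / (p + q) ** 3) ** 0.5     # approximate posterior standard deviation
    delta = 0.02                          # half-width: a = m - delta, b = m + delta

    exact  = beta.cdf(m + delta, p + 1, q + 1) - beta.cdf(m - delta, p + 1, q + 1)
    approx = norm.cdf(delta / s) - norm.cdf(-delta / s)
    print(exact, approx)                  # both ≈ 0.80 for these numbers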

  12. But he said nothing about the brain.

  13. Price, on the other hand, did.

  14. As an aside, he also appears to have anticipated Bayesian non-parametrics: Suppose a solid or die of whose number of sides and constitution we know nothing; and that we are to judge of these from experiments made in throwing it. In this case, it should be observed, that it would be in the highest degree improbable that the solid should, in the first trial, turn any one side which could be assigned before hand; because it would be known that some side must turn, and that there was an infinity of other sides, or sides otherwise marked, which it was equally likely that it should turn.

  15. …flat priors and hierarchical models: It should be carefully remembered that these deductions suppose a previous total ignorance of nature. After having observed for some time the course of events it would be found that the operations of nature are in general regular, and that the powers and laws which prevail in it are stable and permanent. The consideration of this will cause one or a few experiments often to produce a much stronger expectation of success in further experiments than would otherwise have been reasonable... the prior is no longer flat!

  16. …and maybe even approximate inference: ... we seem to know little more than that it [Bayesian analysis] does sometimes in fact convince us, and at other times not; and that, as it is the means of acquainting us with many truths, of which otherwise we must have been ignorant; so it is, in all probability, the source of many errors, which perhaps might in some measure be avoided, if the force that this sort of reasoning ought to have with us were more distinctly and clearly understood.

  17. What Price said about behavior (1763): Let us imagine to ourselves the case of a person just brought forth into this world. The sun would, probably, be the first object that would engage his attention; but after losing it the first night he would be entirely ignorant whether he should ever see it again. But let him see a second appearance or one return of the sun, and an expectation would be raised in him of a second return, and he might know that there was an odds of 3 to 1 [flat prior] for some probability of this. But no finite number of returns would be sufficient to produce absolute or physical certainty.
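One common reading of Price's "odds of 3 to 1" (my reconstruction, not spelled out on the slide): with a flat prior on the sun's chance θ of returning, one observed return updates the prior to the posterior Beta(2, 1), with density 2θ on [0, 1]; the probability that θ > 1/2, i.e. that a return is more likely than not, is ∫ from 1/2 to 1 of 2θ dθ = 3/4, which is odds of 3 to 1.

    from scipy.stats import beta

    posterior = beta(2, 1)                        # flat prior Beta(1,1) + one observed return
    p = 1 - posterior.cdf(0.5)                    # P(theta > 1/2) = 1 - (1/2)^2 = 3/4
    print(p, "-> odds of", p / (1 - p), "to 1")   # 0.75 -> odds of 3.0 to 1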

  18. And so did Laplace: One may even say, strictly speaking, that almost all our knowledge is only probable; and in the small number of things that we are able to know with certainty, the principal means of arriving at the truth – induction and analogy – are based on probabilities. Théorie analytique des probabilités (1825).
