
Presentation Transcript


  1. Photo-realistic Rendering and Global Illumination in Computer Graphics Spring 2012 Monte Carlo Method K. H. Ko School of Mechatronics Gwangju Institute of Science and Technology

  2. Brief History • Comte de Buffon in 1777 • He conducted an experiment in which a needle of length L was thrown at random on a horizontal plane ruled with parallel lines a distance d apart (d > L). • He repeated the experiment many times to estimate the probability P that the needle would intersect one of the lines. • Laplace later suggested that this technique of repeated experimentation could be used to compute an estimated value of π. • The term “Monte Carlo” was coined in the 1940s, at the advent of electronic computing, to describe mathematical techniques that use statistical sampling to simulate phenomena or evaluate the values of functions. • These techniques were originally devised by a group of scientists working on nuclear weapons to simulate neutron transport.

  3. Why are Monte Carlo Techniques Useful? • Overall steps of Monte Carlo techniques • Suppose we must compute the value of the integral of a function with respect to an appropriately defined measure over a domain. • The Monte Carlo approach is to define a random variable whose expected value is the solution to the problem. • Samples of this random variable are then drawn and averaged to compute an estimate of its expected value. • This estimated expected value is an approximation to the solution of the given problem (see the sketch below).
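To make the recipe concrete, here is a minimal Python sketch (not from the original slides) that estimates π in the spirit of Laplace's suggestion: the random variable takes the value 4 when a uniform point in the unit square falls inside the quarter disk, so its expected value is 4 · (π/4) = π.

```python
import random

# The random variable: 4 if a uniform point in the unit square falls
# inside the quarter disk x^2 + y^2 <= 1, else 0. Its expected value
# is 4 * (pi/4) = pi, so the sample average approximates pi.
def estimate_pi(num_samples):
    total = 0.0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            total += 4.0
    return total / num_samples

print(estimate_pi(100000))  # ~3.14, varying from run to run
```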

  4. Why are Monte Carlo Techniques Useful? • Advantages • Conceptual simplicity. • Given an appropriate random variable, the computation consists of sampling the random variable and averaging the estimates obtained from the samples. • Applicability to a wide range of problems: • Problems that are stochastic in nature, such as transport problems in nuclear physics. • Problems that require higher-dimensional integration of complicated functions. • Often Monte Carlo techniques are the only feasible solution.

  5. Why are Monte Carlo Techniques Useful? • Disadvantage • Relatively slow convergence rate: the error decreases as 1/sqrt(N), where N is the number of samples. • Several variance reduction techniques have been proposed to accelerate convergence. • Even so, Monte Carlo methods are typically not used when viable alternatives exist. • BUT!!! There are problems for which Monte Carlo methods are the only feasible solution technique: • Higher-dimensional integrals and integrals with nonsmooth integrands.

  6. Review of Probability Theory • A Monte Carlo process is a sequence of random events. • A numerical outcome can be associated with each possible event. • When a fair die is thrown, the outcome can be any value from 1 to 6. • A random variable describes the possible outcomes of an experiment.

  7. Review of Probability Theory • Discrete Random Variables • When a random variable can take only a finite number of possible values, it is called a discrete random variable. • A probability pi can be associated with any event with outcome xi. • A random variable x_die might take a value from 1 to 6 associated with each of the possible outcomes of the throw of a die. • The probability pi associated with each outcome of a fair die is 1/6.

  8. Review of Probability Theory • Discrete Random Variables • Some properties of the probabilities pi are: • The probability of an event lies between 0 and 1: 0 ≤ pi ≤ 1. • If an outcome never occurs, its probability is 0; if it always occurs, its probability is 1. • The probability that either of two events occurs satisfies: Pr(Event1 or Event2) ≤ Pr(Event1) + Pr(Event2). • Two events are mutually exclusive if and only if the occurrence of one implies that the other cannot possibly occur; in that case: Pr(Event1 or Event2) = Pr(Event1) + Pr(Event2). • A set of all possible events/outcomes of an experiment such that the events are mutually exclusive and collectively exhaustive satisfies the normalization property Σi pi = 1.

  9. Review of Probability Theory • Expected Value • For a discrete random variable with n possible outcomes, the expected value, or mean, of the random variable is: E[x] = Σ_{i=1}^{n} pi xi.

  10. Review of Probability Theory • Variance and Standard Deviation • The variance is a measure of the deviation of the outcomes from the expected value of the random variable: σ² = E[(x − E[x])²] = Σ_{i} pi (xi − E[x])². • The standard deviation σ is the square root of the variance.

  11. Review of Probability Theory • Functions of Random Variables • Consider a function f(x), where x takes values xi with probabilities pi. • Since x is a random variable, f(x) is also a random variable, whose expected value, or mean, is defined as: E[f(x)] = Σ_{i} pi f(xi).

  12. Review of Probability Theory • Functions of Random Variables • The variance of the function f(x) is defined similarly: σ² = E[(f(x) − E[f(x)])²] = Σ_{i} pi (f(xi) − E[f(x)])².

  13. Review of Probability Theory • Continuous Random Variables • Probability Density Function (PDF) • For a real-valued (continuous) random variable x, a probability density function p(x) is defined such that the probability that the variable takes a value in the interval [x, x+dx] equals p(x)dx. • Cumulative Distribution Function (CDF) • The CDF provides a more intuitive definition of probabilities for continuous variables: P(y) = Pr(x ≤ y) = ∫_{−∞}^{y} p(x) dx.

  14. Review of Probability Theory • Continuous Random Variables • The CDF P(y) gives the probability that an event occurs with an outcome whose value is less than or equal to y. • The CDF P(y) is a nondecreasing function. • The CDF P(y) is nonnegative over the domain of the random variable.

  15. Review of Probability Theory • The PDF p(x) has the following properties: • p(x) ≥ 0 for all x in the domain. • ∫ p(x) dx = 1 over the domain. • Pr(a ≤ x ≤ b) = ∫_{a}^{b} p(x) dx = P(b) − P(a).

  16. Review of Probability Theory • Expected Value • Similar to the discrete-valued case, the expected value of a continuous random variable x is given as: E[x] = ∫ x p(x) dx. • Consider some function f(x), where p(x) is the probability density function of the random variable x. • Since f(x) is also a random variable, its expected value is: E[f(x)] = ∫ f(x) p(x) dx.

  17. Review of Probability Theory • Variance and Standard Deviation • For a continuous random variable, the variance is defined analogously: σ² = E[(x − E[x])²] = ∫ (x − E[x])² p(x) dx. • The standard deviation σ is again the square root of the variance.

  18. Review of Probability Theory • Conditional and Marginal Probabilities • Consider a pair of random variables x and y. • For discrete random variables, pij specifies the probability that x takes a value of xi and y takes a value of yj. • Similarly, a joint probability density function p(x,y) is defined for continuous random variables.

  19. Review of Probability Theory • Conditional and Marginal Probabilities • The marginal density function of x is defined as: p(x) = ∫ p(x,y) dy. • The conditional density function p(y|x) is the probability density of y given some x: p(y|x) = p(x,y) / p(x).

  20. Review of Probability Theory • Conditional and Marginal Probabilities • The conditional expectation of a random function g(x,y) is computed as: E[g(x,y) | x] = ∫ g(x,y) p(y|x) dy.

  21. Monte Carlo Integration • Assume that we have some function f(x) defined over the domain x ∈ [a,b]. • We would like to evaluate the integral: I = ∫_{a}^{b} f(x) dx. • For one-dimensional integration, Monte Carlo is typically not used, since deterministic quadrature rules converge faster there.

  22. Monte Carlo Integration • Weighted Sum of Random Variables • Consider a function G that is the weighted sum of N random variables g(x1), …, g(xN). • Each of the xi has the same probability density function p(x). • The xi are independent identically distributed variables. • Letting g(xj) denote the random variable obtained by evaluating g at the sample xj: G = Σ_{j=1}^{N} wj g(xj).

  23. Monte Carlo Integration • Weighted Sum of Random Variables • The linearity property of expected values holds: E[Σ_{j=1}^{N} wj g(xj)] = Σ_{j=1}^{N} wj E[g(xj)]. • Consider the case where all the weights wj are the same and sum to 1; when N functions are added together, wj = 1/N: G = (1/N) Σ_{j=1}^{N} g(xj).

  24. Monte Carlo Integration • Weighted Sum of Random Variables • The expected value of G is: E[G] = E[(1/N) Σ_{j=1}^{N} g(xj)] = (1/N) Σ_{j=1}^{N} E[g(x)] = E[g(x)]. • The expected value of G is the same as the expected value of g(x). • G can therefore be used to estimate the expected value of g(x). • G is called an estimator of the expected value of the function g(x).

  25. Monte Carlo Integration • Weighted Sum of Random Variables • The variance of G is: Var[G] = Var[(1/N) Σ_{j=1}^{N} g(xj)]. • Variance, in general, satisfies: Var[x + y] = Var[x] + Var[y] + 2 Cov[x,y], with the covariance Cov[x,y] given as: Cov[x,y] = E[xy] − E[x]E[y]. • For independent random variables, Cov[x,y] = 0.

  26. Monte Carlo Integration • Weighted Sum of Random Variables • The following property holds for any constant a: Var[a x] = a² Var[x]. • Using the fact that the xi in G are independent identically distributed variables, the variance of G is: Var[G] = (1/N²) Σ_{j=1}^{N} Var[g(xj)].

  27. Monte Carlo Integration • Weighted Sum of Random Variables • So: Var[G] = (1/N²) · N · Var[g(x)] = Var[g(x)] / N.

  28. Monte Carlo Integration • Weighted Sum of Random Variables • As N increases, the variance of G decreases as 1/N, making G an increasingly good estimator of E[g(x)]. • The standard deviation σ decreases as 1/sqrt(N).

  29. Monte Carlo Integration • Estimator • The Monte Carlo approach to computing the integral is to average N samples to estimate its value. • The samples xi are selected randomly over the domain of the integral with probability density function p(x). • The estimator is denoted <I>: <I> = (1/N) Σ_{i=1}^{N} f(xi) / p(xi).

  30. Monte Carlo Integration • Estimator • The expected value of the estimator is computed as follows: E[<I>] = (1/N) Σ_{i=1}^{N} E[f(xi)/p(xi)] = (1/N) · N · ∫ (f(x)/p(x)) p(x) dx = ∫ f(x) dx = I.

  31. Monte Carlo Integration • Estimator • The variance of this estimator is: Var[<I>] = (1/N) ∫ (f(x)/p(x) − I)² p(x) dx. • As N increases, the variance decreases as 1/N. • The error in the estimator is proportional to the standard deviation σ, which decreases as 1/sqrt(N). • One problem with Monte Carlo is the slow convergence of the estimator to the right solution: four times more samples are required to halve the error of the computation.

  32. Monte Carlo Integration • Example of Simple Monte Carlo Integration (a sketch follows below).
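As an illustration, here is a minimal Python sketch of simple Monte Carlo integration (the integrand f(x) = x² on [0,1] and uniform sampling are assumptions, not the slide's original example). It also illustrates the convergence claim from the previous slide: quadrupling the sample count roughly halves the typical error.

```python
import random

def mc_integrate(f, a, b, num_samples):
    # Uniform sampling on [a, b] means p(x) = 1/(b - a), so each
    # sample contributes f(x)/p(x) = f(x) * (b - a) to the average.
    total = 0.0
    for _ in range(num_samples):
        x = a + (b - a) * random.random()
        total += f(x)
    return (b - a) * total / num_samples

# Integrate f(x) = x^2 over [0, 1]; the exact value is 1/3.
exact = 1.0 / 3.0
for n in (1000, 4000):  # 4x the samples should roughly halve the error
    estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, n)
    print(n, estimate, abs(estimate - exact))
```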

  33. Monte Carlo Integration • Bias • When the expected value of the estimator is exactly the value of the integral I, the estimator is said to be unbiased. • An estimator that does not satisfy this property is said to be biased. • The difference between the expected value of the estimator and the actual value of the integral is called bias: B[<I>] = E[<I>] – I. • The total error on the estimate is typically represented as the sum of the standard deviation and the bias.

  34. Monte Carlo Integration • Bias • A biased estimator is called consistent if the bias vanishes as the number of samples increases: lim_{N→∞} B[<I>] = 0.

  35. Monte Carlo Integration • Accuracy • Two theorems explain how the error of the Monte Carlo estimator decreases as the number of samples increases. • These error bounds are probabilistic in nature. • Chebyshev's Inequality: Pr[|x − E[x]| ≥ sqrt(σ²/δ)] ≤ δ. • That is, the probability that a sample deviates from the expected value by more than sqrt(σ²/δ), where δ is an arbitrary positive number, is smaller than δ.

  36. Monte Carlo Integration • Accuracy • Assuming an estimator that averages N samples and has a well-defined variance σ², the variance of the estimator is σ²/N. • Substituting into Chebyshev's inequality: Pr[|<I> − E[<I>]| ≥ sqrt(σ²/(Nδ))] ≤ δ, so the deviation bound shrinks as N grows.

  37. Monte Carlo Integration • Accuracy • The Central Limit Theorem gives an even stronger statement about the accuracy of the estimator. • As N → ∞, the Central Limit Theorem states that the values of the estimator follow a normal distribution. • Therefore, as N → ∞, the computed estimate lies in an increasingly narrow region around the expected value of the integral with increasingly high probability. • The theorem only applies when N is large enough, and how large N should be is not clear.

  38. Monte Carlo Integration • Estimating the Variance • The variance of the Monte Carlo estimator can itself be estimated from the same N samples: σ² ≈ (1/N) Σ_{i=1}^{N} (f(xi)/p(xi))² − ( (1/N) Σ_{i=1}^{N} f(xi)/p(xi) )², so the variance of <I> is approximately σ²/N (see the sketch below).
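A minimal Python sketch of this sample-based variance estimate (function and variable names are illustrative; uniform sampling of f(x) = x² on [0,1] is an assumption):

```python
import math
import random

def mc_estimate_with_error(f, p, draw_sample, num_samples):
    # Accumulate the weights w_i = f(x_i)/p(x_i) and their squares.
    total = 0.0
    total_sq = 0.0
    for _ in range(num_samples):
        x = draw_sample()
        w = f(x) / p(x)
        total += w
        total_sq += w * w
    mean = total / num_samples
    # Sample variance of a single weight (Bessel-corrected, clamped
    # against tiny negative values from floating-point rounding).
    var_single = max(0.0, (total_sq / num_samples - mean * mean)
                     * num_samples / (num_samples - 1))
    # The estimator averages N i.i.d. weights, so its variance is var/N.
    return mean, math.sqrt(var_single / num_samples)

# Example: integrate f(x) = x^2 on [0, 1] with uniform p(x) = 1.
estimate, std_error = mc_estimate_with_error(
    lambda x: x * x, lambda x: 1.0, random.random, 10000)
print(estimate, "+/-", std_error)  # estimate near 1/3
```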

  39. Monte Carlo Integration • Deterministic Quadrature versus Monte Carlo • A deterministic quadrature rule for computing a one-dimensional integral could be to compute the sum of the areas of small regions over the domain. • Extending these deterministic quadrature rules to a d-dimensional integral would require N^d samples.

  40. Monte Carlo Integration • Multidimensional Monte Carlo Integration • The Monte Carlo integration technique extends to multiple dimensions in a straightforward manner: the integral I = ∫∫ f(x,y) dx dy is estimated as <I> = (1/N) Σ_{i=1}^{N} f(xi, yi) / p(xi, yi).

  41. Monte Carlo Integration • Multidimensional Monte Carlo Integration • One of the main strengths of Monte Carlo integration is that it extends seamlessly to multiple dimensions. • Monte Carlo techniques permit an arbitrary choice of N, as opposed to the N^d samples required by deterministic quadrature techniques. • Example: integration over a hemisphere (a sketch follows below).
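A minimal Python sketch of the hemisphere example under assumed choices: the integrand is cos θ, directions are sampled uniformly over the hemisphere with p(ω) = 1/(2π), and the exact value of the integral is π.

```python
import math
import random

# Estimate the hemispherical integral of cos(theta) over solid angle;
# the exact value is pi. With uniform hemisphere sampling the pdf is
# p(w) = 1/(2*pi), and cos(theta) is itself uniform on [0, 1]; the
# integrand does not depend on phi, so phi need not be generated.
def integrate_cosine_over_hemisphere(num_samples):
    total = 0.0
    for _ in range(num_samples):
        cos_theta = random.random()
        total += cos_theta / (1.0 / (2.0 * math.pi))  # f(w) / p(w)
    return total / num_samples

print(integrate_cosine_over_hemisphere(100000))  # ~3.1416
```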

  42. Monte Carlo Integration • Sampling Random Variables • Monte Carlo techniques require drawing samples from a probability density p(x). • Samples must be generated so that their distribution matches p(x). • Three common methods: • Inverse Cumulative Distribution Function • Rejection Sampling • Look-Up Table

  43. Monte Carlo Integration • Inverse Cumulative Distribution Function • Discrete Random Variables • Given a set of probabilities pi, we want to pick xi with probability pi. • The discrete cumulative distribution function (CDF) corresponding to the pi is: F_k = Σ_{i=1}^{k} pi, with F_0 = 0.

  44. Monte Carlo Integration • Inverse Cumulative Distribution Function • Discrete Random Variables • The selection of samples is done as follows: • Compute a sample u that is uniformly distributed over the domain [0,1). • Output the k that satisfies the property: F_{k−1} ≤ u < F_k.

  45. Monte Carlo Integration • Inverse Cumulative Distribution Function • Discrete Random Variables • For a uniform PDF, Pr(a ≤ u ≤ b) = b − a. • The probability that the value of u lies between F_{k−1} and F_k is F_k − F_{k−1} = p_k. • But this is exactly the probability that k is selected. • Therefore, k is selected with probability p_k (see the sketch below).
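A minimal Python sketch of discrete inverse-CDF sampling (the loaded-die probabilities are illustrative):

```python
import bisect
import random

def build_cdf(probabilities):
    """Running sums F_k = p_1 + ... + p_k of the outcome probabilities."""
    cdf, running = [], 0.0
    for p in probabilities:
        running += p
        cdf.append(running)
    return cdf

def sample_discrete(cdf):
    """Draw u uniform in [0,1) and return the k with F_{k-1} <= u < F_k,
    so outcome k is selected with probability p_k."""
    return bisect.bisect_right(cdf, random.random())

# Example: a loaded die (probabilities are illustrative).
cdf = build_cdf([0.1, 0.1, 0.2, 0.2, 0.2, 0.2])
counts = [0] * 6
for _ in range(100000):
    counts[sample_discrete(cdf)] += 1
print([c / 100000 for c in counts])  # should approach the probabilities
```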

  46. Monte Carlo Integration • Inverse Cumulative Distribution Function • Continuous Random Variables • A sample can be generated according to a given density p(x) by applying the inverse cumulative distribution function F^{-1} of p(x) to a uniformly generated random variable u over the interval [0,1). • The resulting sample is F^{-1}(u). • This method requires the ability to compute and analytically invert the cumulative distribution function (see the sketch below).
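A minimal Python sketch of inversion sampling for an assumed density p(x) = 2x on [0,1], whose CDF F(x) = x² inverts analytically to F^{-1}(u) = sqrt(u):

```python
import math
import random

# Inversion sampling for the (assumed, illustrative) density
# p(x) = 2x on [0, 1]: the CDF is F(x) = x^2, so F^{-1}(u) = sqrt(u).
def sample_linear_density():
    return math.sqrt(random.random())

# Quick check: the mean of p(x) = 2x is E[x] = 2/3.
samples = [sample_linear_density() for _ in range(100000)]
print(sum(samples) / len(samples))  # ~0.6667
```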

  47. Monte Carlo Integration • Rejection Sampling • It is often not possible to derive an analytical formula for the inverse of the cumulative distribution function. • Rejection sampling is an alternative. • In rejection sampling, samples are tentatively proposed and then tested to determine whether they are accepted or rejected. • The method raises the dimension of the function being sampled by one and then uniformly samples a bounding box that contains the entire PDF. • This sampling technique yields samples with the appropriate distribution.

  48. Monte Carlo Integration • Rejection Sampling • For a one-dimensional PDF: • Let the maximum value of p over the domain [a,b] to be sampled be M. • Rejection sampling raises the dimension of the function by one, creating a two-dimensional function over [a,b] × [0,M]. • This function is sampled uniformly to compute samples (x,y). • All samples (x,y) with p(x) < y are rejected; all other samples are accepted. • The distribution of the accepted samples is exactly the PDF p(x) we want to sample (see the sketch below).
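A minimal Python sketch of rejection sampling for the same assumed density p(x) = 2x on [0,1], whose maximum over the domain is M = 2:

```python
import random

# Propose (x, y) uniformly in the box [0, 1] x [0, M] with M = 2,
# and accept x whenever y < p(x) = 2x. Accepted samples follow p(x).
def sample_by_rejection():
    while True:
        x = random.random()
        y = 2.0 * random.random()  # uniform in [0, M]
        if y < 2.0 * x:            # accept when the point is under p
            return x

# Quick check: the mean of p(x) = 2x is 2/3. The area under p is 1
# and the box area is 2, so about half of all proposals are accepted.
samples = [sample_by_rejection() for _ in range(100000)]
print(sum(samples) / len(samples))  # ~0.6667
```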

  49. Monte Carlo Integration • Rejection Sampling • (Figure: uniform samples in the box [a,b] × [0,M]; points under the curve p(x) are accepted, points above it are rejected.)

  50. Monte Carlo Integration • Rejection Sampling • For a one-dimensional PDF: • One criticism of rejection sampling is that rejecting samples is wasteful. • The efficiency of the technique is proportional to the probability of accepting a proposed sample. • This probability is equal to the ratio of the area under the function to the area of the bounding box. • If this ratio is small, many samples are rejected.
