
Aspects of Bayesian Inference and Statistical Disclosure Control in Python
Duncan Smith, Confidentiality and Privacy Group, CCSR, University of Manchester

Overview: Bayesian Belief Networks (BBNs) and probabilistic inference; Statistical Disclosure Control (SDC)


Presentation Transcript


  1. Aspects of Bayesian Inference and Statistical Disclosure Control in Python. Duncan Smith, Confidentiality and Privacy Group, CCSR, University of Manchester

  2. Introduction • Bayesian Belief Networks (BBNs): probabilistic inference • Statistical Disclosure Control (SDC): deterministic inference (attribution)

  3. Bayesian Belief Networks • Decision-making in complex domains • Hard and soft evidence • Correlated variables • Many variables

  4. Bayes’ Rule • A prior belief and evidence are combined to give a posterior belief: P(A | B) = P(B | A) P(A) / P(B)

  5. Venn Diagram • [Figure: a Venn diagram of events A and B, with regions ‘A only’, ‘B only’, ‘both A and B’, and ‘neither A nor B’]

  6. Inference 1. Prior probability table P(A) 2. Conditional probability table P(B|A)

  7. 3. Produce joint probability table by multiplication 4. Condition on evidence 5. Normalise table probabilities to sum to 1

  8. def Bayes(prior, conditional, obs_level):
         """Simple Bayes for two categorical variables.

         'prior' is a Python list. 'conditional' is a list of lists
         ('column' variable conditional on 'row' variable).
         'obs_level' is the index of the observed level of the row
         variable"""
         levels = len(prior)
         # condition on observed level
         result = conditional[obs_level]
         # multiply values by prior probabilities
         result = [result[i] * prior[i] for i in range(levels)]
         # get marginal probability of observed level
         marg_prob = sum(result)
         # normalise the current values to sum to 1
         posterior = [value / marg_prob for value in result]
         return posterior

     Note: conditioning can be carried out before calculating the joint probabilities, reducing the cost of inference.

  9. >>> A = [0.7, 0.3]
     >>> B_given_A = [[3.0/7, 2.0/3], [4.0/7, 1.0/3]]
     >>> Bayes(A, B_given_A, 0)
     [0.59999999999999998, 0.39999999999999997]

     • The posterior distribution can be used as a new prior and combined with evidence from further observed variables (a sketch follows below)
     • Although computationally efficient, this ‘naïve’ approach implies assumptions that can lead to problems
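A sketch of that sequential updating, reusing the Bayes function above; the table C_given_A is a hypothetical illustration, not from the original slides (and chaining like this implicitly makes the naive Bayes assumption discussed on the next slides):

    >>> posterior = Bayes(A, B_given_A, 0)      # posterior after observing B
    >>> C_given_A = [[0.5, 0.25], [0.5, 0.75]]  # hypothetical P(C | A) table
    >>> Bayes(posterior, C_given_A, 1)          # posterior re-used as the prior
    # result is approximately [0.5, 0.5]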

  10. Naive Bayes

  11. A ‘correct’ factorisation

  12. Conditional independence • The Naive Bayes example assumes that the observed variables are conditionally independent given the variable of interest, i.e. P(B, C | A) = P(B | A) P(C | A) • But if the assumption is valid, the calculation is easier and fewer probabilities need to be specified

  13. The conditional independence implies that if A is observed, then evidence on B is irrelevant in calculating the posterior of C

  14. A Bayesian Belief Network • R and S are independent until H is observed

  15. A Markov Graph • The conditional independence structure is found by marrying parents with common children
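A minimal sketch of ‘marrying parents’, assuming the network of the previous slide (H with parents R and S, W with parent R) and a plain dict-of-parents representation; the function and representation are illustrative, not the author's code:

    def moralise(parents):
        """Return the undirected edges of the moral (Markov) graph.
        'parents' maps each node to a list of its parent nodes."""
        edges = set()
        for child, pars in parents.items():
            # keep the original arcs, now undirected
            edges.update(frozenset((p, child)) for p in pars)
            # 'marry' each pair of parents sharing the child
            for i, p in enumerate(pars):
                for q in pars[i + 1:]:
                    edges.add(frozenset((p, q)))
        return edges

    >>> moralise({'R': [], 'S': [], 'W': ['R'], 'H': ['R', 'S']})
    # edges R-W, R-H, S-H, plus the 'marriage' R-S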

  16. Factoring • The following factorisation is implied: P(R, S, W, H) = P(R) P(S) P(W | R) P(H | R, S) • So P(S) can be calculated as Σ_{R,W,H} P(R) P(S) P(W | R) P(H | R, S) (although there is little point, yet)

  17. If H and W are observed to be in states h and w, then the posterior of S can be expressed as follows (where ε denotes ‘the evidence’): P(S | ε) ∝ P(S) Σ_R P(R) P(W = w | R) P(H = h | R, S)

  18. Graph Triangulation

  19. Belief Propagation • Message passing in a Clique Tree

  20. Message passing in a Directed Junction Tree

  21. A Typical BBN

  22. Belief Network Summary • Inference requires a decomposable graph • Efficient inference requires a good decomposition • Inference involves evidence instantiation, table combination and variable marginalisation

  23. Statistical Disclosure Control • Releases of small area population (census) data • Attribution occurs when a data intruder can make inferences (with probability 1) about a member of the population

  24. Negative Attribution - An individual who is an accountant does not work for Department C • Positive Attribution - An individual who works in Department C is a lawyer

  25. Release of the full table is not safe from an attribute disclosure perspective (it contains a zero) • Each of the two marginal tables is safe (neither contains a zero) • Is the release of the two marginal tables ‘jointly’ safe?

  26. The Bounds Problem • Given a set of released tables (relating to the same population), what inferences about the counts in the ‘full’ table can be made? • Can a data intruder derive an upper bound of zero for any cell count?

  27. A non-graphical case • All 2 × 2 marginals of a 2 × 2 × 2 table • A maximal complete subgraph (clique) with no corresponding individual table

  28. Original cell counts can be recovered from the marginal tables

  29. Each cell’s upper bound is the minimum of its relevant margins (Dobra and Fienberg)
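A pure-Python sketch of this bound for a two-way table released only as its two one-way margins (the function name is illustrative; the counts anticipate the profession/department example used below):

    def upper_bounds(row_margin, col_margin):
        # each cell's upper bound is the minimum of the marginal
        # counts covering it (Dobra and Fienberg)
        return [[min(r, c) for c in col_margin] for r in row_margin]

    >>> upper_bounds([24, 5], [20, 7, 2])
    [[20, 7, 2], [5, 5, 2]]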

  30. SDC Summary • A set of released tables relating to a given population • If the released tables correspond to the cliques of a graph (i.e. the set is graphical) and that graph is decomposable, then the upper bounds can be derived efficiently

  31. Common aspects • Graphical representations: graphs / cliques / nodes / trees • Combination of tables: pointwise operations

  32. • BBNs: pointwise multiplication • SDC: pointwise minimum, plus pointwise addition and pointwise subtraction for calculating exact lower bounds

  33. Coercing Numeric built-ins • A table is a numeric array with an associated list of variables • Marginalisation is trivial, using the built-in Numeric.add.reduce() function and removing the relevant variable from the list
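A minimal sketch of such a table object, written with NumPy (the modern successor to Numeric); the Table class and its method names are assumptions for illustration, not the author's actual API:

    import numpy as np

    class Table:
        def __init__(self, values, variables):
            self.values = np.asarray(values)   # numeric array
            self.variables = list(variables)   # one variable name per axis

        def marginalise(self, var):
            # sum over the axis belonging to 'var' (add.reduce),
            # then remove the variable from the list
            axis = self.variables.index(var)
            values = np.add.reduce(self.values, axis=axis)
            return Table(values, [v for v in self.variables if v != var])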

  34. Conditioning is easily achieved using a Numeric.take() slice, appropriately reshaping the array with Numeric.reshape() and removing the variable from the list
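Continuing the hypothetical Table sketch above, with np.take and np.reshape standing in for Numeric.take() and Numeric.reshape():

    def condition(self, var, level):
        # slice out the observed level of 'var', drop the resulting
        # size-1 axis, and remove the variable from the list
        axis = self.variables.index(var)
        values = np.take(self.values, [level], axis=axis)
        values = np.reshape(values, values.shape[:axis] + values.shape[axis + 1:])
        return Table(values, [v for v in self.variables if v != var])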

  35. Pointwise multiplication • Numeric.multiply() generates the appropriate table IF the two tables have identical ranks and variable lists • This is ensured by adding new axes (Numeric.NewAxis) for the ‘missing’ axes and transposing one of the tables (Numeric.transpose()) so that the variable lists match
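A sketch of that alignment step for the same hypothetical Table class (np.newaxis and np.transpose standing in for Numeric.NewAxis and Numeric.transpose()):

    def _align(self, other):
        # extend both tables to a common variable list by appending
        # size-1 axes for the 'missing' variables, then transpose
        # so the variable orders match
        all_vars = self.variables + [v for v in other.variables
                                     if v not in self.variables]
        def extend(table):
            values, variables = table.values, list(table.variables)
            for v in all_vars:
                if v not in variables:
                    values = values[..., np.newaxis]
                    variables.append(v)
            return np.transpose(values, [variables.index(v) for v in all_vars])
        return extend(self), extend(other), all_vars

    def __mul__(self, other):
        a, b, all_vars = self._align(other)
        return Table(a * b, all_vars)   # broadcasting forms the joint table

With prof = Table([24, 5], ['profession']) and dept = Table([20, 7, 2], ['department']), this reproduces the arrays shown on the next two slides.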

  36. array([24, 5]) ['profession'] (2,)
      array([20, 7, 2]) ['department'] (3,)

      array([[24],
             [ 5]]) (2, 1) ['profession', 'department']
      array([[20, 7, 2]]) (1, 3) ['profession', 'department']

  37. >>> prof * dept
      array([[480, 168, 48],
             [100,  35, 10]])
      ['profession', 'department']
      >>> (prof * dept).normalise(29)
      array([[ 16.551, 5.793, 1.655],
             [  3.448, 1.206, 0.344]])
      ['profession', 'department']

  38. Pointwise minimum / addition / subtraction • Numeric.minimum(), Numeric.add() and Numeric.subtract() generate the appropriate tables IF the two tables have identical ranks and variable lists AND the two tables also have identical shape • This is ensured by a secondary preprocessing stage where the tables from the first preprocessing stage are multiplied by a ‘correctly’ shaped table of ones (this is actually quicker than using Numeric.concatenate())
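A sketch of the same idea, reusing the hypothetical _align helper above; note that modern NumPy broadcasts np.minimum() directly, so the explicit table of ones only mimics Numeric's identical-shape requirement:

    def minimum(self, other):
        a, b, all_vars = self._align(other)
        # 2nd stage preprocessing: multiply by a 'correctly' shaped
        # table of ones so both operands have identical shapes
        ones = np.ones(np.broadcast(a, b).shape, dtype=a.dtype)
        return Table(np.minimum(a * ones, b * ones), all_vars)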

  39. array([[24],
             [ 5]]) (2, 1) ['profession', 'department']
      array([[20, 7, 2]]) (1, 3) ['profession', 'department']

      array([[20, 7, 2],
             [20, 7, 2]]) (2, 3) (2nd stage preprocessing)

  40. >>> prof.minimum(dept)
      array([[20, 7, 2],
             [ 5, 5, 2]])
      ['profession', 'department']

  41. Summary • The Bayesian Belief Network software was originally implemented in Python for two reasons: 1. The author was, at the time, a relatively inexperienced programmer 2. Self-learning (albeit with some help) was the only option

  42. The SDC software was implemented in Python because: 1. Python + Numeric turned out to be a wholly appropriate solution for BBNs (Python is powerful, Numeric is fast) 2. Existing code could be reused
