
Data Analysis II and Project Definitions (Teams)

Explore the types of errors in data analysis, learn about new visualization methods, and understand how to use data in projects. Topics include statistical uncertainties, standard measures of error, spatial analysis, error propagation, and various statistical tests.


Presentation Transcript


  1. Data Analysis II and Project Definitions (Teams) Peter Fox and Greg Hughes Data Science – ITWS/CSCI/ERTH-4350/6350 Module 5 (week 7), October 18, 2016

  2. Contents • Errors and some uncertainty… • Visualization as an information tool and analysis tool • New visualization methods (new types of data) • Use, citation, attribution and reproducibility • Projects!

  3. Types of data

  4. Errors • Personal errors are mistakes on the part of the experimenter. It is your responsibility to make sure that there are no errors in recording data or performing calculations • Systematic errors tend to decrease or increase all measurements of a quantity (for instance, all of the measurements are too large), e.g. due to a calibration error • Random errors are also known as statistical uncertainties; they arise from a series of small, unknown, and uncontrollable events

  5. Errors • Statistical uncertainties are much easier to assign, because there are rules for estimating their size • E.g. if you are reading a ruler, the statistical uncertainty is half of the smallest division on the ruler. Even if you are recording a digital readout, the uncertainty is half of the smallest place given. This type of error should always be recorded for any measurement

  6. Standard measures of error • Absolute deviation • is simply the difference between an experimentally determined value and the accepted value • Relative deviation • is a more meaningful value than the absolute deviation because it accounts for the relative size of the error. The relative percentage deviation is given by the absolute deviation divided by the accepted value and multiplied by 100% • Standard deviation • quantifies the spread of repeated measurements about their mean: the square root of the mean squared deviation from the mean
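A minimal sketch of these three measures in Python; the accepted value and the measurement list are illustrative:

    import statistics

    accepted = 9.81                      # accepted value (e.g. g in m/s^2)
    measured = [9.79, 9.85, 9.76, 9.83]  # repeated measurements (illustrative)

    mean = statistics.mean(measured)
    absolute_dev = abs(mean - accepted)            # absolute deviation
    relative_dev = absolute_dev / accepted * 100   # relative deviation, percent
    std_dev = statistics.stdev(measured)           # sample standard deviation

    print(f"mean = {mean:.3f}, absolute dev = {absolute_dev:.3f}, "
          f"relative dev = {relative_dev:.2f}%, std dev = {std_dev:.3f}")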

  7. Spatial analysis of continuous fields • Possibly more important than our answer is our confidence in the answer. • Our confidence is quantified by uncertainties, as discussed earlier. • Once we combine numbers, we need to be able to assess how the uncertainties change in the combination. • This is called propagation of errors or, more precisely, the propagation of our understanding/estimate of the errors into the result we are looking at…

  8. Bathymetry

  9. Cause of errors?

  10. Resolution

  11. Reliability • Changes in data over time • Non-uniform coverage • Map scales • Observation density • Sampling theorem (aliasing) • Surrogate data and their relevance • Round-off errors in computers

  12. Propagating errors • This is an unfortunate term – it means making sure that the result of the analysis carries with it a calculation (rather than an estimate) of the error • E.g. if C=A+B (your analysis), then δC=δA+δB • E.g. if C=A-B (your analysis), then δC=δA+δB! • Exercise – it’s not as simple for other calculations. • When the function is not merely addition, subtraction, multiplication, or division, the error propagation must be derived from the total derivative of the function.
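A minimal sketch contrasting the worst-case rule above with the quadrature rule that is usual when the errors in A and B are independent; all values are illustrative:

    import math

    def worst_case(dA, dB):
        # C = A + B or C = A - B: absolute uncertainties add either way
        return dA + dB

    def quadrature(dA, dB):
        # independent random errors: add in quadrature instead
        return math.sqrt(dA**2 + dB**2)

    A, dA = 10.0, 0.3
    B, dB = 4.0, 0.4
    print(f"C = {A - B} ± {worst_case(dA, dB):.1f} (worst case)")  # 6.0 ± 0.7
    print(f"C = {A - B} ± {quadrature(dA, dB):.1f} (quadrature)")  # 6.0 ± 0.5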

  13. Error propagation • Errors arise from data quality, model quality and data/model interaction. • We need to know the sources of the errors and how they propagate through our model. • Simplest representation of errors is to treat observations/attributes as statistical data – use mean and standard deviation.

  14. Analytic approaches Addition and subtraction
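For reference (the slide’s own formulas are images and not in the transcript): for z = x + y or z = x − y, the worst-case bound is δz = δx + δy; if the errors in x and y are independent and random, they add in quadrature, δz = √((δx)² + (δy)²).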

  15. Multiply, divide, exponent, log
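For reference again, relative uncertainties combine for these operations: if z = xy or z = x/y, then δz/|z| = √((δx/x)² + (δy/y)²); for a power z = xⁿ, δz/|z| = |n|·δx/|x|; for z = ln x, δz = δx/|x|; and for z = eˣ, δz/|z| = δx.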

  16. Parametric statistical ‘tests’ • F-test: tests whether two samples come from populations with the same variance, based on the ratio of the sample variances and their degrees of freedom. • T-test: tests whether two samples come from populations with the same mean, based on the difference of the sample means, their variances, and degrees of freedom

  17. F-test F = S₁² / S₂², where S₁² and S₂² are the sample variances. The more this ratio deviates from 1, the stronger the evidence for unequal population variances.

  18. T-test
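A minimal sketch of both tests with SciPy; the two samples are illustrative, and since SciPy offers no one-call variance F-test, the F statistic and its two-sided p-value are computed from the F distribution directly:

    import numpy as np
    from scipy import stats

    a = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3])
    b = np.array([12.6, 12.9, 12.5, 13.0, 12.7])

    # F-test: ratio of sample variances, compared against the F distribution
    F = np.var(a, ddof=1) / np.var(b, ddof=1)
    dfn, dfd = len(a) - 1, len(b) - 1
    p_F = 2 * min(stats.f.cdf(F, dfn, dfd), stats.f.sf(F, dfn, dfd))

    # t-test for equal means; Welch's variant does not assume equal variances
    t, p_t = stats.ttest_ind(a, b, equal_var=False)

    print(f"F = {F:.2f}, p = {p_F:.3f}")
    print(f"t = {t:.2f}, p = {p_t:.4f}")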

  19. Variability

  20. Dealing with errors • In analyses: • report on the statistical properties • does it pass tests at some confidence level? • On maps: • exclude data that are not reliable (map only subset of data) • show additional map of some measure of confidence

  21. Elevation map (meters)

  22. Larger errors ‘whited out’ (m)

  23. Elevation errors (meters)

  24. Types of analysis • Preliminary • Detailed • Summary • Reporting the results and propagating uncertainty • Qualitative v. quantitative, e.g. see http://hsc.uwe.ac.uk/dataanalysis/index.asp

  25. What is preliminary analysis? • Self-explanatory…? • We’ve discussed the sampling issue • The more measurements that can be made of a quantity, the better the result • Reproducibility is an axiom of science • When time is involved, e.g. a signal, the ‘sampling theorem’ applies – and having an idea of the hypothesis is useful, e.g. periodic versus aperiodic behavior or other… (see the sketch below) • http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
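A minimal sketch of aliasing, with illustrative frequencies: a 9 Hz sine sampled at 10 Hz (well below its Nyquist rate of 18 Hz) produces exactly the same samples as a 1 Hz sine, up to sign:

    import numpy as np

    fs = 10.0                        # sampling rate in Hz, below Nyquist for 9 Hz
    t = np.arange(0, 2, 1 / fs)      # two seconds of sample times
    s9 = np.sin(2 * np.pi * 9 * t)   # 9 Hz signal
    s1 = np.sin(2 * np.pi * 1 * t)   # 1 Hz signal

    # 9 Hz aliases to |9 - fs| = 1 Hz: the sampled values coincide (up to sign)
    print(np.allclose(s9, -s1))      # True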

  26. Detailed analysis • The most important distinction between the initial and the main analysis is that during initial data analysis one refrains from any analysis aimed at answering the research question • Basic statistics of important variables (see the sketch below) • Scatter plots • Correlations • Cross-tabulations • Assessing and dealing with quality, bias, uncertainty, accuracy, and precision limitations • Dealing with under- or over-sampling • Filtering, cleaning
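A minimal sketch of these first steps with pandas; the DataFrame contents are illustrative (the scatter plot also needs matplotlib installed):

    import pandas as pd

    df = pd.DataFrame({
        "depth_m": [10.2, 15.1, 20.3, 25.7, 30.1],
        "temp_C":  [18.5, 16.2, 14.8, 12.1, 10.5],
    })

    print(df.describe())   # basic statistics of important variables
    print(df.corr())       # pairwise correlations
    df.plot.scatter(x="depth_m", y="temp_C")  # scatter plot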

  27. Summary analysis • Collecting the results and accompanying documentation • Repeating the analysis (yes, it’s obvious) • Repeating with a subset • Assessing significance, e.g. the confusion matrix we used in the supervised classification example for data mining, p-values (the probability of a result at least this extreme under the null hypothesis)

  28. Reporting results/ uncertainty • Consider the number of significant digits in the result, which is indicative of the certainty of the result • The number of significant digits depends on the measuring equipment you use and the precision of the measuring process - do not report digits beyond what was recorded • The number of significant digits in a value implies the precision of that value

  29. Reporting results… • In calculations, it is important to keep enough digits to avoid round off error. • In general, keep at least one more digit than is significant in calculations to avoid round off error • It is not necessary to round every intermediate result in a series of calculations, but it is very important to round your final result to the correct number of significant digits.  

  30. Uncertainty • Results are usually reported as result ± uncertainty (or error) • The uncertainty is given to one significant digit, and the result is rounded to that place • For example, a result might be reported as 12.7 ± 0.4 m/s2. A more precise result would be reported as 12.745 ± 0.004 m/s2. A result should not be reported as 12.70361 ± 0.2 m/s2 • Units are very important to any result
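A minimal sketch of this convention; the helper name report is illustrative, and the simple rule below does not handle the edge case where the uncertainty rounds up to the next power of ten:

    import math

    def report(value, uncertainty):
        # round the uncertainty to one significant digit...
        exponent = math.floor(math.log10(abs(uncertainty)))
        u = round(uncertainty, -exponent)
        # ...and round the value to the same decimal place
        v = round(value, -exponent)
        digits = max(0, -exponent)
        return f"{v:.{digits}f} ± {u:.{digits}f}"

    print(report(12.70361, 0.36), "m/s2")     # 12.7 ± 0.4 m/s2
    print(report(12.745123, 0.0041), "m/s2")  # 12.745 ± 0.004 m/s2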

  31. Secondary analysis • Depending on where you are in the data analysis pipeline (i.e. do you know where?) • Having a clear enough awareness of what has been done to the data (either by you or others) prior to the next analysis step is very important – otherwise the effect is very similar to sampling bias • Read the metadata (or create it) and documentation

  32. Tools • 4GL: Matlab, IDL, Ferret, many others • Statistics: SPSS, Gnu R, NumPy/SciPy, Excel • What have you used?

  33. Considerations for viz. as analysis • What is the improvement in the understanding of the data as compared to the situation without visualization? • Which visualization techniques are suitable for one's data? • E.g. Are direct volume rendering techniques to be preferred over surface rendering techniques?

  34. Why visualization? • Reducing amount of data, quantization • Patterns • Features • Events • Trends • Irregularities • Leading to presentation of data, i.e. information products • Exit points for analysis

  35. Types of visualization • Classification of techniques is based on dimensionality and on the information being sought, i.e. purpose • Color coding (including false color) • Line plots • Contours • Surface rendering techniques • Volume rendering techniques • Animation techniques • Non-realistic, including ‘cartoon/ artist’ style

  36. Compression (any format) • Lossless compression methods are methods for which the original, uncompressed data can be recovered exactly. Examples of this category are Run-Length Encoding and the Lempel-Ziv-Welch algorithm. • Lossy methods - in contrast to lossless compression, the original data cannot be recovered exactly after a lossy compression of the data. An example of this category is the Color Cell Compression method. • Lossy compression techniques can reach reduction rates of 0.9, whereas lossless compression techniques normally have a maximum reduction rate of 0.5.
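A minimal run-length encoding sketch showing the lossless property named above: decoding the encoded form recovers the input exactly (the sample string is illustrative):

    def rle_encode(data):
        # collapse each run of identical characters into a (count, char) pair
        out, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1
            out.append((j - i, data[i]))
            i = j
        return out

    def rle_decode(pairs):
        return "".join(ch * n for n, ch in pairs)

    s = "AAAABBBCCD"
    enc = rle_encode(s)
    print(enc)                   # [(4, 'A'), (3, 'B'), (2, 'C'), (1, 'D')]
    assert rle_decode(enc) == s  # exact recovery: lossless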

  37. Remember - metadata • Many of these formats already contain metadata or fields for metadata, use them!

  38. Tools • Conversion: Imtools, GraphicConverter, Gnu convert, many more • Combination/Visualization: R, Matlab, Gnuplot, Python • http://disc.sci.gsfc.nasa.gov/giovanni

  39. New modes • http://www.actoncopenhagen.decc.gov.uk/content/en/embeds/flash/4-degrees-large-map-final • http://www.smashingmagazine.com/2007/08/02/data-visualization-modern-approaches/ • Many modes: • http://www.siggraph.org/education/materials/HyperVis/domik/folien.html

  40. Periodic table

  41. Managing visualization products • The importance of a ‘self-describing’ product • Visualization products are not just consumed by people • How many images, graphics files do you have on your computer for which the origin, purpose, use is still known? • How are these logically organized?

  42. (Class 2) Management • Creation of logical collections • Physical data handling • Interoperability support • Security support • Data ownership • Metadata collection, management and access. • Persistence • Knowledge and information discovery • Data dissemination and publication

  43. Use, citation, attribution • Think about and implement a way for others (including you) to easily use, cite, attribute any analysis or visualization you develop • This must include suitable connections to the underlying (aka backbone) data – and note this may not just be the full data set! • Naming, logical organization, etc. are key • Make them a resource, e.g. URI/ URL • See http://commons.esipfed.org/node/308

  44. Producibility/ reproducibility • The documentation around procedures used in the analysis and visualization is very often neglected – DO NOT make this mistake • Treat this just like a data collection (or generation) exercise • Follow your management plan • Despite the lack of (or minimal) metadata/ metainformation standards, capture and record it • Get someone else to verify that it works

  45. Summary • Purpose of analysis should drive the type that is conducted • Many constraints due to prior management of the data • Become proficient in a variety of methods, tools • Many considerations around visualization, similar to analysis, many new modes of viz. • Management of the products is a significant task

  46. Reading • Note reading for week 7 – data sources for project definitions • There is a lot of material to review • Assignment 3 and 4! • Note – for week 8 (Oct. 25) • Brief Introduction to Data Mining • Longer Introduction to Data Mining and slide sets • Software resources list • Example: Data Mining

  47. Project Teams (A4)
  1. Byrne, Corey; Cheng, Jie; Deb, Arijit; De Los Santos, Hannah; Zednik, Stephan
  2. Cattarin, Lee; Del Priore, James; Dwivedi, Aayush; Gadomski, Paula; Wang, Jixuan; Xu, Zhe
  3. Grivas, Genevieve; Gentyala, Vijay; Ho, Chien-Wei; Khachaturyan, David; Renus, Christopher
  4. Falk, Jeremy; Kolankowski, Sophia; Kuzmin, Konstantin; Peshin, Ankur; Yu, Guo
  5. Li, Lilli; Liu, Guohui; Russo, Robert; Sharma, Sidharth; Thorne, Brandon
  6. Francis, Kyle; Liu, Yi; Llewellyn, Maxwell; Sedlacek, Aaron; Troughia, Chandan Singh; You, Charles
  7. Gass, Damon; Lu, Yongqian; Machado, Ylonka; Norris, Spencer; Subhedar, Shradda Arun
  8. Maeser, Adrienne; Minster, Zachary; Mukherjee, Partha Sarathi; Pan, Xiaoman; Tse, Timothy
  9. Boulter, James; Nunez, Nicole; Prabhakaran, Sidharth; Qui, Yichen; Ward, David
  Anyone missing?
