
Variables and measurements



  1. Variables and measurements

  2. Today's programme • Why do experimental research? • Variables and measurement • Different types of research methods • Introduction to the scientific method • Planning experimental work • Experimental design • Exercises: • Introduction to data entering in SPSS • Team questions session • Introduction to the mouse experiment – week 5 start

  3. Practical information • Exercises in room: 4A58 • Starts at 13.00 – ends at 15.00 (you can stay longer if you wish) • Handouts for exercises on the course website: http://experimentdesign.wordpress.com

  4. Science 101 • We perform research in order to answer questions • Do the users understand our menu structure? • Does our design put the user in a pleasant mood? • Can our customers use the product? • Does giving the elderly electric shocks when they whine about today's youth cause them to stop? • Etc. etc.

  5. Answering questions • Two general ways in which to answer questions • Observe what happens naturally in the world • Correlational and observational methods • Manipulate an aspect of the environment and observe what happens • Experimental methods

  6. Comparing methods • Correlational and experimental methods have much in common: • Empirical: Gather evidence via observation and measurement • Measurement: Measures something • Replicability: Results can be replicated by others • Objectivity: We seek an answer to the question that is objective and unbiased • Difference: Experimental methods manipulate variables

  7. Variables I • Scientists are interested in how variables change, and what causes the change • Anything that we can measure and which changes is called a variable • ”Why do people like the color red?” • Variable: Preference for the color red • Variables can take many forms, e.g. numbers, abstract values, etc.

  8. Measurement • Measuring is important for comparing results between studies/projects • Different measures provide different quality of data • Nominal (categorical) data and ordinal data: non-parametric • Interval data and ratio data: parametric

  9. Measurement • Nominal data (categorical, frequency data) • When numbers are used as names • No relationship between the size of the number and what is being measured • Two things with the same number are equivalent • Two things with different numbers are different

  10. Measurement • E.g. Numbers on the shirts of soccer players • Nominal data are only used for frequencies • How many times ”3” occurs in a sample • How often player 3 scores compared to player 1
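To make the "frequencies only" point concrete, here is a minimal Python sketch (not part of the course materials; the shirt numbers and goal counts are invented for illustration):

    # Nominal data only supports counting how often each category occurs.
    from collections import Counter

    scorers = [3, 1, 3, 7, 3, 1]   # shirt numbers of goal scorers in a made-up sample

    counts = Counter(scorers)
    print(counts[3])               # how many times "3" occurs in the sample -> 3
    print(counts[3] > counts[1])   # player 3 scored more often than player 1 -> True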

  11. Measurement • Ordinal data • Provides information about the ordering of the data • Does not tell us about the relative differences between values

  12. Measurement • For example: The order of people who complete a race – from the winner to the last to cross the finish line. • Typical scale for questionnaire data

  13. Measurement • Interval data • When measurements are made on a scale with equal intervals between points on the scale, but the scale has no true zero point.

  14. Measurement • (Number-line illustration: 1 to 9 and -4 to 4) • Examples: • Celsius temperature scale: 100 is water's boiling point; 0 is an arbitrary zero point (when water freezes), not a true absence of temperature. • Equal intervals represent equal amounts, but ratio statements are meaningless - e.g., 60 °C is not twice as hot as 30 °C!
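A quick arithmetic check of that last point, sketched in Python (the conversion to Kelvin, which does have a true zero, is standard: K = °C + 273.15):

    # On the interval-scale Celsius reading the ratio looks like 2,
    # but on the ratio-scale Kelvin reading it is nowhere near 2.
    def celsius_to_kelvin(c: float) -> float:
        return c + 273.15

    print(60 / 30)                                        # 2.0
    print(celsius_to_kelvin(60) / celsius_to_kelvin(30))  # ~1.10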

  15. Measurement • Ratio data • When measurements are made on a scale with equal intervals between points on the scale, and the scale has a true zero point. • e.g. height, weight, time, distance. • Measurements of relevance include: Reaction times, number of correct answers, error scores in usability tests.
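As a bridge to the data-entry exercises, here is a hedged sketch of how the four levels of measurement might be encoded in a data table. It uses pandas rather than SPSS (which the exercises use), and the column names and values are invented:

    import pandas as pd

    df = pd.DataFrame({
        # Nominal: numbers are only names, no ordering implied
        "shirt_number": pd.Categorical([3, 7, 1], ordered=False),
        # Ordinal: ordering matters, distances between categories do not
        "race_position": pd.Categorical(["1st", "2nd", "3rd"],
                                        categories=["1st", "2nd", "3rd"],
                                        ordered=True),
        # Interval: equal intervals, but no true zero point
        "temperature_c": [21.5, 19.0, 23.0],
        # Ratio: equal intervals and a true zero point
        "reaction_time_ms": [412, 388, 545],
    })

    print(df.dtypes)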

  16. Variables II • Variables can take many forms • Continuous variables • Aggression – from calm to extremely violent • Discrete variables: No underlying continuum exists • Either pregnant or not • You cannot be ”a bit pregnant” • The difference can be fuzzy, and some continuous variables can be measured in discrete terms • Measuring reaction times to the nearest millisecond
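The last bullet can be illustrated with a tiny Python sketch (the raw timings are invented): a continuous quantity is recorded in discrete terms by rounding to the nearest millisecond.

    raw_seconds = [0.41237, 0.38854, 0.54491]                   # continuous measurements
    reaction_time_ms = [round(t * 1000) for t in raw_seconds]   # discrete record
    print(reaction_time_ms)                                     # [412, 389, 545]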

  17. Experimental vs. Correlational research • Correlational research: We observe what happens • Experimental research: We manipulate something and observe what happens • Correlational research is not biased by the researcher's manipulation • So why do experimental research?

  18. Causality • Research questions often imply a causal link between variables • Does giving the teacher red apples increase a student's grade? • Many research questions can be broken down into a proposed cause and a proposed outcome • The cause (apples) and the outcome (grade) are variables • The key is to figure out how the proposed cause and the outcome relate to each other: Causal relationship

  19. Causality • Some problems with causality (David Hume): • We must be aware of confounding variables (a variable other than the proposed cause produces the effect) • Direction of causality: Does the proposed cause produce the outcome, or is it the other way around? • Need to isolate the causal variables (John Stuart Mill) • Solution: Compare two controlled situations: one where the cause is present and one where it is absent

  20. Causality • Karl Popper • Distinguished between scientific and non-scientific statements • Scientific statements can be tested against empirical evidence • ”Beating children is morally wrong” – non-scientific • ”On Earth, gravitational force pulls objects with mass towards the center of the planet” – scientific • He also argued that even testable theories may not be true – they may simply not have been disproved yet

  21. Testing theories Summary: To test a theory we must: • 1) Rule out other explanations of the supposed cause • A) Control the conditions of the experiment • B) Minimize the risk of random/unknown factors influencing the result • C) Randomize the procedure • 2) Gain confidence that one theory is correct and another is not • How do we do this in practice?

  22. Testing theories 1) Ruling out other explanations: • A) We need to control the conditions of the experiment • To test whether eating candy makes you fat, we need one experimental setup where candy is present and one where it is not • The condition where the cause is absent is known as a control condition or baseline

  23. Testing theories • In the simplest situation, the cause is either present or absent • There can also be multiple levels – (0 pieces of candy per day, 2, 4, 7, 10, 10000 etc.) • The variable being manipulated is the independent variable – its value depends only on the experimenter • Causal variable • The variable not manipulated is the dependent variable – its value depends on the value of other variables in the experiment • Outcome variable

  24. Testing theories • B) Minimize the risk of random factors • We should compare situations that are identical in all respects apart from the proposed cause (the causal/independent variable) • All random factors should be held constant • Everyone should eat the same candy • Eat it at the same time of day • Be the same gender • Etc.

  25. Testing theories • C) Randomizing the procedure • We can rule out many random influences by randomizing parts of the experimental study • E.g. randomly allocating participants to experimental and control groups – this spreads attributes randomly • I.e.: Do not permit any systematic bias to enter the experiment
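A minimal Python sketch of random allocation (the participant IDs and group sizes are invented for illustration):

    import random

    participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 made-up participant IDs

    random.shuffle(participants)                # randomize the order
    half = len(participants) // 2
    experimental_group = participants[:half]    # cause present
    control_group = participants[half:]         # cause absent (baseline)

    print(experimental_group)
    print(control_group)

Because assignment depends on chance alone, participant attributes (age, gender, mood, etc.) are spread across the two groups without systematic bias.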

  26. Testing theories • 2) Comparing theories • So far we have: • Experimental conditions that control for confounding factors • We have isolated causal factors • We have randomized our procedure • Now we need an objective way of comparing one condition with another: Math

  27. Testing theories • In science, we draw inferences based on the confidence we have in a given set of results • I.e.: The difference between the experimental group (cause is present) and the control group (cause is absent) must be ”large” enough not to have occurred by chance • This is where the statistics come in – to let us calculate the magnitude of the difference, and the chance of the result recurring randomly
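One common way to make such a comparison objective, assuming the outcome is measured at interval or ratio level, is an independent-samples t-test. A sketch in Python with invented scores (an illustration, not the analysis prescribed by the course):

    from scipy import stats

    experimental = [5.1, 6.3, 5.8, 6.9, 5.5, 6.1]   # cause present
    control      = [4.2, 4.8, 5.0, 4.5, 4.9, 4.4]   # cause absent

    t, p = stats.ttest_ind(experimental, control)
    print(f"t = {t:.2f}, p = {p:.3f}")
    # A small p-value suggests a difference this large is unlikely to occur by chance alone.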

  28. Testing theories - Summary • Experimental research seeks to isolate cause and effect by manipulating the proposed causal variable(s) • Correlational research does not always permit isolation of causal variables or controlling for confounding variables...

  29. Summary II • Correlational methods merely identify relationships: they cannot establish cause and effect. • A correlation between two variables is inherently ambiguous: • X might cause Y • Y might cause X • X and Y might both be caused by a third variable or set of variables.
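The ambiguity can be seen directly in the statistic itself: the correlation coefficient is symmetric, so r(X, Y) equals r(Y, X) and carries no information about direction. A small Python sketch with invented numbers:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. hours of sunshine (invented)
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # e.g. ice cream sales (invented)

    print(np.corrcoef(x, y)[0, 1])   # r(X, Y)
    print(np.corrcoef(y, x)[0, 1])   # r(Y, X) -- the same value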

  30. Summary III • The experimental method is the best way of identifying causal relationships. Example: X causes Y (meteor causes no dinosaurs) if: X precedes Y (the dinosaurs must be present before the meteor); Y happens in the presence of X (the dinosaurs croak when the meteor hits); and Y does not happen in the absence of X (no meteor, dinosaurs survive – and no mammals...).

  31. Summary IV Experiments enable us to eliminate alternative explanations: To establish causality, we use experimental situations that differ systematically only on one variable (the independent variable) ... ... and measure the effects of this on an outcome variable (the dependent variable).

  32. Questions • A study of student boredom examined factors involved in boredom during lectures. 50 students were involved. Half attended a very boring 2-hour lecture, the other half sat outside in the sun for 2 hours. After each period, the students were asked to rate their boredom on a scale from 1-10 • How was the outcome measured? • What levels were used? • What was the control group? • Were confounding variables controlled for?

  33. Answers • How was the outcome measured? • Ordinal data (ranked, no zero point, not equal distances between points) • What levels were used? • A scale of 1-10 • What was the control group? • The students in the sun • Were confounding variables controlled for? • Arguable – the environment of the control group was different, which is a problem if the intent was to compare a ”boring” with a ”non-boring” lecture • As it stands, the experiment only shows the effect of ”sitting in the sun” vs. a ”boring lecture” • If a ”non-boring” lecture were included – how do we make sure it is not boring?
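Since the outcome here is ordinal, a non-parametric comparison would be appropriate if the ratings were analyzed. As a hedged sketch (not part of the exercise; the ratings below are invented), a Mann-Whitney U test could compare the two groups:

    from scipy import stats

    lecture_group = [8, 9, 7, 9, 6, 8, 7, 9]   # boredom ratings, 1-10 (invented)
    sun_group     = [2, 3, 1, 4, 2, 3, 2, 1]   # boredom ratings, 1-10 (invented)

    u, p = stats.mannwhitneyu(lecture_group, sun_group)
    print(f"U = {u:.1f}, p = {p:.4f}")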

  34. The scientific method

  35. Science • Science: Any systematic knowledge or practice. • Science generally refers to a way of acquiring knowledge through the scientific method, as well as the organized body of knowledge gained through such research. • Adheres to positivist philosophy: Only authentic knowledge is scientific knowledge • Science = Logic + Observation

  36. Science • Three types of science: • Natural science: The study of natural phenomena • Social science: The study of human behavior and societies • Formal science: Mathematics – uses a priori rather than empirical methods; includes statistics and logic • The first two are empirical sciences, the third a mixture; however, all feed into each other • A priori = deductive knowledge (independent of experience) • A posteriori = inductive knowledge (dependent on experience)

  37. Science • Experimental science: Another term for empirical sciences • Applied science: Application of scientific research to specific human needs • The two are often combined

  38. Empirical Sciences • Empirical sciences • Knowledge obtained from observable phenomena • Reproducible: Phenomena must be reproducible under experimental conditions by other scientists in order to be validated • Careful, objective and systematic study of an area of knowledge • Must follow the scientific method

  39. The scientific method • The scientific method • A body of techniques for investigating phenomena and acquiring knowledge • Collection of data through observation and experimentation, and the formulation and testing of hypotheses • Evidence must be observable, empirical and measurable, and subject to principles of reasoning

  40. The scientific method • Empirical research must follow: • Define the question • Gather information and resources (observe) • Form a hypothesis • Perform the experiment and collect data • Analyze the data • Interpret the data and draw conclusions that serve as a starting point for a new hypothesis • Redo the entire cycle if necessary • Publish the results • Retest (frequently done by other scientists) Alternative: Explorative approach – similar requirements on objectivity and reasoning, but forgoes hypothesis forming.

  41. The ACTUAL scientific method

  42. Hypothesis • A hypothesis defines an expected relationship between variables, which can be empirically tested. • For example: • Eliminating the minimap in StarCraft will increase player engagement • Flash animations make a website more attractive to blind white mice

  43. Quantitative vs. Qualitative • Empirical research methods come in two forms: • Quantitative methods: Collect numerical data, strictly objective, analyzed using statistical methods • Qualitative methods: Collect data in the form of text, images, sounds etc. • Drawn from observations, interviews, documentary evidence etc., analyzed using qualitative data analysis methods (e.g. content coding) • Data and analysis can be subjective: Relies on researcher experience

  44. Selecting methods • Qualitative: • More appropriate in early stages of research (exploratory research) and for theory building • Qualitative methods apply well in real-world settings, but lack validity and control • Problem with subjective interpretation of the data • Examples • Case study: Observations carried out in a real-world setting • Action research: Applying a research idea in practice, evaluating the results, modifying the idea (a cross between an experiment and a case study)

  45. Selecting methods • Quantitative: • Appropriate when theory is well developed • Theory testing and refinement • Examples: • Experiment: Apply a treatment, measure the results: This is the only method that can demonstrate a causal relationship between variables. Associated with the scientific method • Survey: Asking rated questions in an interview • Historical data: Patterns in WoW auction house spending • Most quality research includes both types of methods

  46. Selecting methods • Method selection is critical to the success of any project • Selection must be driven by the state of knowledge • All hypotheses should be tested using two independent sources of data (enables cross-correlation or data triangulation)

  47. Planning experiments

  48. Planning experiments • Proper preparation is vital to all research • And eliminates nasty surprises • Preparation 101: Ask relevant questions: • What should I research? • What has been done already? • How should I research it? • Can my experimental design be meaningfully analyzed? • Is my measure valid? • What am I expecting to find?

  49. What should I research? • Well, that is kinda up to you, but: • The process of finding out can be extensive • The process of going from an initial interest to a specific research question (funnel illustration): • Initial interest: Perception of colors? • Reading textbooks + browsing the net: Color perception on monitors • Reading science journals: Emotional impact of perceiving colors • Finding key literature on the topic: Emotional impact as a function of cultural background • Formulating the research question: Does cultural background impact the emotional impact of the color red on monitors?
