
  1. UNIT 2 Research Design

  2. Research Design • Although every problem and research objective may seem unique, there are usually enough similarities among problems and objectives to allow decisions to be made in advance about the best plan to resolve the problem. • There are some basic research designs that can be successfully matched to given problems and research objectives. • The research design is the master plan specifying the methods and procedures for collecting and analyzing the needed information.

  3. Types of Research Design • Three traditional categories of research design: • Exploratory • Descriptive • Causal • The choice of the most appropriate design depends largely on the objectives of the research and how much is known about the problem and these objectives. • The overall research design for a project may include one or more of these three designs as part(s) of it. • Further, if more than one design is to be used, typically we progress from Exploratory toward Causal.

  4. Basic Research Objectives and Research Design • Research objective: to gain background information, to define terms, to clarify problems and develop hypotheses, to establish research priorities, to develop questions to be answered → Appropriate design: Exploratory • Research objective: to describe and measure marketing phenomena at a point in time → Appropriate design: Descriptive • Research objective: to determine causality, to test hypotheses, to make “if-then” statements, to answer questions → Appropriate design: Causal

  5. Research Design: Exploratory Research • Exploratory research is most commonly unstructured, “informal” research that is undertaken to gain background information about the general nature of the research problem. • Exploratory research is usually conducted when the researcher does not know much about the problem and needs additional information or desires new or more recent information. • Exploratory research is used in a number of situations: • To gain background information • To define terms • To clarify problems and hypotheses • To establish research priorities

  6. Research Design: Exploratory Research • A variety of methods are available to conduct exploratory research: • Secondary Data Analysis • Experience Surveys • Case Analysis • Focus Groups • Projective Techniques

  7. Research Design: Causal Research • Causality may be thought of as understanding a phenomenon in terms of conditional statements of the form “If x, then y.” • Causal relationships are typically determined by the use of experiments, but other methods are also used. • Causal research is • undertaken with the aim of identifying cause-and-effect relationships among variables • normally preceded by exploratory and descriptive research studies • often difficult to establish because of the influence of other variables (concomitant variation and the presence of hidden variables) • Example: higher ice-cream consumption appears to “cause” more drownings, but both are driven by a hidden variable (hot weather), so the relationship is spurious rather than causal.

  8. Research Design: Descriptive Research • Descriptive research is undertaken to provide answers to questions of who, what, where, when, and how – but not why. • Two basic classifications: • Cross-sectional studies • Longitudinal studies

  9. Research Design: Descriptive Research Cross-sectional Studies • Cross-sectional studies measure units from a sample of the population at only one point in time. • Sample surveys are cross-sectional studies whose samples are drawn in such a way as to be representative of a specific population. • Online survey research is being used to collect data for cross-sectional surveys much more quickly.

  10. Research Design: Descriptive Research Longitudinal Studies • Longitudinal studies repeatedly draw sample units of a population over time. • One method is to draw different units from the same sampling frame. • A second method is to use a “panel” where the same people are asked to respond periodically. • On-line survey research firms recruit panel members to respond to online queries.

  11. Research Design: Descriptive Research Longitudinal Studies • Two types of panels: • Continuous panels ask panel members the same questions on each panel measurement. • Discontinuous (Omnibus) panels vary questions from one time to the next. • Longitudinal data used for: • Market tracking • Brand-switching • Attitude and image checks
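As an aside on how panel data supports brand-switching analysis, here is a minimal Python sketch; the two-wave panel, the respondents and the brand labels are all invented for illustration.

    # Hypothetical sketch: building a brand-switching table from continuous-panel data.
    import pandas as pd

    panel = pd.DataFrame({
        "respondent":  [1, 2, 3, 4, 5],
        "brand_wave1": ["A", "A", "B", "B", "C"],   # brand bought at the first measurement
        "brand_wave2": ["A", "B", "B", "A", "C"],   # brand bought at the second measurement
    })

    # Rows: brand in wave 1; columns: brand in wave 2.
    # Off-diagonal cells count panel members who switched brands between waves.
    switching_table = pd.crosstab(panel["brand_wave1"], panel["brand_wave2"])
    print(switching_table)

Because a continuous panel asks the same people the same questions each time, this kind of cross-tabulation is straightforward; with a discontinuous (omnibus) panel it would not be.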

  12. Experiments • An experiment is defined as manipulating (changing the values/situations of) one or more independent variables to see how the dependent variable(s) is/are affected, while also controlling the effects of additional extraneous variables. • Independent variables: those over which the researcher has control and wishes to manipulate, e.g. package size, ad copy, price. • Dependent variables: those over which the researcher has little to no direct control, but has a strong interest in testing, e.g. sales, profit, market share. • Extraneous variables: those that may affect a dependent variable but are not independent variables.

  13. Experimental Design • An experimental design is a procedure for devising an experimental setting such that a change in the dependent variable may be attributed solely to a change in an independent variable. • Symbols of an experimental design: • O = measurement of a dependent variable • X = manipulation, or change, of an independent variable • R = random assignment of subjects to experimental and control groups • E = experimental effect • After-Only Design: X O1 • One-Group, Before-After Design: O1 X O2 • Before-After with Control Group: • Experimental group: O1 X O2 • Control group: O3 O4 • Where E = (O2 – O1) – (O4 – O3) • In short: do something to subjects who are randomly selected and randomly assigned to treatment and control groups, then compare the groups to determine the cause of an effect (the difference between groups); a small numerical sketch follows.
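A small numerical sketch of the before-after with control group calculation; the measurement values are invented and simply plug into E = (O2 – O1) – (O4 – O3).

    # Illustrative arithmetic only: before-after with control group design.
    # O1/O2 are the experimental group's before/after measures, O3/O4 the control group's.
    O1, O2 = 40.0, 55.0   # experimental group: before, after the treatment X
    O3, O4 = 41.0, 46.0   # control group: before, after (no treatment)

    E = (O2 - O1) - (O4 - O3)   # experimental effect, net of the change the control group shows anyway
    print("Experimental effect E =", E)   # (15) - (5) = 10

Subtracting the control group's change removes the part of the experimental group's change that would have happened without the treatment.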

  14. The language of experimentation • independent variable • dependent variable • controlled variables • experimental group • control group • random selection and random assignment to yield groups that are equivalent prior to treatment • measurement after (or before and after) treatment

  15. Types of Experimental Designs • Basic experimental designs – a single independent variable is used to determine its impact on a single dependent variable. Basic experiments have the advantage of simplicity and easy measurability, but they have the disadvantage of not being realistic. • Factorial experimental designs – these allow for the investigation of the interaction of multiple (two or more) independent variables. Factorial experiments are more realistic but are also more complex and difficult to undertake than basic experiments (a minimal factorial layout is sketched below). • Laboratory experiments: those in which the independent variable is manipulated and measures of the dependent variable are taken in a contrived, artificial setting for the purpose of controlling the many possible extraneous variables that may affect the dependent variable. • Field experiments: those in which the independent variables are manipulated and measurements of the dependent variable are made on test units in their natural setting. • In field research (as in the lab), it is important to maximize treatment variance, minimize error variance, and control extraneous variables.
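A minimal sketch of a factorial layout, assuming a hypothetical 2 x 2 design with price level and ad copy as the independent variables and sales as the dependent variable; the data are invented and the cell means only illustrate how main effects and an interaction would be inspected.

    # Hypothetical 2 x 2 factorial experiment: two independent variables, one dependent variable.
    import pandas as pd

    data = pd.DataFrame({
        "price":   ["low", "low", "high", "high"] * 2,
        "ad_copy": ["A", "B", "A", "B"] * 2,
        "sales":   [120, 110, 90, 95, 118, 112, 88, 97],   # invented test-unit results
    })

    # Cell means by combination of the two factors; if the rows are not parallel,
    # the factors interact rather than acting independently.
    cell_means = data.groupby(["price", "ad_copy"])["sales"].mean().unstack()
    print(cell_means)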

  16. Experimental Research (1) • An experiment is a research method in which the conditions are controlled so that one or more variables can be manipulated in order to test a hypothesis. • Typically, the purpose of undertaking experiments is to determine causal relationships between the chosen dependent and independent variables, while eliminating or controlling all other variables that may have an impact on the variables under investigation. • The simplest form of experimental research involves only two variables: the independent variable, whose value is altered, and the dependent variable, whose value reflects the alteration in the independent variable's value.

  17. Experimental Research (2) • An 'experimental group' is the group of subjects who are exposed to the experimental treatment. • A 'control group' consists of individuals who are exposed to the 'control condition' in an experiment, meaning that they are not subjected to the treatment in question but are used as a reference to assess its impact on the experimental group. • Some experiments can be quite complex, encompassing several independent variables; special techniques have been developed to deal with such experiments.

  18. Issues in Experimental Design • Four basic elements of an experiment in the business field: • Manipulation of the independent variable • Selection and measurement of the dependent variable • Selection and assignment of test units • Control over extraneous variables

  19. Issues in Experimental Design: Manipulation of the Independent Variable • The independent variable's value can be altered without bringing about any change in other variables – except the dependent variable. • In business research, the independent variable can be qualitative or non-quantitative (e.g. training programs, financial reporting formats) or quantitative (e.g. the amount of money, in rupees, spent on training the employees in Organization X).

  20. Issues in Experimental Design: Selection and Measurement of the Dependent Variable • The dependent variable's value depends on, or is determined by, changes in the value of the independent variable, which in turn is manipulated by the researcher as part of the experiment. • The choice of dependent variable can sometimes be a difficult, not-so-obvious undertaking and requires considerable skill and insight on the part of the researcher in order to avoid mistakes that reduce the value of the research (example: new-product introduction and sales potential). • The time factor should be taken into consideration when choosing a dependent variable, as some outcomes are only measurable after a long time.

  21. Issues in Experimental Design: Selection and Assignment of Test Units • The test units are the subjects of the experimental research and can include individuals, organizational units, or sales territories. • Examples of test units: consumers, supermarkets, functional departments in an organization. • In selecting test units, certain possible types of error must be taken into consideration, e.g. random sampling error (test units in the experimental and control groups should ideally have the same key characteristics, but this may not be the case even with random assignment of the test units) and sample selection error (an administrative, procedural error caused by improper selection of the sample, resulting in the introduction of a bias).

  22. Issues in Experimental Design: Control Over Extraneous Variables (1) • Diagram: within the experiment environment, extraneous variables A, B, C and D influence the relationship between the independent variable and the dependent variable.

  23. Issues in Experimental Design: Control Over Extraneous Variables (2) • There are several types of extraneous-variable errors which have to be considered in the experimental environment, as they affect the quality of the research: • Constant experimental error – this occurs when extraneous influences which are not controlled or eliminated have a similar impact on the experiment's dependent variable(s) every time the experiment is performed. • Demand characteristics – this occurs when the research subjects are unintentionally exposed to the experimenter's hypothesis, causing them to respond or act in a manner which they may not have adopted had they not been exposed to this information. • Experimenter bias – this occurs when the experimenter's presence, actions, or comments influence the research subjects' behaviour, making them try to appear more favourable to the experimenter. • Guinea pig effect – this occurs when the theme of the experiment causes the research subjects to consciously modify their attitudes in order to please the experimenter. • Hawthorne effect – this is the unintended effect on the results of a research experiment caused by the subjects knowing that they are participants.

  24. Issues in Experimental Design: Control Over Extraneous Variables (3) • To reduce the chances that such errors diminish the value of an experiment, several counter-measures can be adopted, for example: • Making it difficult for the research subjects to know what the experiment is all about • Using trained and experienced experimenters • Designing experimental situations with a view to minimizing the chances of error • Preventing social interaction among research subjects so that they don't influence each other (joint decisions as opposed to the desired individual responses)

  25. Issues in Experimental Design: Control Over Extraneous Variables (4) • Many times, extraneous variables cannot be controlled or eliminated by the experimenter. However, researchers do have some options at their disposal to help mitigate the impact of extraneous variables on their experiments: • Consistency of conditions – this means that the subjects in experimental groups are exposed to situations that are exactly alike, except for the differing conditions of the independent variable (e.g. all experimental sessions are conducted in the same room, at the same time, by the same experimenter). • Counterbalancing – this strives to eliminate the so-called 'order of presentation bias' which arises when research subjects who are participating in multiple experimental phases acquire experience in the initial phase that enables them to perform better in subsequent phases (e.g. job assembly). • Blinding – this is used to control the research subjects' knowledge of whether or not they have been exposed to an experimental treatment; e.g. subjects in a Coca-Cola taste test may be told that they have, or have not, been given a new cola product in order to test their reactions. • Random assignment – this is used to randomly assign the research subjects to experimental groups as a means of curtailing the impact of extraneous variables (a random-assignment sketch follows).
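A simple sketch of random assignment, assuming hypothetical respondent identifiers: shuffle the test units, then split them into experimental and control groups of equal size.

    # Random assignment of test units to experimental and control groups.
    import random

    test_units = [f"respondent_{i}" for i in range(1, 21)]   # invented test units
    random.seed(42)              # fixed seed only so the illustration is reproducible
    random.shuffle(test_units)   # randomize the order

    midpoint = len(test_units) // 2
    experimental_group = test_units[:midpoint]
    control_group = test_units[midpoint:]
    print(experimental_group)
    print(control_group)

With enough test units, random assignment tends to spread extraneous characteristics evenly across the two groups, which is exactly why it curtails their impact.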

  26. Validity in Research • Validity refers to whether the research actually measures what it claims to measure; it is the strength of our conclusions, inferences, or propositions. • Internal validity: the difference in the dependent variable is actually a result of the independent variable • External validity: the results of the study are generalizable to other groups and environments outside the experimental setting • Conclusion validity: we can identify a relationship between the treatment and the observed outcome • Construct validity: we can generalize our conceptualized treatment and outcomes to broader constructs of the same concepts Reliability in Research • Reliability is the consistency of a measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects; in short, it is the repeatability of your measurement. • A measure is considered reliable if a person's score on the same test given twice is similar. • It is important to remember that reliability is not measured directly; it is estimated, typically by test-retest and internal-consistency methods (a test-retest sketch follows).
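A sketch of the test-retest idea, assuming invented scores from two administrations of the same instrument to the same subjects; the reliability estimate is simply the correlation between the two sets of scores.

    # Estimating test-retest reliability as the correlation between two administrations.
    from scipy.stats import pearsonr

    test_scores   = [72, 65, 88, 90, 55, 78, 83, 61]   # first administration (invented)
    retest_scores = [70, 68, 85, 92, 58, 75, 80, 64]   # second administration (invented)

    r, p_value = pearsonr(test_scores, retest_scores)
    print(f"test-retest reliability estimate r = {r:.2f}")

The closer r is to 1, the more similar people's scores are across the two sittings, i.e. the more repeatable the measurement.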

  27. Validity and Reliability The relationship between reliability and validity is a fairly simple one to understand: a measurement can be reliable but not valid; however, a measurement must first be reliable before it can be valid. Thus reliability is a necessary, but not sufficient, condition of validity. In other words, a measurement may consistently assess a phenomenon (or outcome), but unless that measurement tests what you want it to, it is not valid. How Valid Are Experiments? • An experiment is valid if: • the observed change in the dependent variable is, in fact, due to the independent variable (internal validity) • the results of the experiment apply to the “real world” outside the experimental setting (external validity)

  28. Internal Validity • Internal validity – this refers to whether the experimental treatment was the sole cause of observed changes in the dependent variable. If the observed results were influenced by extraneous variables, then a valid conclusion about the relationship between the experimental treatment and the dependent variable cannot be made. • There are six types of extraneous variables that may jeopardize internal validity – the history, maturation, test, instrumentation, selection and mortality effects. External Validity • External validity – this refers to the researcher's or experimenter's ability to generalize beyond the data of an experiment to other subjects or other groups in the population under study, i.e. the external environment (e.g. would the results of a new-product study in district A be applicable to the whole country?). • External validity can be jeopardized if the internal validity of an experiment is lacking. • Some issues have to be considered in the context of external validity, such as the choice of research subjects and the trade-offs between internal and external validity (e.g. laboratory experiments have more internal validity than field experiments, but comparatively less external validity).

  29. The language of experimentation • independent variable • dependent variable • controlled variables • experimental group • control group • random selection and random assignment to yield groups that are equivalent prior to treatment • measurement after (or before and after) treatment

  30. Remember this example (the dependent variable was exam scores) • Experimental group: taught using the new method; scored higher on the exam. “Maybe this group is smarter.” No way – the groups were equivalent. • Control group: taught using the usual method; scored lower on the exam. “Maybe this group isn't as smart.” No way – the groups were equivalent. • Normal variation in the dependent variable is the variation that is present even if the independent variable has no effect.

  31. In the experimental group, the dependent variable ends up at a noticeably higher value. Notice that it can end up there even if the treatment had no effect – purely by chance, about 5 times out of a hundred (p < .05).

  32. Remember: there is normal variation in the dependent variable even if the independent variable has no effect. Nothing stays exactly the same!
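To make the exam-score example concrete, here is a hedged sketch of checking whether the difference between the experimental and control groups exceeds normal variation at the p < .05 level; the scores are invented, and an independent-samples t-test is just one common way to run the check.

    # Comparing exam scores of the experimental and control groups.
    from scipy.stats import ttest_ind

    new_method_scores   = [82, 88, 75, 91, 84, 79, 86, 90]   # experimental group (invented)
    usual_method_scores = [74, 80, 70, 85, 77, 72, 78, 81]   # control group (invented)

    t_stat, p_value = ttest_ind(new_method_scores, usual_method_scores)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Difference is unlikely to be normal variation alone (at the .05 level).")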

  33. Measurement and Scaling Measurement means assigning numbers or other symbols to characteristics of objects according to certain prespecified rules. • One-to-one correspondence between the numbers and the characteristics being measured. • The rules for assigning numbers should be standardized and applied uniformly. • Rules must not change over objects or time.

  34. Measurement and Scaling Scaling involves creating a continuum upon which measured objects are located. Consider an attitude scale from 1 to 100. Each respondent is assigned a number from 1 to 100, with 1 = Extremely Unfavorable, and 100 = Extremely Favorable. Measurement is the actual assignment of a number from 1 to 100 to each respondent. Scaling is the process of placing the respondents on a continuum with respect to their attitude toward department stores.

  35. Nominal scale • Classifies data according to a category only. • E.g., which color people select. • Colors differ qualitatively not quantitatively. • A number could be assigned to each color, but it would not have any value. • The number serves only to identify the color. • No assumptions are made that any color has more or less value than any other color.

  36. Nominal scale • Assign subjects to groups or categories • No order or distance relationship • No arithmetic origin • Only count numbers in categories • Only present percentages of categories • Chi-square most often used test of statistical significance
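A brief sketch of the chi-square test mentioned above, assuming invented counts of colour choices and the default hypothesis of equal expected frequencies in each category.

    # Chi-square goodness-of-fit test on nominal (category) counts.
    from scipy.stats import chisquare

    observed_counts = [30, 45, 25]   # e.g. how many people chose red, blue, green (invented)

    chi2, p_value = chisquare(observed_counts)   # default expectation: an even split
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

A small p-value would suggest the choices are not evenly spread across the categories; note that only the counts, never the arbitrary category codes themselves, enter the calculation.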

  37. Primary Scales of Measurement: Ordinal Scale • A ranking scale in which numbers are assigned to objects to indicate the relative extent to which the objects possess some characteristic. • Can determine whether an object has more or less of a characteristic than some other object, but not how much more or less. • Any series of numbers can be assigned that preserves the ordered relationships between the objects. • In addition to the counting operation allowable for nominal scale data, ordinal scales permit the use of statistics based on centiles, e.g., percentile, quartile, median.

  38. Ordinal scale • Classifies data according to some order or rank; e.g. names ordered alphabetically. • With ordinal data, it is fair to say that one response is greater or less than another. • E.g. if people were asked to rate the hotness of 3 chili peppers, a scale of "hot", "hotter" and "hottest" could be used. Values of "1" for "hot", "2" for "hotter" and "3" for "hottest" could be assigned. • The gap between the items is unspecified.

  39. Ordinal scale • Can include opinion and preference scales • Median but not mean • No unique, arithmetic origin • Items cannot be added • In marketing research practice, ordinal scale variables are often treated as interval scale variables
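A short sketch of permissible ordinal-scale statistics, reusing the chili-pepper ratings (1 = hot, 2 = hotter, 3 = hottest) with invented responses.

    # Order-based statistics are permissible for ordinal data; the mean of the codes is not meaningful.
    import numpy as np

    ratings = np.array([1, 2, 2, 3, 3, 3, 1, 2])   # invented responses on the 1-3 hotness scale

    print("median:", np.median(ratings))                    # permissible: based only on order
    print("75th percentile:", np.percentile(ratings, 75))   # permissible: a centile
    # The arithmetic mean of these codes is not meaningful, because the gaps between
    # "hot", "hotter" and "hottest" are unspecified.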

  40. Primary Scales of Measurement: Interval Scale • Numerically equal distances on the scale represent equal values in the characteristic being measured. • It permits comparison of the differences between objects. • The location of the zero point is not fixed. Both the zero point and the units of measurement are arbitrary. • Any positive linear transformation of the form y = a + bx will preserve the properties of the scale. • It is not meaningful to take ratios of scale values. • Statistical techniques that may be used include all of those that can be applied to nominal and ordinal data, and in addition the arithmetic mean, standard deviation, and other statistics commonly used in marketing research.

  41. Interval scale • assumes that the measurements are made in equal units, i.e. the gaps between whole numbers on the scale are equal. • e.g. Fahrenheit and Celsius temperature scales • an interval scale does not have to have a true zero; e.g. a temperature of "zero" does not mean that there is no temperature – it is just an arbitrary zero point. • can't perform the full range of arithmetic operations; 40 degrees is not twice as hot as 20 degrees • permissible statistics: count/frequencies, mode, median, mean, standard deviation
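A sketch of the interval-scale property using the Celsius-to-Fahrenheit transformation y = 32 + 1.8x (a positive linear transformation of the form y = a + bx); the temperatures are invented.

    # Interval scales: equal differences are preserved under y = a + b*x, ratios are not.
    celsius = [10, 20, 30, 40]
    fahrenheit = [32 + 1.8 * c for c in celsius]   # [50.0, 68.0, 86.0, 104.0]

    # Equal differences on one scale stay equal on the other...
    print(celsius[1] - celsius[0], celsius[3] - celsius[2])              # 10 10
    print(fahrenheit[1] - fahrenheit[0], fahrenheit[3] - fahrenheit[2])  # 18.0 18.0

    # ...but ratios are not preserved: 40 degrees is not "twice as hot" as 20 degrees.
    print(celsius[3] / celsius[1])        # 2.0
    print(fahrenheit[3] / fahrenheit[1])  # about 1.53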

  42. Primary Scales of Measurement: Ratio Scale • Possesses all the properties of the nominal, ordinal, and interval scales. • It has an absolute zero point. • It is meaningful to compute ratios of scale values. • Only proportionate transformations of the form y = bx, where b is a positive constant, are allowed. • All statistical techniques can be applied to ratio data.

  43. Ratio scale • similar to interval scales except that the ratio scale has a true zero value. • e.g. the time something takes • allows you to compare differences between numbers. • permits the full range of arithmetic operations. • if a train journey takes 2 hours and 30 min, then it is half as long as a journey which takes 5 hours.

  44. Ratio scale • Indicates actual amount of variable • Shows magnitude of differences between points on scale • Shows proportions of differences • All statistical techniques useable • Most powerful with most meaningful answers • Allows comparisons of absolute magnitudes
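A matching sketch of the ratio-scale property, using the train-journey example and the proportionate transformation y = 60x (hours to minutes); because the zero point is absolute, ratios survive the change of units.

    # Ratio scales: ratios are preserved under a proportionate transformation y = b*x.
    hours = [2.5, 5.0]                  # the 2 h 30 min and 5 h journeys from the earlier slide
    minutes = [60 * h for h in hours]   # [150.0, 300.0]

    print(hours[0] / hours[1])      # 0.5 - the first journey is half as long
    print(minutes[0] / minutes[1])  # 0.5 - the same ratio in minutes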

  45. Primary Scales of Measurement

  46. Primary scales of measurement (runners example) • Nominal: numbers assigned to runners, e.g. 4, 81, 9 • Ordinal: rank order of winners, e.g. third place, second place, first place • Interval: performance rating on a 0 to 10 scale, e.g. 8.2, 9.1, 9.6 • Ratio: time to finish in seconds, e.g. 15.2, 14.1, 13.4

  47. Always use the most powerful scale possible – adding sophistication to scales • Concept: desire to watch Star Wars movies • If a Star Wars movie is on television, will you watch it? Yes _____ No _____ • How likely are you to watch a Star Wars movie shown on television? Very Likely _____ Likely _____ Indifferent _____ Unlikely _____ Very Unlikely _____

  48. A classification of scaling techniques • Comparative scales: paired comparison, rank order, constant sum, others • Non-comparative scales: continuous rating scales, itemized rating scales (semantic differential, Likert, Stapel)

  49. Types of Scaling Techniques • Comparative scales • Involve the respondent directly comparing stimulus objects. • e.g. How does Pepsi compare with Coke on sweetness? • Non-comparative scales • The respondent scales each stimulus object independently of the other objects. • e.g. How would you rate the sweetness of Pepsi on a scale of 1 to 10?

  50. A Comparison of Scaling Techniques • Comparative scales involve the direct comparison of stimulus objects. Comparative scale data must be interpreted in relative terms and have only ordinal or rank order properties. • In noncomparative scales, each object is scaled independently of the others in the stimulus set. The resulting data are generally assumed to be interval or ratio scaled.
