Kevan Oswald

Hypothesis Testing



The first step in hypothesis testing is to form a null hypothesis and an alternative hypothesis. The null hypothesis states that there is no difference; it is a statement of the status quo. The alternative hypothesis is one in which some difference is expected. When the alternative hypothesis is accepted, changes are implemented. In marketing research, rejecting the null hypothesis leads to accepting the desired conclusion. The null hypothesis is always the hypothesis that is tested.


Example: A company is considering redesigning the packaging for one of its products. The package will be redesigned if more than 60% of the people surveyed like the new package design better than the old package design. The null hypothesis is that 60% or fewer of the survey respondents will prefer the redesign. If the null hypothesis is rejected because more than 60% of respondents like the new design, then the product packaging will be redesigned.
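
As a rough sketch of how this could be tested in practice, the snippet below uses Python's scipy.stats.binomtest to run a one-sample proportion test. The survey counts (135 of 200 respondents) are made up purely for illustration:

```python
from scipy.stats import binomtest

# Hypothetical survey: 135 of 200 respondents prefer the new package design.
k, n = 135, 200

# H0: 60% or fewer prefer the new design; H1: more than 60% prefer it.
result = binomtest(k, n, p=0.60, alternative="greater")

print(f"sample proportion = {k / n:.2f}, p-value = {result.pvalue:.4f}")
# A p-value at or below the chosen significance level (e.g., 0.05) would
# lead us to reject H0 and go ahead with the redesign.
```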


A Type I error occurs when the sample results lead to the rejection of the null hypothesis when it is in fact true (the results show that more than 60% like the package redesign when in fact 60% or fewer do). In other words, we rejected the null hypothesis and we shouldn't have. The probability of a Type I error is also called the level of significance. We control for a Type I error by establishing, before the test, a tolerable level of risk of committing one; this level of risk also affects the required sample size.


A Type II error occurs when the null hypothesis is not rejected when in fact it is false (the results suggest that 60% or fewer like the package redesign when in fact more than 60% do). In other words, we didn't reject the null hypothesis and we should have.
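
To make the two error types concrete, here is a small, hypothetical simulation in Python. It repeatedly draws surveys of 200 respondents, once with the true preference at exactly 60% (so any rejection is a Type I error) and once with it at 65% (so any failure to reject is a Type II error). The sample size, true proportions, and number of trials are all assumptions chosen for illustration:

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n, alpha, trials = 200, 0.05, 2000

def rejection_rate(true_p):
    # Fraction of simulated surveys in which H0 (p <= 0.60) is rejected.
    rejections = 0
    for _ in range(trials):
        k = rng.binomial(n, true_p)
        if binomtest(int(k), n, p=0.60, alternative="greater").pvalue <= alpha:
            rejections += 1
    return rejections / trials

# When the null is true (exactly 60%), any rejection is a Type I error.
print("Type I error rate :", rejection_rate(0.60))      # close to alpha
# When the null is false (65%), any failure to reject is a Type II error.
print("Type II error rate:", 1 - rejection_rate(0.65))
```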


Parametric or Nonparametric: Hypothesis tests can be classified as parametric or nonparametric. Parametric tests assume that the variables are measured on at least an interval scale, whereas nonparametric tests assume that the variables are measured on a nominal or ordinal scale. Parametric and nonparametric tests can be further classified by how the data are collected: one sample or two samples, and if two samples, whether they are independent or paired.


T-Test for Parametric Testing: The most common parametric test is the t-test. A t-test helps determine whether a difference is significant or due to chance. With a two-sample t-test we learn whether two groups differ, such as whether users and non-users of a brand differ in their perception of the brand. With a one-sample t-test we ask whether a single variable differs from a known standard, such as whether at least 60% of those surveyed like the new package design. With a paired-sample t-test, two sets of observations come from the same respondents, such as a survey in which each respondent rates two competing brands.

https://youtu.be/oIpzdTc0reI
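
Here is a minimal sketch of the three t-test variants described above, using scipy.stats with made-up rating data standing in for survey responses (the means, spreads, and sample sizes are assumptions, not real results):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Made-up 1-to-7 brand-perception ratings.
users     = rng.normal(5.2, 1.0, size=50)            # brand users
non_users = rng.normal(4.8, 1.0, size=50)            # non-users
brand_a   = rng.normal(5.0, 1.0, size=40)            # same respondents rate brand A...
brand_b   = brand_a + rng.normal(0.3, 0.5, size=40)  # ...and a competing brand B

# One-sample: does the mean rating differ from a known standard of 5.0?
print(stats.ttest_1samp(users, popmean=5.0))

# Two-sample (independent): do users and non-users differ in perception?
print(stats.ttest_ind(users, non_users))

# Paired-sample: do the same respondents rate the two brands differently?
print(stats.ttest_rel(brand_a, brand_b))
```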


Nonparametric Testing: Nonparametric tests include the Kolmogorov-Smirnov (K-S) test, which measures goodness-of-fit; the Mann-Whitney U test, which ranks cases in order of increasing size to test whether two independent samples come from the same distribution; and the Wilcoxon matched-pairs test, which analyzes paired differences while accounting for the magnitude of those differences.
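
The sketch below shows how each of these three tests could be run with scipy.stats on hypothetical data; the distributions and sample sizes are assumed purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Kolmogorov-Smirnov: goodness-of-fit of a sample against a standard normal.
sample = rng.normal(0, 1, size=100)
print(stats.kstest(sample, "norm"))

# Mann-Whitney U: do two independent samples come from the same distribution?
group_a = rng.normal(5.0, 1.0, size=40)
group_b = rng.normal(5.5, 1.0, size=40)
print(stats.mannwhitneyu(group_a, group_b))

# Wilcoxon matched-pairs: paired differences, ranked by their magnitude.
before = rng.normal(5.0, 1.0, size=30)
after  = before + rng.normal(0.4, 0.5, size=30)
print(stats.wilcoxon(before, after))
```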


Chi-Squared: Chi-squared is the most common nonparametric test. It compares observed frequencies with expected frequencies and helps determine whether a relationship exists between two variables. It answers the question of whether the observed pattern of frequencies corresponds to an expected pattern and is often used to compare proportions. The null hypothesis is that there is no association between the variables.


Chi-squared is commonly used in cross-tabulation analysis. The chi-squared test produces a chi-squared statistic, degrees of freedom, and a p-value that help determine whether the variables are associated or independent. The p-value is the one to pay the most attention to. It is the probability of observing a result at least as extreme as the one in the sample, assuming the null hypothesis is true. If the p-value is less than or equal to .05, the variables are typically considered associated: we reject the null hypothesis and conclude that something other than chance is producing such a large deviation. For example, a p-value of 0.02 means that, if the variables were truly independent, a deviation this large would occur only 2% of the time by chance alone, so it is very likely that other factors are involved. If the p-value is greater than .05, we conclude that the variables are likely independent. A p-value of 0.7, for example, means that a deviation at least this large from what was expected would occur about 70% of the time by chance alone.
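
For illustration, here is a minimal chi-squared test on a hypothetical 2x2 cross-tabulation using scipy.stats.chi2_contingency; the counts and the "package preference by age group" framing are made up:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-tab: package preference by age group.
#                  prefers new  prefers old
table = np.array([[55,          45],    # under 35
                  [40,          60]])   # 35 and over

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, df = {dof}, p-value = {p:.4f}")
# p <= 0.05 suggests preference and age group are associated;
# a larger p-value suggests the two variables are likely independent.
```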
