
June 20th, 2018 - Effect Size

The Strategic Partner for your Qualitative & Quantitative Research Needs

An internet image search of “statistically significant” often yields memes that imply joy and elation. Finding evidence of a hypothesized effect can be exciting, but many people misunderstand what this means. Statistical significance by itself tells you nothing about the extent of the effect you have.

 

A statistically significant finding implies that one can be confident that a hypothesized effect (e.g., a difference between two groups, a relationship between variables, etc.) is present. The lower the p-value, the greater the statistical significance and the greater the level of confidence. However, it tells you little about the size of the effect you have.

 

Consider this analogy: a positive pregnancy test means that a woman can be confident that she is expecting. However, the test says nothing about the baby’s condition, size, health, gender, or appearance. Other tests, such as amniocentesis or an ultrasound, are required to determine that information.

 

In research, the “effect size” is a general measure of an effect’s magnitude. Different types of effect size measures exist, depending on the type of analysis performed. It is possible to obtain a result with strong statistical significance, but with a very small effect size.
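
For a comparison between two groups, one widely used effect-size measure is Cohen's d: the difference between the group means divided by the pooled standard deviation. Here is a minimal sketch of that calculation; the ratings and group labels are invented purely for illustration:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: the standardized difference between two group means."""
    na, nb = len(a), len(b)
    # Pooled standard deviation, weighting each group by its degrees of freedom.
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical favorability ratings (0-10 scale) from two groups of respondents.
group_a = np.array([7, 8, 6, 7, 9, 8, 7, 6, 8, 7], dtype=float)
group_b = np.array([6, 7, 6, 5, 8, 7, 6, 6, 7, 6], dtype=float)
print(round(cohens_d(group_a, group_b), 2))
```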

 

How so? Well, when comparing two groups (say, the difference in favorability ratings of a company between those who identify as male and those who identify as female), significance depends on several factors that affect the p-value (three for this example; a short simulation after the list illustrates their interplay):

 

  1. Difference between the two groups: the larger the difference between the group means, the more likely the result is to be statistically significant.

  2. Within-group variability: there is a greater probability of statistical significance if the variance within the groups is small (since low variability makes the difference between the groups stand out more clearly).

  3. Sample size: the larger the groups, the better the chance of finding statistical significance. When assessing the likability of a TV program, the opinions of only three people yield little confidence for making inferences about the entire viewing population, whereas the opinions of a thousand certainly increase it.
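
To see how these three factors trade off against effect size, here is a minimal simulation (purely illustrative, with made-up ratings and group sizes) using a two-sample t-test: the same tiny difference that is not significant with 30 respondents per group becomes highly significant with 300,000 per group, yet the effect size stays just as small.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n_per_group, mean_diff, sd):
    """Draw two hypothetical favorability-rating samples and compare them."""
    group_a = rng.normal(7.0, sd, n_per_group)              # e.g., one group of respondents
    group_b = rng.normal(7.0 + mean_diff, sd, n_per_group)  # e.g., the comparison group
    t, p = stats.ttest_ind(group_a, group_b)
    d = (group_b.mean() - group_a.mean()) / np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
    print(f"n={n_per_group:>7}  mean diff={mean_diff}  sd={sd}  p={p:.4f}  d={d:.3f}")

simulate(30, 0.1, 1.0)       # tiny difference, small sample: not significant
simulate(300_000, 0.1, 1.0)  # same tiny difference, huge sample: significant, but d is still tiny
simulate(30, 1.0, 1.0)       # large difference, small sample: likely significant, large d
simulate(30, 1.0, 3.0)       # same difference but high within-group variability: weaker evidence
```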

 

Consider a recent study conducted by Facebook: the newsfeeds of over 600,000 randomly selected users were surreptitiously manipulated. Some saw more positive status updates from their friends while others saw more negative ones. Facebook then tracked the content of users’ subsequent posts. There was a statistically significant finding: those who saw positive statuses tended to post positive statuses in turn, while the opposite was found for those who saw negative ones.

 

The study concluded that emotions can be “contagious” in virtual form. However, the effect sizes for the differences between groups were very small, ranging from .008 to .02 (as a rough rule of thumb, an effect size of .1 is considered small). Such a small difference would not normally reach statistical significance, but with over half a million participants, it did. Not surprisingly, people were naturally frightened by how easily one can be “controlled”. However, the study actually shows that the effect, while likely present, was so minimal that it hardly makes a difference for most people.
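
A rough back-of-the-envelope check makes this concrete. Assuming, for simplicity, that the reported values behave like standardized mean differences and that the roughly 600,000 users were split evenly across conditions (both assumptions are ours, not the study's), even the largest reported effect still clears the significance bar easily:

```python
import numpy as np
from scipy import stats

d = 0.02               # roughly the largest effect size reported
n_per_group = 300_000  # assumed even split of the ~600,000 users

# For a two-sample t-test with equal group sizes, t is approximately d * sqrt(n/2).
t = d * np.sqrt(n_per_group / 2)
p = 2 * stats.t.sf(t, df=2 * n_per_group - 2)
print(f"t approx {t:.1f}, p approx {p:.2e}")  # far below .05 despite a negligible effect
```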

 

If a market research company reports a significant finding, be sure to inquire further! Ask for the effect sizes and how they should be interpreted in the scope of the research. Otherwise, you may be misled into making decisions that actually make very little difference.
