No matter the field, a researcher who collects data of any kind will, at some point, have to analyze it. And odds are they'll turn to statistics to figure out what the data can tell them.
Disciplines as varied as the social sciences, marketing, manufacturing, the pharmaceutical industry and physics all try to make inferences about a large population of individuals or things based on a relatively small sample. But many researchers are still using antiquated statistical techniques that have a relatively high probability of steering them wrong. And that's a problem if it means we're misunderstanding, say, how well a potential new drug works or what effect a treatment has on a city's water supply.
As a statistician who's been following advances in the field, I know there are vastly improved methods for comparing groups of individuals or things, as well as for understanding the association between two or more variables. These modern robust methods offer the opportunity to achieve a more accurate and more nuanced understanding of data. The trouble is that these better techniques have been slow to make inroads within the larger scientific community.