We are often told not to believe everything we see and read, but how many people actually follow this ‘rule’? Would you believe a scientific report that offered statistical evidence that humans can see the future? Here's why you shouldn't:

In 2015, a report was published claiming that chocolate could help you lose weight. Not surprisingly, the story gained huge interest with the public; it was something everyone wanted to hear. Large, reputable organisations released articles supporting this new discovery, and what made the claim even more believable was that it was ‘supported by science’. Unfortunately, to people's despair, the entire claim was a hoax. The real question now is: how? How did this report fool thousands of people, including outlets such as the BBC and the Huffington Post?

The answer is simple: P-Hacking.

Scientists often use p-values to assess the significance of results. This value, also known as the calculated probability, is the probability of observing the results found, or something more extreme, if the null hypothesis of the experiment is true. In this case, the null hypothesis is ‘chocolate doesn't help you lose weight, and the results found were simply coincidental.’ To be considered worthy of publication, most results must have a p-value of less than 0.05; in other words, there must be less than a 5% chance that data this extreme would turn up purely by luck if the null hypothesis were true. The p-value of the data that ‘proved’ chocolate helps weight loss was less than 0.05, so the data must have proven that eating chocolate helps weight loss, right? Wrong.
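To make the idea concrete, here is a minimal sketch of a p-value computed with a permutation test. The weight-loss numbers are invented purely for illustration; they are not the study's actual data.

```python
import random

# A permutation test makes the p-value concrete. The weight-loss numbers
# below are invented for illustration; they are NOT the study's real data.
random.seed(0)

chocolate_group = [2.8, 2.5, 2.1, 2.6, 2.4]  # kg lost, hypothetical
control_group = [2.2, 2.0, 2.3, 1.9, 2.1]    # kg lost, hypothetical


def mean(xs):
    return sum(xs) / len(xs)


observed_diff = mean(chocolate_group) - mean(control_group)

# Under the null hypothesis the group labels are meaningless, so shuffle
# them many times and count how often pure chance produces a difference
# at least as large as the one observed. That proportion is the p-value.
pooled = chocolate_group + control_group
n = len(chocolate_group)
extreme = 0
TRIALS = 10_000
for _ in range(TRIALS):
    random.shuffle(pooled)
    if mean(pooled[:n]) - mean(pooled[n:]) >= observed_diff:
        extreme += 1

p_value = extreme / TRIALS
print(f"observed difference: {observed_diff:.2f} kg, p-value: {p_value:.3f}")
```

If the p-value printed here falls below 0.05, the difference between the groups would conventionally be called statistically significant, which is exactly the threshold the hoax study exploited.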

The experiment had three groups: one on a low-carb diet, another on a low-carb diet plus 1.5 oz of chocolate, and a control group. After three weeks, the control group's weight stayed exactly the same, while both low-carb groups lost around 2.3 kg per person. The interesting thing is that the chocolate group lost weight 10% faster. The finding was statistically significant, with a p-value below 0.05. This ‘great’ discovery spread fast, making the headlines of many news agencies. However, the researchers had deliberately designed the experiment to make a statistically significant result likely even though no real effect existed; it was more likely to produce ‘chocolate helps weight loss’ than ‘chocolate doesn't help weight loss’. The sample size was very small, just five people per group, and for each person 18 different things were tracked: cholesterol, sodium levels, weight, sleep quality, and so on. This meant the headline could just as easily have been ‘chocolate improves sleep quality’ or ‘eating chocolate leads to better wellbeing’. If one variable didn't show a good result, another would; weight loss just happened to reach statistical significance by chance.

P-values work as intended when you test a single variable, for example just weight. Once you start adding more variables, such as blood protein levels and sleep, the probability of a false result appearing true increases: the data can be cut and processed until something yields a p-value below 0.05. This is known as p-hacking. Researchers decide how to analyse their data after seeing the results, and they can steer that analysis so the p-value goes down. Ways to do this include measuring two dependent variables instead of one, controlling for gender, collecting extra observations, and dropping experimental conditions. Combined, these tricks can raise the chance of a positive result that is actually false to roughly 60%. The false claim that chocolate helps weight loss seemed true because of p-hacking: the researchers analysed the data in whatever way produced statistically significant findings, even though the underlying effect wasn't real.
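The danger of tracking 18 outcomes can be sketched in a few lines of Python. This simulation assumes the tests are independent, which is a simplification, but it shows how quickly the false-positive rate climbs when you give chance 18 shots at the 0.05 threshold.

```python
import random

# Why tracking 18 outcomes is dangerous: even if every variable is pure
# noise, each test alone has a 5% false-positive rate, so the chance that
# at least one of them looks "significant" is 1 - 0.95**18, about 60%.
random.seed(1)

ALPHA = 0.05
N_VARIABLES = 18
N_STUDIES = 10_000

studies_with_false_positive = 0
for _ in range(N_STUDIES):
    # Under the null hypothesis, each test's p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(N_VARIABLES)]
    if min(p_values) < ALPHA:
        studies_with_false_positive += 1

simulated = studies_with_false_positive / N_STUDIES
analytic = 1 - (1 - ALPHA) ** N_VARIABLES
print(f"analytic: {analytic:.2f}, simulated: {simulated:.2f}")
```

Both numbers come out near 0.60: run enough tests on pure noise and a ‘significant’ headline is more likely than not.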

Maybe now, you'll be more cautious and question what you read. Who knows, maybe this whole article is fake?