Thursday, November 19, 2015

Whose data analysis is correct?

We in advanced data-driven analytics believe that the patterns in the data do not lie.  Yet our biases drive which features we believe are key to interpreting the data, which pattern we pick as signaling the anomaly we want to detect, and so on.  The experiment detailed in Nature here by Dr. Raphael Silberzahn, Assistant Professor, Department of Managing People in Organizations, IESE Business School, Barcelona, Spain (bio and research here) and Dr. Eric L. Uhlmann, Associate Professor, Organizational Behavior, INSEAD, Singapore (bio and research here), highlights that accuracy in data analytics today requires more than one method and, more importantly, more than one group of data scientists before conclusions are drawn.

"The experiment Last year, we recruited 29 teams of researchers and asked them to answer the same research question with the same data set. Teams approached the data with a wide array of analytical techniques, and obtained highly varied results. Next, we organized rounds of peer feedback, technique refinement and joint discussion to see whether the initial variety could be channelled into a joint conclusion. We found that the overall group consensus was much more tentative than would be expected from a single-team analysis."

In the near future, I hope that machines will simply consume data and raise the anomaly above the noise on their own.  But we build the machines, so will the machines be biased too?

The data set:

"All teams were given the same large data set collected by a sports-statistics firm across four major football leagues. It included referee calls, counts of how often referees encountered each player, and player demographics including team position, height and weight. It also included a rating of players' skin colour. As in most such studies, this ranking was performed manually: two independent coders sorted photographs of players into five categories ranging from 'very light' to 'very dark' skin tone."

The article concluded:

"Of the 29 teams, 20 found a statistically significant correlation between skin colour and red cards (see 'One data set, many analysts'). The median result was that dark-skinned players were 1.3 times more likely than light-skinned players to receive red cards. But findings varied enormously, from a slight (and non-significant) tendency for referees to give more red cards to light-skinned players to a strong trend of giving more red cards to dark-skinned players. After reviewing each other's reports, most team leaders concluded that a correlation between a player having darker skin and the tendency to be given a red card was present in the data."
