It is encouraging that computational social scientists are trying to anticipate the threats to trust implicit in their work. Any data on human subjects inevitably raise privacy issues, and the real risks of abuse of such data are difficult to quantify. Although the risks posed by researchers seem far lower than those posed by governments, private companies and criminals, the researchers' soul-searching is justified: abuse or sloppiness could do untold damage to the emerging field.
Rules are needed to ensure that data can be safely and routinely shared among scientists, averting a Wild West in which researchers compete for key data sets on whatever terms they can get. The complexities of anonymizing data correctly, and the inexperience of local ethics committees in such matters, call for an institutionalized approach to setting standards and protocols for the use of personal data, as the US National Academy of Sciences recently and rightly recommended. Solid, well-thought-out rules for research are essential for building trust.