The Ethics of Big Data

A new study was published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS). The study is titled “Experimental evidence of massive-scale emotional contagion through social networks”, and it analyzes an experiment that Facebook conducted on its users, in collaboration with researchers from UCSF and Cornell, almost two years ago. The experiment was a success: it showed that Facebook was able to alter the emotional state of its users by making subtle, deliberate changes to the content users were shown in their news feeds. The study was subsequently edited for publication by a Princeton professor and accepted by the prestigious National Academies, which, by the way, include the Institute of Medicine (IOM).

The experiment “manipulated” the News Feeds of 689,003 randomly selected people, and then measured the effect that increased exposure to either positive or negative content from their own friends had on the subjects’ own Facebook postings. The results show a modest but statistically significant ability to affect people’s emotional state by ever so slightly altering what they see on the Internet. The study concludes by pointing out that “the well-documented connection between emotions and physical well-being suggests the importance of these findings for public health”. And if this line of thought leadership is not creepy enough for you, there is one more little thing to note here. The subjects of this bold experiment had no idea that their friend feeds were being manipulated and that they were being studied by Facebook.
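To make the design concrete, here is a minimal sketch of the experiment's logic as described above: randomly assign each user to an arm, probabilistically withhold feed posts whose emotional tone matches that arm, then compare the tone of the subjects' own subsequent posts. This is a hypothetical illustration of the protocol, not Facebook's actual code; the function names, the post structure, and the removal rate are all my assumptions.

```python
import random

# Hypothetical illustration of the experiment's design, not Facebook's code.
# Posts are dicts with an "emotion" field: "positive", "negative" or "neutral".

def assign_condition(user_id):
    """Randomly place a user in one of the experimental arms."""
    return random.choice(["reduce_positive", "reduce_negative", "control"])

def filter_feed(feed, condition, removal_rate=0.5):
    """Probabilistically drop friend posts whose emotion matches the condition."""
    target = ("positive" if condition == "reduce_positive"
              else "negative" if condition == "reduce_negative"
              else None)
    kept = []
    for post in feed:
        if post["emotion"] == target and random.random() < removal_rate:
            continue  # withhold this post from the user's News Feed
        kept.append(post)
    return kept

def positivity_rate(posts):
    """Fraction of a user's own posts that are positive (the outcome measure)."""
    if not posts:
        return 0.0
    return sum(p["emotion"] == "positive" for p in posts) / len(posts)
```

Comparing the average `positivity_rate` of subjects' own posts across the three arms is, in essence, the entire measurement; the published effect sizes were small but detectable at this scale.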

According to The Atlantic, which first broke the story, neither Facebook nor the authors were available for comment. However, the Princeton professor who prepared the study for publication, Prof. Susan Fiske, agreed to talk with The Atlantic's reporter. It seems that she had some initial concerns, which were addressed by the authors when “they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people's News Feeds all the time... I understand why people have concerns. I think their beef is with Facebook, really, not the research”. Yes, the research itself is “inventive and useful”, according to Prof. Fiske, and its “originality” should not be lost because “ethics are kind of social decisions. There's not an absolute answer. And so the level of outrage that appears to be happening suggests that maybe it shouldn't have been done...” As it turns out, now that we know about the study, Prof. Fiske is “a little creeped out, too”.

The idea here seems to be that the definition of ethics at any given time depends on the personal opinion of those in the know. So if you conduct experiments on human subjects in secret, only your own opinion counts towards the definition of ethics. If the study becomes public, and the public has a different opinion about ethics, you just say oops, maybe we shouldn’t have done that, but the results are way too cool, so let’s use them anyway. If the study was indeed approved by the review boards at UCSF or Cornell (contrary to explicit PNAS policy, there is no note to that effect in the article), it also appears that institutional review boards at academic centers will approve experiments on human subjects without consent or notification, based on a solid track record of similar transgressions that went unnoticed and unchallenged in the past. Stated discomfort and feelings of creepiness emerge briefly only after public disclosure, and then we move on to the next adventure.

This little experiment is a perfect illustration of what Big Data can do for us. Big Data can spread mass happiness without “in-person interactions and nonverbal cues”, which can in turn induce “physical well-being” and ultimately improve the health of the public, at presumably much lower per capita costs. There you have it: two of the Triple Aim goals are easily achievable by technology alone. All we need to figure out now is how to hit the third goal of better care for the individual, and this too is amenable to Big Data solutions once we get past the “creepiness” hurdle.

A recent article from Bloomberg describes precisely how highly individualized care is already provided to more fortunate patients through the beneficence of Big Data. Mammoth hospital systems, either turned health insurers or just apprehensive about having to accept risk for their patients’ outcomes, are purchasing information from Big Data brokers, including credit card purchases, household and demographic information, and who knows what else. Combined with the clinical and claims data these entities already have, Big Data allows health corporations to profile their customers and identify not only the ones that may put them at increased financial risk in the future, but also, according to The New York Times, the customers most likely to bring in increased revenues. And just like any other big business, health systems can then devise marketing and outreach strategies to mitigate their risk and increase their profits. Or, in terms better suited for public consumption, they can provide better patient-centered care to individuals to help them get healthy and stay healthy. Problem solved.
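Mechanically, the profiling described above amounts to joining brokered consumer records with in-house claims data and ranking customers by a score. The sketch below illustrates that mechanic only; the field names, weights, and records are hypothetical and do not come from Bloomberg, the Times, or any health system's actual model.

```python
# Hypothetical illustration of claims-plus-broker profiling. All fields,
# weights and records are invented for demonstration purposes.

claims = {  # data the health system already holds
    "patient_a": {"er_visits": 3, "chronic_conditions": 2},
    "patient_b": {"er_visits": 0, "chronic_conditions": 0},
}

brokered = {  # purchased household and spending signals, keyed to patients
    "patient_a": {"income_decile": 3, "gym_membership": False},
    "patient_b": {"income_decile": 9, "gym_membership": True},
}

def risk_score(patient_id):
    """Higher score = higher projected financial risk to the system."""
    c = claims[patient_id]
    b = brokered[patient_id]
    return (2.0 * c["er_visits"]
            + 1.5 * c["chronic_conditions"]
            - 0.2 * b["income_decile"]
            - 1.0 * b["gym_membership"])

# Rank customers for "outreach": riskiest (or most profitable) first.
ranked = sorted(claims, key=risk_score, reverse=True)
```

The point of the sketch is how little machinery is required: once the join key exists, a one-line `sorted` call turns brokered consumer data into a marketing and outreach target list.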

Big Data is by definition a weapon of mass destruction. Some have likened Big Data to nuclear power, which can be used for unspeakable horrors or for the public good. This is an apt analogy, if we remember that nuclear power was first used for mass destruction, that it was (and still is) used for terrorizing nations, and that when it is used to generate electricity, mountains of safety measures must be employed, and even then accidents do occur, with dire consequences. Following the public discovery of unprecedented government surveillance of citizens’ communications (yes, that is Big Data), President Obama asked us to remember that “the folks at NSA and other intelligence agencies are our neighbors and our friends”, and that they “are not dismissive of civil liberties”. Of course not, and the folks working in nuclear weapons plants or nuclear reactors are also our friends and neighbors, and they are not mass murderers either. And yet we found it necessary to enforce strict regulations on their work, instead of trusting their better angels and personal ethics.

The Facebook trial balloon, floated nonchalantly by the National Academy of Sciences to gauge public reaction to mass psychological experimentation on people, is most likely indicative of a much larger iceberg in the making. Creepiness is not a legal term, and right now we are allowing every garage entrepreneur, every corporate entity and every governmental department to collect, distribute, sell, purchase and utilize unlimited amounts of Big Data for any purpose they see fit, including mass deception of the public, with no legal guidance and no legal consequences. We would never dream of a similar arrangement for nuclear materials. The polite reactions from self-appointed “privacy advocates”, urging “transparency” and patients’ “ownership” of their data, are woefully inadequate, because they demonstrate an utter lack of understanding of what Big Data is, how Big Data works, and how Big Data is being used. Besides, this is not about “privacy” anymore. This is about freedom, liberty and the non-enumerated right to human dignity.
