
Wednesday, December 13, 2017

Can Facebook Prevent Suicide? Ethical Questions Arising from AI

In today’s hyperconnected world, we are generating and collecting so much data that it is beyond human capability to sift through it all. One application of artificial intelligence is identifying patterns and deviations in posts that may signal intent. Facebook is using AI in this way to extract value from its own Big Data trove. While that may be applied to a good purpose, it also raises ethical concerns.
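To make the idea of “identifying patterns that signal intent” concrete, here is a minimal sketch of how a text classifier of this general kind could flag posts for human review. It assumes a toy TF-IDF plus logistic-regression model with invented example posts and an arbitrary threshold; it is not Facebook’s actual system, which has not been published in detail.

    # Hypothetical sketch: flagging posts whose text patterns resemble prior
    # flagged posts. Phrases, labels, and the threshold are invented for
    # illustration only; this is not Facebook's method.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny invented training set: 1 = post flagged for human review, 0 = not flagged.
    posts = [
        "I can't see a way forward anymore",
        "I feel like a burden to everyone",
        "Great hike with friends this weekend",
        "Excited to start the new job on Monday",
    ]
    labels = [1, 1, 0, 0]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(posts)
    model = LogisticRegression().fit(X, labels)

    # Score a new post; anything above a chosen threshold would be routed
    # to trained human reviewers rather than acted on automatically.
    new_post = ["Lately it feels like nothing will get better"]
    score = model.predict_proba(vectorizer.transform(new_post))[0, 1]
    if score > 0.5:  # threshold chosen arbitrarily for this example
        print("Flag for human review, score =", round(score, 2))

Even in this toy form, the sketch surfaces the ethical questions the post goes on to discuss: who sets the threshold, what happens to the people whose posts are scored, and who sees the results.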
Where might one get insight into this issue? In my own search, I found an organization called PERVADE (Pervasive Data Ethics for Computational Research). A collaboration among six universities that received its funding this September, it is working to frame the questions and move toward answers.
I reached out to the organization for some expert views on the ethical questions related to Facebook’s announcement that it was incorporating AI in its expanded suicide-signal detection effort. That led to a call with one of the group’s members, Matthew Bietz.
Bietz told me the people involved in PERVADE are researching the ramifications of pervasive data, which encompasses continuous data collection — not just from what we post to social media, but also from the “digital traces that we leave behind anytime we’re online,” such as when we Google or email. New connections from the Internet of Things (IoT) and wearables further contribute to the growing body of “data about spaces we’re in,” he said. As this phenomenon is “relatively new,” it opens up new questions to explore with respect to “data ethics.”

Read more in The Ethics of AI for Suicide Prevention.