Tiptoeing Around Facebook In Healthcare

Data privacy scandals, a hand in rigged elections, the spread of fake news: Facebook has some tough months behind it, and users are not happy with the social media giant’s performance. However, Mark Zuckerberg’s company does not only have a political and social impact; it is also quite relevant in healthcare. We looked at what Facebook currently does in healthcare and evaluated which of those efforts are viable paths for the future.

What have you done to the world, Zuck?

In November 2018, a Fortune poll suggested that Americans consider Facebook the least trustworthy of all the major technology companies when it comes to protecting user data. Only 22% of Americans said that they trust Facebook with their personal information, far fewer than Amazon (49%), Google (41%), Microsoft (40%), and Apple (39%). That might come as little surprise in the wake of events such as the scandal around Cambridge Analytica, the British data-mining firm affiliated with Donald Trump’s presidential campaign, which harvested personal information from more than 87 million Facebook users – among them Mark Zuckerberg himself – in an attempt to influence the results of the US presidential election.

The social media machine, built on sophisticated, computer-driven engines for extracting users’ personal information and data trails, found itself in the middle of social and political debates as it failed to consider its influence on its users – be they individuals, companies, publishers or nation states. The issue of filter bubbles, the tendency to use the platform as an outlet for rage, and the creation of an economic system that rewarded publishers for sensationalism rather than accuracy or depth had all eroded trust. However, it was the road to electing Trump that irrevocably damaged Facebook’s credibility. By the end of the presidential campaign, the top fake stories on the platform were generating more engagement than the top real ones, and it was not just Macedonian troll farms producing clickbait for money, but Russian agencies attempting to sway the outcome of an election in their own interest. Zuck’s company did not notice it, or did not deal with it, in time. The same might be true of Facebook becoming a central tool for spreading propaganda against the Rohingya in Myanmar – even the UN blamed the platform for it – or of stepping up support for Rodrigo Duterte in the Philippines.


As the social media giant seems to be an all-encompassing entity, we looked at its relation to healthcare. Does it handle private health data, and if so, how exactly? Does it allow the dissemination of outlandish, health-related fake news? How does it fare in community building, and what tools of its own does it offer for health issues?

Sharing intimate health data? Not cool.

It has long been known that apps outside of the Facebook ecosystem can and do willingly share data with the company to make it easier to reach existing and new users on the platform through ads. However, a couple of days ago, a Wall Street Journal report found that this is especially troubling in the case of health and fitness apps, which could be sharing anything from menstruation cycles through food allergies to blood pressure data. It turned out that quite popular health apps take part in the practice.

One example is the period and fertility tracker Flo Health: Facebook can match information it collects from the software’s ovulation-tracking feature to real profiles, so users can then be labeled as viable targets for ads presenting products for expecting mothers and new parents. Another company involved is the meditation tool Breethe, which has acknowledged its mistake regarding the issue. The most troubling thing is that none of the apps involved – at least 11 of the 70 that The Wall Street Journal examined – bothered to notify users about the data sharing through privacy policies or terms of service. Neither did Facebook.

In every single instance, but especially in the case of sensitive health information, users have the right to know who collects their data and for what purposes; otherwise, there won’t be any trust in tech companies – including the ones dealing with digital health. So what are you waiting for, Facebook? Ask users about their preferences in an open and unambiguous way!
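To make that concrete, here is a minimal sketch, in Python, of what consent-first event logging could look like inside a health app. Every name in it is hypothetical – this is not Facebook’s SDK or any real app’s code – but it illustrates the principle the WSJ findings call for: sensitive fields should never leave the device without explicit, per-field consent.

```python
# A minimal sketch of consent-first event logging (all names hypothetical;
# this is not Facebook's SDK or any real app's code). Sensitive fields are
# stripped unless the user explicitly agreed to share that specific field.

SENSITIVE_FIELDS = {"ovulation_date", "blood_pressure", "food_allergies"}

def log_event(name, payload, user_consent):
    """Build an analytics event, dropping sensitive fields without consent."""
    cleaned = {
        k: v for k, v in payload.items()
        if k not in SENSITIVE_FIELDS or k in user_consent
    }
    if not cleaned:
        return None  # nothing shareable remains
    return {"event": name, "data": cleaned}  # hand off to the analytics client

# The user consented to share cycle data, but not blood pressure:
event = log_event(
    "daily_checkin",
    {"ovulation_date": "2019-03-01", "blood_pressure": "120/80", "mood": "ok"},
    user_consent={"ovulation_date"},
)
print(event)
# {'event': 'daily_checkin',
#  'data': {'ovulation_date': '2019-03-01', 'mood': 'ok'}}
```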


Spreading anti-vaxx messages? Not cool. Cracking down on fake health news? Cool.

Facebook is still a hotbed for misleading or outright false information – and that is also true in the case of healthcare. Alternative health pages have been known to spread falsehoods about medicinal remedies not backed by mainstream science, or to stoke debate around issues like vaccination. In August 2018, Facebook deleted dozens of pages dedicated to fringe or holistic medicine in an apparent crackdown on pseudoscience. The accounts mainly concentrated on natural remedies or organic living, such as Just Natural Medicine (1 million followers), Natural Cures Not Medicine (2.3 million followers), and People’s Awakening (3.6 million followers).

However, that was clearly not enough. Some weeks ago, researchers at Health Feedback, a network of scientists dedicated to reviewing media coverage of health and medical news, published their findings on the accuracy of the most popular health articles. They found that of the ten most-shared health-related articles, seven contained misleading or false information, including the most popular piece, “Federal Study Finds Marijuana 100 Times Less Toxic than Alcohol, Safer than Tobacco”. The experts added that Facebook proved to be the largest source of inaccurate articles, accounting for 96 percent of the shares of the examined stories.

One of the favorite topics of medically ungrounded pieces is anti-vaccination, and the number of people choosing not to vaccinate has grown so much that the World Health Organization recently called vaccine hesitancy a global health threat. In response, the social media giant said that it is considering making anti-vaccination content on its site less visible. A more immediate cause for Facebook’s decision might be the recent measles outbreak in Washington, in which more than 50 people have been infected, mostly unvaccinated children.
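What would “making content less visible” mean in practice? Facebook has not published its ranking code, so the following is only a toy illustration of the general technique, downranking: flagged items keep circulating but are penalized before the feed is sorted, so they sink rather than disappear.

```python
# A toy illustration of downranking (not Facebook's actual ranking code):
# items flagged by fact-checkers are multiplied by a penalty before the
# feed is sorted, demoting them without deleting them.

DOWNRANK_PENALTY = 0.1  # assumed value, purely illustrative

posts = [
    {"id": 1, "engagement": 980, "flagged_misinfo": True},
    {"id": 2, "engagement": 400, "flagged_misinfo": False},
    {"id": 3, "engagement": 150, "flagged_misinfo": False},
]

def feed_score(post):
    score = post["engagement"]
    if post["flagged_misinfo"]:
        score *= DOWNRANK_PENALTY  # demote instead of delete
    return score

for post in sorted(posts, key=feed_score, reverse=True):
    print(post["id"], feed_score(post))
# Post 1 has the most raw engagement but now ranks last.
```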

A Chrome extension lets readers know if they're reading fake or biased news. (Image source: www.cnet.com)

Maintaining patient communities? Cool.

Zuck would be happy to know (if he isn’t aware of it already) that there are still corners of Facebook serving his original purpose: connecting people. The platform serves many patients as a virtual support group – people who may feel lonely and isolated while battling a severe disease. Patients use Facebook to learn from patient leaders who have already been through what they are going through. Many look to their patient peers for an unfiltered account of what a particular medical procedure is really like, or to get an idea of what side effects to expect from a new medication.

Melissa Adams VanHouten, an advocate for patients with gastroparesis, manages a private Facebook group with more than 21,000 members. After the Cambridge Analytica scandal, she told Medical Marketing & Media that some members suggested relocating the group to another site, but the idea was quickly dismissed: she found no other community site that matches the flexibility and options Facebook offers. That might also explain why the majority of patient communities did not experience the expected “digital exodus” amid the string of Facebook scandals.


A tool promoting blood donation? Almost cool.

In 2017, the social media giant launched a blood donation tool through which people in Bangladesh, India, and Pakistan could find opportunities to donate nearby. People who visit Blood Donations on Facebook can also sign up as blood donors to get notified directly when there is a need for blood in their area. Facebook said that by June 2018, more than 11 million people had signed up and thousands of donations had been facilitated through the initiative.
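The matching step such a tool needs is conceptually simple. Below is a back-of-the-envelope sketch, with entirely made-up data, of how donors of a compatible blood type within a given radius could be selected for notification. Facebook has not disclosed how its tool works internally, and real compatibility rules are more nuanced than the exact match used here.

```python
# A hypothetical sketch of donor matching: given a blood request, find
# registered donors of the same type within a radius. All data is made up;
# real blood compatibility is more nuanced than an exact type match.

from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

donors = [
    {"name": "A", "blood_type": "O-", "lat": 28.61, "lon": 77.21},
    {"name": "B", "blood_type": "B+", "lat": 28.70, "lon": 77.10},
    {"name": "C", "blood_type": "O-", "lat": 19.08, "lon": 72.88},  # far away
]

def notify_candidates(request, donors, radius_km=25):
    """Select donors to notify: matching type and within the radius."""
    return [
        d for d in donors
        if d["blood_type"] == request["blood_type"]
        and km_between(d["lat"], d["lon"], request["lat"], request["lon"]) <= radius_km
    ]

request = {"blood_type": "O-", "lat": 28.63, "lon": 77.22}
print([d["name"] for d in notify_candidates(request, donors)])  # ['A']
```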

However, the tool’s person-to-person format is ringing alarm bells among experts and professionals in the field, who say that it is too easy to abuse the service, leaving vulnerable people at risk of paying extraordinarily high prices and receiving tainted blood, among other issues. Only three months after the launch, several public health officials in India were already calling on Facebook to change the blood donation tool, warning that the project, although well-meaning, risks fueling a dangerous black market for blood and harming the country’s fragile blood collection system.


Suicide prevention through A.I. and reviewers? Cool.

For years, the company has allowed users to report suicidal content, which is sent to in-house reviewers. They evaluate the report and decide whether a person should be offered support from a suicide prevention hotline or, in extreme cases, whether Facebook’s law enforcement response team should intervene.

The social network ramped up these efforts after several people live-streamed their suicides on Facebook Live in early 2017. About a year ago, Facebook even added artificial intelligence-based technology that automatically flags posts with expressions of suicidal thoughts for the company’s human reviewers to analyze. Thus, the company now leverages both algorithms and user reports to flag possible suicide threats.
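Facebook’s real classifier and thresholds are not public, so the snippet below is a deliberately simplified stand-in that only illustrates the two-signal triage described above: an automated score and user reports jointly determine which posts reach human reviewers first.

```python
# A deliberately simplified sketch of two-signal triage (not Facebook's
# actual system): a toy classifier score and user reports are combined
# into a priority, and posts reach human reviewers in priority order.

import heapq

CONCERNING_PHRASES = ("want to end it", "can't go on", "goodbye forever")

def model_score(text):
    """Toy stand-in for an ML classifier: fraction of concerning phrases hit."""
    text = text.lower()
    return sum(phrase in text for phrase in CONCERNING_PHRASES) / len(CONCERNING_PHRASES)

def triage(posts):
    """Order posts for human review by combined model score and user reports."""
    queue = []
    for post in posts:
        priority = model_score(post["text"]) + 0.5 * min(post["user_reports"], 2)
        if priority > 0:
            heapq.heappush(queue, (-priority, post["id"]))  # highest priority first
    return [heapq.heappop(queue)[1] for _ in range(len(queue))]

posts = [
    {"id": 1, "text": "Great game last night!", "user_reports": 0},
    {"id": 2, "text": "I just can't go on anymore", "user_reports": 0},
    {"id": 3, "text": "goodbye forever, everyone", "user_reports": 3},
]
print(triage(posts))  # [3, 2]; post 1 never reaches reviewers
```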

Facebook says the enhanced program is flagging 20 times more cases of suicidal thoughts for content reviewers, and twice as many people are receiving Facebook’s suicide prevention support materials. However, some mental health experts said the social media giant’s calls to the police could also cause harm – for example, unintentionally precipitating suicide, compelling nonsuicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.


Where is Facebook heading?

In the wake of the scandals, Zuckerberg announced the path he would like the social media platform to follow, with the responsibilities of a publisher (a role Facebook has denied for too long): “by focusing on bringing people closer together — whether it’s with family and friends, or around important moments in the world — we can help make sure that Facebook is time well spent.” For Zuck, this means more meaningful social interactions on the platform. For ethicists who have long criticized Facebook, such as Tristan Harris, who coined the term “Time Well Spent” in the first place, it should rather mean a shift away from measuring comments and shares toward emphasizing companies’ positive contributions to users’ lives. That would also presuppose some inherent design changes on the platform, beyond merely prioritizing posts of friends over content from companies.

Maybe that could change with the introduction of VR? Facebook spent billions of dollars to buy Oculus in 2014, and it has been developing a range of VR experiences ever since. One of them is Facebook Spaces, where users build their own lifelike avatar and then create groups to meet their friends online in virtual reality to chat, play games and generally mess about. Could that make a meaningful difference? With the Oculus Quest, the standalone headset arriving in the spring of 2019 that will support Facebook Spaces, we might get an answer sooner than expected.


What about data privacy issues?

Facebook doesn’t seem to have understood so far how important data privacy is to its users – or rather, it doesn’t want to give up its privilege of mining and selling the enormous amounts of data aggregated on its servers. That’s why people are rather suspicious when they hear news that, for example, Facebook was discussing sharing user data with medical institutions; they suspect they might end up in a dystopian Black Mirror episode rather than hope for customized care informed by patients’ lifestyles and medical needs. Facebook stands exactly at that crossroads: either become a social media/publishing company that helps connect people in meaningful ways and moves humanity toward a better future, or ignore its social, political and healthcare impacts and utterly fail to live up to expectations.

With the recent news about how it treats its content moderators, we don’t know yet which it will be. The story reflects Facebook’s usual pattern: hiring content moderators is a positive response to the fake news problem, but the way the move has unfolded signals a step taken too fast and without much consideration. That’s not the best strategy. Zuck, perhaps it’s time to move past the “Move fast and break things” motto, to read more philosophy and to play more chess.

Dr. Bertalan Mesko, PhD is The Medical Futurist and Director of The Medical Futurist Institute analyzing how science fiction technologies can become reality in medicine and healthcare. As a geek physician with a PhD in genomics, he is a keynote speaker and an Amazon Top 100 author.

Subscribe here for The Medical Futurist newsletter to get exclusive details about digital health!
