{"id":4984,"date":"2022-05-04T14:48:13","date_gmt":"2022-05-04T14:48:13","guid":{"rendered":"https:\/\/scienceandnerds.com\/2022\/05\/04\/mental-health-app-privacy-language-opens-up-holes-for-user-data\/"},"modified":"2022-05-04T14:48:15","modified_gmt":"2022-05-04T14:48:15","slug":"mental-health-app-privacy-language-opens-up-holes-for-user-data","status":"publish","type":"post","link":"https:\/\/scienceandnerds.com\/2022\/05\/04\/mental-health-app-privacy-language-opens-up-holes-for-user-data\/","title":{"rendered":"Mental health app privacy language opens up holes for user data"},"content":{"rendered":"

Source: https://www.theverge.com/2022/5/4/22985296/mental-health-app-privacy-policies-happify-cerebral-betterhealth-7cups

In the world of mental health apps, privacy scandals have become almost routine. Every few months, reporting or research uncovers unscrupulous-seeming data sharing practices at apps like the Crisis Text Line, Talkspace, BetterHelp, and others: people gave information to those apps in hopes of feeling better, then it turns out their data was used in ways that help companies make money (and don’t help them).

It seems to me like a twisted game of whack-a-mole. When under scrutiny, the apps often change or adjust their policies — and then new apps or problems pop up. It isn’t just me: Mozilla researchers said this week that mental health apps have some of the worst privacy protections of any app category.

Watching the cycle over the past few years got me interested in how, exactly, that keeps happening. The terms of service and privacy policies on the apps are supposed to govern what companies are allowed to do with user data. But most people barely read them before signing (hitting accept), and even if they do read them, they’re often so complex that it’s hard to grasp their implications at a glance.

“That makes it completely unknown to the consumer about what it means to even say yes,” says David Grande, an associate professor of medicine at the University of Pennsylvania School of Medicine who studies digital health privacy.

So what does it mean to say yes? I took a look at the fine print on a few to get an idea of what’s happening under the hood. “Mental health app” is a broad category, and it can cover anything from peer-to-peer counseling hotlines to AI chatbots to one-on-one connections with actual therapists. The policies, protections, and regulations vary across all of these categories. But I found two common features in many privacy policies that made me wonder what the point even was of having a policy in the first place.

We can change this policy at any time

Even if you do a close, careful read of a privacy policy before signing up for a digital mental health program, and even if you feel really comfortable with that policy — sike, the company can go back and change that policy whenever it wants. It might tell you — it might not.

Jessica Roberts, director of the Health Law and Policy Institute at the University of Houston, and Jim Hawkins, a law professor at the University of Houston, pointed out the problems with this type of language in a 2020 op-ed in the journal Science. Someone might sign up with the expectation that a mental health app will protect their data in a certain way and then have the policy rearranged to leave their data open to a broader use than they’re comfortable with. Unless they go back to check the policy, they wouldn’t know.

One app I looked at, Happify, specifically says in its policy that users will be able to choose whether they want the new uses of the data in any new privacy policy to apply to their information. They’re able to opt out if they don’t want to be pulled into the new policy. BetterHelp, on the other hand, says that the only recourse if someone doesn’t like the new policy is to stop using the platform entirely.

Having this type of flexibility in privacy policies is by design. The type of data these apps collect is valuable, and companies likely want to be able to take advantage of any opportunities that might come up for new ways to use that data in the future. “There’s a lot of benefit in keeping these things very open-ended from the company’s perspective,” Grande says. “It’s hard to predict a year or two years, five years in the future, about what other novel uses you might think of for this data.”

If we sell the company, we also sell your data

Feeling comfortable with all the ways a company is using your data at the moment you sign up to use a service also doesn’t guarantee someone else won’t be in charge of that company in the future. All the privacy policies I looked at included specific language saying that, if the app is acquired, sold, merged with another group, or another business-y thing, the data goes with it.

The policy, then, only applies right now. It might not apply in the future, after you’ve already been using the service and giving it information about your mental health. “So, you could argue they’re completely useless,” says John Torous, a digital health researcher in the department of psychiatry at Beth Israel Deaconess Medical Center.
