Thursday, July 1, 2021

The Verge - Health


Pinterest bans weight loss ads due to eating disorder concerns

Posted: 01 Jul 2021 07:40 AM PDT

Photo by Amelia Holowaty Krales / The Verge

Pinterest said Thursday it is updating its ad policies to "prohibit all ads with weight loss language and imagery." The company said in a blog post that it developed its new policy with guidance from the National Eating Disorders Association (NEDA), whose research shows there has been a rise in eating disorders and unhealthy eating habits in young people during the pandemic.

"Pinterest is the place people come for inspiration to create life they love," the blog post reads. "It's where everyone belongs—regardless of body shape or size. We're empowering Pinners to plan for a summer and beyond without weight loss ads, so they can focus on what matters most."

The company claims it is the "only major platform to prohibit all weight loss ads," and says the new rules build on its existing ad policies against body shaming and weight loss scams. In 2019, Instagram restricted weight loss content and certain types of cosmetic surgery content from being seen by users under age 18.

Beginning Thursday, Pinterest's policy will prohibit:

  • Any weight loss language or imagery
  • Any testimonials regarding weight loss or weight loss products
  • Any language or imagery that idealizes or denigrates certain body types
  • Any references to body mass index (BMI) or similar indices
  • Any products that claim weight loss through something worn or applied to the skin

Ads for weight loss or appetite suppressant pills, before-and-after weight loss imagery, weight loss procedures like liposuction or fat burning, body shaming, and "unrealistic" claims about cosmetic results were already banned under the policy, the company added.

Elizabeth Thompson, interim CEO for NEDA, applauded the move. "NEDA is encouraged by this necessary step in prioritizing the mental health and well-being of Pinners, especially those impacted by diet culture, body shaming, and eating disorders," she said in a statement. "We are hopeful this global policy will encourage other organizations and companies to reflect on potentially harmful ad messages and to establish their own working policies that will create meaningful change."

Pinterest has been one of the more forward-thinking social media platforms when it comes to reining in harmful or misleading content in its ads. In 2016, it prohibited ads for "sensitive content including culturally appropriated and inappropriate costumes." It stopped running political ads in 2018, and in 2019 it was one of the first social media companies to block anti-vaccination content in an effort to prevent misinformation from spreading on its platform.

Earlier this year the company said it would end the use of nondisclosure agreements (NDAs) when employees leave the company, following a settlement with its former COO who had alleged gender discrimination.

EU officially launches digital vaccine passport

Posted: 01 Jul 2021 07:06 AM PDT

The European Union's digital COVID-19 certificate officially launched today. The certificate allows people to show proof of COVID-19 vaccination, a recent negative test result, or a past COVID-19 infection.

The certificate, which includes a QR code and digital signature, can either be displayed on a digital device or printed out. People who have the certificate should not have to get an additional COVID-19 test or quarantine when traveling in the EU. The certificate only recognizes COVID-19 vaccines authorized in the EU — that includes the AstraZeneca, Pfizer / BioNTech, Moderna, and Johnson & Johnson shots.
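For context, the certificate format is publicly documented: the QR code carries an "HC1:"-prefixed string that is Base45-encoded, zlib-compressed CBOR wrapping a signed COSE structure. The Python sketch below shows roughly how a verifier app might unpack that payload. It is a minimal illustration based on the public specification, not an official implementation; the package names (`base45`, `cbor2`) and the claim keys are assumptions drawn from that spec rather than from this article, and it skips the signature check a real verifier must perform.

```python
# Minimal sketch of unpacking an EU Digital COVID Certificate QR payload,
# based on the publicly documented format (not an official implementation).
# Requires third-party packages:  pip install base45 cbor2
import zlib

import base45
import cbor2


def decode_dcc(qr_text: str) -> dict:
    """Return the health certificate claims from a scanned QR string.

    NOTE: this only decodes the payload. A real verifier must also check
    the COSE signature against the issuing country's published keys.
    """
    if qr_text.startswith("HC1:"):
        qr_text = qr_text[4:]                 # strip the version prefix

    compressed = base45.b45decode(qr_text)    # Base45 text -> bytes
    cose_bytes = zlib.decompress(compressed)  # zlib -> CBOR-encoded COSE_Sign1

    decoded = cbor2.loads(cose_bytes)
    # COSE_Sign1 may arrive as CBOR tag 18 wrapping a 4-element array:
    # [protected headers, unprotected headers, payload, signature]
    parts = decoded.value if isinstance(decoded, cbor2.CBORTag) else decoded
    _protected, _unprotected, payload, _signature = parts

    claims = cbor2.loads(payload)             # CWT claims map
    # Claim -260 ("hcert") holds the EU certificate under key 1 per the spec.
    return claims[-260][1]
```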

Some countries in the EU have already been issuing and recognizing the certificate. Germany, for example, said in mid-June that it had already issued 5 million certificates. There will be a six-week phase-in period to get the rest of the EU member countries online.

Countries not in the EU, like the UK, have started rolling out their own systems. England has its own COVID-19 pass through the National Health Service (NHS), which can also show proof of vaccination, recent test result, or past infection. People in Scotland and Wales can get a paper version. The EU does not currently recognize the NHS pass, although some individual countries within the EU do, and that could change as countries work to make the systems compatible.

Some states in the United States (like New York) have their own COVID-19 vaccine certificates. Walmart rolled out a digital record for people who were vaccinated at its stores. But there isn't a nationwide system in the US. In March, the Biden administration was working on plans to organize and streamline the various certificate projects. Google announced yesterday it was adding support for vaccine cards to Android.

Google is building support for digital COVID vaccine cards into Android

Posted: 30 Jun 2021 05:30 PM PDT

A digital COVID vaccine card in Android. | Image: Google

Google is opening up Android's built-in passes system to let Android users store a digital vaccine card, which it calls a COVID Card, on their phone. The feature will initially roll out in the US, and it will rely on support from healthcare providers, local governments, or other organizations authorized to distribute COVID vaccines. The feature will also support storing COVID test results.

For vaccinations, your COVID Card will show info on when you were vaccinated and which vaccine you received, according to a Google support page. The card can be saved from your healthcare provider's app or website as well as from texts or emails sent to you.

Google recommends adding a shortcut to the card on your home screen and will offer that option when you save the card to your device. Google says the card won't be saved to the cloud and that it won't use the information you provide for advertising purposes, though it will collect some information, like how many times you use your card and on which days. You also won't need the Google Pay app installed to save and access cards.

It's good to see Google making it easier for people to save their vaccination status digitally on their phones, though whether you can actually use the feature will still depend on your healthcare provider or government. Some states, like New York and California, have implemented their own digital vaccine cards, but Google's version could streamline the process for other authorities.

WHO outlines principles for ethics in health AI

Posted: 30 Jun 2021 09:50 AM PDT

Photo by FABRICE COFFRINI/AFP via Getty Images

The World Health Organization released a guidance document outlining six key principles for the ethical use of artificial intelligence in health. Twenty experts spent two years developing the guidance, which marks the first consensus report on AI ethics in healthcare settings.

The report highlights the promise of health AI, and its potential to help doctors treat patients — particularly in under-resourced areas. But it also stresses that technology is not a quick fix for health challenges, particularly in low- and middle-income countries, and that governments and regulators should carefully scrutinize where and how AI is used in health.

The WHO said it hopes the six principles can be the foundation for how governments, developers, and regulators approach the technology. The six principles its experts came up with are: protecting autonomy; promoting human safety and well-being; ensuring transparency; fostering accountability; ensuring equity; and promoting tools that are responsive and sustainable.

There are dozens of potential ways AI can be used in healthcare. There are applications in development that use AI to screen medical images like mammograms, tools that scan patients' health records to predict whether they might get sicker, devices that help people monitor their own health, and systems that help track disease outbreaks. In areas where people don't have access to specialist doctors, such tools could help evaluate symptoms. But when these tools aren't developed and implemented carefully, they can fall short of their promise at best. At worst, they can cause harm.

Some of the pitfalls were clear during the past year. In the scramble to fight the COVID-19 pandemic, healthcare institutions and governments turned to AI tools for solutions. Many of those tools, though, had some of the features the WHO report warns against. In Singapore, the government admitted that a contact tracing application collected data that could also be used in criminal investigations — an example of "function creep," where health data was repurposed beyond the original goal. Most AI programs that aimed to detect COVID-19 from chest scans were built on poor data and didn't end up being useful. Hospitals in the United States used an algorithm designed to predict which COVID-19 patients might need intensive care before the program had been properly tested.

"An emergency does not justify deployment of unproven technologies," the report said.

The report also recognized that many AI tools are developed by large, private technology companies (like Google and Chinese company Tencent) or by partnerships between the public and private sector. Those companies have the resources and data to build these tools, but may not have incentives to adopt the proposed ethical framework for their own products. Their focus may be toward profit, rather than the public good. "While these companies may offer innovative approaches, there is concern that they might eventually exercise too much power in relation to governments, providers and patients," the report reads.

AI technology in healthcare is still new, and many governments, regulators, and health systems are still figuring out how to evaluate and manage these tools. Being thoughtful and measured in the approach will help avoid potential harm, the WHO report said. "The appeal of technological solutions and the promise of technology can lead to overestimation of the benefits and dismissal of the challenges and problems that new technologies such as AI may introduce."


Here's a breakdown of the six ethical principles in the WHO guidance and why they matter:

  • Protect autonomy: Humans should have oversight of and the final say on all health decisions — they shouldn't be made entirely by machines, and doctors should be able to override them at any time. AI shouldn't be used to guide someone's medical care without their consent, and their data should be protected.
  • Promote human safety: Developers should continuously monitor any AI tools to make sure they're working as they're supposed to and not causing harm.
  • Ensure transparency: Developers should publish information about the design of AI tools. One regular criticism of the systems is that they're "black boxes," and it's too hard for researchers and doctors to know how they make decisions. The WHO wants to see enough transparency that they can be fully audited and understood by users and regulators.
  • Foster accountability: When something goes wrong with an AI technology — like if a decision made by a tool leads to patient harm — there should be mechanisms determining who is responsible (like manufacturers and clinical users).
  • Ensure equity: That means making sure tools are available in multiple languages and that they're trained on diverse sets of data. In the past few years, close scrutiny of common health algorithms has found that some have racial bias built in (a minimal audit sketch follows this list).
  • Promote AI that is sustainable: Developers should be able to regularly update their tools, and institutions should have ways to adjust if a tool seems ineffective. Institutions and companies should only introduce tools that can be maintained and repaired, even in under-resourced health systems.
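As a rough illustration of the equity point above, an audit might compare a model's error rates across demographic groups rather than relying on a single aggregate accuracy number. The Python sketch below is a hypothetical, minimal example; the group labels, the toy data, and the choice of false negative rate as the metric are illustrative assumptions and not part of the WHO guidance.

```python
# Minimal sketch of an equity audit for a health prediction model:
# compare false negative rates across demographic groups instead of
# reporting one aggregate score. Data and labels are illustrative only.
from collections import defaultdict


def false_negative_rate_by_group(y_true, y_pred, groups):
    """Return the false negative rate for each demographic group.

    y_true: 1 = patient needed care, 0 = did not
    y_pred: model predictions on the same scale
    groups: group label for each patient (e.g. self-reported race)
    """
    misses = defaultdict(int)     # needed care but model said no
    positives = defaultdict(int)  # all patients who needed care

    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1

    return {g: misses[g] / positives[g] for g in positives if positives[g]}


# Toy example: a large gap between groups is a signal to investigate the
# training data and features before the model is deployed.
rates = false_negative_rate_by_group(
    y_true=[1, 1, 0, 1, 1, 0, 1, 1],
    y_pred=[1, 0, 0, 1, 0, 0, 0, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(rates)  # {'A': 0.333..., 'B': 0.666...}
```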
