YouTube blocks all anti-vaccine content

No big deal. YouTube has been in decline for some time now... and there are plenty of clickbait merchants there.

There are better alternatives for watching videos.


 
Time to take out the garbage. Other SM platforms are following suit as well. There is no freedom of speech when there is a public health crisis.
 
Disgusting… talk about freedom of speech

Susan Wojcicki, the Jew woman, has been clamping down on 'alt-right' and Trump-supporting content for the past few years. So this is not surprising.
 
Time to take out the garbage. Other SM platforms are following suit as well. There is no freedom of speech when there is a public health crisis.
The trouble with this is: who gets to determine what counts as spreading false info and what doesn't… the fundamental premise of any policy has to be that people judge for themselves what they choose to believe. Otherwise information will forever be manipulated by a select few who decide what the right content is.
 
The trouble with this is: who gets to determine what counts as spreading false info and what doesn't… the fundamental premise of any policy has to be that people judge for themselves what they choose to believe. Otherwise information will forever be manipulated by a select few who decide what the right content is.
While that may be true, there will always be bad actors who are intent on spreading hate, fear and gross misinformation. For these nutcases and people with bad intentions, there are always other places to go spread their shit to other crazies who may want to listen. Just not on FB and other credible SM platforms. :o-o:
 
While that may be true, there will always be bad actors who are intent on spreading hate, fear and gross misinformation. For these nutcases and people with bad intentions, there are always other places to go spread their shit to other crazies who may want to listen. Just not on FB and other credible SM platforms. :o-o:

Facebook a credible platform????


nytimes.com


Opinion | Facebook Shuts Down Researchers Looking Into Misinformation​


Laura Edelson, Damon McCoy




Guest Essay

We Research Misinformation on Facebook. It Just Disabled Our Accounts.​


Aug. 10, 2021


Laura Edelson and Damon McCoy

Ms. Edelson is a Ph.D. candidate in computer science at N.Y.U.’s Tandon School of Engineering, where Dr. McCoy is an associate professor of computer science and engineering. They are affiliated with the nonpartisan research group Cybersecurity for Democracy.

We learned last week that Facebook had disabled our Facebook accounts and our access to data that we have been using to study how misinformation spreads on the company’s platform.

We were informed of this in an automated email. In a statement, Facebook says we used “unauthorized means to access and collect data” and that it shut us out to comply with an order from the Federal Trade Commission to respect the privacy of its users.

This is deeply misleading. We collect identifying information only about Facebook’s advertisers. We believe that Facebook is using privacy as a pretext to squelch research that it considers inconvenient. Notably, the acting director of the F.T.C.’s consumer protection bureau told Facebook last week that the “insinuation” that the agency’s order required the disabling of our accounts was “inaccurate.”

“The F.T.C. is committed to protecting the privacy of people, and efforts to shield targeted advertising practices from scrutiny run counter to that mission,” the acting director, Samuel Levine, wrote to Mark Zuckerberg, Facebook’s founder and chief executive.

Our team at N.Y.U.’s Center for Cybersecurity has been studying Facebook’s platform for three years. Last year, we deployed a browser extension we developed called Ad Observer that allows users to voluntarily share information with us about ads that Facebook shows them. It is this tool that has raised the ire of Facebook and that it pointed to when it disabled our accounts.

In the course of our overall research, we’ve been able to demonstrate that extreme, unreliable news sources get more engagement — that is, user interaction — on Facebook, at the expense of accurate posts and reporting. What’s more, our work shows that the archive of political ads that Facebook makes available to researchers is missing more than 100,000 ads.

There is still a lot of important research we want to do. When Facebook shut down our accounts, we had just begun studies intended to determine whether the platform is contributing to vaccine hesitancy and sowing distrust in elections. We were also trying to figure out what role the platform may have played leading up to the Capitol assault on Jan. 6.

We are privacy and cybersecurity researchers whose careers are built on protecting users. That’s why we’ve been so careful to make sure that our Ad Observer tool collects only limited and anonymous information from the users who agreed to participate in our research. And it is also why we made the tool’s source code public so that Facebook and others can verify that it does what we say it does.

We strongly believe we are not violating Facebook’s terms of service, as the company contends. But even if we had been, Facebook could have authorized our research. As Facebook declared in announcing the disabling of our accounts, “We’ll continue to provide ways for responsible researchers to conduct studies that are in the public interest while protecting the security of our platform and the privacy of people who use it.”

Our research is responsible and in the public interest. We’ve protected the privacy of our volunteers. Essentially, our ad tool collects the ads our volunteers see on their Facebook accounts, plus information provided by Facebook about when and why they were shown the ads and who paid for them. These ads are seen by the specific audience the advertiser targets.

This tool provides a way to see which entities are trying to influence the public and how they’re doing it. We think that’s important to democracy. Yet Facebook has denied us important access to continue to do much of our work.

One of the odd things about this dispute is that while Facebook has barred us from research tools available to users and other academic researchers, it has not blocked our Ad Observer browser by technical or legal means. It is still operational, and we are still collecting data from volunteers.

Still, by shutting us off from its own research tools, Facebook is making our work harder. This is unfortunate. Facebook isn’t protecting privacy. It’s not even protecting its advertisers. It’s protecting itself from scrutiny and accountability.

The company suggests that the Ad Observer is unnecessary, that researchers can study its platform with tools the company provides. But the data Facebook makes available is woefully inadequate, as the gaps we’ve found in its political ad archive prove. If we were to rely on Facebook, we simply could not study the spread of misinformation on such topics as elections, the Capitol riot and Covid-19 vaccines.

By blocking us from its platform, Facebook sent us a message: It wants to stop us from examining how it operates.

We have a message for Facebook: The public deserves more transparency about the systems the company uses to sell the public’s attention to advertisers and the algorithms it employs to promote content. We will keep working to ensure the public gets that transparency.

 
Never mind… TikTok will replace YouTube
TikTok will also eventually filter out content that spreads misinformation...

TikTok to flag and downrank ‘unsubstantiated’ claims fact checkers can’t verify​

Sarah Perez (@sarahintampa) / February 3, 2021, 9:00 PM GMT+8
TikTok this morning announced a new feature that aims to combat the spread of misinformation on its platform. In addition to removing videos that are identified to be spreading false information, as verified by fact-checking partners, the company says it will now also flag videos where fact checks are inconclusive. These videos may also become ineligible for promotion into anyone’s For You page, TikTok notes.
The new feature will first launch in the U.S. and Canada, but will become globally available in the “coming weeks.”
The company explains that fact checkers aren’t always able to verify the information being reported in users’ videos. This could be because the fact check is inconclusive or can’t be immediately confirmed, such as in the case of “unfolding events.” (The recent storming of the U.S. Capitol comes to mind as an “unfolding event” that led to a surge of social media posts, only some of which were able to be quickly and accurately fact-checked.)
TikTok today works with partner fact checkers to help the company determine which videos are sharing misinformation. In the U.S., its partners include PolitiFact, Lead Stories and SciVerify, which work to assess the accuracy of content in areas related to civic processes, like elections, as well as health (e.g. COVID-19, vaccines), climate and more.
Internationally, TikTok works with Agence France-Presse (AFP), Animal Político, Estadão Verifica, Lead Stories, Logically, Newtral, Pagella Politica, PolitiFact, SciVerify and Teyit.
Typically, TikTok’s internal investigation and moderation team works to first verify misinformation using readily available information, like existing public fact checks. If it can’t do so, it will send the video to a fact-checking partner. If the fact check determines content is false, disinformation, manipulated media or anything else that violates TikTok’s misinformation policy, it’s simply removed.


These fact checks can be returned in as little as one hour, and most happen in less than one day, TikTok tells TechCrunch.
But going forward, if the fact checker can’t confirm the accuracy of the video’s content, it will be flagged as unsubstantiated content instead.


A viewer who comes across one of these flagged videos will see a banner that says the content has been reviewed but can’t be conclusively validated. Unlike the COVID-19 banner, which appears at the bottom of the video, this new banner is more prominently overlaid across the video at the top of the screen.
If the user then tries to share that flagged video, they’ll receive a prompt that reminds them the video has been flagged as unverified content. This additional step is meant to give the user a moment to pause and reconsider their actions. They’ll then need to choose whether to click the brightly colored “Cancel” button or the unhighlighted choice, “Share anyway.”


The video’s original creator will also be alerted if their video is flagged as unverified content.
TikTok said it tested this labeling system in the U.S. late last year and found that viewers decreased the rate at which they shared videos by 24%. It also found that “likes” on unsubstantiated content decreased by 7%.
This system itself isn’t all that different from efforts made at other social networks to reduce the sharing of false content. For example, Facebook now labels misinformation after it’s reviewed by fact-checking partners and determined to be false. It also notifies people before they try to share the information and downranks the content so it appears lower in users’ News Feeds.
Twitter, too, uses a labeling system to identify misinformation and discourage sharing.
But on other platforms, only verifiably false information is labeled as such. TikTok’s new system moves to tackle the viral spread of unverified content, as well.


That doesn’t mean users won’t see the videos. If someone follows the account, they could still see the flagged video in their Following feed or by visiting the account profile directly.
But TikTok believes the new system will encourage users to “be mindful about what they share.”
It could also potentially deter people from making vague but incendiary claims meant to draw viewers and attention. Knowing that these videos could be downranked to the point that they may not ever reach the “For You” page could have a dampening effect on a certain type of social media content — the kind that comes from creators who post first, then ask questions later. Or those who largely shrug their shoulders over the impact of their rumors.

The new feature was designed and tested with Irrational Labs, a behavioral science lab that uses the psychology of decision-making to develop solutions that aim to drive positive user behavior changes. TikTok also says the addition was a part of its ongoing work to advance media literacy, which had included the “Be Informed” educational videos it created in partnership with the National Association of Media Literacy Education.
The banner will begin appearing in the U.S. and Canada starting today.
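
For what it's worth, the flow the article describes reduces to a fairly simple decision tree. Below is a minimal, hypothetical sketch in Python, based only on what the article says; the names (Video, apply_fact_check, try_share) are invented for illustration and have nothing to do with TikTok's actual code or APIs.

# Hypothetical sketch of the moderation flow described in the article above.
# All names are invented; this is not TikTok's actual code or API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Video:
    video_id: str
    removed: bool = False
    flagged_unsubstantiated: bool = False
    eligible_for_for_you: bool = True
    notices: List[str] = field(default_factory=list)

def apply_fact_check(video: Video, verdict: str) -> Video:
    # verdict is "false", "verified" or "inconclusive", per the article.
    if verdict == "false":
        video.removed = True  # policy-violating content is simply removed
    elif verdict == "inconclusive":
        video.flagged_unsubstantiated = True   # banner: reviewed but can't be validated
        video.eligible_for_for_you = False     # downranked out of the For You page
        video.notices.append("creator notified: flagged as unverified content")
    # "verified" content is left untouched
    return video

def try_share(video: Video, user_clicked_share_anyway: bool) -> bool:
    # Sharing a flagged video first shows a Cancel / "Share anyway" prompt.
    if video.removed:
        return False
    if video.flagged_unsubstantiated:
        return user_clicked_share_anyway
    return True

In other words, the design is friction rather than outright removal: unverifiable content stays visible to followers, but loses For You distribution and picks up a pause-before-sharing prompt.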
 
TikTok will also eventually filter out content that spreads misinformation...

And that includes the trash from the Western MSM… so there you go, one man's meat is another's poison
 
And that includes the trash from the Western MSM… so there you go, one man's meat is another's poison

It is actually the fact checkers that need to be fact checked.
 
No big deal. YouTube has been in decline for some time now... and there are plenty of clickbait merchants there.

There are better alternatives for watching videos.



Not true btw, all 3 parts of Project Veritas are still there. I just watched the Johnson and Johnson one yesterday
 
All these big techs rely on big data from tiongcock to finesse their AI becos that fucking cuntry has no privacy laws.
In exchange, these companies will abide by CCPee rules to weed out any info that will make the fucking regime look bad.
Sadly, there still isn't a good YouTube alternative around as it is essentially a cash-burning business.
 
No big deal. YouTube has been in decline for some time now... and there are plenty of clickbait merchants there.

There are better alternatives for watching videos.




TikTok?! How old are you? 12, with the attention span of a 6-month-old?
 
Not true btw, all 3 parts of Project Veritas are still there. I just watched the Johnson and Johnson one yesterday

Still there... for now.

BitChute, Rumble and Telegram are your alternatives.
 