Following part 1 of this article, The Facebook Papers: Internal Documents Leaked Part 2 covers more of what the leaked Facebook papers reveal.
Politics tends to inform internal decisions
The Wall Street Journal notes that there has been internal controversy over how Facebook handles right-wing content, and that political considerations weigh heavily on company executives:
Documents reviewed by the Journal do not settle whether bias affected the company's decisions overall. They show that employees and their managers argued fiercely over whether and how to restrict right-wing publishers, with top executives often providing a check on agitation from the rank and file. The documents viewed by the Journal, which do not cover all staff messages, do not describe similar arguments over left-wing publications.
Other documents reveal that Facebook's management team was so focused on avoiding accusations of bias that political considerations were routinely placed at the center of decision-making.
Facebook’s attempts to deal with harmful content in the Arab world have failed
Internal documents show that by the end of 2020, Facebook's own researchers had concluded that the company's efforts to curb hate speech in the Middle East had largely failed:
Only 6 percent of Arabic-language hate content was detected on Instagram, Facebook's photo-sharing platform, before it reached users. That compares with a 40 percent takedown rate on Facebook itself. Ads attacking women and the LGBTQ community were rarely flagged for removal in the Middle East.
In a related study, Egyptian users told the company they were reluctant to post political views on the platform for fear of being arrested or attacked online.
In Afghanistan, where 5 million people use the platform each month, Facebook employed few local-language speakers to moderate content, with the result that less than 1 percent of hate speech was taken down. Across the Middle East, flawed algorithms for detecting terrorist content incorrectly removed non-violent Arabic posts 77 percent of the time, harming people's ability to express themselves online and limiting the reporting of possible war crimes.
In Iraq and Yemen, large numbers of fake accounts, many linked to political or jihadist causes, spread misinformation and inflamed local violence, often between rival religious groups.
How Facebook tried to bring users back
Leaked documents reveal that in 2017 Facebook changed its algorithm to boost engagement by promoting posts that provoked emotional reactions. The goal was to reverse a decline in how much users were posting and interacting on the site.
According to the Washington Post:
Facebook tuned the algorithm that determines what people see in their news feeds to use emoji reactions as signals to promote more emotional and provocative content, including content likely to make users angry. Starting in 2017, Facebook's ranking algorithm treated emoji reactions as five times more valuable than "likes," internal documents reveal.
For three years, Facebook systematically amplified some of the worst content on its platform, made it more prominent in users' feeds, and spread it to a wider audience.
The "angry" emoji itself also caused internal controversy.
More details on Facebook's experiments on users
The Washington Post also published new information, revealed in the leaked documents, about Facebook's experiments on its users:
A culture of experimentation ran deep at Facebook, with engineers pulling levers and measuring the results. A 2014 study sought to manipulate the emotional valence of posts in users' feeds toward the positive or the negative, and then observed whether the users' own posts changed to match, raising ethical concerns, The Post reported at the time. Another experiment, reported to Congress this month by [whistleblower Frances] Haugen, involved turning off safety measures for a subset of users as a comparison to show that the measures were working at all.
A previously unreported set of experiments involved boosting the visibility of some randomly selected users in their friends' feeds, and then, once the test ended, checking whether those friendships continued, according to the documents. The researchers concluded, in other words, that Facebook could make relationships closer.
What the documents say about vaccine misinformation
Facebook appears to have been reluctant to promptly deploy measures against COVID vaccine misinformation
The Associated Press reports that, according to leaked documents, last March, as the US vaccine rollout was ramping up, Facebook staff researched ways to counter anti-vaccine claims on the platform, but the solutions they proposed were implemented slowly or not at all:
By altering how vaccine-related posts were ranked in people's news feeds, the company's researchers realized they could reduce the misleading information people saw about the COVID-19 vaccine and surface posts from authoritative sources such as the World Health Organization.
"Given these results, I think we hope to launch ASAP," one Facebook employee wrote in response to the internal study. Instead, Facebook shelved some of the study's suggestions, and other changes were not made until April. When another Facebook researcher suggested in March that comments on vaccine posts be disabled until the platform could better handle the anti-vaccine messages hidden in them, that suggestion was ignored.
Facebook also struggled to detect and address comments from users expressing opposition to or doubts about vaccines:
Company employees admitted they did not have a handle on catching those comments. And even if they had, Facebook had no policy in place for taking them down. The free-for-all allowed users to swarm vaccine posts from news outlets or humanitarian organizations with negative comments about the vaccines.