
Archive for the ‘Facebook’ Category

Revealed: Facebook’s internal rulebook on sex, terrorism and violence

Leaked policies guiding moderators on what content to allow are likely to fuel debate about social media giant’s ethics

Facebook’s secret rules and guidelines for deciding what its 2 billion users can post on the site are revealed for the first time in a Guardian investigation that will fuel the global debate about the role and ethics of the social media giant.

The Guardian has seen more than 100 internal training manuals, spreadsheets and flowcharts that give unprecedented insight into the blueprints Facebook has used to moderate issues such as violence, hate speech, terrorism, pornography, racism and self-harm.

There are even guidelines on match-fixing and cannibalism.

The Facebook Files give the first view of the codes and rules formulated by the site, which is under huge political pressure in Europe and the US.

They illustrate difficulties faced by executives scrabbling to react to new challenges such as “revenge porn” – and the challenges for moderators, who say they are overwhelmed by the volume of work, which means they often have “just 10 seconds” to make a decision.

“Facebook cannot keep control of its content,” said one source. “It has grown too big, too quickly.”

Many moderators are said to have concerns about the inconsistency and peculiar nature of some of the policies. Those on sexual content, for example, are said to be the most complex and confusing.

A slide on Facebook’s revenge porn policy. Photograph: Guardian

One document says Facebook reviews more than 6.5m reports a week relating to potentially fake accounts – known as FNRP (fake, not real person).

Using thousands of slides and pictures, Facebook sets out guidelines that may worry critics who say the service is now a publisher and must do more to remove hateful, hurtful and violent content.

Yet these blueprints may also alarm free speech advocates concerned about Facebook’s de facto role as the world’s largest censor. Both sides are likely to demand greater transparency.

The Guardian has seen documents supplied to Facebook moderators within the last year. The files tell them:

  • Remarks such as “Someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”, or “fuck off and die” because they are not regarded as credible threats.
  • Videos of violent deaths, while marked as disturbing, do not always have to be deleted because they can help create awareness of issues such as mental illness.
  • Some photos of non-sexual physical abuse and bullying of children do not have to be deleted or “actioned” unless there is a sadistic or celebratory element.
  • Photos of animal abuse can be shared, with only extremely upsetting imagery to be marked as “disturbing”.
  • All “handmade” art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not.
  • Videos of abortions are allowed, as long as there is no nudity.
  • Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”.
  • Anyone with more than 100,000 followers on a social media platform is designated as a public figure – which denies them the full protections given to private individuals.

Other types of remarks that can be permitted by the documents include: “Little girl needs to keep to herself before daddy breaks her face,” and “I hope someone kills you.” The threats are regarded as either generic or not credible.

In one of the leaked documents, Facebook acknowledges “people use violent language to express frustration online” and feel “safe to do so” on the site.

It says: “They feel that the issue won’t come back to them and they feel indifferent towards the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face.

Facebook’s policy on threats of violence. A tick means something can stay on the site; a cross means it should be deleted. Photograph: Guardian

“We should say that violent language is most often not credible until specificity of language gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design. From this perspective language such as ‘I’m going to kill you’ or ‘Fuck off and die’ is not credible and is a violent expression of dislike and frustration.”

It adds: “People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.”

Facebook conceded that “not all disagreeable or disturbing content violates our community standards”.

Monika Bickert, Facebook’s head of global policy management, said the service had almost 2 billion users and that it was difficult to reach a consensus on what to allow.

A Facebook slide on threats of violence. The ticks mean these statements need not be deleted. Photograph: Guardian

“We have a really diverse global community and people are going to have very different ideas about what is OK to share. No matter where you draw the line there are always going to be some grey areas. For instance, the line between satire and humour and inappropriate content is sometimes very grey. It is very difficult to decide whether some things belong on the site or not,” she said.


“We feel responsible to our community to keep them safe and we feel very accountable. It’s absolutely our responsibility to keep on top of it. It’s a company commitment. We will continue to invest in proactively keeping the site safe, but we also want to empower people to report to us any content that breaches our standards.”

She said some offensive comments may violate Facebook policies in some contexts, but not in others.

Facebook’s leaked policies on subjects including violent death, images of non-sexual physical child abuse and animal cruelty show how the site tries to navigate a minefield.

The files say: “Videos of violent deaths are disturbing but can help create awareness. For videos, we think minors need protection and adults need a choice. We mark as ‘disturbing’ videos of the violent deaths of humans.”

Such footage should be “hidden from minors” but not automatically deleted because it can “be valuable in creating awareness for self-harm afflictions and mental illness or war crimes and other important issues”.

A slide on animal cruelty. Photograph: Guardian

Regarding non-sexual child abuse, Facebook says: “We do not action photos of child abuse. We mark as disturbing videos of child abuse. We remove imagery of child abuse if shared with sadism and celebration.”

One slide explains Facebook does not automatically delete evidence of non-sexual child abuse to allow the material to be shared so “the child [can] be identified and rescued, but we add protections to shield the audience”. This might be a warning on the video that the content is disturbing.

Facebook confirmed there are “some situations where we do allow images of non-sexual abuse of a child for the purpose of helping the child”.

Its policies on animal abuse are also explained, with one slide saying: “We allow photos and videos documenting animal abuse for awareness, but may add viewer protections to some content that is perceived as extremely disturbing by the audience.

“Generally, imagery of animal abuse can be shared on the site. Some extremely disturbing imagery may be marked as disturbing.”

Photos of animal mutilation, including those showing torture, can be marked as disturbing rather than deleted. Moderators can also leave photos of abuse where a human kicks or beats an animal.

Facebook said: “We allow people to share images of animal abuse to raise awareness and condemn the abuse but remove content that celebrates cruelty against animals.”

The files show Facebook has issued new guidelines on nudity after last year’s outcry when it removed an iconic Vietnam war photo because the girl in the picture was naked.

It now allows for “newsworthy exceptions” under its “terror of war” guidelines but draws the line at images of “child nudity in the context of the Holocaust”.

Facebook told the Guardian it was using software to intercept some graphic content before it got on the site, but that “we want people to be able to discuss global and current events … so the context in which a violent image is shared sometimes matters”.

Some critics in the US and Europe have demanded that the company be regulated in the same way as mainstream broadcasters and publishers.

A Facebook slide on its Holocaust policy. Photograph: Guardian

But Bickert said Facebook was “a new kind of company. It’s not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used. We don’t write the news that people read on the platform.”

A report by British MPs published on 1 May said “the biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal or dangerous content, to implement proper community standards or to keep their users safe”.

Sarah T Roberts, an expert on content moderation, said: “It’s one thing when you’re a small online community with a group of people who share principles and values, but when you have a large percentage of the world’s population and say ‘share yourself’, you are going to be in quite a muddle.

“Then when you monetise that practice you are entering a disaster situation.”

Facebook has consistently struggled to assess the news or “awareness” value of violent imagery. While the company recently faced harsh criticism for failing to remove videos of Robert Godwin being killed in the US and of a father killing his child in Thailand, the platform has also played an important role in disseminating videos of police killings and other government abuses.

In 2016, Facebook removed a video showing the immediate aftermath of the fatal police shooting of Philando Castile but subsequently reinstated the footage, saying the deletion was a “mistake”.

Facebook fined $122 million for misleading EU over WhatsApp deal

Facebook says it couldn’t automatically match WhatsApp accounts; EC disagrees.


Facebook has been smacked with a €110 million fine by the antitrust wing of the European Commission for providing incorrect or misleading information about its acquisition of WhatsApp.

Three years ago, Facebook claimed that it did not have the technical capabilities to match existing Facebook accounts with the WhatsApp accounts it would acquire—a claim that Brussels’ competition chief Margrethe Vestager strongly disagrees with.

“The commission has found that… the technical possibility of automatically matching Facebook and WhatsApp users’ identities already existed in 2014, and that Facebook staff were aware of such a possibility,” the commission said on Thursday.

Back in 2014, upon the pronouncement of their impending nuptials, WhatsApp promised that “nothing” would change for its hundreds of millions of users after being acquired by Facebook. By August 2016, however, the free content ad network had reneged on that claim: WhatsApp rolled out some new terms of service that explicitly allowed Facebook to hoover up user data, ostensibly to provide targeted advertising.

It was these new terms of service that caught the eye of Vestager and her team, and in December 2016 the European Commission announced it would be investigating whether Facebook had provided incorrect or misleading information.

The commission can impose fines of up to one percent of a company’s turnover when it intentionally or negligently provides incorrect information during a merger or acquisition. The fine imposed on Facebook (€110 million, roughly £94 million or $122 million) is about 0.5 percent of the company’s reported $27 billion in revenue for 2016.

Vestager said Facebook had committed two separate infringements: once when it provided incorrect or misleading information in its paperwork to acquire WhatsApp in 2014, and again when the EC requested further information from the company. By that rationale, the €110m fine for two breaches under EU competition rules is small: it could have been as large as €480 million.
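
To see how those numbers hang together, here is a rough back-of-the-envelope check. It is purely illustrative: the dollar-to-euro rate and the reading of the 1 percent cap as applying once per infringement are assumptions for the sketch, not figures taken from the commission’s decision.

```python
# Rough, illustrative check of the figures quoted above. The exchange rate and
# the per-infringement reading of the 1% cap are assumptions, not facts from
# the commission's decision.
turnover_usd = 27e9                      # Facebook's reported 2016 revenue
usd_to_eur = 0.89                        # assumed approximate 2016/17 rate
turnover_eur = turnover_usd * usd_to_eur

cap_per_infringement = 0.01 * turnover_eur       # 1% of turnover
max_for_two_breaches = 2 * cap_per_infringement

actual_fine = 110e6
print(f"cap per infringement: ~EUR {cap_per_infringement / 1e6:.0f}m")
print(f"theoretical maximum for two breaches: ~EUR {max_for_two_breaches / 1e6:.0f}m")
print(f"actual fine as share of turnover: {actual_fine / turnover_eur:.2%}")
```

Under those assumptions the script prints a cap of roughly €240m per infringement, a theoretical maximum of about €480m for two breaches, and an actual fine of just under half a percent of turnover, which is consistent with the figures reported above.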

“In setting the amount of a fine, the commission takes into account the nature, the gravity, and duration of the infringement, as well as any mitigating and aggravating circumstances,” the commission said. It added:

the Commission considers that Facebook staff were aware of the user matching possibility and that Facebook was aware of the relevance of user matching for the commission’s assessment, and of its obligations under the Merger Regulation. Therefore, Facebook’s breach of procedural obligations was at least negligent.

The commission has also considered the existence of mitigating circumstances, notably the fact that Facebook cooperated with the commission during the procedural infringement proceedings. In particular, in its reply to the commission’s Statement of Objections, Facebook acknowledged its infringement of the rules and waived its procedural rights to have access to the file and to an oral hearing. This allowed the commission to conduct the investigation more efficiently. The commission has taken Facebook’s cooperation into account in setting the level of the fine.

On the basis of these factors, the commission has concluded that an overall fine of €110 million is both proportionate and deterrent.

Vestager said the “decision sends a clear signal to companies that they must comply with all aspects of EU merger rules, including the obligation to provide correct information. And it imposes a proportionate and deterrent fine on Facebook. The commission must be able to take decisions about mergers’ effects on competition in full knowledge of accurate facts.”

It’s important to note that the fine has no impact on the commission’s authorisation in 2014 of a Facebook–WhatsApp merger; the commission already knew that automated user matching was a possibility and approved the merger anyway.

Facebook claimed it had “acted in good faith” with Vestager’s office and had “sought to provide accurate information at every turn.” It added: “The errors we made in our 2014 filings were not intentional and the commission has confirmed that they did not impact the outcome of the merger review.”


This post originated on Ars Technica UK

Leaked document reveals Facebook conducted research to target emotionally vulnerable and insecure youth

A SECRET document shows in scary detail how Facebook can exploit the insecurities of teenagers using the platform.

FACEBOOK has come under fire over revelations it is targeting potentially vulnerable youths who “need a confidence boost” to facilitate predatory advertising practices.

The allegation was revealed this morning by The Australian, which obtained internal documents from the social media giant that reportedly show how Facebook can exploit the moods and insecurities of teenagers using the platform for the potential benefit of advertisers.

The confidential document, dated this year, detailed how, by monitoring posts, comments and interactions on the site, Facebook can work out when people as young as 14 feel “defeated”, “overwhelmed”, “stressed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”.

Such information gathered through a system dubbed sentiment analysis could be used by advertisers to target young Facebook users when they are potentially more vulnerable.
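
The leaked document does not describe how the monitoring works under the hood, so the following is only a toy illustration of the kind of keyword-based tagging the phrase “sentiment analysis” loosely covers. The word list echoes the terms quoted above; the function name and everything else are hypothetical and do not reflect Facebook’s actual system.

```python
# Toy illustration only: keyword-based mood tagging. The terms below are the
# ones quoted in the leaked document; the rest is a hypothetical sketch, not
# Facebook's actual system.
NEGATIVE_MOOD_TERMS = {
    "defeated", "overwhelmed", "stressed", "anxious",
    "nervous", "stupid", "silly", "useless", "failure",
}

def flag_vulnerable_mood(post_text):
    """Return any negative-mood terms that appear in a post."""
    words = {word.strip(".,!?").lower() for word in post_text.split()}
    return words & NEGATIVE_MOOD_TERMS

print(flag_vulnerable_mood("Feeling so useless and anxious today"))
# e.g. {'useless', 'anxious'}
```

A production system would rely on a trained classifier rather than a word list, but the outline (scan text, infer a mood, hand the signal to ad targeting) is the same in principle.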

While Google is the king of the online advertising world, Facebook is the other major player in an industry that was worth about $80 billion last year.

But Facebook is not one to rest on its laurels. The leaked document shows it has been honing the covert tools it uses to gain useful psychological insights into young Australians and New Zealanders in high school and tertiary education.

Facebook targeting Australian children

The social media services we use can derive immense insight and personal information about us and our moods from the way we use them, and arguably none is more fastidious in that regard than Facebook, which harvests immense amounts of data on its users.

The secret document was put together by two Australian Facebook execs and includes information about when young people are likely to feel excited, reflective, as well as other emotions related to overcoming fears.

“Monday-Thursday is about building confidence; the weekend is for broadcasting achievements,” the document said, according to the report.

Facebook did not respond to news.com.au’s requests for comment on the issue, but it was quick to issue an apology and told The Australian that it would conduct an investigation into the matter, admitting it was inappropriate to target young children in such a way.

“The data on which this research is based was aggregated and presented consistent with applicable privacy and legal protections, including the removal of any personally identifiable information,” Facebook said in a statement issued to the newspaper.

However, there are suggestions that the research could be in breach of Australian guidelines for advertising and marketing to children.

Facebook CEO Mark Zuckerberg speaks at his company’s annual F8 developer conference in San Jose last month. Picture: Noah Berger/AP

Many commentators have suspected that Facebook engages in this sort of cynical exploitation of the data it gathers, but the leaked document is rare proof of it.

Mark Zuckerberg’s company has not been shy about exploring ways it can manipulate the data it collects on users.

For one week in 2012, Facebook ran an experiment on some of its users in which it altered the algorithms it used to determine which status updates appeared in the news feeds of nearly 700,000 randomly selected users, based on the posts’ emotional content.

Posts were classified as either negative or positive, and Facebook wanted to see if it could make the selected group sad by showing them more negative posts in their feeds. It found that it could.
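
The published study did not include code, but mechanically the manipulation amounted to withholding a share of emotionally positive posts from the test group’s feed, which raises the proportion of negative posts those users see. The sketch below is only an assumption-laden illustration of that idea; the word list, omission probability and function names are invented for the example and are not Facebook’s code.

```python
import random

# Illustrative only: withhold some positive posts from a test group's feed so
# that negative posts make up a larger share of what they see. Word list,
# probability and names are invented for the example.
POSITIVE_WORDS = {"happy", "great", "love", "awesome", "wonderful"}

def filter_feed_for_test_group(posts, omit_probability=0.5, rng=random):
    kept = []
    for post in posts:
        is_positive = any(word in post.lower() for word in POSITIVE_WORDS)
        if is_positive and rng.random() < omit_probability:
            continue  # withhold this positive post from the test group
        kept.append(post)
    return kept

feed = ["Had a great day!", "Traffic was awful again", "Love this song", "Meh."]
print(filter_feed_for_test_group(feed))
```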

The results were published in a scientific journal but Facebook was criticised by those concerned about the potential of the company to engage in social engineering for commercial benefit.

Facebook’s Data Use Policy warns users that the company “may use the information we receive about you … for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

Currently, information about your relationship status, location, age, number of friends and the manner and frequency with which you access the site is sold to advertisers. But according to the report, Facebook is also seeking to sell ads targeted at users on the basis of insights gleaned from their posts, such as those about body confidence and losing weight.

Facebook murder suspect still at large as cops get ‘dozens’ of tips

April 19, 2017


A murder suspect who police said posted a video of himself on Facebook shooting an elderly man in Cleveland remained on the loose on Tuesday as authorities appealed to the public for help in the case.

Police said they have received “dozens and dozens” of tips and possible sightings of the suspect, Steve Stephens, and tried to persuade him to turn himself in when they spoke with him via his cellphone on Sunday after the shooting.

But Stephens remained at large as the search for him expanded nationwide, police said.

The shooting marked the latest video clip of a violent crime to turn up on Facebook, raising questions about how the world’s biggest social media network moderates content.

The company on Monday said it would begin reviewing how it monitors violent footage and other objectionable material in response to the killing.

Police said Stephens used Facebook Inc’s service to post a video of himself killing Robert Godwin Sr., 74.

Stephens is not believed to have known Godwin, a retired foundry worker who media reports said spent Easter Sunday morning with his son and daughter-in-law before he was killed.

Facebook vice president Justin Osofsky said the company was reviewing the procedure that users go through to report videos and other material that violates the social media platform’s standards. The shooting video was visible on Facebook for nearly two hours before it was reported, the company said.

Stephens, who has no prior criminal record, is not suspected in any other murders, police said.

The last confirmed sighting of Stephens was at the scene of the homicide. Police said he might be driving a white or cream-colored Ford Fusion, and asked anyone who spots him or his car to call police or a special FBI hotline (800-CALLFBI).

Facebook Refuses to Remove Flagged Child Pornography, ISIS Videos

April 16, 2017

Tyler O’Neil

Britain’s The Times reported that Facebook refused to remove potentially illegal terrorist and child pornography content despite it being flagged by users. This content potentially puts the social media giant at risk of criminal prosecution.

“Last month The Times created a fake profile on Facebook to investigate extremist content,” Alexi Mostrous, the paper’s head of investigations, reported Thursday. “It did not take long to come across dozens of objectionable images posted by a mix of jihadists and those with a sexual interest in children.”

Mostrous reported that a Times reporter posed as an IT professional in his thirties, befriended more than 100 supporters of the Islamic State (ISIS), and joined groups promoting lewd or pornographic images of children. He then “flagged” many of the images and ISIS videos.

Facebook moderators reportedly kept pro-jihadist posts online, including one praising ISIS attacks “from London to Chechnya to Russia and now Bangladesh in less than 48 hours,” promising to bring war “in the heart of your homes.” The site’s moderators also refused to remove an official news bulletin posted by the Islamic State praising the slaughter of 91 “Christian warriors” in the Palm Sunday bombings of two Egyptian churches.

Moderators, who are based in Ireland, California, Texas, and India, also kept up a video showing the gruesome beheading of hostages by ISIS terrorists. Facebook said it did not break its own rules against graphic violence when it kept up a video with a masked British jihadist holding a knife over a beheaded man, saying, “The spark has been lit here in Iraq. Here we are burying the first American crusader.”

Facebook also left up dozens of pornographic cartoons depicting child abuse, which Mostrous argued are likely illegal under a 2009 British law. “Intermingled with the cartoons, posted on forums with titles such as Raep Me, are pictures of real children, including several likely to be illegal.”

The Times also reported that Facebook kept up a video which appeared to show a young child being violently abused.

“In my view, many of the images and videos identified by The Times are illegal,” Julian Knowles, a Queen’s Counsel (an eminent British lawyer appointed by Queen Elizabeth II), told the paper. “One video appears to depict a sexual assault on a child. That would undoubtedly break UK indecency laws. The video showing a beheading is very likely to be a publication that encourages terrorism.”

Knowles added that he “would argue that the actions of people employed by Facebook to keep up or remove reported posts should be regarded as the actions of Facebook as a corporate entity.”

“If someone reports an illegal image to Facebook and a senior moderator signs off on keeping it up, Facebook is at risk of committing a criminal offense because the company might be regarded as assisting or encouraging its publication and distribution,” the lawyer concluded.

The Times reportedly informed the London Metropolitan Police, which coordinates counterterrorism investigations, and the National Crime Agency (NCA), about its findings.

A Metropolitan Police spokesman did not reveal whether Facebook would be investigated.

“Social media companies need to get their act together fast; this has been going on for too long,” declared Yvette Cooper, chairwoman of the House of Commons home affairs select committee. “It’s time the government looked seriously at the German proposal to invoke fines if illegal and dangerous content isn’t swiftly removed.”

Robert Buckland, the solicitor general for England and Wales, warned that if social media companies were “reckless” in allowing terrorist material to remain online, they might be charged with breaking British law under the 2006 Terrorism Act. This law forbids the dissemination of terrorist material.

Facebook has reportedly removed the images and videos in question, but only after The Times contacted the social media network for comment. “The majority of the pornographic cartoons remained live until Facebook removed them after the newspaper’s approach yesterday,” Mostrous reported.

Justin Osofsky, Facebook’s vice president of global operations, thanked The Times for notifying the social media company of the potentially illegal content.

“We are grateful to The Times for bringing this content to our attention,” Osofsky told the paper. “We have removed all of these images, which violate our policies and have no place on Facebook. We are sorry that this occurred. It is clear that we can do better, and we’ll continue to work hard to live up to the high standards people rightly expect of Facebook.”

According to The Times, however, an undercover user had already flagged this material as offensive, and Facebook had decided to keep it up, with moderators reportedly saying the images and videos did not violate the site’s “community standards.”

Most users do not have access to an established news outlet like The Times, which was founded in 1785 and is published daily in London. It is truly a tragedy if Facebook does not take ordinary users’ flagging of such material seriously, and only decides to remove such content when approached by such a long-standing and well-known outlet.

As Osofsky said, child porn and terrorist videos “have no place on Facebook.” None whatsoever.

Police: Chicago teen apparently gang-raped on Facebook Live

March 21, 2017


CHICAGO (AP) — A 15-year-old Chicago girl was apparently sexually assaulted by five or six men or boys on Facebook Live, and none of the roughly 40 people who watched the live video reported the attack to police, authorities said Tuesday.

Police only learned of the attack when the girl’s mother approached police Superintendent Eddie Johnson late Monday afternoon as he was leaving a police station in the Lawndale neighborhood on the city’s West Side, police spokesman Anthony Guglielmi said. She told him her daughter had been missing since Sunday and showed him screen-grab photos of the alleged assault.

He said Johnson immediately ordered detectives to investigate and the department asked Facebook to take down the video, which it did.

Guglielmi tweeted Tuesday that detectives found the girl and reunited her with her family, and that they’re conducting interviews.

He said Johnson was “visibly upset” after he watched the video, both by its contents and the fact that there were “40 or so live viewers and no one thought to call authorities.”

It is the second time in months that the department has investigated an apparent attack that was streamed live on Facebook. In January, four people were arrested after cellphone footage showed them allegedly taunting and beating a mentally disabled man.

 

Facebook begins rolling out its much-anticipated solution to fake news

March 6, 2017


Facebook, which was criticized for its role in facilitating the spread of misinformation during the presidential election, just debuted its first attempt at dealing with the problem.

As spotted by Gizmodo Media Group’s Anna Merlan, Facebook has started to tag articles as “disputed” by third-party fact-checking organizations.

The company announced in December 2016 that it would start labeling and burying fake news. To do that, Facebook teamed up with a host of media organizations that are part of an international non-partisan fact-checking network led by journalism non-profit Poynter. The list includes 42 organizations, but Facebook is initially relying on four: Snopes, Factcheck.org, ABC News, and PolitiFact. (All fact-checkers are required to adhere to a code of principles created by Poynter.)

The new system is expected to make it easier for users to flag and report stories that are misleading or false. Those stories will then be reviewed by third-party fact-checkers and labeled as potentially fake in the News Feed.
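
Pieced together from the reporting, the flow is: a user flags a story, third-party fact-checkers review it, and stories they judge false are labelled as disputed in the News Feed. The sketch below models only that reported flow; the class and function names are hypothetical and are not Facebook’s or Poynter’s actual APIs.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the reported flow: flag -> fact-checker review -> label.
# None of these names come from Facebook's actual systems.

@dataclass
class Story:
    url: str
    flags: int = 0
    disputed_by: list = field(default_factory=list)

def flag(story):
    """A user reports the story as potentially fake."""
    story.flags += 1

def review(story, verdicts):
    """Record which third-party fact-checkers judged the story false."""
    for checker, is_false in verdicts.items():
        if is_false:
            story.disputed_by.append(checker)

def label(story):
    return "Disputed by 3rd Party Fact-Checkers" if story.disputed_by else ""

s = Story(url="http://example.com/article")
flag(s)
review(s, {"Snopes": True, "PolitiFact": True})
print(label(s))   # prints the disputed label
```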

Facebook also recently rolled out a new section explaining the process of how a story gets marked as disputed, and a step-by-step guide for how readers can mark a story as fake if something questionable comes across their feeds.

A screenshot of a story marked as disputed on Facebook. Image: Facebook

A Facebook representative told Business Insider in December that a team of researchers would eventually begin reviewing website domains and sending fake sites to fact-checkers as well.

The issue of false information being distributed on Facebook gained prominence during the recent election. Perhaps the most notable example was the “Pizzagate” conspiracy theory, a false report that accused Hillary Clinton and others connected to her campaign of running a child sex ring out of a Washington DC pizza parlor. A North Carolina man was arrested in early December 2016, after walking into the restaurant with an assault rifle and allegedly firing a shot.

Clinton and former President Barack Obama both spoke out about the problem — Obama accused Facebook of creating a “dust cloud of nonsense” by allowing crazy theories to spread, and Clinton called the proliferation of fake news “an epidemic” after the election.

Facebook CEO Mark Zuckerberg was initially dismissive of such accusations, and said it was “pretty crazy” for anyone to suggest that fake news on Facebook could have any sway over election results. But after facing intense backlash, he changed his tune slightly.

“I recognize we have a greater responsibility than just building technology that information flows through,” Zuckerberg wrote in a statement December 15.

Since the issue of fake news gained national attention, President Donald Trump has adopted the phrase and incorporated it into his criticisms of the news media.

“Russia is fake news,” he said at a February press conference, in response to allegations about his campaign’s ties to Russia.

Whether Facebook’s new approach to curbing the spread of misinformation on the platform will actually help people better differentiate between factual and misleading stories is still to be determined. The tool isn’t yet available to all users — but future disputes about what exactly should be deemed “disputed” seem inevitable.
