Archive for the ‘Facebook’ Category

Facebook murder suspect still at large as cops get ‘dozens’ of tips

April 19, 2017 Leave a comment


A murder suspect who police said posted a video of himself on Facebook shooting an elderly man in Cleveland remained on the loose on Tuesday as authorities appealed to the public for help in the case.

Police said they have received “dozens and dozens” of tips and possible sightings of the suspect, Steve Stephens, and tried to persuade him to turn himself in when they spoke with him via his cellphone on Sunday after the shooting.

But Stephens remained at large as the search for him expanded nationwide, police said.

The shooting marked the latest video clip of a violent crime to turn up on Facebook, raising questions about how the world’s biggest social media network moderates content.

The company on Monday said it would begin reviewing how it monitors violent footage and other objectionable material in response to the killing.

Police said Stephens used Facebook Inc’s service to post video of him killing Robert Godwin Sr., 74.

Stephens is not believed to have known Godwin, a retired foundry worker who media reports said spent Easter Sunday morning with his son and daughter-in-law before he was killed.

Facebook vice president Justin Osofsky said the company was reviewing the procedure that users go through to report videos and other material that violates the social media platform’s standards. The shooting video was visible on Facebook for nearly two hours before it was reported, the company said.

Stephens, who has no prior criminal record, is not suspected in any other murders, police said.

The last confirmed sighting of Stephens was at the scene of the homicide. Police said he might be driving a white or cream-colored Ford Fusion, and asked anyone who spots him or his car to call police or a special FBI hotline (800-CALLFBI).

Facebook Refuses to Remove Flagged Child Pornography, ISIS Videos

April 16, 2017 Leave a comment

Tyler O’Neil

Britain’s The Times reported that Facebook refused to remove potentially illegal terrorist and child pornography content despite it being flagged by users. This content potentially puts the social media giant at risk of criminal prosecution.

“Last month The Times created a fake profile on Facebook to investigate extremist content,” Alexi Mostrous, the paper’s head of investigations, reported Thursday. “It did not take long to come across dozens of objectionable images posted by a mix of jihadists and those with a sexual interest in children.”

Mostrous reported that a Times reporter posed as an IT professional in his thirties, befriended more than 100 supporters of the Islamic State (ISIS), and joined groups promoting lewd or pornographic images of children. He then “flagged” many of the images and ISIS videos.

Facebook moderators reportedly kept online pro-jihadist posts including one praising ISIS attacks “from London to Chechnya to Russia and now Bangladesh in less than 48 hours,” promising to bring war “in the heart of your homes.” The site’s moderators also refused to remove an official news bulletin posted by the Islamic State praising the slaughter of 91 “Christian warriors” in the Palm Sunday bombings of two Egyptian churches.

Moderators, who are based in Ireland, California, Texas, and India, also kept up a video showing the gruesome beheading of hostages by ISIS terrorists. Facebook said it did not break its own rules against graphic violence when it kept up a video with a masked British jihadist holding a knife over a beheaded man, saying, “The spark has been lit here in Iraq. Here we are burying the first American crusader.”

Facebook also left up dozens of pornographic cartoons depicting child abuse, which Mostrous argued are likely illegal under a 2009 British law. “Intermingled with the cartoons, posted on forums with titles such as Raep Me, are pictures of real children, including several likely to be illegal.”

The Times also reported that Facebook kept up a video which appeared to show a young child being violently abused.

“In my view, many of the images and videos identified by The Times are illegal,” Julian Knowles, a Queen’s Counsel (an eminent British lawyer appointed by Queen Elizabeth II), told the paper. “One video appears to depict a sexual assault on a child. That would undoubtedly break UK indecency laws. The video showing a beheading is very likely to be a publication that encourages terrorism.”

Knowles added that he “would argue that the actions of people employed by Facebook to keep up or remove reported posts should be regarded as the actions of Facebook as a corporate entity.”

“If someone reports an illegal image to Facebook and a senior moderator signs off on keeping it up, Facebook is at risk of committing a criminal offense because the company might be regarded as assisting or encouraging its publication and distribution,” the lawyer concluded.

The Times reportedly informed the London Metropolitan Police, which coordinates counterterrorism investigations, and the National Crime Agency (NCA), about its findings.

A Metropolitan Police spokesman did not reveal whether Facebook would be investigated.

“Social media companies need to get their act together fast, this has been going on for too long,” declared Yvette Cooper, chairwoman of Parliament’s Home Affairs Select Committee. “It’s time the government looked seriously at the German proposal to invoke fines if illegal and dangerous content isn’t swiftly removed.”

Robert Buckland, the solicitor general for England and Wales, warned that if social media companies were “reckless” in allowing terrorist material to remain online, they might be charged with breaking British law under the 2006 Terrorism Act. This law forbids the dissemination of terrorist material.

Facebook has reportedly removed the images and videos in question, but only after The Times contacted the social media network for comment. “The majority of the pornographic cartoons remained live until Facebook removed them after the newspaper’s approach yesterday,” Mostrous reported.

Justin Osofsky, Facebook’s vice president of global operations, thanked The Times for notifying the social media company of the potentially illegal content.

“We are grateful to The Times for bringing this content to our attention,” Osofsky told the paper. “We have removed all of these images, which violate our policies and have no place on Facebook. We are sorry that this occurred. It is clear that we can do better, and we’ll continue to work hard to live up to the high standards people rightly expect of Facebook.”

According to The Times, however, an undercover user had already flagged this material as offensive, and Facebook had decided to keep it up, with moderators reportedly saying the images and videos did not violate the site’s “community standards.”

Most users do not have access to an established news outlet like The Times, a London daily founded in 1785. It is truly a tragedy if Facebook does not take ordinary users’ flagging of such material seriously, and only decides to remove such content when approached by such a longstanding and well-known outlet.

As Osofsky said, child porn and terrorist videos “have no place on Facebook.” None whatsoever.

Police: Chicago teen apparently gang-raped on Facebook Live

March 21, 2017 Leave a comment


CHICAGO (AP) — A 15-year-old Chicago girl was apparently sexually assaulted by five or six men or boys on Facebook Live, and none of the roughly 40 people who watched the live video reported the attack to police, authorities said Tuesday.

Police only learned of the attack when the girl’s mother approached police Superintendent Eddie Johnson late Monday afternoon as he was leaving a police department building in the Lawndale neighborhood on the city’s West Side, police spokesman Anthony Guglielmi said. She told him her daughter had been missing since Sunday and showed him screen grab photos of the alleged assault.

He said Johnson immediately ordered detectives to investigate and the department asked Facebook to take down the video, which it did.

Guglielmi tweeted Tuesday that detectives found the girl and reunited her with her family, and that they’re conducting interviews.

He said Johnson was “visibly upset” after he watched the video, both by its contents and the fact that there were “40 or so live viewers and no one thought to call authorities.”

It is the second time in months that the department has investigated an apparent attack that was streamed live on Facebook. In January, four people were arrested after cellphone footage showed them allegedly taunting and beating a mentally disabled man.


Facebook begins rolling out its much-anticipated solution to fake news

March 6, 2017 Leave a comment




Facebook, which was criticized for its role in facilitating the spread of misinformation during the presidential election, just debuted its first attempt at dealing with the problem.

As spotted by Gizmodo Media Group’s Anna Merlan, Facebook has started to tag articles as “disputed” by third-party fact-checking organizations.

The company announced in December 2016 that it would start labeling and burying fake news. To do that, Facebook teamed up with a host of media organizations that are part of an international non-partisan fact-checking network led by journalism non-profit Poynter. The list includes 42 organizations, but Facebook is initially relying on four, including Snopes, ABC News, and PolitiFact. (All fact-checkers are required to adhere to a code of principles created by Poynter.)

The new system is expected to make it easier for users to flag and report stories that are misleading or false. Those stories will then be reviewed by third-party fact-checkers and labeled as potentially fake in the News Feed.

Facebook also recently rolled out a new section explaining the process of how a story gets marked as disputed, and a step-by-step guide for how readers can mark a story as fake if something questionable comes across their feeds.
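
The flag-and-review loop described above is, in outline, a simple state machine: user reports accumulate, a story goes to third-party fact-checkers, and a verdict determines whether a “disputed” label appears in the News Feed. Here is a minimal illustrative sketch of that flow; all names, states, and thresholds are hypothetical and do not reflect Facebook’s actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical states a story might move through in a dispute pipeline.
UNFLAGGED, REPORTED, DISPUTED, CLEARED = range(4)

REPORT_THRESHOLD = 3  # hypothetical: escalate after N user reports

@dataclass
class Story:
    url: str
    state: int = UNFLAGGED
    reports: int = 0
    disputing_checkers: list = field(default_factory=list)

def report(story: Story) -> None:
    """A user flags the story as potentially false."""
    story.reports += 1
    if story.state == UNFLAGGED and story.reports >= REPORT_THRESHOLD:
        story.state = REPORTED  # queued for third-party review

def review(story: Story, checker: str, is_false: bool) -> None:
    """A third-party fact-checker files a verdict."""
    if is_false:
        story.disputing_checkers.append(checker)
        story.state = DISPUTED  # "disputed" label shown in News Feed
    elif not story.disputing_checkers:
        story.state = CLEARED

s = Story("http://example.com/story")
for _ in range(3):
    report(s)
review(s, "Snopes", is_false=True)
print(s.state == DISPUTED, s.disputing_checkers)
```

The key design point the article implies is that user flags alone never mark a story as fake; they only route it to fact-checkers, whose verdicts drive the label.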


A Facebook representative told Business Insider in December that a team of researchers would eventually begin reviewing website domains and sending fake sites to fact-checkers as well.

The issue of false information being distributed on Facebook gained prominence during the recent election. Perhaps the most notable example was the “Pizzagate” conspiracy theory, a false report that accused Hillary Clinton and others connected to her campaign of running a child sex ring out of a Washington DC pizza parlor. A North Carolina man was arrested in early December 2016 after walking into the restaurant with an assault rifle and allegedly firing a shot.

Clinton and former President Barack Obama both spoke out about the problem — Obama accused Facebook of creating a “dust cloud of nonsense” by allowing crazy theories to spread, and Clinton called the proliferation of fake news “an epidemic” after the election.

Facebook CEO Mark Zuckerberg was initially dismissive of such accusations, and said it was “pretty crazy” for anyone to suggest that fake news on Facebook could have any sway over election results. But after facing intense backlash, he changed his tune slightly.

“I recognize we have a greater responsibility than just building technology that information flows through,” Zuckerberg wrote in a statement December 15.

Since the issue of fake news gained national attention, President Donald Trump has adopted the phrase and incorporated it into his criticisms of the news media.

“Russia is fake news,” he said at a February press conference, in response to allegations about his campaign’s ties to Russia.

Whether Facebook’s new approach to curbing the spread of misinformation on the platform will actually help people better differentiate between factual and misleading stories is still to be determined. The tool isn’t yet available to all users — but future disputes about what exactly should be deemed “disputed” seem inevitable.

Dad who live-streamed his son’s birth on Facebook loses in court

February 18, 2017 Leave a comment

Man filmed his partner’s labor, then sued TV companies that picked up the video.

A father who live-streamed his son’s birth on Facebook and proceeded to sue for copyright infringement several media outlets that used the clips has lost his case.

US District Judge Lewis Kaplan ruled yesterday that the lawsuit filed by Kali Kanongataa must be thrown out, after the American Broadcasting Company and other defendants filed motions arguing that their use of the clips was covered by “fair use.”

Kaplan’s reasoning wasn’t included in his written order. Minutes from yesterday’s court hearing aren’t yet available. But ABC’s argument in favor of fair use is on the public record, and Kaplan presumably accepted some or all of that argument.

Kanongataa started broadcasting his partner giving birth on Facebook in May 2016, intending to share it with family and friends. According to news reports, he realized it was actually streaming publicly after about half an hour, but he decided to leave it that way. That led to about 120,000 people worldwide watching his partner, Sarah Dome, deliver their child.

In September, Kanongataa filed suit (PDF) against ABC and Yahoo for showing portions of his video on Good Morning America as well as the ABC news website and a Yahoo site that hosts ABC content. He also sued COED Media Group and iHeartMedia. In October, he sued magazine publisher Rodale over a clip and screenshot used on the website for its magazine Women’s Health. Last month, he sued Cox Communications.

In November, ABC lawyers filed a motion (PDF) calling their client’s use of the Kanongataa clip a “textbook example of fair use.” ABC used 22 seconds of a 45-minute video in order to produce a news story that would “enable viewers to understand and form an opinion about the couple’s actions.” The motion continues:

Where pictures or videotapes themselves are the focus of a major news story, news reporters may make brief use of selected footage to explain to the public what the story is about.

If the Copyright Act did not permit ABC to engage in this type of use, it would substantially inhibit important First Amendment activities by enabling copyright holders to exercise control over the public’s ability to understand news events. The Copyright Act specifically avoids this outcome.

Fair use of copyrighted works is permitted for news reporting, and ABC argues that the use of Facebook Live to broadcast a birth was a “socially significant phenomenon.” That’s backed up by Kanongataa himself, who said he thought it was the first time Facebook Live had been used to broadcast a birth, ABC lawyers note.

The ABC clip is clearly social commentary, because it treats the filming itself as newsworthy, not the underlying event, the brief states.

Judge Kaplan’s order shuts down Kanongataa’s lawsuit against ABC, NBC, Yahoo, and COED Media Group. A lawsuit against CBS and Microsoft was dropped in November, possibly due to a settlement. The case against Rodale is still pending and is also being overseen by Judge Kaplan. Kanongataa’s lawsuit against Cox was filed in a different district and remains pending in the Eastern District of New York.

A lawyer for Kanongataa didn’t respond to a request for comment about the order.

Kanongataa and Dome spoke to the TV show Inside Edition for a report that came out shortly after the birth. During that segment, they explained that just a day after Dome gave birth, Child Protective Services took the baby into custody. Someone from a past relationship had recognized Kanongataa on Facebook and reported to CPS that he had domestic violence allegations against him. Kanongataa denies those allegations.

“They came in and took our baby,” Dome said on the program. “I only spent one night with him.”

Facebook Live continues to be an outlet for gruesome crime

February 17, 2017 1 comment

On Tuesday, two journalists were shot dead during a Facebook Live stream in the Dominican Republic. That afternoon, Facebook Live captured a separate incident where a two-year-old child and 26-year-old man were shot dead and a pregnant woman was critically injured.

Why this matters: Like the spread of fake news, Facebook is struggling to balance the freedom of its users to post what they want with having some control over what spreads among its billions of users.

The grim toll: Earlier this year four teens kidnapped and tortured a disabled teen and streamed it live. These incidents are just the latest in a string of similar murders and gruesome crimes that took place on Facebook Live last year:

  • In March, a Chicago man was shot in his home.
  • In June, a 28-year-old Chicago man captured his own murder.
  • In June, an ISIS terrorist killed a Parisian police officer and his wife and then threatened their terrified three-year-old.
  • In July, a shooting spree targeted three men hanging out in their car in Norfolk, Virginia, leaving one of the victims critically injured.
  • In July, a woman in Minnesota captured her fiancé’s death when he was shot dead by police.
Facebook’s reaction: Facebook has not yet commented on the recent incidents. In the past, Mark Zuckerberg has posted heartfelt reactions to these instances on his own Facebook page and admitted that while his platform brings people together, these incidents show “how far we still have to go.” Following a string of crimes in July, the tech giant said the rules for live video are “the same for all the rest of our content.” Later in an interview with TechCrunch, the company elaborated on this saying they will only remove content “if it celebrates or glorifies violence.”

Want to post a discriminatory ad? Facebook may try to stop you automatically

February 10, 2017 1 comment


Follows November outcry over targeted FB ads’ possible violations of Fair Housing Act.

Following Facebook’s November promises to take action, the company unveiled a suite of rules and machine-learning tactics on Wednesday, all in the name of curtailing discriminatory ad-targeting practices.

The most notable of these is Facebook’s new automated toolset that is built to identify advertisements for “housing, employment or credit opportunities”—and flag them if they employ the site’s “multicultural affinity targeting” system. In other words, if an ad of those types is built with a request that Facebook not deliver it to African-American, Hispanic, or Asian-American viewers, the site will attempt to automatically block the ad and show a relevant notice. Advertisers can then either remove the cultural limitations from the ad or request a manual review for its approval.

Should the automated toolset recognize this type of ad but not pick up on apparent cultural targeting, advertisers will instead be directed to a three-paragraph “certification” notice, which advertisers will have to sign. This notice, among other things, forces advertisers to pledge that they “will not use Facebook advertising to improperly discriminate.” This notice coincides with Facebook updating language in its advertiser-policy pages about discriminatory practices. In an October report, ProPublica exposed previous issues in the social network’s advertising platform by buying and running discriminatory ads that flew in the face of the Fair Housing Act.
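
The screening flow the announcement describes has three outcomes: approve the ad, block it for using affinity exclusions in a regulated category, or require a signed certification. A minimal sketch of that decision logic, with hypothetical names and categories (this is an illustration of the described policy, not Facebook’s actual code):

```python
# Hypothetical sketch of the automated screening flow described above:
# detect "housing, employment or credit" ads, block any that also use
# multicultural-affinity exclusions, and otherwise require the
# non-discrimination certification before the ad can run.

REGULATED_CATEGORIES = {"housing", "employment", "credit"}

def screen_ad(category: str, excluded_affinities: set) -> str:
    """Return the action an automated reviewer might take on an ad."""
    if category not in REGULATED_CATEGORIES:
        return "approve"
    if excluded_affinities:
        # Regulated ad excludes an affinity group: block it pending
        # removal of the targeting or a manual review.
        return "block"
    # Regulated ad with no affinity exclusions: advertiser must sign
    # the non-discrimination certification before it runs.
    return "require_certification"

print(screen_ad("housing", {"African-American"}))  # block
print(screen_ad("credit", set()))                  # require_certification
print(screen_ad("retail", set()))                  # approve
```

As the article notes, a real system would also need to classify an ad into these categories from its text in the first place, which is the part Facebook’s announcement leaves unspecified.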

The announcement in no way clarifies what terms and cues Facebook will track or trace to recognize an ad as one for housing, employment, or credit, nor does it suggest that any advertiser who receives a notice and then immediately rewrites an ad to remove trackable terms will be scrutinized any further. And the announcement didn’t mention any further changes to its multicultural targeting system beyond changing its name from “ethnic affinity targeting.”

The American Civil Liberties Union took the opportunity to commend Facebook for its efforts—and asked other tech companies with self-serve advertising platforms to do the same. “All ad platforms should make it impossible to target ads in these categories by any protected class status, including race, gender, and religion. And we need to keep educating platforms and advertisers about the danger of discrimination that targeting presents, even when ads are targeted by zip code or based on what music you listen to.”
