
Archive for the ‘Facebook’ Category

New Patent Will Allow Facebook to Secretly Spy on You – Here’s How to Stop It

June 27, 2017

The new patent allows Facebook to monitor your facial expressions and study your mood.

The social media giant that everyone loves is contemplating secretly watching you, or possibly even recording your emotions, through your smartphone’s camera or your computer’s webcam – according to a new patent filed by Facebook.

Image Source: The Sun – CB Insights found a patent, granted in May, which lets Facebook determine your emotion from pictures taken through your smartphone camera.

The patent describes how the social media giant could use this technology to monitor your emotions as you view content on Facebook.

Once it has captured your emotional reactions, Facebook could then use that information to keep you scrolling on the site for longer.

Image Source: The Sun – A patent from 2015 reveals Facebook’s plans to watch you read content on its social network and analyse your reaction.

For instance, if you smiled while watching a video you liked, the algorithm would take note and serve you similar content.

However, the patent also states that if your facial expressions showed interest in an advertisement you were viewing, then the company could target you with similar advertisements.

Image Source: The Sun – Its most recent patent aims to add emotion to your messages based on typing speed and location.

The concerning part is that the patent was filed three years ago and published in 2015, yet it has only recently caught the attention of alternative media.

According to Facebook, the company has filed for many technology patents, but that doesn’t mean it will use them all. Facebook representatives went on to say that patents should not be taken as an indication of the company’s future plans.

Image Source: The Sun – Facebook founder Mark Zuckerberg in his home with wife Priscilla and baby Max.

However, remember that back in 2014 Facebook conducted an experiment in which it manipulated the news feeds of hundreds of thousands of users to see whether it could affect their emotions, and it failed to disclose the ethics of doing so.

Facebook later admitted it had failed to communicate clearly to people why, and under what ethical rules, it was toying with users’ emotions.

Image Source: The Guardian – Mark Zuckerberg celebrates 500 million monthly active users on Instagram – but he also revealed a lot about himself by leaving his laptop in the background.

An image of Facebook’s founder drew wide attention when he was seen to have covered his computer’s webcam and microphone with tape while celebrating Instagram’s success. The internet went crazy over this, and this particular patent will only add fuel to the fire.

The documents also reveal a new application that would judge your emotions, via its algorithm, based on how hard you press your touchscreen while typing.

How Can You Stop This?

For iPhone users: go into SETTINGS, find the Facebook application and tap on it. Inside, you will find SETTINGS again, and from there the option ALLOW FACEBOOK TO ACCESS. Among the options you will see CAMERA and MICROPHONE – toggle these off. Also toggle off LOCATION for added protection.

The same goes for Android users: go into your SETTINGS, then into APPS. Once there, search for FACEBOOK, then look for PERMISSIONS and tap it to see your options. Toggle CAMERA and MICROPHONE off.
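For readers comfortable with a command line, the same Android permissions can also be revoked over USB with adb. The sketch below drives adb from Python; it assumes adb is installed, USB debugging is enabled, and that the app’s package name is com.facebook.katana (the stock Facebook app) – adjust if yours differs.

```python
# Sketch: revoke Facebook's camera and microphone permissions via adb.
# Assumes adb is on PATH, USB debugging is on, and the package name is
# com.facebook.katana (the stock Facebook app) -- an assumption, not gospel.
import subprocess

PACKAGE = "com.facebook.katana"
PERMISSIONS = ["android.permission.CAMERA", "android.permission.RECORD_AUDIO"]

for perm in PERMISSIONS:
    # `pm revoke` withdraws a runtime permission on Android 6.0 and later
    subprocess.run(["adb", "shell", "pm", "revoke", PACKAGE, perm], check=True)
    print(f"Revoked {perm}")
```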

Facebook Has a Patent to Use Your Camera & Watch Emotional Reactions

June 11, 2017

(Claire Bernish, The Daily Sheeple) In its latest dystopian innovation, Facebook has decided it would like to surreptitiously spy on people through their cameras, employing contentious facial recognition technology to analyze their emotions — in essence, this amounts to reading a person’s mind.


A recently resurfaced patent filed by Facebook evinces the social media platform’s remarkably Orwellian goal of reading users’ emotions as they encounter various content — information which would then be used to tailor material for relevance to a person’s mood.

Flatly called “Techniques for emotion detection and content delivery,” the patent — filed in November 2015 and rediscovered by the New York-based marketing intelligence firm CB Insights upon its granting on May 25, 2017 — “would automatically add emotional information to text messages, predicting the user’s emotion based on methods of keyboard input. The visual format of the text message would adapt in real time based on the user’s predicted emotion. As the patent notes (and as many people have likely experienced), it can be hard to convey mood and intended meaning in a text-only message; this system would aim to reduce misunderstandings.

“The system could pick up data from the keyboard, mouse, touch pad, touch screen, or other input devices, and the patent mentions predicting emotion based on relative typing speed, how hard the keys are pressed, movement (using the phone’s accelerometer), location, and other factors.”
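Purely as an illustration of the kind of signals CB Insights describes – not Facebook’s actual system, whose internals are not public – a toy classifier over typing features might look like the following. Every feature name and threshold here is invented for the sketch.

```python
# Toy emotion inference from typing signals, loosely modeled on the features
# named in the patent summary above (typing speed, key pressure, movement).
# All thresholds and labels are invented for this sketch.
from dataclasses import dataclass

@dataclass
class TypingSample:
    chars_per_second: float  # relative typing speed
    key_pressure: float      # normalized 0..1, from a pressure-sensitive screen
    device_shake: float      # accelerometer variance while typing

def predict_emotion(sample: TypingSample) -> str:
    """Map raw input-device signals to a coarse emotion label."""
    if sample.key_pressure > 0.8 and sample.chars_per_second > 6:
        return "angry"       # hard, fast typing
    if sample.chars_per_second < 1.5:
        return "hesitant"    # slow, deliberate typing
    if sample.device_shake > 0.5:
        return "excited"     # lots of movement while typing
    return "neutral"

print(predict_emotion(TypingSample(7.2, 0.9, 0.1)))  # -> "angry"
```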

It seems the goliath social media platform’s goal — at least superficially — is to keep users interested enough to stay on the site longer.

But the creepiness factor — and the tacit removal of users’ ability to choose which content they’d like to see — makes Facebook’s patent a dangerous foray into further control, beyond even the already infuriating algorithms about which users have complained for years.

The Independent reports:

“If you smiled as you looked at pictures of one of your friends, for instance, Facebook’s algorithm would take note of that and display more pictures of that friend in your News Feed.

“Another example included in the patent application explains that if you looked away from your screen when a video of a kitten played, Facebook would stop showing similar type of videos in your Feed.

“In another case, the document says that if you happened to watch an advert for scotch, Facebook could choose to target you with more adverts for scotch.”

A second patent, called “Systems and methods for dynamically generating emojis based on image analysis of facial features,” indicates the possibility emojis could be conjured to fit a user’s mood — and Facebook refused to confirm to the Daily Mail whether or not it would be employed.

CB Insights additionally notes,

“The patent mentions several additional features, such as the ability to modify the emoji based on more detailed analysis of the user’s face, and the ability to capture gestures made by the user and add those to the emoji […]

“By reducing users’ facial expressions to emojis from a pre-set list, Facebook could potentially analyze users’ emotions more easily. Facebook could gain clearer insight into feelings and reactions, while also adding a new interactive feature.”

It appears Facebook CEO Mark Zuckerberg didn’t learn any conceivable lesson from previous flirtations with manipulating users’ emotions.

In 2014, millions of users were jolted to learn Facebook had performed a secret social engineering experiment by deftly crafting information posted to 689,000 profiles — filtering posts, videos, comments, images, and links — to make people feel more positively or negatively through a process known as “emotional contagion.”

“Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks,” the joint Cornell University and University of California study, conducted unbeknownst to users and without their permission, found.

Ensuing contention excoriated the world’s largest social media platform, and Facebook was later forced to capitulate, stating it “failed to communicate clearly why and how we did it.”

Privacy advocates, activists, and attorneys were appalled at the mass, secretive intrusion, and — mostly due to the covert nature of the experiment — rumors occasionally circulate that other such analyses are still being conducted.

However fundamentally invasive the rediscovered patent might sound in the context of the prior PR nightmare, a Facebook spokesperson cited by the Independent insisted, “We often seek patents for technology we never implement, and patents should not be taken as an indication of future plans.”

Of course, Facebook users around the world didn’t find comfort in a picture which surfaced last year of Zuckerberg with his web camera taped over — a detail which could now only kindle suspicions the patent technology might already be the subject of a test run.

Whistleblower Edward Snowden’s ominous warning to everyone to cover the cameras and microphones on all devices seems all the more prescient.

Images: CB Insights.

Google, Facebook are Upset They May No Longer Be Able to Sell Your Data

(Daily Caller News Foundation) Google and Facebook are actively trying to stop a proposed law that would force them to acquire consent from users before collecting their personal information.

The “Browser Act,” introduced May 18 by Republican Rep. Marsha Blackburn of Tennessee, mandates that people must explicitly give permission to internet service providers (ISPs) and websites wanting to use their browsing history and other data for business purposes.

“I think it is necessary to get our consumers the strongest toolbox possible to allow them to control their virtual presence,” Blackburn told The Daily Caller News Foundation (TheDCNF) in an interview. “Individuals in the physical world have the opportunity to hold personal information private and they should have that same opportunity in the virtual space.”

The legislation’s primary focus is sectored into two categories. User information considered sensitive would be subjected to an opt-in approval system, meaning the data would only be permitted for company use if the person gives clear approval. In contrast, user information deemed non-sensitive would be subjected to an opt-out approval system in which data is automatically permitted for business operations unless notified otherwise.
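A minimal sketch of the bill’s two consent regimes as just described, assuming a simple data-category model (the category names and split below are illustrative, not taken from the bill text):

```python
# Sketch of the BROWSER Act's opt-in vs. opt-out regimes as described above.
# The category names and the sensitive/non-sensitive split are illustrative.
SENSITIVE = {"browsing_history", "health", "financial", "precise_location"}

class UserConsent:
    def __init__(self) -> None:
        self.opt_in: set[str] = set()   # sensitive: off until the user says yes
        self.opt_out: set[str] = set()  # non-sensitive: on until the user says no

    def may_use(self, category: str) -> bool:
        if category in SENSITIVE:
            return category in self.opt_in   # explicit approval required
        return category not in self.opt_out  # permitted unless user opted out

consent = UserConsent()
print(consent.may_use("browsing_history"))  # False: sensitive, no opt-in yet
print(consent.may_use("device_type"))       # True: non-sensitive, no opt-out
```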

Blackburn said she came up with this arrangement after talking with both members of the affected industry and consumers.

“What I would hear from people was ‘Hey, you know there are some people that I want to have an online relationship with. I do business with them, I want them to have some of my information. There are other people I don’t want to have any of my information,’” Blackburn explained. “So I think this is a way — if we do opt-in on sensitive information — for consumers to have more control of their virtual presence.”

The Internet Association, a political lobbying group representing tech giants like Facebook, Google and Amazon, calls the bill misguided.

“We, along with a broad swath of the American economy, are aware of the BROWSER Act and are tracking the proposal,” Noah Theran, vice president of public affairs and communication of the Internet Association, said in an official statement. “This bill has the potential to upend the consumer experience online and stifle innovation.”

Blackburn argues that another important distinction must be made alongside the opt-in and opt-out systems, specifically concerning the jurisdictional powers of government agencies.

Enabled by the Congressional Review Act, the congresswoman was one of the 265 members of Congress who voted to reverse a privacy rule implemented by the Federal Communications Commission (FCC) under the Obama administration.

Like the Browser Act, the rule was going to require ISPs to get customers’ consent before sharing their browsing history with other companies. But unlike the Browser Act, the Obama-era FCC rule didn’t include companies like Facebook and Google (also known as edge providers).

Blackburn said she supported repeal of the privacy mandate because “we should have one regulator with one set of standards and rules for the online universe,” adding that the Federal Trade Commission (FTC) is “our nation’s historical enforcement and judgement body when it comes to privacy issues in the commercial space.”

Scott Cleland, chairman of NetCompetition, a pro-competition e-forum, who served in the George H. W. Bush Administration, says such differentiation is critical.

“The BROWSER Act corrects the flaw in the FCC’s broadband rules that created conflicting FCC and FTC privacy regimes by offering one uniform and consistent law to empower users to have control over their privacy,” Cleland told TheDCNF.

Like Cleland, FCC Chairman Ajit Pai has argued several times that the FTC should be the enforcer of such privacy rules as it always was prior to 2015.

Some organizations, like the Electronic Frontier Foundation (EFF), contend that websites (including Facebook and Google) should not necessarily be playing by the same rules as other companies.

The bill “comes up short on a handful of fronts such as preempting state enforcement of consumer privacy and treating websites as if they are the same as cable and telephone companies despite clear differences in the market and choice for consumers,” Ernesto Falcon, legislative counsel for the EFF, told TheDCNF.

The Internet Association agrees to some degree.

“Policymakers must recognize that websites and apps continue to be under strict FTC privacy enforcement and are not in an enforcement gap, unlike other stakeholders in the ecosystem,” Theran continued in his statement.

Google and Facebook’s opposition to the Browser Act — specifically to not being allowed to automatically sell “customer web browsing history” — isn’t very surprising, since the two companies combined account for roughly 90 percent of the growth in new advertising revenue. Google is expected to make $72.69 billion in ad revenues in 2017, while Facebook is estimated to make $33.76 billion, according to market research company eMarketer. Advertisements, specifically highly targeted and tailored ads, are highly dependent on user data, like personal preferences deciphered from web browsing history and other online activity. (Users of Facebook, for example, have presumably noticed that the particular product they were just searching for on another website often appears in an ad on the social media platform.)

Aside from administrative intricacies, there are other factors that could potentially, or at least partially, explain the EFF’s disapproval of the Browser Act. There are at least seven people who have worked for both Google and the EFF in some respect, whether at separate points in their careers or concurrently.

Brad Templeton, former chairman of the EFF, admitted in a blog post that he has “done work for Google advising on software design.”

“I’m a fan of Google, and have been friends with Google’s management since they started the company,” Templeton wrote, a disclosure he offered because his article was somewhat critical of the tech conglomerate.

Longtime entrepreneur Joe Kraus was an EFF board member for approximately seven years while also serving as an executive at Google, according to his own LinkedIn page.

Fred von Lohmann, who now works as legal director of copyright for Google, was employed as the EFF’s senior staff attorney for more than eight years.

Dan Auerbach went from working at Google for four years to working at the EFF for three years.

Chris Palmer, currently the senior software engineer at Google, had a different trajectory, going from Google to the EFF and then back to Google.

Others, like Erica Portnoy and Derek Slater, also worked at both the advocacy organization and the corporation.

While a “revolving door” relationship, of course, does not inherently imply collusion on matters of policy, it does signify a sort of incestuous relationship between the two entities. While the EFF is generally aggressive on internet privacy, it seems less so when it comes to Facebook and Google snooping on and selling browser histories.

Even though the EFF is technically a 501(c)(3) registered nonprofit with a main focus on digital privacy rights, its choice to advocate against privacy protections here raises some questions: is the EFF yielding to an ally, or are there legitimate policy nuances that led it to back privacy protections just months ago but oppose more expansive legislation now?

Representatives of several other organizations, including some similar to the EFF, like the Center for Democracy and Technology and TechFreedom, declined to provide TheDCNF with their analysis of Blackburn’s pending legislation because they said they need more time to look into it.

Facebook and Google (via the Internet Association) may be against Blackburn’s Browser Act, but AT&T, another highly profitable corporation, applauds it for providing a “comprehensive and uniform privacy framework that applies across the entire Internet ecosystem” and not just to telecommunications companies like itself.

(RELATED: Pelosi, Who Receives Donations From Google And Facebook, Pressures Cable Companies To Support Privacy Regs)

“We support Chairwoman Blackburn for moving the discussion in that direction and we look forward to working with her as this legislation moves forward,” an AT&T spokesperson told TheDCNF.

Revealed: Facebook’s internal rulebook on sex, terrorism and violence

Leaked policies guiding moderators on what content to allow are likely to fuel debate about social media giant’s ethics

Facebook’s secret rules and guidelines for deciding what its 2 billion users can post on the site are revealed for the first time in a Guardian investigation that will fuel the global debate about the role and ethics of the social media giant.

The Guardian has seen more than 100 internal training manuals, spreadsheets and flowcharts that give unprecedented insight into the blueprints Facebook has used to moderate issues such as violence, hate speech, terrorism, pornography, racism and self-harm.

There are even guidelines on match-fixing and cannibalism.

The Facebook Files give the first view of the codes and rules formulated by the site, which is under huge political pressure in Europe and the US.

They illustrate difficulties faced by executives scrabbling to react to new challenges such as “revenge porn” – and the challenges for moderators, who say they are overwhelmed by the volume of work, which means they often have “just 10 seconds” to make a decision.

“Facebook cannot keep control of its content,” said one source. “It has grown too big, too quickly.”

Many moderators are said to have concerns about the inconsistency and peculiar nature of some of the policies. Those on sexual content, for example, are said to be the most complex and confusing.

A slide on Facebook’s revenge porn policy. Photograph: Guardian

One document says Facebook reviews more than 6.5m reports a week relating to potentially fake accounts – known as FNRP (fake, not real person).

Using thousands of slides and pictures, Facebook sets out guidelines that may worry critics who say the service is now a publisher and must do more to remove hateful, hurtful and violent content.

Yet these blueprints may also alarm free speech advocates concerned about Facebook’s de facto role as the world’s largest censor. Both sides are likely to demand greater transparency.

The Guardian has seen documents supplied to Facebook moderators within the last year. The files tell them:

  • Remarks such as “Someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”, or “fuck off and die” because they are not regarded as credible threats.
  • Videos of violent deaths, while marked as disturbing, do not always have to be deleted because they can help create awareness of issues such as mental illness.
  • Some photos of non-sexual physical abuse and bullying of children do not have to be deleted or “actioned” unless there is a sadistic or celebratory element.
  • Photos of animal abuse can be shared, with only extremely upsetting imagery to be marked as “disturbing”.
  • All “handmade” art showing nudity and sexual activity is allowed but digitally made art showing sexual activity is not.
  • Videos of abortions are allowed, as long as there is no nudity.
  • Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”.
  • Anyone with more than 100,000 followers on a social media platform is designated as a public figure – which denies them the full protections given to private individuals.

Other types of remarks that can be permitted by the documents include: “Little girl needs to keep to herself before daddy breaks her face,” and “I hope someone kills you.” The threats are regarded as either generic or not credible.

In one of the leaked documents, Facebook acknowledges “people use violent language to express frustration online” and feel “safe to do so” on the site.

It says: “They feel that the issue won’t come back to them and they feel indifferent towards the person they are making the threats about because of the lack of empathy created by communication via devices as opposed to face to face.

Facebook’s policy on threats of violence. A tick means something can stay on the site; a cross means it should be deleted. Photograph: Guardian

“We should say that violent language is most often not credible until specificity of language gives us a reasonable ground to accept that there is no longer simply an expression of emotion but a transition to a plot or design. From this perspective language such as ‘I’m going to kill you’ or ‘Fuck off and die’ is not credible and is a violent expression of dislike and frustration.”

It adds: “People commonly express disdain or disagreement by threatening or calling for violence in generally facetious and unserious ways.”

Facebook conceded that “not all disagreeable or disturbing content violates our community standards”.
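To make that distinction concrete, here is a toy version of the “credible threat” test the leaked documents describe. The rules are paraphrased from the criteria quoted above; the function itself is a sketch, not Facebook’s actual code, and the real list of protected categories is longer than shown.

```python
# Toy "credible threat" check paraphrasing the leaked criteria above:
# threats against protected categories (e.g. heads of state) are deleted;
# generic violent venting stays up. Real moderation is far more involved.
PROTECTED_TARGETS = {"head of state"}  # the real list is longer; illustrative

def should_delete(target: str, has_specific_plan: bool) -> bool:
    """Delete if the target is protected or the language shows a concrete plot."""
    if target in PROTECTED_TARGETS:
        return True            # e.g. "Someone shoot Trump" is deleted
    return has_specific_plan   # specificity turns venting into a credible threat

print(should_delete("head of state", False))  # True: protected category
print(should_delete("acquaintance", False))   # False: venting, not credible
```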

Monika Bickert, Facebook’s head of global policy management, said the service had almost 2 billion users and that it was difficult to reach a consensus on what to allow.

A Facebook slide on threats of violence. The ticks mean these statements need not be deleted. Photograph: Guardian

“We have a really diverse global community and people are going to have very different ideas about what is OK to share. No matter where you draw the line there are always going to be some grey areas. For instance, the line between satire and humour and inappropriate content is sometimes very grey. It is very difficult to decide whether some things belong on the site or not,” she said.


“We feel responsible to our community to keep them safe and we feel very accountable. It’s absolutely our responsibility to keep on top of it. It’s a company commitment. We will continue to invest in proactively keeping the site safe, but we also want to empower people to report to us any content that breaches our standards.”

She said some offensive comments may violate Facebook policies in some contexts, but not in others.

Facebook’s leaked policies on subjects including violent death, images of non-sexual physical child abuse and animal cruelty show how the site tries to navigate a minefield.

The files say: “Videos of violent deaths are disturbing but can help create awareness. For videos, we think minors need protection and adults need a choice. We mark as ‘disturbing’ videos of the violent deaths of humans.”

Such footage should be “hidden from minors” but not automatically deleted because it can “be valuable in creating awareness for self-harm afflictions and mental illness or war crimes and other important issues”.

A slide on animal cruelty. Photograph: Guardian

Regarding non-sexual child abuse, Facebook says: “We do not action photos of child abuse. We mark as disturbing videos of child abuse. We remove imagery of child abuse if shared with sadism and celebration.”

One slide explains Facebook does not automatically delete evidence of non-sexual child abuse to allow the material to be shared so “the child [can] be identified and rescued, but we add protections to shield the audience”. This might be a warning on the video that the content is disturbing.

Facebook confirmed there are “some situations where we do allow images of non-sexual abuse of a child for the purpose of helping the child”.

Its policies on animal abuse are also explained, with one slide saying: “We allow photos and videos documenting animal abuse for awareness, but may add viewer protections to some content that is perceived as extremely disturbing by the audience.

“Generally, imagery of animal abuse can be shared on the site. Some extremely disturbing imagery may be marked as disturbing.”

Photos of animal mutilation, including those showing torture, can be marked as disturbing rather than deleted. Moderators can also leave photos of abuse where a human kicks or beats an animal.

Facebook said: “We allow people to share images of animal abuse to raise awareness and condemn the abuse but remove content that celebrates cruelty against animals.”

The files show Facebook has issued new guidelines on nudity after last year’s outcry when it removed an iconic Vietnam war photo because the girl in the picture was naked.

It now allows for “newsworthy exceptions” under its “terror of war” guidelines but draws the line at images of “child nudity in the context of the Holocaust”.

Facebook told the Guardian it was using software to intercept some graphic content before it got on the site, but that “we want people to be able to discuss global and current events … so the context in which a violent image is shared sometimes matters”.

Some critics in the US and Europe have demanded that the company be regulated in the same way as mainstream broadcasters and publishers.

A Facebook slide on its Holocaust policy. Photograph: Guardian

But Bickert said Facebook was “a new kind of company. It’s not a traditional technology company. It’s not a traditional media company. We build technology, and we feel responsible for how it’s used. We don’t write the news that people read on the platform.”

A report by British MPs published on 1 May said “the biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal or dangerous content, to implement proper community standards or to keep their users safe”.

Sarah T Roberts, an expert on content moderation, said: “It’s one thing when you’re a small online community with a group of people who share principles and values, but when you have a large percentage of the world’s population and say ‘share yourself’, you are going to be in quite a muddle.

“Then when you monetise that practice you are entering a disaster situation.”

Facebook has consistently struggled to assess the news or “awareness” value of violent imagery. While the company recently faced harsh criticism for failing to remove videos of Robert Godwin being killed in the US and of a father killing his child in Thailand, the platform has also played an important role in disseminating videos of police killings and other government abuses.

In 2016, Facebook removed a video showing the immediate aftermath of the fatal police shooting of Philando Castile but subsequently reinstated the footage, saying the deletion was a “mistake”.

Facebook fined $122 million for misleading EU over WhatsApp deal

Facebook says it couldn’t automatically match WhatsApp accounts; EC disagrees.


Facebook has been smacked with a €110 million fine by the antitrust wing of the European Commission for providing incorrect or misleading information about its acquisition of WhatsApp.

Three years ago, Facebook claimed that it did not have the technical capabilities to match existing Facebook accounts with the WhatsApp accounts it would acquire—a claim that Brussels’ competition chief Margrethe Vestager strongly disagrees with.

“The commission has found that… the technical possibility of automatically matching Facebook and WhatsApp users’ identities already existed in 2014, and that Facebook staff were aware of such a possibility,” the commission said on Thursday.

Back in 2014, upon the pronouncement of their impending nuptials, WhatsApp promised that “nothing” would change for its hundreds of millions of users after being acquired by Facebook. By August 2016, however, the free content ad network had reneged on that claim: WhatsApp rolled out some new terms of service that explicitly allowed Facebook to hoover up user data, ostensibly to provide targeted advertising.

It was these new terms of service that caught the eye of Vestager and her team, and in December 2016 the European Commission announced it would be investigating whether Facebook had provided incorrect or misleading information.

The commission can impose fines of up to one percent of the turnover of a company when it intentionally or negligently provides incorrect information during a merger or acquisition. The fine imposed on Facebook—€110 million or about £94/$122 million—is about 0.5 percent of the company’s reported $27 billion revenues in 2016.

Vestager said Facebook had committed two separate infringements: once when it provided incorrect or misleading information in its paperwork to acquire WhatsApp in 2014, and again when the EC requested further information from the company. By that rationale, the €110m fine for two breaches under EU competition rules is small: it could have been as large as €480 million.
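A back-of-envelope check of those numbers (my arithmetic, assuming Facebook’s reported $27 billion 2016 revenue converts to roughly €24 billion at 2016 exchange rates):

```latex
% Rough arithmetic behind the fine figures above, in euros
% (assumption: $27bn of 2016 revenue taken as ~EUR 24bn).
\[
  \underbrace{1\% \times 24\,\mathrm{bn}}_{\text{max per infringement}}
    = 240\,\mathrm{m}, \qquad
  2 \times 240\,\mathrm{m} = 480\,\mathrm{m} \quad \text{(the ceiling)}
\]
\[
  \frac{110\,\mathrm{m}}{24\,\mathrm{bn}} \approx 0.46\%
  \approx 0.5\% \quad \text{(the actual fine, as a share of turnover)}
\]
```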

“In setting the amount of a fine, the commission takes into account the nature, the gravity, and duration of the infringement, as well as any mitigating and aggravating circumstances,” the commission said. It added:

the Commission considers that Facebook staff were aware of the user matching possibility and that Facebook was aware of the relevance of user matching for the commission’s assessment, and of its obligations under the Merger Regulation. Therefore, Facebook’s breach of procedural obligations was at least negligent.

The commission has also considered the existence of mitigating circumstances, notably the fact that Facebook cooperated with the commission during the procedural infringement proceedings. In particular, in its reply to the commission’s Statement of Objections, Facebook acknowledged its infringement of the rules and waived its procedural rights to have access to the file and to an oral hearing. This allowed the commission to conduct the investigation more efficiently. The commission has taken Facebook’s cooperation into account in setting the level of the fine.

On the basis of these factors, the commission has concluded that an overall fine of €110 million is both proportionate and deterrent.

Vestager said the “decision sends a clear signal to companies that they must comply with all aspects of EU merger rules, including the obligation to provide correct information. And it imposes a proportionate and deterrent fine on Facebook. The commission must be able to take decisions about mergers’ effects on competition in full knowledge of accurate facts.”

It’s important to note that the fine has no impact on the commission’s authorisation in 2014 of a Facebook–WhatsApp merger; the commission already knew that automated user matching was a possibility and approved the merger anyway.

Facebook claimed it had “acted in good faith” with Vestager’s office and had “sought to provide accurate information at every turn.” It added: “The errors we made in our 2014 filings were not intentional and the commission has confirmed that they did not impact the outcome of the merger review.”


This post originated on Ars Technica UK

Leaked document reveals Facebook conducted research to target emotionally vulnerable and insecure youth

A SECRET document shows in scary detail how Facebook can exploit the insecurities of teenagers using the platform.

FACEBOOK has come under fire over revelations it is targeting potentially vulnerable youths who “need a confidence boost” to facilitate predatory advertising practices.

The allegation was revealed this morning by The Australian, which obtained internal documents from the social media giant that reportedly show how Facebook can exploit the moods and insecurities of teenagers using the platform for the potential benefit of advertisers.

The confidential document, dated this year, detailed how, by monitoring posts, comments and interactions on the site, Facebook can figure out when people as young as 14 feel “defeated”, “overwhelmed”, “stressed”, “anxious”, “nervous”, “stupid”, “silly”, “useless”, and a “failure”.

Such information, gathered through a system dubbed “sentiment analysis”, could be used by advertisers to target young Facebook users when they are potentially more vulnerable.
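As a rough illustration of what keyword-level sentiment analysis can mean in practice, here is a deliberately crude sketch. Facebook’s actual system is not public, and nothing below is taken from it beyond the mood terms quoted in the report.

```python
# Crude keyword-based mood flagging, only to illustrate the idea of
# "sentiment analysis"; the terms are from the report, the code is invented.
MOOD_TERMS = {"defeated", "overwhelmed", "stressed", "anxious",
              "nervous", "stupid", "silly", "useless", "failure"}

def flag_vulnerable(post: str) -> set[str]:
    """Return the mood terms from the leaked list that appear in a post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return MOOD_TERMS & words

print(flag_vulnerable("Feeling so useless and stressed this week"))
# -> {'useless', 'stressed'}
```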

While Google is the king of the online advertising world, Facebook is the other major player in an industry worth about $80 billion last year.

But Facebook is not one to rest on its laurels. The leaked document shows it has been honing the covert tools it uses to gain useful psychological insights on young Australians and New Zealanders in high school and tertiary education.


The social media services we use can derive immense insight and personal information about us and our moods from the way we use them, and arguably none is more fastidious in that regard than Facebook, which harvests immense data on its users.

The secret document was put together by two Australian Facebook execs and includes information about when young people are likely to feel excited, reflective, as well as other emotions related to overcoming fears.

“Monday-Thursday is about building confidence; the weekend is for broadcasting achievements,” the document said, according to the report.

Facebook did not respond to requests for comment from news.com.au, but it was quick to issue an apology and told The Australian that it will conduct an investigation into the matter, admitting it was inappropriate to target young children in such a way.

“The data on which this research is based was aggregated and presented consistent with applicable privacy and legal protections, including the removal of any personally identifiable information,” Facebook said in a statement issued to the newspaper.

However, there is a suggestion that the research could be in breach of Australian guidelines for advertising and marketing to children.

Facebook CEO Mark Zuckerberg speaks at his company’s annual F8 developer conference in San Jose last month. Picture: Noah Berger. Source: AP

Many commentators have suspected Facebook engages in this sort of cynical exploitation of the data it gathers, but the leaked document is rare hard proof.

Mark Zuckerberg’s company has not been shy about exploring ways it can manipulate the data it collects on users.

For one week in 2012, Facebook ran an experiment on some of its users in which it altered the algorithms it used to determine which status updates appeared in the news feeds of nearly 700,000 randomly selected users, based on the posts’ emotional content.

Posts were classified as either negative or positive, and Facebook wanted to see if it could make the selected group sad by showing them more negative posts in their feed. It found that it could.

The results were published in a scientific journal but Facebook was criticised by those concerned about the potential of the company to engage in social engineering for commercial benefit.

Facebook’s Data Use Policy warns users that the company “may use the information we receive about you … for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

Currently, information about your relationship status, location, age, number of friends, and the manner and frequency with which you access the site is sold to advertisers. But according to the report, Facebook is also seeking to sell ads informed by insights gleaned from users’ posts, such as those concerning body confidence and losing weight.

Facebook murder suspect still at large as cops get ‘dozens’ of tips

April 19, 2017


A murder suspect who police said posted a video of himself on Facebook shooting an elderly man in Cleveland remained on the loose on Tuesday as authorities appealed to the public for help in the case.

Police said they have received “dozens and dozens” of tips and possible sightings of the suspect, Steve Stephens, and tried to persuade him to turn himself in when they spoke with him via his cellphone on Sunday after the shooting.

But Stephens remained at large as the search for him expanded nationwide, police said.

The shooting marked the latest video clip of a violent crime to turn up on Facebook, raising questions about how the world’s biggest social media network moderates content.

The company on Monday said it would begin reviewing how it monitors violent footage and other objectionable material in response to the killing.

Police said Stephens used Facebook Inc’s service to post a video of himself killing Robert Godwin Sr., 74.

Stephens is not believed to have known Godwin, a retired foundry worker who media reports said spent Easter Sunday morning with his son and daughter-in-law before he was killed.

Facebook vice president Justin Osofsky said the company was reviewing the procedure that users go through to report videos and other material that violates the social media platform’s standards. The shooting video was visible on Facebook for nearly two hours before it was reported, the company said.

Stephens, who has no prior criminal record, is not suspected in any other murders, police said.

The last confirmed sighting of Stephens was at the scene of the homicide. Police said he might be driving a white or cream-colored Ford Fusion, and asked anyone who spots him or his car to call police or a special FBI hotline (800-CALLFBI).
