Mac Slavo

This information will likely not be a surprise to anyone who has been paying attention to Big Tech’s increasing propensity to violate the privacy of users and use their data for questionable reasons, but here we are.

Two days ago, the tech website VentureBeat noticed an eyebrow-raising bit in the latest update to Apple’s privacy policy:

Apple’s promise of transparency regarding user data means that any new privacy policy update might reveal that it’s doing something new and weird with your data. Alongside yesterday’s releases of iOS 12, tvOS 12, and watchOS 5, Apple quietly updated some of its iTunes Store terms and privacy disclosures, including one standout provision: It’s now using an abstracted summary of your phone calls or emails as an anti-fraud measure.

The provision, which appears in the iTunes Store & Privacy windows of iOS and tvOS devices, says:

To help identify and prevent fraud, information about how you use your device, including the approximate number of phone calls or emails you send and receive, will be used to compute a device trust score when you attempt a purchase. The submissions are designed so Apple cannot learn the real values on your device. The scores are stored for a fixed time on our servers.

VentureBeat points out that this provision is unusual, in part because it covers Apple TVs, which cannot make calls or send emails.

It is unclear how Apple will collect the data. VentureBeat adds:

It’s equally unclear how recording and tracking the number of calls or emails traversing a user’s iPhone, iPad, or iPod touch would better enable Apple to verify a device’s identity than just checking its unique device identifier. Every one of these devices has both hardcoded serial numbers and advertising identifiers, while iPhones and cellular iPads also have SIM cards with other device-specific codes.
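Apple has not disclosed how these submissions work, but the claim that “Apple cannot learn the real values on your device” matches a well-known privacy technique called local differential privacy, which Apple has publicly said it uses elsewhere. A minimal, purely illustrative sketch (not Apple’s actual method) using randomized response: each device reports its true count bucket only with some probability, so no single report reveals the real value, yet the server can still estimate aggregate frequencies.

```python
import random

def randomized_response(true_bucket, num_buckets, p_truth=0.75, rng=random):
    """Report the true bucket with probability p_truth; otherwise report
    a uniformly random bucket. Any individual report is deniable, so the
    server never learns a device's real value."""
    if rng.random() < p_truth:
        return true_bucket
    return rng.randrange(num_buckets)

def debias_counts(noisy_counts, p_truth=0.75):
    """Recover unbiased estimates of the true bucket frequencies from many
    noisy reports: E[noisy_c] = p_truth * true_c + (1 - p_truth) * n / k."""
    n = sum(noisy_counts)
    k = len(noisy_counts)
    return [(c - (1 - p_truth) * n / k) / p_truth for c in noisy_counts]
```

The point of the sketch is that privacy and fraud detection are not necessarily at odds: the server learns population-level patterns (useful for anomaly scoring) while each report stays noisy.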

An Apple spokesperson contacted VentureBeat and confirmed that the device trust score is an update included in iOS 12. It was designed to detect fraud in iTunes purchases, as well as to reduce false positives in fraud detection, the rep said. “It apparently gives Apple a better likelihood of accurately determining whether content is being bought by the actual named purchaser,” VentureBeat explains.

Apple claims that it will continue to protect user data and privacy, saying that the details of calls and emails will not be collected and that the data will be kept only for a “limited period.”

We already know that Big Tech giants like Facebook and Google collect and store massive amounts of user data. If you still use Facebook and would like to see what kind of information they have on you, here’s how to get that data. And, if you once used Facebook but have since deleted your account, don’t get too comfortable – the social media giant can still track you.

Speaking of Facebook, in August it was revealed that the social media platform rates users based on their “trustworthiness,” as reported by Engadget:

The company’s Tessa Lyons has revealed to the Washington Post that it’s starting to assign users reputation scores on a zero-to-one scale. The system is meant to help Facebook’s fight against fake news by flagging people who routinely make false claims against news outlets, whether it’s due to an ideological disagreement or a personal grudge. This isn’t the only way Facebook gauges credibility, according to Lyons — it’s just one of thousands of behavior markers Facebook is using.

What other criteria does Facebook measure to determine a user’s score? Do all users have a score? How are the scores used? These are questions that remain unanswered.

Facebook won’t reveal exactly how it evaluates users, claiming that to do so might tip off “bad actors” who would then game the system.

As Engadget writer Violet Blue recently pointed out,

The company with the reputation for being the least trustworthy is rating the trustworthiness of its users. We’d be fools to think that it hasn’t been doing this all along in other areas. Some animals are more equal than others. The thing is, Facebook long ago decided who was more trustworthy — its real customers, its advertisers. It only pretended you’d be more trustworthy if you gave them your ID.

People are abandoning social media platforms – particularly Facebook – in record numbers, in part because so many privacy violations and data collection practices have been exposed.

Hopefully this is not a sign that humanity is heading toward a Big Brother-style social credit rating system like the one in Communist China. Just two days ago, we reported that “the Communist Party’s plan in China is for every one of its 1.4 billion citizens to be at the whim of a dystopian social credit system, and it’s on track to be fully operational by the year 2020”.

If you have watched the Netflix series Black Mirror – in particular, the episode “Nosedive” – the increasing use of social credit rating systems and “trust scores” will seem eerily familiar.

If it isn’t the government spying on us, it is private companies. The surveillance line is becoming more and more blurred, as tech companies increasingly hand over user data (sometimes under compulsion, sometimes not) to governments upon demand. Edward Snowden exposed the existence of PRISM, under which the National Security Agency (NSA) accesses emails, documents, photographs, and other sensitive user data stored by major companies. Documents leaked by Snowden revealed that Facebook, Google, Microsoft, Yahoo, PalTalk, AOL, Skype, YouTube, and Apple give the NSA direct access to their users’ information.

Going entirely off-grid is looking more and more appealing.