May 20, 2019

Thanks to Facebook, Your Cellphone Company Is Watching You Closer Than Ever

Among the mega-corporations that surveil you, your cellphone carrier has always been one of the keenest monitors, in constant contact with the one small device you keep on you at almost every moment. A confidential Facebook document reviewed by The Intercept shows that the social network courts carriers, along with phone makers — some 100 different companies in 50 countries — by offering the use of even more surveillance data, pulled straight from your smartphone by Facebook itself.

Offered to select Facebook partners, the data includes not just technical information about Facebook members’ devices and use of Wi-Fi and cellular networks, but also their past locations, interests, and even their social groups. This data is sourced not just from the company’s main iOS and Android apps, but from Instagram and Messenger as well. The data has been used by Facebook partners to assess their standing against competitors, including customers lost to and won from them, but also for more controversial uses like racially targeted ads.

Some experts are particularly alarmed that Facebook has marketed the use of the information — and appears to have helped directly facilitate its use, along with other Facebook data — for the purpose of screening customers on the basis of likely creditworthiness. Such use could potentially run afoul of federal law, which tightly governs credit assessments.

Facebook said it does not provide creditworthiness services and that the data it provides to cellphone carriers and makers does not go beyond what it was already collecting for other uses.

Facebook’s cellphone partnerships are particularly worrisome because of the extensive surveillance powers already enjoyed by carriers like AT&T and T-Mobile: Just as your internet service provider is capable of watching the data that bounces between your home and the wider world, telecommunications companies have a privileged vantage point from which they can glean a great deal of information about how, when, and where you’re using your phone. AT&T, for example, states plainly in its privacy policy that it collects and stores information “about the websites you visit and the mobile applications you use on our networks.” Paired with carriers’ calling and texting oversight, that accounts for just about everything you’d do on your smartphone.

An Inside Look at “Actionable Insights”

You’d think that degree of continuous monitoring would be more than sufficient for a communications mammoth to operate its business — and perhaps for a while it was. But Facebook’s “Actionable Insights,” a corporate data-sharing program, suggests that even the incredible visibility telecoms have into your daily life isn’t enough — and Zuckerberg et al. can do them one better. Actionable Insights was announced last year in an innocuous, easy-to-miss post on Facebook’s engineering blog. The article, titled “Announcing tools to help partners improve connectivity,” strongly suggested that the program was primarily aimed at solving weak cellular data connections around the world. “To address this problem,” the post began, “we are building a diverse set of technologies, products, and partnerships designed to expand the boundaries of existing connectivity quality and performance, catalyze new market segments, and bring better access to the unconnected.” What sort of monster would stand against better access for the unconnected?

The blog post makes only a brief mention of Actionable Insights’ second, less altruistic purpose: “enabling better business decisions” through “analytics tools.” According to materials reviewed by The Intercept and a source directly familiar with the program, the real boon of Actionable Insights lies not in its ability to fix spotty connections, but in its ability to help chosen corporations use your personal data to buy more tightly targeted advertising.

The source, who discussed Actionable Insights on the condition of anonymity because they were not permitted to speak to the press, explained that Facebook has offered the service to carriers and phone makers ostensibly free of charge, with access to Actionable Insights granted as a sweetener for advertising relationships. According to the source, the underlying value of granting such gratis access to Actionable Insights in these cases isn’t simply to help better serve cell customers with weak signals, but also to ensure that telecoms and phone makers keep buying more and more carefully targeted Facebook ads. It’s exactly this sort of quasi-transactional data access that’s become a hallmark of Facebook’s business, allowing the company to plausibly deny that it ever sells your data while still leveraging it for revenue. Facebook may not be “selling” data through Actionable Insights in the most baldly literal sense of the word — there’s no briefcase filled with hard drives being swapped for one containing cash — but the relationship based on spending and monetization certainly fits the spirit of a sale. A Facebook spokesperson declined to answer whether the company charges for Actionable Insights access.

The confidential Facebook document provides an overview of Actionable Insights and espouses its benefits to potential corporate users. It shows how the program, ostensibly created to help improve service for underserved cellular customers, is pulling in far more data than how many bars you’re getting. According to one portion of the presentation, the Facebook mobile app harvests and packages eight different categories of information for use by over 100 different telecom companies in over 50 different countries around the world, including usage data from the phones of children as young as 13. These categories include use of video, demographics, location, use of Wi-Fi and cellular networks, personal interests, device information, and friend homophily, an academic term of art. A 2017 article on social media friendship from the Journal of the Society of Multivariate Experimental Psychology defined “homophily” in this context as “the tendency of nodes to form relations with those who are similar to themselves.” In other words, Facebook is using your phone to provide behavioral data to cellphone carriers not only about you, but about your friends as well.
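For readers unfamiliar with the term, the concept can be illustrated in a few lines of code: on a small friendship graph, homophily can be measured as the share of friendships connecting people who share some attribute. This is a toy sketch of the academic concept only, with invented names and interests, not Facebook’s internal methodology.

```python
# Toy illustration of "friend homophily": the fraction of friendship ties that
# connect users who share an attribute (here, an invented interest label).
# This sketches the academic concept only, not Facebook's actual computation.

friendships = [("alice", "bob"), ("alice", "carol"), ("bob", "dave"), ("carol", "dave")]
interests = {"alice": "hiking", "bob": "hiking", "carol": "gaming", "dave": "gaming"}

same_interest_ties = sum(1 for a, b in friendships if interests[a] == interests[b])
homophily = same_interest_ties / len(friendships)
print(f"Share of ties between similar users: {homophily:.2f}")  # prints 0.50
```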

From these eight categories alone, a third party could learn an extraordinary amount about patterns of users’ daily life, and although the document claims that the data collected through the program is “aggregated and anonymized,” academic studies have found time and again that so-called anonymized user data can be easily de-anonymized. Today, such claims of anonymization and aggregation are essentially boilerplate from companies who wager you’ll be comfortable with them possessing a mammoth trove of personal observations and behavioral predictions about your past and future if the underlying data is sufficiently neutered and grouped with your neighbor’s.
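To see why such assurances deserve skepticism, consider the classic linkage attack: if “anonymized” records retain quasi-identifiers such as ZIP code, birth year, and gender, they can be joined against a public dataset that does carry names. The sketch below uses entirely invented data and is meant only to illustrate the mechanism.

```python
# Toy linkage attack: re-identify "anonymized" rows by joining quasi-identifiers
# (ZIP code, birth year, gender) against a public record that includes names.
# All records are invented for illustration.

anonymized_rows = [
    {"zip": "10001", "birth_year": 1985, "gender": "F", "interest": "payday loans"},
    {"zip": "94103", "birth_year": 1990, "gender": "M", "interest": "luxury cars"},
]
public_records = [
    {"name": "Jane Roe", "zip": "10001", "birth_year": 1985, "gender": "F"},
    {"name": "John Doe", "zip": "94103", "birth_year": 1990, "gender": "M"},
]

for row in anonymized_rows:
    matches = [p["name"] for p in public_records
               if (p["zip"], p["birth_year"], p["gender"])
               == (row["zip"], row["birth_year"], row["gender"])]
    if len(matches) == 1:  # a unique match re-identifies the "anonymous" row
        print(f"{matches[0]} -> {row['interest']}")
```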

A Facebook spokesperson told The Intercept that Actionable Insights doesn’t collect any data from user devices that wasn’t already being collected anyway. Rather, this spokesperson said Actionable Insights repackages the data in novel ways useful to third-party advertisers in the telecom and smartphone industries.

Materials reviewed by The Intercept show demographic information presented in a dashboard-style view, with maps showing customer locations at the county and city level. A Facebook spokesperson said they “didn’t think it goes more specific than zip code.” But armed with location data beamed straight from your phone, Facebook could technically provide customer location accurate to a range of several meters, indoors or out.

Targeting By Race and Likely Creditworthiness

Despite Facebook’s repeated assurances that user information is completely anonymized and aggregated, the Actionable Insights materials undermine this claim. One Actionable Insights case study from the overview document promotes how an unnamed North American cellular carrier had previously used its Actionable Insights access to target a specific, unnamed racial group. Facebook’s targeting of “multicultural affinity groups,” as the company formerly referred to race, was discontinued in 2017 after the targeting practice was widely criticized as potentially discriminatory.

Another case study described how Actionable Insights can be used to single out individual customers on the basis of creditworthiness. In this example, Facebook explained how one of its advertising clients, based outside the U.S., wanted to exclude individuals from future promotional offers on the basis of their credit. Using data provided through Actionable Insights, a Data Science Strategist, a role for which Facebook continues to hire, was able to generate profiles of customers with desirable and undesirable credit standings. The advertising client then used these profiles to target or exclude Facebook users who resembled them.

“What they’re doing is filtering Facebook users on creditworthiness criteria and potentially escaping the application of the Fair Credit Reporting Act. … It’s no different from Equifax providing the data to Chase.”

The use of so-called lookalike audiences is common in digital advertising, allowing marketers to take a list of existing customers and let Facebook match them to users that resemble the original list based on factors like demographics and stated interests. As Facebook puts it in an online guide for advertisers, “a Lookalike Audience is a way to reach new people who are likely to be interested in your business because they’re similar to your best existing customers.”
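Conceptually, the matching resembles a simple nearest-neighbor search: represent each user as a vector of demographic and interest features, average the seed customers into a profile, and rank everyone else by similarity to that profile. The sketch below illustrates that general idea only; the feature names and numbers are invented, and Facebook’s actual system is opaque.

```python
# Rough sketch of the idea behind a "lookalike audience": average the seed
# customers' feature vectors into a profile, then rank other users by cosine
# similarity to it. Features and values are invented for illustration.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Feature order: [age (normalized), likes_sports, likes_finance, lives_in_city]
seed_customers = [[0.6, 1, 0, 1], [0.5, 1, 0, 1]]
candidates = {"user_a": [0.55, 1, 0, 1], "user_b": [0.2, 0, 1, 0]}

profile = [sum(col) / len(seed_customers) for col in zip(*seed_customers)]
ranked = sorted(candidates, key=lambda u: cosine(profile, candidates[u]), reverse=True)
print(ranked[0])  # "user_a" most resembles the seed list
```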

But these lookalike audiences aren’t just potential new customers — they can also be used to exclude unwanted customers in the future, creating a sort of ad targeting demographic blacklist.

By promoting this technique in its confidential document, Facebook markets to future corporate clients, and appears to have worked with the advertising client to enable, the targeting of credit-eligible individuals based at least in part on behavioral data pulled from their phones — in other words, it allows advertisers to decide who deserves to view an ad based only on some invisible and entirely inscrutable mechanism.

There’s no indication of how exactly Facebook’s data could be used by a third party to determine who is creditworthy, nor has there ever been any indication from the company that how you use its products influences whether you’ll be singled out and excluded from certain offers in the future. Perhaps it’s as simple as Facebook enabling companies to say, “People with bad credit look and act like this on social networks,” a case of correlational profiling quite different from our commonsense notions of the good personal finance hygiene required to keep our credit scores polished. How consumers would be expected to navigate this invisible, unofficial credit-scoring process, given that they’re never informed of its existence, remains an open question.

This mechanism is also reminiscent of so-called redlining, the historical (and now illegal) practice of denying mortgages and other loans to marginalized groups on the basis of their demographics, according to Ashkan Soltani, a privacy researcher and former chief technologist of the Federal Trade Commission.

The thought of seeing fewer ads from Facebook might strike some as an unalloyed good — it certainly seems to beat the alternative. But credit reporting, dull as it might sound, is an enormously sensitive practice with profound economic consequences, determining who can and can’t, say, own or rent a home, or get easy financial access to a new cellphone. Facebook here seems to be allowing companies to reach you on the basis of a sort of unofficial credit score, a gray market determination of whether you’re a good consumer based on how much you and your habits resemble a vast pool of strangers.

Facebook here seems to be allowing companies to reach you on the basis of a sort of unofficial credit score, a gray market determination of whether you’re a good consumer based on how much you and your habits resemble a vast pool of strangers.

In an initial conversation, a Facebook spokesperson stated that the company does “not provide creditworthiness services, nor is that a feature of Actionable Insights.” When asked if Actionable Insights facilitates the targeting of ads on the basis of creditworthiness, the spokesperson replied, “No, there isn’t an instance where this is used.” It’s difficult to reconcile this claim with the fact that Facebook’s own promotional materials tout how Actionable Insights can enable a company to do exactly this. Asked about this apparent inconsistency between what Facebook tells advertising partners and what it told The Intercept, the company declined to discuss the matter on the record, but provided the following statement: “We do not, nor have we ever, rated people’s credit worthiness for Actionable Insights or across ads, and Facebook does not use people’s credit information in how we show ads.” Crucially, this statement doesn’t contradict the practice of Facebook enabling others to do this kind of credit-based targeting using the data it provides. The fact that Facebook promoted this use of its data as a marketing success story certainly undermines the idea that it does not serve ads targeted on the basis of credit information.

A Facebook spokesperson declined to answer whether the company condones or endorses advertising partners using Facebook user data for this purpose, or whether it audits how Actionable Insights is used by third parties, but noted its partners are only permitted to use Actionable Insights for “internal” purposes and agree not to share the data further. The spokesperson did not answer whether the company believes that this application of Actionable Insights data is compliant with the Fair Credit Reporting Act.

According to Joel Reidenberg, a professor and director of Fordham’s Center on Law and Information Policy, Facebook’s credit-screening business seems to inhabit a fuzzy nether zone with regards to the FCRA, neither matching the legal definition of a credit agency nor falling outside the activities the law was meant to regulate. “It sure smells like the prescreening provisions of the FCRA,” Reidenberg told The Intercept. “From a functional point of view, what they’re doing is filtering Facebook users on creditworthiness criteria and potentially escaping the application of the FCRA.” Reidenberg questioned the potential for Facebook to invisibly incorporate data on race, gender, or marital status in its screening process, exactly the sort of practice that made legislation like the FCRA necessary in the first place. Reidenberg explained that there are “all sorts of discrimination laws in terms of granting credit,” and that Facebook “may also be in a gray area with respect to those laws because they’re not offering credit, they’re offering an advertising space,” a distinction he described as “a very slippery slope.” An academic study published in April found that Facebook’s ad display algorithms were inherently biased with regards to gender and race.

Reidenberg also doubted whether Facebook would be exempt from regulatory scrutiny if it’s providing data to a third party that’s later indirectly used to exclude people based on their credit, rather than doing the credit score crunching itself, à la Equifax or Experian. “If Facebook is providing a consumer’s data to be used for the purposes of credit screening by the third party, Facebook would be a credit reporting agency,” Reidenberg explained. “The [FCRA] statute applies when the data ‘is used or expected to be used or collected in whole or in part for the purpose of serving as a factor in establishing the consumer’s eligibility for … credit.'” If Facebook is providing data about you and your friends that eventually ends up in a corporate credit screening operation, “It’s no different from Equifax providing the data to Chase to determine whether or not to issue a credit card to the consumer,” according to Reidenberg.

An FTC spokesperson declined to comment.

Chris Hoofnagle, a privacy scholar at the University of California, Berkeley School of Law, told The Intercept that this sort of consumer rating scheme has worrying implications for matters far wider than whether T-Mobile et al. will sell you a discounted phone. For those concerned with their credit score, the path to virtue has always been a matter of commonsense personal finance savvy. The jump from conventional wisdom like “pay your bills on time” to completely inscrutable calculations based on Facebook’s observation of your smartphone usage and “friend homophily” isn’t exactly intuitive. “We’re going to move to a world where you won’t know how to act,” said Hoofnagle. “If we think about the world as rational consumers engaged in utility maximalization in the world, what we’re up against is this, this shadow system. How do you compete?”

The post Thanks to Facebook, Your Cellphone Company Is Watching You Closer Than Ever appeared first on The Intercept.

May 9, 2019

Privacy Experts, Senators Demand Investigation of Amazon’s Child Data Collection Practices

Last year, a coalition of privacy advocates and child psychologists warned against putting an Amazon Alexa speaker anywhere near your child on the fairly reasonable grounds that developing minds shouldn’t befriend always-on surveillance devices, no matter how cute the packaging. Now, a group of privacy researchers, attorneys, and U.S. senators are calling on the Federal Trade Commission to investigate Amazon’s alleged violations of COPPA, a law protecting the littlest users of all.

COPPA, the Children’s Online Privacy Protection Act, regulates how companies can collect and use data on users who might have trouble spelling “privacy,” let alone understanding it well enough to consent to relinquishing it. COPPA is the reason why so many sites, like Facebook, simply don’t allow children under 13 to sign up. Amazon, on the other hand, decided to court children for its data collection business, releasing the Amazon Echo Dot Kids Edition, an always-listening “smart speaker” that retains all of the functions of its adult counterpart, but tucks them inside a candy-colored shell. The kiddo speaker also adds child-specific features, like the ability to have Amazon’s virtual assistant Alexa read your child a story in her disembodied robo-voice, or play child-geared content from sources like Cartoon Network and Nickelodeon.

A new complaint drafted by the Campaign for a Commercial-Free Childhood, the consumer privacy group Center for Digital Democracy, and Georgetown University’s Institute for Public Representation says that Amazon is committing a litany of COPPA violations through the Echo Dot Kids Edition, and calls on the FTC to investigate.

Amazon’s COPPA violations, according to the complaint, include failure to provide parental notice and obtain parental consent for online services related to the kids’ Echo Dot, failure to tell parents that they have a right to review personal information submitted by their child, and failure to provide parents a way to delete such information or opt out of its collection.

Across 96 pages, the complaint gets more specific, offering examples of how Amazon dodges, obscures, and otherwise neglects its duties to parents. Given that the attorneys who drafted the complaint were confused by Amazon’s byzantine policies, it’s hard to imagine average parents faring much better:

Even if parents were for some reason motivated to seek out the website version of the Children’s Privacy Disclosure, the hyperlinked Privacy Notice is long, confusingly written, and contains a lot of unrelated material. It is unclear what, if any, parts apply to the Echo Dot Kids Edition, and some of the information seems to contradict the Children’s Privacy Disclosure. For example, the Privacy Notice discloses that Amazon collects “search term and search result information from some searches conducted through the Web search features offered by our subsidiary, Alexa Internet,” but it does not say whether it does so when a child is using the Echo Dot Kids Edition.

Perhaps most troubling is what the complaint says about Amazon’s treatment of child voice recordings, which aren’t supposed to be stored indefinitely, per recent FTC guidance: “The answer is clear: No, the company can’t keep it. Under Section 312.10 of COPPA, you’re allowed to retain children’s personal information ‘for only as long as is reasonably necessary to fulfill the purpose for which the information was collected.'”

But Amazon takes a different approach, the complaint explains: “In response to a Congressional inquiry about how long it keeps recordings and other information collected from children, however, Amazon responded: ‘Voice recordings are retained for the parent’s review until the parent deletes them.'” In other words, Amazon is keeping a child’s Amazon queries stored indefinitely, not “for only as long as is reasonably necessary.”

Worryingly, the researchers found that even if a concerned parent attempted to manually delete their child’s Alexa recordings, the feature is simply broken. In one example, Amazon recorded a young girl asking Alexa to remember a list of personal facts, such as a phone number, home address, and walnut allergy. When this same girl then asked Alexa to forget these things, the robot replied, “Sorry, I’m not sure about that.” Attempts to manually delete these recordings using software designed for parents appeared to have zero effect.

A demonstration of this apparent inability to forget is provided in a video created by the researchers.

In addition to the academics, advocates, and privacy researchers involved in its drafting, four senators are also supporting the complaint: Democrats Edward Markey of Massachusetts, Dick Durbin of Illinois, and Richard Blumenthal of Connecticut, along with Republican Josh Hawley of Missouri. In a letter addressed to FTC Commissioner Christine Wilson, co-signed by his fellow senators, Markey wrote, “The Echo Dot Kids Edition captures not only the voice recordings of the children who speak to it, but also vast amounts of their personal information,” citing the new research. More than a dozen advocacy groups have also signed on to the original complaint, including the Consumer Federation of America, the Electronic Privacy Information Center, the Parents Television Council, and the U.S. Public Interest Research Group.

Privacy policies aren’t generally written to be read by regular people, let alone understood, so none of the above should come as a surprise to anyone who’s tried to decipher one. A company is far likelier to get away with an invasive or otherwise unsavory data collection practice if its disclosure is buried beneath a tangle of legalese and vagaries. So far, there’s no law against asking adults to agree to an incomprehensible policy document before they can use your gadget or app. But there’s little point in having COPPA on the books if child privacy compliance is as hopelessly and inscrutably sloppy as what an adult might be up against on their own.

In a statement provided by an Amazon spokesperson, the company denied that it violates COPPA: “FreeTime on Alexa and Echo Dot Kids Edition are compliant with the Children’s Online Privacy Protection Act (COPPA). Customers can find more information on Alexa and overall privacy practices here: https://www.amazon.com/alexa/voice.”

The post Privacy Experts, Senators Demand Investigation of Amazon’s Child Data Collection Practices appeared first on The Intercept.

May 2, 2019

Peter Thiel’s Palantir Was Used To Bust Hundreds of Relatives of Migrant Children, New Documents Show

Palantir, the CIA-funded data analysis company founded by billionaire Trump adviser Peter Thiel, provided software at the center of a 2017 operation targeting unaccompanied children and their families, newly released Homeland Security documents show.

The documents undercut prior statements from Palantir, in which the company tried to draw a clean line between the wing of ICE devoted strictly to deportations and the enforcement of immigration laws, and its $38 million contract with Homeland Security Investigations, or HSI, a component of ICE with a far broader criminal enforcement mandate. Asked about the contract renewal by the New York Times, a Palantir spokesperson stated:

“There are two major divisions of ICE with two distinct mandates: Homeland Security Investigations, or H.S.I., is responsible for cross-border criminal investigations. The other major directorate, Enforcement and Removal Operations, or E.R.O., is responsible for interior civil immigration enforcement, including deportation and detention of undocumented immigrants. We do not work for E.R.O.”

Documents obtained through Freedom of Information Act litigation and provided to The Intercept show that this claim, that Palantir software is strictly involved in criminal investigations as opposed to deportations, is false. The discrepancy between the private intelligence firm’s public assertion and the reality conveyed in the newly released documents was first identified by Mijente, an advocacy organization that has closely tracked Palantir’s murky role in immigration enforcement. Far from detached support in “cross-border criminal investigations,” the materials released this week confirm the role Palantir technology played in facilitating hundreds of arrests, only a small fraction of which led to criminal prosecutions.

A May 2017 ICE document on an impending “Unaccompanied Alien Children Human Smuggling Disruption Initiative,” characterized as “a joint effort of ERO and HSI,” makes explicit the fact that ERO used Palantir’s Investigative Case Management software to target the parents and other relatives of unaccompanied minors crossing the border, a precursor to the Trump administration’s family separation policy. In a section on “Coordinating Instructions,” the operational document describes how “the 26 HSI special agents in charge (SAC) will coordinate with their respective 24 ICE Enforcement and Removal Operations (ERO) field office directors (POD) to establish teams of HSI special agents and ERO deportation officers, with the support of the local HSI SAC intelligence program.” The instructions go on to state that “Each SAC will be responsible for determining how to document each UAC arrival in the Investigative Case Management (ICM) system; however, it is recommended that every initial UAC encounter at the border or its functional equivalent be documented.”

As The Intercept reported in 2017, “ICM allows ICE agents to access a vast ‘ecosystem’ of data to facilitate immigration officials in both discovering targets and then creating and administering cases against them,” and provides ICE with “access to intelligence platforms maintained by the Drug Enforcement Administration, the Bureau of Alcohol, Tobacco, Firearms and Explosives, the Federal Bureau of Investigation, and an array of other federal and private law enforcement entities.”

The document makes clear that the operation would directly target the parents and other family members of children apprehended at the border, all with the help of Palantir’s case management app. The document continues to instruct that if “sufficient information on parents or family members is obtained” while investigating an unaccompanied child, “a collateral case will be sent via ICM to the affected AOR’s team for action.” The instructions make clear that ICM-enabled inquiries could result in charges against a child’s family: “Teams will be available to immediately conduct database checks and contact suspected sponsor /parent or family members to identify, interview, and, if applicable, seek charges against the individual(s) and administratively arrest the subjects and anybody encountered during the inquiry who is out of status.”

The Palantir-aided campaign to hunt down and arrest family members of children who crossed the border alone was touted by the Trump administration’s top immigration hardliners as a necessary measure to deter asylum seekers from making the journey north. According to figures ICE provided The Intercept on Monday, the 2017 initiative led to 443 arrests, including 35 criminal arrests. Prosecutions, however, were more difficult to come by, with ICE acknowledging that the campaign led to just 38 prosecutions related to “alien smuggling” or “re-entry of removed aliens.”

In a letter to the top oversight officials at DHS in December 2017, a coalition of immigrant rights organizations described the so-called “surge initiative” as unconstitutional, and said federal law enforcement was “using children as bait.”

The documents detailing the enforcement campaign were first obtained by the American Immigration Council — in collaboration with the Florence Immigrant and Refugee Rights Project, the National Immigrant Justice Center, Kids in Need of Defense, Women’s Refugee Commission, and Wilmer Cutler Pickering Hale and Dorr LLP — as part of ongoing freedom of information litigation surrounding the Trump administration’s family separation policy.

“The detention and deportation machine is not only driven by hate, but also by profit,” Jesse Franzblau, senior policy analyst for the National Immigrant Justice Center, said in an email to The Intercept. “Palantir profits from its contract with ICE to help the administration target parents and sponsors of children, and also pays Amazon to use its servers in the process. The role of private tech behind immigration enforcement deserves more attention, particularly with the growing influence of Silicon Valley in government policymaking.”

Palantir CEO Alex Karp has previously expressed reservations about his company’s role in governmental overreach, despite federal contracts serving as some of the firm’s highest-profile business. “I didn’t sign up for the government to know when I smoke a joint or have an affair,” he told Forbes in a 2013 interview. This public stance appears to have softened since: Last year, Karp told the New York Times, “We’re proud that we’re working with the U.S. government.”

Palantir did not immediately comment.

The post Peter Thiel’s Palantir Was Used To Bust Hundreds of Relatives of Migrant Children, New Documents Show appeared first on The Intercept.

April 4, 2019

Facebook’s Ad Algorithm Is a Race and Gender Stereotyping Machine, New Study Suggests

How exactly Facebook decides who sees what is one of the great pieces of forbidden knowledge in the information age, hidden away behind nondisclosure agreements, trade secrecy law, and a general culture of opacity. New research from experts at Northeastern University, the University of Southern California, and the public-interest advocacy group Upturn doesn’t reveal how Facebook’s targeting algorithms work, but does show an alarming outcome: They appear to deliver certain ads, including for housing and employment, in a way that aligns with race and gender stereotypes — even when advertisers ask for the ads to be exposed to a broad, inclusive audience.

There are two basic steps to advertising on Facebook. The first is taken by advertisers when they choose certain segments of the Facebook population to target: Canadian women who enjoy badminton and Weezer, lacrosse dads over 40 with an interest in white genocide, and so forth. The second is taken by Facebook, when it makes an ad show up on certain people’s screens, reconciling the advertiser’s targeting preferences with the flow of people through Facebook’s apps and webpages in a given period of time. Advertisers can see which audiences ended up viewing the ad, but are never permitted to know the underlying logic of how those precise audiences were selected.

The new research focuses on the second step of advertising on Facebook, the process of ad delivery, rather than on ad targeting. Essentially, the researchers created ads without any demographic target at all and watched where Facebook placed them. The results, said the researchers, were disturbing:

Critically, we observe significant skew in delivery along gender and racial lines for “real” ads for employment and housing opportunities despite neutral targeting parameters. Our results demonstrate previously unknown mechanisms that can lead to potentially discriminatory ad delivery, even when advertisers set their targeting parameters to be highly inclusive.

Rather than targeting a demographic niche, the researchers requested only that their ads reach Facebook users in the United States, leaving matters of ethnicity and gender entirely up to Facebook’s black box. As Facebook itself tells potential advertisers, “We try to show people the ads that are most pertinent to them.” What exactly does the company’s ad-targeting black box, left to its own devices, consider pertinent? Are Facebook’s ad-serving algorithms as prone to bias as so many others? The answer will not surprise you.

For one portion of the study, researchers ran ads for a wide variety of job listings in North Carolina, from janitors to nurses to lawyers, without any further demographic targeting options. With all other things being equal, the study found that “Facebook delivered our ads for jobs in the lumber industry to an audience that was 72% white and 90% men, supermarket cashier positions to an audience of 85% women, and jobs with taxi companies to a 75% black audience even though the target audience we specified was identical for all ads.” Ad displays for “artificial intelligence developer” listings also skewed white, while listings for secretarial work overwhelmingly found their way to female Facebook users.

Although Facebook doesn’t permit advertisers to view the racial composition of an ad’s viewers, the researchers said they were able to confidently infer these numbers by cross-referencing the indicators Facebook does provide, particularly regions where users live, which in some states can be cross-referenced with race data held in voter registration records.
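The arithmetic behind that kind of inference is straightforward, even if the researchers’ actual method was more involved. A crude sketch, using entirely invented numbers: weight each region’s share of ad impressions by the racial makeup of its voter file.

```python
# Crude sketch of inferring an ad audience's racial makeup by combining
# per-region impression counts (which Facebook reports) with per-region
# demographics from public voter files. All numbers are invented; this is
# not the researchers' actual method.

impressions_by_region = {"region_1": 700, "region_2": 300}
voter_file_white_share = {"region_1": 0.9, "region_2": 0.4}  # from public records

total_impressions = sum(impressions_by_region.values())
estimated_white_share = sum(
    impressions_by_region[r] * voter_file_white_share[r] for r in impressions_by_region
) / total_impressions
print(f"Estimated share of white viewers: {estimated_white_share:.0%}")  # 75%
```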

In the case of housing ads — an area where Facebook has already shown potential for discriminatory abuse — the results were also heavily skewed along racial lines. “In our experiments,” the researchers wrote, “Facebook delivered our broadly targeted ads for houses for sale to audiences of 75% white users, when ads for rentals were shown to a more demographically balanced audience.” In other cases, the study found that “Facebook delivered some of our housing ads to audiences of over 85% white users while they delivered other ads to over 65% Black users (depending on the content of the ad) even though the ads were targeted identically.”

Facebook appeared to algorithmically reinforce stereotypes even in the case of simple, rather boring stock photos, indicating that not only does Facebook automatically scan and classify images on the site as being more “relevant” to men or women, but that it changes who sees the ad based on whether it includes a picture of, say, a football or a flower. The research took a selection of stereotypically gendered images — a military scene and an MMA fight on the stereotypically male side, a rose as stereotypically female — and altered them so that they would be invisible to the human eye (by setting each image’s “alpha” channel to fully transparent, in technical terms). They then used these invisible pictures in ads run without any gender-based targeting, yet found Facebook, presumably after analyzing the images with software, made retrograde, gender-based decisions on how to deliver them: Ads with stereotypical macho images were shown mostly to men, even though the men had no idea what they were looking at. The study concluded that “Facebook has an automated image classification mechanism in place that is used to steer different ads towards different subsets of the user population.” In other words, the bias was on Facebook’s end, not in the eye of the beholder.
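For the curious, the kind of manipulation the researchers describe can be reproduced in a few lines with a standard imaging library. The file names below are placeholders, and this is a sketch of the general technique, not the researchers’ actual code.

```python
# Make a stock photo invisible to the human eye by zeroing its alpha channel,
# while leaving the underlying pixel data intact for automated classifiers.
# File names are placeholders; this sketches the general technique only.
from PIL import Image

img = Image.open("stock_photo.jpg").convert("RGBA")
img.putalpha(0)  # alpha = 0 renders every pixel fully transparent
img.save("invisible_ad_image.png")  # PNG preserves the alpha channel; JPEG would not
```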

The report comes at an inconvenient time for Facebook, now facing charges from the Department of Housing and Urban Development over its potential to enable advertisers to illegally exclude certain groups. And although the study is careful to note that “our results only speak to how our particular ads are delivered (i.e., we cannot say how housing or employment ads in general are delivered),” it still concludes that “the significant skew we observe even on a small set of ads suggests that real-world housing and employment ads are likely to experience the same fate.” In other words, even in the absence of bigoted landlords, the advertising platform itself appears inherently prejudiced.

A Facebook spokesperson provided the following comment:

We stand against discrimination in any form. We’ve made important changes to our ad targeting tools and know that this is only a first step. We’ve been looking at our ad delivery system and have engaged industry leaders, academics, and civil rights experts on this very topic – and we’re exploring more changes.

It’s a familiar refrain at this point, and one that will likely do little to reassure those who just want to know that they’ll be provided with the same opportunities as everyone else, even in the context of ubiquitous advertising. The old apologia for targeted advertising is generally that it’s a favor to the consumer, sparing them “irrelevant” ads and instead providing them with opportunities to browse goods and services that are “pertinent” to them. What this shallow reasoning misses is that decisions about pertinence can become self-reinforcing; it’s foolish at best to think that women are more interested in secretarial work because they keep clicking the secretary ads, rather than that they click secretarial ads because it’s all Facebook will show them.

The post Facebook’s Ad Algorithm Is a Race and Gender Stereotyping Machine, New Study Suggests appeared first on The Intercept.

March 25, 2019

Pentagon Says All of Google’s Work on Drones Is Exempt From the Freedom of Information Act

In September 2017, Aileen Black wrote an email to her colleagues at Google. Black, who led sales to the U.S. government, worried that details of the company’s work to help the military guide lethal drones would become public through the Freedom of Information Act. “We will call tomorrow to reinforce the need to keep Google under the radar,” Black wrote.

According to a Pentagon memo signed last year, however, no one at Google needed to worry: All 5,000 pages of documents about Google’s work on the drone effort, known as Project Maven, are barred from public disclosure because they constitute “critical infrastructure security information.”

One government transparency advocate said the memo is part of a recent wave of federal decisions that keep sensitive documents secret on that same basis — thus allowing agencies to quickly deny document requests.

“It is the path of least resistance that enables the agency to avoid detailed review of records.”

It’s been a full year since the first reports of Google’s work on Project Maven, and the public still knows precious little beyond the basic gist of the story: that Maven would use artificial intelligence to help pick out drone targets faster and more easily, and that Google backed out of its Maven contract amid staff outcry. (Maven is now linked to defense startup Anduril Industries.) Black’s email was obtained and partially published by The Intercept last year.

Was Google’s work for the Pentagon really not intended to be used for lethal purposes, as the company later claimed? What exactly were Project Maven’s “38 classes of objects that represent the kinds of things the [Pentagon] needs to detect,” as cited by the Defense Department in a news release? And how accurate is Project Maven? In other words, what is its rate of false positives?

Neither the Pentagon nor Google is known for its dedication to institutional transparency, and so it’s not surprising that these questions remain open. Luckily, there’s a federal law designed to force the government to divulge information in the public interest, even when a given agency would rather keep its secrets. The Freedom of Information Act is a vital tool for journalism, watchdog groups, academics, and anyone else hoping to bring news to the public about what its government is doing in its name. But the government says Project Maven is immune.

In response to a Freedom of Information Act request I filed more than a year ago, seeking documents related to Project Maven’s use of Google technology, the Defense Department said that it had discovered 5,000 pages of relevant material — and that every single page was exempt from disclosure. Some of the pages included trade secrets, sensitive internal deliberations, and private personal information about some individuals, the department said. Such information can be withheld under the act. But it said all of the material could be kept private under “Exemption 3” of the act, which allows the government to withhold records under a grab bag of other federal statutes.

The Pentagon specifically cited a law permitting government agencies to block the disclosure of records that pertain to “critical infrastructure security information.” This designation requires an official explanation from the Pentagon, which The Intercept received and is publishing below. The memo, signed by Defense Department Acting Chief Management Officer Lisa Hershman, makes the argument that Project Maven is so sensitive that disclosing essentially any facts about it could cause death and destruction. It is dated December 2018, nine months after I made my request.

“Although there is value in the public release of this information,” wrote Hershman, “because the risk of harm that would reasonably result from its disclosure is extremely significant, I have determined that the public interest does not outweigh its protection. Therefore, it should be exempt from disclosure.” Hershman claimed that releasing “information about Project Maven, individually or in the aggregate, would enable an adversary to identify capabilities and vulnerabilities in the Department’s approach to artificial intelligence development and implementation” and that “this would further provide an adversary with the information necessary to disrupt, destroy, or damage DoD, technology, military operations, facilities, and endanger the lives of personnel.”

If this sounds like an extreme set of consequences for releasing details of software research, that perhaps helps explain why “critical infrastructure security information” is largely defined by law as applying not to code but to real-world property, “including information regarding the securing and safeguarding of explosives, hazardous chemicals, or pipelines” and “explosives safety information (including storage and handling), and other site-specific information on or relating to installation security.” The argument that there could be disastrous, unintended consequences from publicizing too much information about the inner workings of a toxic chemical storage facility is plausible. The idea that the same could be caused by documents describing software development, even military software, strains credulity.

According to Steven Aftergood, director of the Federation of American Scientists Project on Government Secrecy, the FOIA request denial is “a disappointing move” by the Defense Department. Aftergood added that it’s “doubtful that there is really nothing about Project Maven in your request that could be released” among those 5,000 pages. The “blanket use of the [critical infrastructure information] exemption represents a continuing temptation for agencies since it can be a ‘simple,’ expeditious way to close out FOIA cases,” he said. “In a way, it is the path of least resistance that enables the agency to avoid time-consuming, expensive, and detailed review of records.”

Kay Murray, The Intercept’s deputy general counsel, said the blanket denial of the request appears unjustified under FOIA’s expansive disclosure language and policy. “Project Maven is undeniably of interest to the public,” Murray said. “We are exploring all options to gain access to the information about the program that under FOIA can and should be disclosed.”

The post Pentagon Says All of Google’s Work on Drones Is Exempt From the Freedom of Information Act appeared first on The Intercept.

March 8, 2019

Elizabeth Warren’s Big Tech Beatdown Will Spark a Vital and Unprecedented Debate

Sen. Elizabeth Warren during a Senate Armed Services Committee hearing in Washington, D.C., on Feb. 29, 2019.

Photo: Carolyn Kaster/AP

It’s imperfect, it’s vague in parts, and it will face a conflagration of opposition from the tech lobbying freight train and congressional conservatives for whom antitrust efforts are anathema. But presidential candidate Elizabeth Warren’s new plan — well, for now it’s just a Medium post — to break up some of the world’s biggest tech firms provides the rarest sign that someone seeking power wants to use that power to weaken Silicon Valley.

Warren’s plan to “break up Big Tech,” as she described it, begins on a strong premise that is uncontroversial outside of Silicon Valley boardrooms: “Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

While it’s hard to choke up too much at the thought of “stifled innovation” at a time when we seem to be suffering from a glut of it, the rest rings true. Facebook, Google, and Amazon have become so large in terms of both revenue and their ability to collect and process data that they’ve come to resemble quasi-governmental entities — uncanny hybrids of private capital and public policy. In its current form, with the ability to control what information reaches over 2 billion people around the world, Facebook is too big to govern, from within or without. With an obvious monopoly on search and its own mammoth, opaque data-harvesting business, Google is also peerless and entirely out of the range of competition.

Facebook is too big to govern.

Warren’s solution is twofold. One component would essentially hit “undo” on various tech acquisitions that have helped Facebook, in particular, build an enormous moat between itself and potential competitors. Warren’s plan would see Instagram and WhatsApp spun off from the mothership, so that photo-sharing on Facebook proper would be forced to compete with photo-sharing on Instagram. This much seems cut and dried and obviously in the spirit of trust-busting, though the attack on Google is more muddled: Warren says she’d have Google divest DoubleClick, the online advertising company it acquired over a decade ago, even though it essentially no longer functions as an independent entity.

The other component of Warren’s plan would be “passing legislation that requires large tech platforms to be designated as ‘Platform Utilities’ and broken apart from any participant on that platform” — meaning that “these companies would be prohibited from owning both the platform utility and any participants on that platform.” “Platform utilities would be required to meet a standard of fair, reasonable, and nondiscriminatory dealing with users,” Warren wrote. “A company found to violate these requirements would also have to pay a fine of 5 percent of annual revenue.” Finally, some teeth.

Warren says this move would require Google’s entire search business to be spun off from the rest of the company, but further industry implications are unclear. Would Apple have to divest iTunes and the App Store? Would Android phones still come bundled with apps that beam data back to Google? How is Warren going to define “a platform for connecting third parties”? There are countless other questions and quibbles. Using Medium posts to develop far-reaching policy proposals has its limits, I guess.

Warren’s plan also falls short in tackling these companies’ ability to unilaterally control the dissemination of information. Facebook can decide who, out of over 2 billion souls, sees, reads, and hears what. The argument that no company, tech or otherwise, should ever have that capacity remains unaddressed. But this is a start — a very hopeful start — and it’s been a very long time since we’ve seen anything that suggests that wresting power from Facebook et al. is an idea being taken seriously outside of advocacy groups, academia, and opinion columns.

Whether or not Warren wins the Democratic nomination or the presidency, we can expect to see powerful people hoping to become infinitely more powerful forced to discuss whether Facebook should be required to divest itself of Instagram. That’s more than we, the humble data-mined, have ever known.

The post Elizabeth Warren’s Big Tech Beatdown Will Spark a Vital and Unprecedented Debate appeared first on The Intercept.

March 7, 2019

Should We Trust Artificial Intelligence Regulation by Congress If Face...

Photo illustration: Soohee Cho/The Intercept, Getty Images

Try to imagine for a moment a declaration from Congress to the effect that safeguarding the environment is important, that the effects of pollution on the environment ought to be monitored, and that special care should be taken to protect particularly vulnerable and marginalized communities from toxic waste. So far, so good! Now imagine this resolution is enthusiastically endorsed by ExxonMobil and the American Coal Council. You would have good reason to be suspicious. Keep that in mind while you consider the newly announced House Resolution 153.

Last week, several members of Congress began pushing the resolution with the aim of “supporting the development of guidelines for ethical development of artificial intelligence.” It was introduced by Reps. Brenda Lawrence and Ro Khanna — the latter of whom, crucially, represents Silicon Valley, which is to the ethical development of software what West Virginia is to the rollout of clean energy. This has helped make Khanna a national figure, in part because, far from being a tech industry cheerleader, he’s publicly supported cracking down on the data Wild West his home district helped create. For example, he has criticized the wrist-slaps Google and Facebook receive in the wakes of their regular privacy scandals and called for congressional action against Amazon’s labor practices.

The resolution, co-sponsored by seven other representatives, has some strange fans. Its starting premises are unimpeachable: “Whereas the far-reaching societal impacts of AI necessitates its safe, responsible, and democratic development,” the resolution “supports the development of guidelines for the ethical development of artificial intelligence (AI), in consultation with diverse stakeholders.” It also supports adherence to a list of crucial values in the development of any kind of machine or algorithmic intelligence, including “[i]nformation privacy and the protection of one’s personal data”; “[a]ccountability and oversight for all automated decision making”; and “[s]afety, security, and control of AI systems now and in the future.”

These are laudable goals, if a little inexact: Key terms like “control” and “oversight” are left entirely undefined. Are we talking about self-regulation here — which algorithmic software companies want because of its ineffectiveness — or real, governmental regulation? When the resolution mentions accountability, are Khanna and company envisioning harsh penalties for AI mishaps, or is this a call for more public relations mea culpas after the fact?

It’s hard to square the track records of Facebook and IBM with many of the values listed in the AI resolution.

Details in the press release that accompanied the resolution might explain the wiggle room — or make one question the whole spiel. H.R. 153 “has been endorsed by the Future of Life Institute, BSA | The Software Alliance, IBM, and Facebook,” the release says.

The Future of Life Institute is a loose organization of concerned academics, as well as Elon Musk and, inexplicably, actors Alan Alda and Morgan Freeman. Those guys aren’t the problem, though. The real cause for concern is not that a resolution expresses a desire to rein in artificial intelligence, but that it does so with endorsements from Facebook and IBM — two fantastic examples of why such reining in is crucial. It’s hard to square the track records of either company with many of the values listed in the resolution.

Facebook — the world’s largest advertising network that happens to include social sharing features — is already leveraging artificial intelligence in earnest, and not just to track and purge extremist content, as touted by CEO Mark Zuckerberg. According to a confidential Facebook document obtained and reported on last year by The Intercept, the company is courting corporate partners with a new machine learning ability that makes explicit the goal of all marketing: to predict the future choices of consumers and invisibly change their decisions without any forewarning. Using a technology called FBLearner Flow, the company boasts of its ability to “predict future behavior”; this allows it to offer corporations the ability to target advertisements at users who are “at risk” of making choices that are considered unfavorable to a given brand, ideally changing users’ decisions before they even know they are going to make them. The company is also facing a class-action lawsuit over its controversial facial tagging feature, which uses machine intelligence to automatically identify and pair a Facebook user’s likeness with the company’s existing trove of personal information. The feature was rolled out without notice or anything resembling informed consent.
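The Intercept has not seen FBLearner Flow’s code, and the sketch below is not Facebook’s pipeline; it is a generic, hypothetical illustration of the kind of prediction being sold: train a model on how users behaved in the past, score who is “at risk” of an unfavorable choice, and hand the highest-scoring users to an ad campaign. The feature names, data, and threshold are invented.

```python
# Hypothetical sketch of a "predict future behavior" pipeline: score users'
# risk of churning to a competitor, then target the riskiest with ads.
# Illustrative only; this is not Facebook's FBLearner Flow.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented behavioral features per user: [days since last purchase,
# competitor page visits, negative-sentiment posts]
X_train = rng.random((1000, 3))
y_train = (X_train[:, 1] > 0.6).astype(int)  # 1 = user later switched brands

model = LogisticRegression().fit(X_train, y_train)

# Score current users and pick the most "at risk" for an ad campaign.
X_current = rng.random((5, 3))
risk = model.predict_proba(X_current)[:, 1]
targets = np.where(risk > 0.5)[0]
print("at-risk user indices:", targets, "scores:", risk.round(2))
```

The mechanics are mundane; the controversy is in the raw material and the purpose, not the math.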

IBM’s machine intelligence adventures so far have been arguably more disquieting. Watson, the firm’s flagship AI product formerly known for its “Jeopardy!” victories, was found last year to have “often spit out erroneous cancer treatment advice,” according to a report in Stat. Last year, The Intercept revealed that the New York Police Department was sharing troves of surveillance camera footage with IBM to develop software that would allow other police departments to search for people by hair color, facial hair, and skin tone. Another 2018 Intercept report revealed that IBM was one of several tech firms lining up for a crack at aiding the Trump administration’s algorithmic “extreme vetting” program for immigrants — perhaps unsurprising, given that IBM CEO Ginni Rometty personally offered the company’s services to Trump following his election and later sat on a private-sector advisory board supporting the White House.

Although it’s true that AI has yet to be developed and perhaps never will be, its precursors — lesser machine-learning or self-training algorithms — are already powerful instruments and growing more so every day. It’s hard to imagine two firms that should be kept farther from the oversight of such wide-reaching technology. For Facebook, a company that keeps the functionality of its intelligent software secret with a fervor rarely seen outside of the Pentagon, to endorse a resolution that calls for “[a]ccountability and oversight for all automated decision making” is absurd. That Facebook co-signed a resolution that hailed “[i]nformation privacy and the protection of one’s personal data” is something worse than absurd. So, too, is the fact that IBM, which sought the opportunity to build software to support the Trump administration’s immigration policies, would endorse a resolution to “empower … underrepresented or marginalized populations” through technology.

“It would be foolish to not involve some of the leading thinkers who happen to be at these companies.”

In a phone interview with The Intercept, Khanna defended the endorsements as being little more than the proverbial thumbs-up, and insisted that Facebook and IBM should have a seat at the table if and when Congress tackles meaningful federal regulation of AI. Such legislation, he thinks, must be “crafted by experts,” if not outright drafted by them. “I think the leaders of Silicon Valley are very concerned about an ethical framework for artificial intelligence,” Khanna said, “whether it’s Facebook or Sheryl Sandberg. That doesn’t mean they’ve been perfect actors.” 

Khanna was careful to reject the notion of “self-regulation,” which tech firms have favored for its total meaninglessness. “The past few years have showed self-regulation doesn’t work,” said Khanna. Although he rejected the idea that tech firms could help directly shape future AI regulation, Khanna added, “It would be foolish to not involve some of the leading thinkers who happen to be at these companies.”

Asked if he imagined future AI “oversight,” as mentioned in the resolution, including independent audits of corporate black-box algorithms, Khanna replied that it “depends for what” — as long as it doesn’t mean that Facebook has to run every one of its algorithms before a regulatory agency, which would “stifle innovation.” Khanna, however, suggested that there are scenarios where government involvement would be necessary, if “it were periodic checks on algorithms.” He said, “If, for example, the FTC” — Federal Trade Commission — “received a complaint that an algorithm was systematically showing bias and there was some standard of probable cause, that should trigger an audit.”
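What a “periodic check on algorithms” would actually test is left unsaid. One common statistical screen — used in employment-discrimination analysis and sometimes proposed for algorithmic audits — is the “four-fifths rule” for disparate impact, which compares how often an automated decision favors each demographic group. The sketch below is a hypothetical illustration of that test, not a description of any FTC procedure or of any company’s actual system.

```python
# Hypothetical sketch of one test an algorithm audit might run: the
# "four-fifths rule" for disparate impact, comparing selection rates of an
# automated decision across demographic groups. Not an actual FTC procedure.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_violation(decisions):
    rates = selection_rates(decisions)
    best = max(rates.values())
    # Flag any group whose selection rate is below 80% of the best-treated group.
    return {g: r for g, r in rates.items() if best and r / best < 0.8}

# Invented example data: (group label, whether the algorithm approved them).
sample = [("a", True)] * 90 + [("a", False)] * 10 + \
         [("b", True)] * 55 + [("b", False)] * 45
print(four_fifths_violation(sample))  # {'b': 0.55} -> possible disparate impact
```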

Yet hashing out these and countless other specifics on the how, when, and who of algorithmic oversight will be a long slog, with or without Facebook’s endorsement.


March 6, 2019

Mark Zuckerberg Is Trying to Play You — Again


Mark Zuckerberg watches a demonstration during the Oculus Connect 5 product launch event in San Jose, Calif., on Sept. 26, 2018.

Photo: David Paul Morris/Bloomberg via Getty Images

If you click enough times through the website of Saudi Aramco, the largest oil producer in the world, you’ll reach a quiet section called “Addressing the climate challenge.” In this part of the website, the fossil fuel monolith claims, “Our contributions to the climate challenge are tangible expressions of our ethos, supported by company policies, of conducting our business in a way that addresses the climate challenge.” This is meaningless, of course — as is the announcement Mark Zuckerberg made today about his newfound “privacy-focused vision for social networking.” Don’t be fooled by either.

Like Saudi Aramco, Facebook inhabits a world in which it is constantly screamed at, with good reason, for being a contributor to the world’s worsening state. Writing a vague blog post, however, is far easier than completely restructuring the way your enormous corporation does business and reckoning with the damage it’s caused.

Promising to someday soon forfeit your ability to eavesdrop on over 2 billion people doesn’t exactly make you eligible for sainthood in 2019.

And so here we are: “As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms,” Zuckerberg writes in his road-to-Damascus revelation about personal privacy. The roughly 3,000-word manifesto reads as though Facebook is fundamentally realigning itself as a privacy champion — a company that will no longer track what you read, buy, see, watch, and hear in order to sell companies the opportunity to intervene in your future acts. But, it turns out, the new “privacy-focused” Facebook involves only one change: the enabling of end-to-end encryption across the company’s instant messaging services. Such a shift would prevent anyone outside of a chat’s participants — even Facebook itself — from reading your messages.

That’s it.

Although the move is laudable — and will be a boon for dissident Facebook chatters in countries where government surveillance is a real, perpetual risk — promising to someday soon forfeit your ability to eavesdrop on over 2 billion people doesn’t exactly make you eligible for sainthood in 2019. It doesn’t help that Zuckerberg’s post is entirely devoid of details beyond a plan to implement these encryption changes “over the next few years” — which is particularly silly considering Facebook has yet to implement privacy features promised in the wake of its previous mega-scandals.
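To be concrete about what that single change does and does not cover: end-to-end encryption means only the endpoints of a conversation hold the keys needed to read it. The sketch below uses the open-source PyNaCl library to show the general idea; it is a minimal example of the technique under invented names, not a description of how Facebook’s messaging products implement it.

```python
# Minimal end-to-end encryption sketch using PyNaCl (libsodium bindings).
# Only the two chat participants can decrypt; a server relaying the
# ciphertext -- a stand-in for Facebook here -- cannot.
from nacl.public import PrivateKey, Box
from nacl.exceptions import CryptoError

alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()
server_key = PrivateKey.generate()  # the relay has its own key, but not theirs

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at 8")

# Bob can decrypt.
print(Box(bob_key, alice_key.public_key).decrypt(ciphertext))  # b'meet at 8'

# The relay, lacking either participant's private key, cannot.
try:
    Box(server_key, alice_key.public_key).decrypt(ciphertext)
except CryptoError:
    print("relay cannot read the message")
```

Everything outside that encrypted envelope — who talked to whom, when, from where, and everything else Facebook collects — remains as legible to the company as ever.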

“I understand that many people don’t think Facebook can or would even want to build this kind of privacy-focused platform,” reads Zuckerberg’s awakening. Count me into “many people,” just like I’m a skeptic of Saudi Aramco’s attempt to pre-empt criticism: “For some, the idea of an oil and gas company positively contributing to the climate challenge is a contradiction. We don’t think so.”

The skepticism of Facebook is warranted. To pick just one of many examples, the company, as The Intercept recently reported, is involved behind the scenes in fighting attempts to pass more stringent privacy laws in California.

What’s more, this is a dramatic 3,000-word opus, but only about one new privacy feature, to be released at some unknown future point. On the other hand, Facebook has a long history to consider: It’s a company whose business model relies entirely on worldwide data mining. Facebook may someday offer end-to-end chats between WhatsApp and Messenger users — which would be great! — but there’s no sign the company would ever expand such encryption beyond instant messages, because it would destroy the company. For everything Facebook protects with end-to-end encryption, that’s one less thing Facebook can comb for behavioral data, consumer preferences, and so forth.

Your chats may be secure, but that will do virtually nothing to change how Facebook follows and monitors your life, on and offline. Facebook could, say, encrypt the contents of your profile or your photo albums so that no one but your friends could decrypt that information — but then how would they sell ads against it?

The unblogged truth, which Zuckerberg knows as well as anyone else, is that a “privacy-focused vision for social networking” looks nothing like Facebook; more to the point, it would resemble Facebook’s negative image. The company will wave its arms around this “announcement” and point to it whenever its next privacy screw-up occurs — likely sometime later today.

Don’t mistake this attempt at pantomiming contrition and techno-progress as anything more than theater. And don’t mistake a long blog post about privacy for anything more than many, many words from a man who knows he’s in trouble.


February 14, 2019

Amazon’s Home Surveillance Chief Declared War on “Dirtbag Criminals” as Company Got Closer to Police

On March 17, 2016, Ring CEO Jamie Siminoff emailed out a company-wide declaration of war. The message, under the subject line “Going to war,” made two things clear to the home surveillance company’s hundreds of employees: Everyone was getting free camouflage-print T-shirts (“They look awesome,” assured Siminoff), and the company’s new mission was to use consumer electronics to fight crime. “We are going to war with anyone that wants to harm a neighborhood,” Siminoff wrote — and indeed Ring made it easier for police and worried neighbors to get their hands on footage from Ring home cameras. Internal documents and video reviewed by The Intercept show why this merging of private Silicon Valley business and public law enforcement has troubling privacy implications.

This first declaration of startup militancy — which Siminoff would later refer to as “Ring War I” or simply “RW1” — would be followed by more, equally clumsy attempts at corporate galvanization, some aimed at competitors or lackluster customer support. But the RW1 email is striking in how baldly it lays out the priorities and values of Ring, a company now owned by Amazon and facing strident criticism over its mishandling of customer data, as previously reported by The Intercept and The Information.

Ring and Siminoff, who still leads the company, haven’t been shy about their focus on crime-fighting. In fact, Ring’s emphasis not only on personal peace of mind, but also on active crime-fighting has been instrumental in differentiating its cloud-connected doorbell and household surveillance gear from that made by its competitors. Ring products come with access to a social app called Neighbors that allows customers not just to keep tabs on their own property, but also to share information about suspicious-looking individuals and alleged criminality with the rest of the block. In other words, Ring’s cameras aren’t just for keeping tabs on your own stoop or garage — they work to create a private-sector security bubble around entire residential areas, a neighborhood watch for the era of the so-called smart home.

“Dirtbag criminals that steal our packages … your time is numbered.”

Forming decentralized 19th-century vigilance committees with 21st-century technology has been a toxic move, as shown by apps like Citizen, which encourages users to go out and personally document reported 911 calls, and Nextdoor, which tends to foster lively discussions about nonwhite people strolling through various suburbs. But Ring stands alone as a tech company for which hyperconnected vigilance isn’t just a byproduct, but the product itself — an avowed attempt to merge 24/7 video, ubiquitous computer sensors, and facial recognition, and deliver it all to local police on a platter. It’s no surprise, then, that police departments from Bradenton, Florida, to Los Angeles have leapt to “partner” with Ring. Research showing that Ring’s claims of criminal deterrence are at the very least overblown doesn’t seem to have hampered sales or police enthusiasm for such partnerships.

But what does it mean when a wholly owned Amazon subsidiary teams up with local law enforcement? What kind of new creature is this, and what does it mean to live in its shadow? In a recent overview of Ring’s privacy risks, the Washington Post’s Geoffrey Fowler asked the company about its data-sharing relationship with police and was told, “Our customers are in control of who views their footage. Period. We do not have any plans to change this.” Fowler wrote: “But would Ring draw an ethical line at sharing footage directly with police, even if there was consent? It wouldn’t say.” The answer is that no such line, ethical or otherwise, exists.

A Ring video that appears to have been produced for police reveals that the company has gone out of its way to build a bespoke portal for law enforcement officers who want access to the enormous volume of residential surveillance footage generated by customers’ cameras.

The site, known as the Ring Neighborhoods Portal, is described in the video as a “community crime-fighting tool for law enforcement,” providing police with “all the crime-related neighborhood alerts that are posted within their jurisdiction, in real time.” Ring also allows police to monitor postings by users in the Neighbors app that are categorized as crime-related “neighborhood alerts” and to see the group conversations around those postings — a feature left unmentioned in Ring’s public descriptions of the software. “It’s like having thousands of eyes and ears on the street,” said the video. A Ring spokesperson clarified that police are not given the real names of users chatting through the Neighbors app.

Not only does this portal allow police to view Ring customers on a handy, Google-powered map, but it also makes requesting customer surveillance video a matter of several clicks. “Here, you can enter an address and time frame of interest and see a map of active cameras in your chosen area and time,” the narrator of the video said. Police can select the homes they’re interested in, and Ring takes it from there, creating an auto-generated form letter that prompts users to provide access to their footage. “No more going door to door to look for cameras and asking for footage,” the video said. A Ring spokesperson told The Intercept, “When using the Neighbors portal, law enforcement officials see the same interface that all users see: the content is the same, the locations of posts are obfuscated, and no personal information is shared.” It’s unclear how placing Ring owners on a map is considered an obfuscation of their locations.

“Consent here is a smokescreen.”

Although Ring owners must opt in to the Neighbors program and appear free to deny law enforcement access to the cameras they own, the mere ability to ask introduces privacy and civil liberties quandaries that haven’t previously existed. In an interview with The Intercept, Matt Cagle, an attorney at the American Civil Liberties Union of Northern California, said “the portal blurs the line between corporate and government surveillance,” making it unclear where the Silicon Valley initiative ends and constitutional issues begin. With Ring marketing Neighbors as an attractive, brand-defining feature (“The Neighbors App is the new neighborhood watch that brings your community together to help create safer neighborhoods”), it’s not as if the company can treat this as some sort of little experimental pilot program. In response to a question about why the company doesn’t publicize the special enforcement portal on the Ring website, a spokesperson pointed to language on its website about how users can “get alerts from the Ring team and updates from local law enforcement, so you and your community can stay safe and in the know,” which makes no mention of the law enforcement portal or the access it permits. The spokesperson added that “Video Requests [from police] must include a case or incident number, a specific area of interest and must be confined to a specific time range and date” and that “users can choose to share some, none, or all of the videos, and can opt out of future requests.”

Even for those who’ve opted in to Neighbors, the power dynamics of receiving an unsolicited digital knock on the door from a local police officer muddies the nature of any consent a camera owner might provide through the portal, which Cagle believes gives law enforcement “coercive power over customers” by virtue of its design. “Many people are not going to feel like they have a choice when law enforcement asks for access to their footage,” said Cagle. Indeed, the auto-generated message shown in the Ring demo video contains essentially zero details about the request, beyond the fact that an officer is “investigating an incident that happened near you.” Imagine receiving a remote request from a police officer you’ve never met about a crime you know nothing about, all because you happened to buy a particular brand of doorbell and activated an app. Are you implicated in this “incident”? What happens if you refuse? Will you merely be a bad Ring Neighbor, or an uncooperative witness?

Consider as well the fact that Ring cameras are designed and sold to be placed not only outside your front door or garage, but inside your home too. What if a Ring owner provides footage from their camera to assist with a nearby “incident” that inadvertently reveals them smoking pot or violating their parole? When asked how people who live near or pass by Ring cameras but are not Ring users can opt out of being recorded and having their image sent to police, the Ring spokesperson told The Intercept, “Our devices are not intended to be and should not be installed where the camera is recording someone else’s property without prior consent nor public areas.” It’s difficult if not impossible to reconcile this claim with the fact that Ring’s flagship product is a doorbell camera that points straight outward and captures anything or anyone who passes by a home’s entrance.

The video ends on an eerie note, adding that “in future versions we will also be enabling Ring’s smart search functionality that will allow for suspicious activity detection and person recognition.” What constitutes “suspicious activity” is anyone’s guess, as is how Ring will “detect” it. Given that the company still uses a team of clickers in Ukraine to help tell the difference between cars and dogs, there’s little reason to have confidence in Ring’s ability to detect something worthy of suspicion, however it’s defined.

Even with the consent of owners, Cagle worries that the simple existence of a program like the Neighbors Portal threatens to blur, if not eliminate, the distinction between private-sector surveillance services and the government’s role as enforcer of the law. With regard to the latter, we have powerful constitutional safeguards, while with the former we have only terms of service and privacy policy agreements that no one reads. “Consent here is a smokescreen,” said Cagle. “Folks online consent to policies all the time without being meaningfully explained what is happening with our data, and the stakes are much higher here: Under guise of consent, this could invite needless surveillance of private lives.”

These possibilities don’t seem to have concerned Siminoff, whose giddiness about Ring’s future as a law enforcement asset is palpable throughout internal emails. Indeed, it’s clear that the anti-crime push wasn’t just an aspect of Ring according to its chief executive, but integral to its identity and fundamental to its company culture. In the March 2016 internal email, Siminoff added a special message to “the dirtbag criminals that steal our packages and rob our houses … your time is numbered because Ring is now officially declaring war on you!” In a November 2017 email announcing a third “Ring War” against alarm company ADT, Siminoff declared that Ring “will still become the largest security company in the world.” Another internal email from earlier in 2017 (subject: “Why We Are Here”) includes a message from Sgt. John Massi of the Philadelphia Police Department, thanking the company for its assistance with a recent string of thefts. “Wish I had some better wording for this,” wrote Siminoff, “but to put it bluntly, this is just FUCKING AWESOME!” In his message, Massi wrote that Ring’s “assistance allowed our detectives to secure an arrest & search warrant for our target, resulting in (7) counts of theft and related charges,” adding that the company “has demonstrated that they are a supportive partner in the fight against crime!”

The Intercept provided Ring with a list of detailed questions about the access it provides to police, but the company’s response left many of these unanswered. Ring did not address the consequences of bypassing the judicial system to obtain customer videos (albeit with consent), nor did the company answer how it defines or identifies “suspicious activity” or answer whether there are any guidelines in place regarding the handling or retention of customer videos by law enforcement. Without clear answers to these and other questions, Ring owners will simply have to trust Amazon and their local police to do the right thing.


February 2, 2019

“A Fundamentally Illegitimate Choice”: Shoshana Zuboff on the Age of Surveillance Capitalism

Shoshana Zuboff’s “The Age of Surveillance Capitalism” is already drawing comparisons to seminal socioeconomic investigations like Rachel Carson’s “Silent Spring” and Karl Marx’s “Capital.” Zuboff’s book deserves these comparisons and more: Like the former, it’s an alarming exposé about how business interests have poisoned our world, and like the latter, it provides a framework to understand and combat that poison. But “The Age of Surveillance Capitalism,” named for the now-popular term Zuboff herself coined five years ago, is also a masterwork of horror. It’s hard to recall a book that left me as haunted as Zuboff’s, with its descriptions of the gothic algorithmic daemons that follow us at nearly every instant of every hour of every day to suck us dry of metadata. Even those who’ve made an effort to track the technology that tracks us over the last decade or so will be chilled to their core by Zuboff, unable to look at their surroundings the same way.


Cover: Public Affairs Books

An unavoidable takeaway of “The Age of Surveillance Capitalism” is, essentially, that everything is even worse than you thought. Even if you’ve followed the news items and historical trends that gird Zuboff’s analysis, her telling takes what look like privacy overreaches and data blunders, and recasts them as the intentional movements of a global system designed to violate you as a revenue stream. “The result is that both the world and our lives are pervasively rendered as information,” Zuboff writes. “Whether you are complaining about your acne or engaging in political debate on Facebook, searching for a recipe or sensitive health information on Google, ordering laundry soap or taking photos of your nine-year-old, smiling or thinking angry thoughts, watching TV or doing wheelies in the parking lot, all of it is raw material for this burgeoning text.”

Tech’s privacy scandals, which seem to appear with increasing frequency both in private industry and in government, aren’t isolated incidents, but rather brief glimpses at an economic and social logic that’s overtaken the planet while we were enjoying Gmail and Instagram. The cliched refrain that if you’re “not paying for a product, you are the product”? Too weak, says Zuboff. You’re not technically the product, she explains over the course of several hundred tense pages, because you’re something even more degrading: an input for the real product, predictions about your future sold to the highest bidder so that this future can be altered. “Digital connection is now a means to others’ commercial ends,” writes Zuboff. “At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience.”

Zuboff recently took a moment to walk me through the implications of her urgent and crucial book. This interview was condensed and edited for clarity.

I was hoping you could say something about whatever semantic games Facebook and other similar data brokers are doing when they say they don’t sell data.

I remember sitting at my desk in my study early in 2012, and I was listening to a speech that [Google’s then-Executive Chair] Eric Schmidt gave somewhere. He was bragging about how privacy conscious Google is, and he said, “We don’t sell your data.” I got on the phone and started calling these various data scientists that I know and saying, “How can Eric Schmidt say we don’t sell your data, in public, knowing that it’s recorded? How does he get away with that?” It’s exactly the question I was trying to answer at the beginning of all this.

Let’s say you’re browsing, or you’re on Facebook putting stuff in a post. They’re not taking your words and going into some marketplace and selling your words. Those words, or if they’ve got you walking across the park or whatever, that’s the raw material. They’re just secretly scraping your private experience as raw material, and they’re stockpiling that raw material, constantly flowing through the pipes. They sell prediction products into a new marketplace. What are those guys really buying? They’re buying predictions of what you’re gonna do. There are a lot of businesses that want to know what you’re going to do, and they’re willing to pay for those predictions. That’s how they get away with saying, “We’re not selling your personal information.” That’s how they get away also with saying, as in the case of [recently implemented European privacy law] GDPR, “Yeah, you can have access to your data.” Because the data they’re going to give you access to is the data you already gave them. They’re not giving you access to everything that happens when the raw material goes into the sausage machine, to the prediction products.

Do you see that as substantively different than selling the raw material?

Why would they sell the raw material? Without the raw material, they’ve got nothing. They don’t want to sell raw material, they want to collect all of the raw material on earth and have it as proprietary. They sell the value added on the raw material.

It seems like what they’re actually selling is way more problematic and way more valuable.

That’s the whole point. Now we have markets of business customers that are selling and buying predictions of human futures. I believe in the values of human freedom and human autonomy as the necessary elements of a democratic society. As the competition of these prediction products heats up, it’s clear that surveillance capitalists have discovered that the most predictive sources of data are when they come in and intervene in our lives, in our real-time actions, to shape our action in a certain direction that aligns with the kind of outcomes they want to guarantee to their customers. That’s where they’re making their money. These are bald-faced interventions in the exercise of human autonomy, what I call the “right to the future tense.” The very idea that I can decide what I want my future to be and design the actions that get me from here to there, that’s the very material essence of the idea of free will.

“These are bald-faced interventions in the exercise of human autonomy.”

I write about the Senate committee back in the ’70s that reviewed behavioral modification from the point of view of federal funding, and found behavioral mod a reprehensible threat to the values of human autonomy and democracy. And here we are, these years later, like, La-di-da, please pass the salt. This thing is growing all around us, this new means of behavioral modification, under the auspices of private capital, without constitutional protections, done in secret, specifically designed to keep us ignorant of its operations.

When you put it like that, it sure makes the question of whether Facebook is selling our phone number and email address kind of quaint.

Indeed. And that’s exactly the kind of misdirection that they rely on.

This made me reflect, not totally kindly, on the years I spent working at Gizmodo covering consumer tech. No matter how skeptical I tried to remain then, I look back on all the Google and Facebook product announcements that we covered just as “product news.”

[The press is] up against this massive juggernaut of private capital aiming to confuse, bamboozle, and misdirect. A long time ago, I think it was 2007, I was already researching this topic and I was at a conference with a bunch of Google people. Over lunch I was sitting with some other Google executives and I asked the question, “How do I opt out of Google Earth?” All of a sudden, the whole room goes silent. Marissa Mayer, [a Google vice president at the time], was sitting at a different table, but she turned around and looked at me and said “Shoshana, do you really want to get in the way of organizing and making accessible the world’s information?” It took me a few minutes to realize she was reciting the Google mission statement.


Author Shoshana Zuboff.

Photo: Michael D. Wilson

The other day, I was looking through the section of my Facebook account that actually lists the interests that Facebook has ascribed to you, the things it believes you’re into. I did the same with Twitter — and I was struck in both cases by how wrong they were. I wonder if you find it reassuring that a lot of this stuff seems to be pretty clunky and inaccurate right now.

I think there’s a range here. Some of it still feels clunky and irrelevant and produces in us perhaps a sigh of relief. But then on the other end, there are things that are uncannily precise, really hitting their mark at the moment they should be. And because we only have access to what they let us see, it’s still quite difficult for us to judge precisely what the range of that [accuracy] is.

What about the risk of behavioral intervention based on false premises? I don’t want a company trying to intervene in the course of my daily life based on the mistaken belief that I’m into fly fishing any more than I want them to intervene based on a real interest I have.

This is why I’m arguing we’ve got to look at these operations and break them down. They all derive from a fundamental premise that’s illegitimate: that our private experience is free for the taking as raw material. So it’s almost secondary if their conclusions are right or wrong about us. They’ve got no right to intervene in my behavior in the first place. They have no right to my future tense.

“It is a fundamentally illegitimate choice we are forced to make: To get the help I need, I’ve got to march through surveillance capitalism.”

Is there such a thing as a good ad in 2019? Is it even possible to implement a form of online advertising that isn’t invasive and compromising of our rights?

An analogy I would draw would be negotiating how many hours a day a 7-year-old can work in a factory.

I take that as a no.

We’re supposed to be contesting the very legitimacy of child labor.

I’ve been surprised by the number of people I know, who I consider very savvy as far as technology, interested and concerned about technology, concerned by Facebook, who still have purchased an Alexa or Google Assistant device for their living room. It’s this weird mismatch of knowing better and surrendering to the convenience of it all. What would you say to someone like that?

Surveillance capitalism in general has been so successful because most of us feel so beleaguered, so unsupported by our real-world institutions, whether it’s health care, the educational system, the bank … It’s just a tale of woe wherever you go. The economic and political institutions right now leave us feeling so frustrated. We’ve all been driven in this way toward the internet, toward these services, because we need help. And no one else is helping us. That’s how we got hooked.

You think we turned to Alexa in despair?

Obviously there’s a range here. For some people, the sort of caricature of “We just want convenience, we’re so lazy” — for some people that caricature holds. But I feel much more forgiving of these needs than the caricature would lead us to believe. We do need help. We shouldn’t need so much help because our institutions in the real world need to be fixed. But to the extent that we do need help and we do look to the internet, it is a fundamentally illegitimate choice that we are now forced to make as 21st century citizens. In order to get the help I need, I’ve got to march through surveillance capitalism supply chains. Because Alexa and Google Home and every other gewgaw that has the word “smart” in front of it, every service that has “personalized” in front of it is nothing but supply chain interfaces for the flow of raw material to be translated into data, to be fashioned into prediction products, to be sold in behavioral futures markets so that we end up funding our own domination. If we’re gonna fix this, no matter how much we feel like we need this stuff, we’ve got to get to a place where we are willing to say no.

“The Age of Surveillance Capitalism” is available at bookstores everywhere, though you may cringe a bit after finishing it if you ordered from Amazon.


January 10, 2019

For Owners of Amazon’s Ring Security Cameras, Strangers May Have Been Watching Too

The “smart home” of the 21st century isn’t just supposed to be a monument to convenience, we’re told, but also to protection, a Tony Stark-like bubble of vigilant algorithms and internet-connected sensors working ceaselessly to watch over us. But for some who’ve welcomed in Amazon’s Ring security cameras, there have been more than just algorithms watching through the lens, according to sources alarmed by Ring’s dismal privacy practices.

Ring has a history of lax, sloppy oversight when it comes to deciding who has access to some of the most precious, intimate data belonging to any person: a live, high-definition feed from around — and perhaps inside — their house. The company has marketed its line of miniature cameras, designed to be mounted as doorbells, in garages, and on bookshelves, not only as a means of keeping tabs on your home while you’re away, but of creating a sort of privatized neighborhood watch, a constellation of overlapping camera feeds that will help police detect and apprehend burglars (and worse) as they approach. “Our mission to reduce crime in neighborhoods has been at the core of everything we do at Ring,” founder and CEO Jamie Siminoff wrote last spring to commemorate the company’s reported $1 billion acquisition payday from Amazon, a company with its own recent history of troubling facial recognition practices. The marketing is working; Ring is a consumer hit and a press darling.

Despite its mission to keep people and their property secure, the company’s treatment of customer video feeds has been anything but, people familiar with the company’s practices told The Intercept. Beginning in 2016, according to one source, Ring provided its Ukraine-based research and development team virtually unfettered access to a folder on Amazon’s S3 cloud storage service that contained every video created by every Ring camera around the world. This would amount to an enormous list of highly sensitive files that could be easily browsed and viewed. Downloading and sharing these customer video files would have required little more than a click. The Information, which has aggressively covered Ring’s security lapses, reported on these practices last month.

At the time the Ukrainian access was provided, the video files were left unencrypted, the source said, because of Ring leadership’s “sense that encryption would make the company less valuable,” owing to the expense of implementing encryption and lost revenue opportunities due to restricted access. The Ukraine team was also provided with a corresponding database that linked each specific video file to corresponding specific Ring customers.
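To illustrate how little friction that kind of access implies, the hypothetical sketch below lists and downloads clips from a shared, unencrypted Amazon S3 bucket using the standard boto3 client. The bucket and folder names are invented; this is not Ring’s actual storage layout, only a picture of why “little more than a click” is no exaggeration once broad read access exists.

```python
# Hypothetical illustration of unrestricted access to a shared S3 folder.
# Bucket, prefix, and file names are invented, not Ring's real layout.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-ring-videos"      # hypothetical bucket name
PREFIX = "customer-clips/"          # hypothetical folder of per-camera videos

# Anyone with read credentials can enumerate every clip...
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])

# ...and pulling any one customer's video down is a single further call.
s3.download_file(BUCKET, PREFIX + "camera-1234/2016-03-17.mp4", "clip.mp4")
```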

“If I knew a reporter or competitor’s email address, I could view all their cameras.”

At the same time, the source said, Ring unnecessarily provided executives and engineers in the U.S. with highly privileged access to the company’s technical support video portal, allowing unfiltered, round-the-clock live feeds from some customer cameras, regardless of whether they needed access to this extremely sensitive data to do their jobs. For someone who’d been given this top-level access — comparable to Uber’s infamous “God mode” map that revealed the movements of all passengers — only a Ring customer’s email address was required to watch cameras from that person’s home. Although the source said they never personally witnessed any egregious abuses, they told The Intercept “I can say for an absolute fact if I knew a reporter or competitor’s email address, I could view all their cameras.” The source also recounted instances of Ring engineers “teasing each other about who they brought home” after romantic dates. Although the engineers in question were aware that they were being surveilled by their co-workers in real time, the source questioned whether their companions were similarly informed.

Ring’s decision to grant this access to its Ukraine team was spurred in part by the weaknesses of its in-house facial and object recognition software. Neighbors, the company’s disarming name for its distributed residential surveillance platform, is now a marquee feature for Ring’s cameras, billed as a “proactive” neighborhood watch. This real-time crime-fighting requires more than raw video — it requires the ability to make sense, quickly and at a vast scale, of what’s actually happening in these household video streams. Is that a dog or your husband? Is that a burglar or a tree? Ring’s software has for years struggled with these fundamentals of object recognition. According to the most recent Information report, “Users routinely complained to customer support about receiving alerts when nothing noteworthy was happening at their front door; instead, the system seemed to be detecting a car driving by on the street or a leaf falling from a tree in the front yard.”

Computer vision has made incredible strides in recent years, but creating software that can categorize objects from scratch is often expensive and time-consuming. To jump-start the process, Ring used its Ukrainian “data operators” as a crutch for its lackluster artificial intelligence efforts, manually tagging and labeling objects in a given video as part of a “training” process to teach software with the hope that it might be able to detect such things on its own in the near future. This process is still apparently underway years later: Ring Labs, the name of the Ukrainian operation, is still employing people as data operators, according to LinkedIn, and posting job listings for vacant video-tagging gigs: “You must be able to recognize and tag all moving objects in the video correctly with high accuracy,” reads one job ad. “Be ready for rapid changes in tasks in the same way as be ready for long monotonous work.”
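The output of that kind of manual tagging is mundane: coordinates of boxes drawn around people, cars, and dogs, saved alongside a label so the footage can later train an object detector. The sketch below shows a generic annotation record of the sort such teams produce; the field names and values are invented for illustration, not taken from Ring’s internal tooling.

```python
# Generic sketch of the data a human video-tagging operation produces:
# a labeled bounding box per object, per frame, written out as training data.
# Field names are illustrative, not Ring's actual schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class BoxAnnotation:
    video_id: str
    frame: int          # frame number within the clip
    label: str          # e.g. "person", "car", "dog"
    x: int              # top-left corner of the box, in pixels
    y: int
    width: int
    height: int

annotations = [
    BoxAnnotation("clip-0001", 120, "person", x=412, y=88, width=96, height=220),
    BoxAnnotation("clip-0001", 121, "dog", x=300, y=260, width=140, height=90),
]

# Dumped to JSON, records like these become the labeled examples an
# object-detection model is trained on.
with open("labels.json", "w") as f:
    json.dump([asdict(a) for a in annotations], f, indent=2)
```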


Image: Ring

A never-before-published image from an internal Ring document pulls back the veil of the company’s lofty security ambitions: Behind all the computer sophistication was a team of people drawing boxes around strangers, day in and day out, as they struggled to grant some semblance of human judgment to an algorithm. (The Intercept redacted a face from the image.)

A second source, with direct knowledge of Ring’s video-tagging efforts, said that the video annotation team watches footage not only from the popular outdoor and doorbell camera models, but from household interiors. The source said that Ring employees at times showed each other videos they were annotating and described some of the things they had witnessed, including people kissing, firing guns, and stealing.

Ring spokesperson Yassi Shahmiri would not answer any questions about the company’s past data policies and how they might be different today, electing instead to provide the following statement:

We take the privacy and security of our customers’ personal information extremely seriously. In order to improve our service, we view and annotate certain Ring videos. These videos are sourced exclusively from publicly shared Ring videos from the Neighbors app (in accordance with our terms of service), and from a small fraction of Ring users who have provided their explicit written consent to allow us to access and utilize their videos for such purposes.

We have strict policies in place for all our team members. We implement systems to restrict and audit access to information. We hold our team members to a high ethical standard and anyone in violation of our policies faces discipline, including termination and potential legal and criminal penalties. In addition, we have zero tolerance for abuse of our systems and if we find bad actors who have engaged in this behavior, we will take swift action against them.

It’s not clear that the current standards for which Ring videos are accessed in Ukraine, as described in Ring’s statement, have always been in place, nor is there any indication of how (or if) they’re enforced. The Information quoted former employees saying the standards have not always been in place, and indicated that efforts to more tightly control video were put in place by Amazon only this past May after Amazon visited the Ukraine office. Even then, The Information added, staffers in Ukraine worked around the controls.

Furthermore, Ring’s overview of its Neighbors system provides zero mention of image or facial recognition, and no warning that those who use the feature are opting in to have their homes watched by individuals in a Ukrainian R&D lab. Mentions of Ring’s facial recognition practices are buried in its privacy policy, which says merely that “you may choose to use additional functionality in your Ring product that, through video data from your device, can recognize facial characteristics of familiar visitors.” Neither Ring’s terms of service nor its privacy policy mention any manual video annotation being conducted by humans, nor does either document mention the possibility that Ring staffers could access this video at all. Even with suitably strong policies in place, the question of whether Ring owners should trust a company that ever considered the above permissible will remain an open one.
