February 14, 2019

Amazon’s Home Surveillance Chief Declared War on “Dirtbag Criminals” as Company Got Closer to Police

On March 17, 2016, Ring CEO Jamie Siminoff emailed out a company-wide declaration of war. The message, under the subject line “Going to war,” made two things clear to the home surveillance company’s hundreds of employees: Everyone was getting free camouflage-print T-shirts (“They look awesome,” assured Siminoff), and the company’s new mission was to use consumer electronics to fight crime. “We are going to war with anyone that wants to harm a neighborhood,” Siminoff wrote — and indeed Ring made it easier for police and worried neighbors to get their hands on footage from Ring home cameras. Internal documents and video reviewed by The Intercept show why this merging of private Silicon Valley business and public law enforcement has troubling privacy implications.

This first declaration of startup militancy — which Siminoff would later refer to as “Ring War I” or simply “RW1” — would be followed by more, equally clumsy attempts at corporate galvanization, some aimed at competitors or lackluster customer support. But the RW1 email is striking in how baldly it lays out the priorities and values of Ring, a company now owned by Amazon and facing strident criticism over its mishandling of customer data, as previously reported by The Intercept and The Information.

Ring and Siminoff, who still leads the company, haven’t been shy about their focus on crime-fighting. In fact, Ring’s emphasis not only on personal peace of mind, but also on active crime-fighting has been instrumental in differentiating its cloud-connected doorbell and household surveillance gear from those made by its competitors. Ring products come with access to a social app called Neighbors that allows customers not just to keep tabs on their own property, but also to share information about suspicious-looking individuals and alleged criminality with the rest of the block. In other words, Ring’s cameras aren’t just for keeping tabs on your own stoop or garage — they work to create a private-sector security bubble around entire residential areas, a neighborhood watch for the era of the so-called smart home.

“Dirtbag criminals that steal our packages … your time is numbered.”

Forming decentralized 19th-century vigilance committees with 21st-century technology has been a toxic move, as shown by apps like Citizen, which encourages users to go out and personally document reported 911 calls, and Nextdoor, which tends to foster lively discussions about nonwhite people strolling through various suburbs. But Ring stands alone as a tech company for which hyperconnected vigilance isn’t just a byproduct, but the product itself — an avowed attempt to merge 24/7 video, ubiquitous computer sensors, and facial recognition, and deliver it to local police on a platter. It’s no surprise, then, that police departments from Bradenton, Florida, to Los Angeles have leapt to “partner” with Ring. Research showing that Ring’s claims of criminal deterrence are at the very least overblown doesn’t seem to have hampered sales or police enthusiasm for such partnerships.

But what does it mean when a wholly owned Amazon subsidiary teams up with local law enforcement? What kind of new creature is this, and what does it mean to live in its shadow? In a recent overview of Ring’s privacy risks, the Washington Post’s Geoffrey Fowler asked the company about its data-sharing relationship with police and was told, “Our customers are in control of who views their footage. Period. We do not have any plans to change this.” Fowler wrote: “But would Ring draw an ethical line at sharing footage directly with police, even if there was consent? It wouldn’t say.” The answer is that no such line, ethical or otherwise, exists.

A Ring video that appears to have been produced for police reveals that the company has gone out of its way to build a bespoke portal for law enforcement officers who want access to the enormous volume of residential surveillance footage generated by customers’ cameras.

The site, known as the Ring Neighborhoods Portal, is described in the video as a “community crime-fighting tool for law enforcement,” providing police with “all the crime-related neighborhood alerts that are posted within their jurisdiction, in real time.” Ring also allows police to monitor postings by users in the Neighbors app that are categorized as crime-related “neighborhood alerts” and to see the group conversations around those postings — a feature left unmentioned in Ring’s public descriptions of the software. “It’s like having thousands of eyes and ears on the street,” said the video. A Ring spokesperson clarified that police are not given the real names of users chatting through the Neighbors app.

Not only does this portal allow police to view Ring customers on a handy, Google-powered map, but it also makes requesting customer surveillance video a matter of several clicks. “Here, you can enter an address and time frame of interest and see a map of active cameras in your chosen area and time,” the narrator of the video said. Police can select the homes they’re interested in, and Ring takes it from there, creating an auto-generated form letter that prompts users to provide access to their footage. “No more going door to door to look for cameras and asking for footage,” the video said. A Ring spokesperson told The Intercept, “When using the Neighbors portal, law enforcement officials see the same interface that all users see: the content is the same, the locations of posts are obfuscated, and no personal information is shared.” It’s unclear how placing Ring owners on a map is considered an obfuscation of their locations.
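The workflow the video describes is, mechanically, a location-and-time filter over customer cameras followed by a templated message to each owner. The sketch below is a purely hypothetical illustration of that flow, with invented names and data structures; it is not Ring’s code.

```python
# Purely hypothetical illustration of the portal workflow described above;
# every name and data structure here is invented. This is not Ring's code.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Camera:
    owner_email: str  # known to the service, not shown to the requesting officer
    lat: float
    lon: float

def near(cam: Camera, lat: float, lon: float, radius_km: float) -> bool:
    # Rough flat-earth distance check, adequate at neighborhood scale.
    return 111 * ((cam.lat - lat) ** 2 + (cam.lon - lon) ** 2) ** 0.5 <= radius_km

def send_message(to: str, body: str) -> None:
    print(f"To: {to}\n{body}\n")  # stand-in for the auto-generated form letter

def request_footage(cameras: List[Camera], lat: float, lon: float,
                    start: datetime, end: datetime, case_number: str) -> int:
    """Find cameras near a point of interest and send each owner a templated request."""
    matches = [c for c in cameras if near(c, lat, lon, radius_km=0.5)]
    for cam in matches:
        send_message(
            to=cam.owner_email,
            body=(f"An officer is investigating an incident that happened near you "
                  f"(case {case_number}, {start:%b %d %H:%M} to {end:%b %d %H:%M}). "
                  "You can choose to share some, none, or all of your videos."),
        )
    return len(matches)
```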

“Consent here is a smokescreen.”

Although Ring owners must opt in to the Neighbors program and appear free to deny law enforcement access to the cameras they own, the mere ability to ask introduces privacy and civil liberties quandaries that haven’t previously existed. In an interview with The Intercept, Matt Cagle, an attorney at the American Civil Liberties Union of Northern California, said “the portal blurs the line between corporate and government surveillance,” making it unclear where the Silicon Valley initiative ends and constitutional issues begin. With Ring marketing Neighbors as an attractive, brand-defining feature (“The Neighbors App is the new neighborhood watch that brings your community together to help create safer neighborhoods”), it’s not as if the company can treat this as some sort of little experimental pilot program. In response to a question about why the company doesn’t publicize the special enforcement portal on the Ring website, a spokesperson pointed to language on its website about how users can “get alerts from the Ring team and updates from local law enforcement, so you and your community can stay safe and in the know,” which makes no mention of the law enforcement portal or the access it permits. The spokesperson added that “Video Requests [from police] must include a case or incident number, a specific area of interest and must be confined to a specific time range and date” and that “users can choose to share some, none, or all of the videos, and can opt out of future requests.”

Even for those who’ve opted in to Neighbors, the power dynamics of receiving an unsolicited digital knock on the door from a local police officer muddies the nature of any consent a camera owner might provide through the portal, which Cagle believes gives law enforcement “coercive power over customers” by virtue of its design. “Many people are not going to feel like they have a choice when law enforcement asks for access to their footage,” said Cagle. Indeed, the auto-generated message shown in the Ring demo video contains essentially zero details about the request, beyond the fact that an officer is “investigating an incident that happened near you.” Imagine receiving a remote request from a police officer you’ve never met about a crime you know nothing about, all because you happened to buy a particular brand of doorbell and activated an app. Are you implicated in this “incident”? What happens if you refuse? Will you merely be a bad Ring Neighbor, or an uncooperative witness?

Consider as well the fact that Ring cameras are designed and sold to be placed not only outside your front door or garage, but inside your home too. What if a Ring owner provides footage from their camera to assist with a nearby “incident” that inadvertently reveals them smoking pot or violating their parole? When asked how people who live near or pass by Ring cameras but are not Ring users can opt out of being recorded and having their image sent to police, the Ring spokesperson told The Intercept, “Our devices are not intended to be and should not be installed where the camera is recording someone else’s property without prior consent nor public areas.” It’s difficult if not impossible to reconcile this claim with the fact that Ring’s flagship product is a doorbell camera that points straight outward and captures anything or anyone who passes by a home’s entrance.

The video ends on an eerie note, adding that “in future versions we will also be enabling Ring’s smart search functionality that will allow for suspicious activity detection and person recognition.” What constitutes “suspicious activity” is anyone’s guess, as is how Ring will “detect” it. Given that the company still uses a team of clickers in Ukraine to help tell the difference between cars and dogs, there’s little reason to have confidence in Ring’s ability to detect something worthy of suspicion, however it’s defined.

Even with the consent of owners, Cagle worries that the simple existence of a program like the Neighbors Portal threatens to blur, if not eliminate, the distinction between private-sector surveillance services and the government’s role as enforcer of the law. With regard to the latter, we have powerful constitutional safeguards; with the former, we have only terms of service and privacy policy agreements that no one reads. “Consent here is a smokescreen,” said Cagle. “Folks online consent to policies all the time without being meaningfully explained what is happening with our data, and the stakes are much higher here: Under guise of consent, this could invite needless surveillance of private lives.”

These possibilities don’t seem to have concerned Siminoff, whose giddiness about Ring’s future as a law enforcement asset is palpable throughout internal emails. Indeed, it’s clear that the anti-crime push wasn’t just an aspect of Ring according to its chief executive, but integral to its identity and fundamental to its company culture. In the March 2016 internal email, Siminoff added a special message to “the dirtbag criminals that steal our packages and rob our houses … your time is numbered because Ring is now officially declaring war on you!” In a November 2017 email announcing a third “Ring War” against alarm company ADT, Siminoff declared that Ring “will still become the largest security company in the world.” Another internal email from earlier in 2017 (subject: “Why We Are Here”) includes a message from Sgt. John Massi of the Philadelphia Police Department, thanking the company for its assistance with a recent string of thefts. “Wish I had some better wording for this,” wrote Siminoff, “but to put it bluntly, this is just FUCKING AWESOME!” In his message, Massi wrote that Ring’s “assistance allowed our detectives to secure an arrest & search warrant for our target, resulting in (7) counts of theft and related charges,” adding that the company “has demonstrated that they are a supportive partner in the fight against crime!”

The Intercept provided Ring with a list of detailed questions about the access it provides to police, but the company’s response left many of these unanswered. Ring did not address the consequences of bypassing the judicial system to obtain customer videos (albeit with consent), nor did the company answer how it defines or identifies “suspicious activity” or answer whether there are any guidelines in place regarding the handling or retention of customer videos by law enforcement. Without clear answers to these and other questions, Ring owners will simply have to trust Amazon and their local police to do the right thing.

The post Amazon’s Home Surveillance Chief Declared War on “Dirtbag Criminals” as Company Got Closer to Police appeared first on The Intercept.

February 2, 2019

“A Fundamentally Illegitimate Choice”: Shoshana Zuboff on the Age of Surveillance Capitalism

Shoshana Zuboff’s “The Age of Surveillance Capitalism” is already drawing comparisons to seminal socioeconomic investigations like Rachel Carson’s “Silent Spring” and Karl Marx’s “Capital.” Zuboff’s book deserves these comparisons and more: Like the former, it’s an alarming exposé about how business interests have poisoned our world, and like the latter, it provides a framework to understand and combat that poison. But “The Age of Surveillance Capitalism,” named for the now-popular term Zuboff herself coined five years ago, is also a masterwork of horror. It’s hard to recall a book that left me as haunted as Zuboff’s, with its descriptions of the gothic algorithmic daemons that follow us at nearly every instant of every hour of every day to suck us dry of metadata. Even those who’ve made an effort to track the technology that tracks us over the last decade or so will be chilled to their core by Zuboff, unable to look at their surroundings the same way.


Cover: Public Affairs Books

An unavoidable takeaway of “The Age of Surveillance Capitalism” is, essentially, that everything is even worse than you thought. Even if you’ve followed the news items and historical trends that gird Zuboff’s analysis, her telling takes what look like privacy overreaches and data blunders, and recasts them as the intentional movements of a global system designed to violate you as a revenue stream. “The result is that both the world and our lives are pervasively rendered as information,” Zuboff writes. “Whether you are complaining about your acne or engaging in political debate on Facebook, searching for a recipe or sensitive health information on Google, ordering laundry soap or taking photos of your nine-year-old, smiling or thinking angry thoughts, watching TV or doing wheelies in the parking lot, all of it is raw material for this burgeoning text.”

Tech’s privacy scandals, which seem to appear with increasing frequency both in private industry and in government, aren’t isolated incidents, but rather brief glimpses at an economic and social logic that’s overtaken the planet while we were enjoying Gmail and Instagram. The cliched refrain that if you’re “not paying for a product, you are the product”? Too weak, says Zuboff. You’re not technically the product, she explains over the course of several hundred tense pages, because you’re something even more degrading: an input for the real product, predictions about your future sold to the highest bidder so that this future can be altered. “Digital connection is now a means to others’ commercial ends,” writes Zuboff. “At its core, surveillance capitalism is parasitic and self-referential. It revives Karl Marx’s old image of capitalism as a vampire that feeds on labor, but with an unexpected turn. Instead of labor, surveillance capitalism feeds on every aspect of every human’s experience.”

Zuboff recently took a moment to walk me through the implications of her urgent and crucial book. This interview was condensed and edited for clarity.

I was hoping you could say something about whatever semantic games Facebook and other similar data brokers are playing when they say they don’t sell data.

I remember sitting at my desk in my study early in 2012, and I was listening to a speech that [Google’s then-Executive Chair] Eric Schmidt gave somewhere. He was bragging about how privacy conscious Google is, and he said, “We don’t sell your data.” I got on the phone and started calling these various data scientists that I know and saying, “How can Eric Schmidt say we don’t sell your data, in public, knowing that it’s recorded? How does he get away with that?” It’s exactly the question I was trying to answer at the beginning of all this.

Let’s say you’re browsing, or you’re on Facebook putting stuff in a post. They’re not taking your words and going into some marketplace and selling your words. Those words, or if they’ve got you walking across the park or whatever, that’s the raw material. They’re just secretly scraping your private experience as raw material, and they’re stockpiling that raw material, constantly flowing through the pipes. They sell prediction products into a new marketplace. What are those guys really buying? They’re buying predictions of what you’re gonna do. There are a lot of businesses that want to know what you’re going to do, and they’re willing to pay for those predictions. That’s how they get away with saying, “We’re not selling your personal information.” That’s how they get away also with saying, as in the case of [recently implemented European privacy law] GDPR, “Yeah, you can have access to your data.” Because the data they’re going to give you access to is the data you already gave them. They’re not giving you access to everything that happens when the raw material goes into the sausage machine, to the prediction products.

Do you see that as substantively different than selling the raw material?

Why would they sell the raw material? Without the raw material, they’ve got nothing. They don’t want to sell raw material, they want to collect all of the raw material on earth and have it as proprietary. They sell the value added on the raw material.

It seems like what they’re actually selling is way more problematic and way more valuable.

That’s the whole point. Now we have markets of business customers that are selling and buying predictions of human futures. I believe in the values of human freedom and human autonomy as the necessary elements of a democratic society. As the competition of these prediction products heats up, it’s clear that surveillance capitalists have discovered that the most predictive sources of data are when they come in and intervene in our lives, in our real-time actions, to shape our action in a certain direction that aligns with the kind of outcomes they want to guarantee to their customers. That’s where they’re making their money. These are bald-faced interventions in the exercise of human autonomy, what I call the “right to the future tense.” The very idea that I can decide what I want my future to be and design the actions that get me from here to there, that’s the very material essence of the idea of free will.

“These are bald-faced interventions in the exercise of human autonomy.”

I write about the Senate committee back in the ’70s that reviewed behavioral modification from the point of view of federal funding, and found behavioral mod a reprehensible threat to the values of human autonomy and democracy. And here we are, these years later, like, La-di-da, please pass the salt. This thing is growing all around us, this new means of behavioral modification, under the auspices of private capital, without constitutional protections, done in secret, specifically designed to keep us ignorant of its operations.

When you put it like that, it sure makes the question of whether Facebook is selling our phone number and email address kind of quaint.

Indeed. And that’s exactly the kind of misdirection that they rely on.

This made me reflect, not totally kindly, on the years I spent working at Gizmodo covering consumer tech. No matter how skeptical I tried to remain then, I look back on all the Google and Facebook product announcements that we covered just as “product news.”

[The press is] up against this massive juggernaut of private capital aiming to confuse, bamboozle, and misdirect. A long time ago, I think it was 2007, I was already researching this topic and I was at a conference with a bunch of Google people. Over lunch I was sitting with some other Google executives and I asked the question, “How do I opt out of Google Earth?” All of a sudden, the whole room goes silent. Marissa Mayer, [a Google vice president at the time], was sitting at a different table, but she turned around and looked at me and said “Shoshana, do you really want to get in the way of organizing and making accessible the world’s information?” It took me a few minutes to realize she was reciting the Google mission statement.


Author Shoshana Zuboff.

Photo: Michael D. Wilson

The other day, I was looking through the section of my Facebook account that actually lists the interests that Facebook has ascribed to you, the things it believes you’re into. I did the same with Twitter — and I was struck in both cases by how wrong they were. I wonder if you find it reassuring that a lot of this stuff seems to be pretty clunky and inaccurate right now.

I think there’s a range here. Some of it still feels clunky and irrelevant and produces in us perhaps a sigh of relief. But then on the other end, there are things that are uncannily precise, really hitting their mark at the moment they should be. And because we only have access to what they let us see, it’s still quite difficult for us to judge precisely what the range of that [accuracy] is.

What about the risk of behavioral intervention based on false premises? I don’t want a company trying to intervene in the course of my daily life based on the mistaken belief that I’m into fly fishing any more than I want them to intervene based on a real interest I have.

This is why I’m arguing we’ve got to look at these operations and break them down. They all derive from a fundamental premise that’s illegitimate: that our private experience is free for the taking as raw material. So it’s almost secondary if their conclusions are right or wrong about us. They’ve got no right to intervene in my behavior in the first place. They have no right to my future tense.

“It is a fundamentally illegitimate choice we are forced to make: To get the help I need, I’ve got to march through surveillance capitalism.”

Is there such a thing as a good ad in 2019? Is it even possible to implement a form of online advertising that isn’t invasive and compromising of our rights?

An analogy I would draw would be negotiating how many hours a day a 7-year-old can work in a factory.

I take that as a no.

We’re supposed to be contesting the very legitimacy of child labor.

I’ve been surprised by the number of people I know, whom I consider very savvy about technology, interested in and concerned by it, and concerned by Facebook, who have still purchased an Alexa or Google Assistant device for their living room. It’s this weird mismatch of knowing better and surrendering to the convenience of it all. What would you say to someone like that?

Surveillance capitalism in general has been so successful because most of us feel so beleaguered, so unsupported by our real-world institutions, whether it’s health care, the educational system, the bank … It’s just a tale of woe wherever you go. The economic and political institutions right now leave us feeling so frustrated. We’ve all been driven in this way toward the internet, toward these services, because we need help. And no one else is helping us. That’s how we got hooked.

You think we turned to Alexa in despair?

Obviously there’s a range here. For some people, the sort of caricature of “We just want convenience, we’re so lazy” — for some people that caricature holds. But I feel much more forgiving of these needs than the caricature would lead us to believe. We do need help. We shouldn’t need so much help because our institutions in the real world need to be fixed. But to the extent that we do need help and we do look to the internet, it is a fundamentally illegitimate choice that we are now forced to make as 21st century citizens. In order to get the help I need, I’ve got to march through surveillance capitalism supply chains. Because Alexa and Google Home and every other gewgaw that has the word “smart” in front of it, every service that has “personalized” in front of it is nothing but supply chain interfaces for the flow of raw material to be translated into data, to be fashioned into prediction products, to be sold in behavioral futures markets so that we end up funding our own domination. If we’re gonna fix this, no matter how much we feel like we need this stuff, we’ve got to get to a place where we are willing to say no.

“The Age of Surveillance Capitalism” is available at bookstores everywhere, though you may cringe a bit after finishing it if you ordered from Amazon.

The post “A Fundamentally Illegitimate Choice”: Shoshana Zuboff on the Age of Surveillance Capitalism appeared first on The Intercept.

January 10, 2019

For Owners of Amazon’s Ring Security Cameras, Strangers May Have Been Watching Too

The “smart home” of the 21st century isn’t just supposed to be a monument to convenience, we’re told, but also to protection, a Tony Stark-like bubble of vigilant algorithms and internet-connected sensors working ceaselessly to watch over us. But for some who’ve welcomed in Amazon’s Ring security cameras, there have been more than just algorithms watching through the lens, according to sources alarmed by Ring’s dismal privacy practices.

Ring has a history of lax, sloppy oversight when it comes to deciding who has access to some of the most precious, intimate data belonging to any person: a live, high-definition feed from around — and perhaps inside — their house. The company has marketed its line of miniature cameras, designed to be mounted as doorbells, in garages, and on bookshelves, not only as a means of keeping tabs on your home while you’re away, but also of creating a sort of privatized neighborhood watch, a constellation of overlapping camera feeds that will help police detect and apprehend burglars (and worse) as they approach. “Our mission to reduce crime in neighborhoods has been at the core of everything we do at Ring,” founder and CEO Jamie Siminoff wrote last spring to commemorate the company’s reported $1 billion acquisition payday from Amazon, a company with its own recent history of troubling facial recognition practices. The marketing is working; Ring is a consumer hit and a press darling.

Despite its mission to keep people and their property secure, the company’s treatment of customer video feeds has been anything but, people familiar with the company’s practices told The Intercept. Beginning in 2016, according to one source, Ring provided its Ukraine-based research and development team virtually unfettered access to a folder on Amazon’s S3 cloud storage service that contained every video created by every Ring camera around the world. This would amount to an enormous list of highly sensitive files that could be easily browsed and viewed. Downloading and sharing these customer video files would have required little more than a click. The Information, which has aggressively covered Ring’s security lapses, reported on these practices last month.

At the time the Ukrainian access was provided, the video files were left unencrypted, the source said, because of Ring leadership’s “sense that encryption would make the company less valuable,” owing to the expense of implementing encryption and lost revenue opportunities due to restricted access. The Ukraine team was also provided with a corresponding database that linked each specific video file to corresponding specific Ring customers.

“If I knew a reporter or competitor’s email address, I could view all their cameras.”

At the same time, the source said, Ring unnecessarily provided executives and engineers in the U.S. with highly privileged access to the company’s technical support video portal, allowing unfiltered, round-the-clock live feeds from some customer cameras, regardless of whether they needed access to this extremely sensitive data to do their jobs. For someone who’d been given this top-level access — comparable to Uber’s infamous “God mode” map that revealed the movements of all passengers — only a Ring customer’s email address was required to watch cameras from that person’s home. Although the source said they never personally witnessed any egregious abuses, they told The Intercept, “I can say for an absolute fact if I knew a reporter or competitor’s email address, I could view all their cameras.” The source also recounted instances of Ring engineers “teasing each other about who they brought home” after romantic dates. Although the engineers in question were aware that they were being surveilled by their co-workers in real time, the source questioned whether their companions were similarly informed.

Ring’s decision to grant this access to its Ukraine team was spurred in part by the weaknesses of its in-house facial and object recognition software. Neighbors, the company’s disarming name for its distributed residential surveillance platform, is now a marquee feature for Ring’s cameras, billed as a “proactive” neighborhood watch. This real-time crime-fighting requires more than raw video — it requires the ability to make sense, quickly and at a vast scale, of what’s actually happening in these household video streams. Is that a dog or your husband? Is that a burglar or a tree? Ring’s software has for years struggled with these fundamentals of object recognition. According to the most recent Information report, “Users routinely complained to customer support about receiving alerts when nothing noteworthy was happening at their front door; instead, the system seemed to be detecting a car driving by on the street or a leaf falling from a tree in the front yard.”

Computer vision has made incredible strides in recent years, but creating software that can categorize objects from scratch is often expensive and time-consuming. To jump-start the process, Ring used its Ukrainian “data operators” as a crutch for its lackluster artificial intelligence efforts, manually tagging and labeling objects in a given video as part of a “training” process meant to teach the software to detect such things on its own in the near future. This process is still apparently underway years later: Ring Labs, the name of the Ukrainian operation, is still employing people as data operators, according to LinkedIn, and posting job listings for vacant video-tagging gigs: “You must be able to recognize and tag all moving objects in the video correctly with high accuracy,” reads one job ad. “Be ready for rapid changes in tasks in the same way as be ready for long monotonous work.”
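In machine learning terms, those operator tags are training labels: every human-annotated frame becomes an example for a supervised model to learn from. The following is a minimal sketch of that kind of training step, assuming scikit-learn and placeholder data; it illustrates the general technique, not Ring’s actual pipeline.

```python
# Minimal sketch of how human-labeled frames feed a supervised model.
# Assumes scikit-learn and placeholder data; this is not Ring's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row stands in for a feature vector extracted from one video frame
# (in practice, an embedding from a vision model); each label is what a
# human operator tagged in that frame.
rng = np.random.default_rng(0)
frames = rng.random((1000, 128))        # placeholder frame features
labels = rng.integers(0, 2, size=1000)  # 1 = "person", 0 = "not a person"

model = LogisticRegression(max_iter=1000).fit(frames, labels)

# Once trained, the model is asked to make the call the operators used to make.
new_frame = rng.random((1, 128))
print("person detected" if model.predict(new_frame)[0] == 1 else "no person detected")
```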


Image: Ring

A never-before-published image from an internal Ring document pulls back the veil on the company’s lofty security ambitions: Behind all the computer sophistication was a team of people drawing boxes around strangers, day in and day out, as they struggled to grant some semblance of human judgment to an algorithm. (The Intercept redacted a face from the image.)

A second source, with direct knowledge of Ring’s video-tagging efforts, said that the video annotation team watches footage not only from the popular outdoor and doorbell camera models, but from household interiors. The source said that Ring employees at times showed each other videos they were annotating and described some of the things they had witnessed, including people kissing, firing guns, and stealing.

Ring spokesperson Yassi Shahmiri would not answer any questions about the company’s past data policies and how they might be different today, electing instead to provide the following statement:

We take the privacy and security of our customers’ personal information extremely seriously. In order to improve our service, we view and annotate certain Ring videos. These videos are sourced exclusively from publicly shared Ring videos from the Neighbors app (in accordance with our terms of service), and from a small fraction of Ring users who have provided their explicit written consent to allow us to access and utilize their videos for such purposes.

We have strict policies in place for all our team members. We implement systems to restrict and audit access to information. We hold our team members to a high ethical standard and anyone in violation of our policies faces discipline, including termination and potential legal and criminal penalties. In addition, we have zero tolerance for abuse of our systems and if we find bad actors who have engaged in this behavior, we will take swift action against them.

It’s not clear that the current standards for which Ring videos are accessed in Ukraine, as described in Ring’s statement, have always been in place, nor is there any indication of how (or if) they’re enforced. The Information quoted former employees saying the standards have not always been in place, and indicated that Amazon put tighter controls on video access in place only this past May, after visiting the Ukraine office. Even then, The Information added, staffers in Ukraine worked around the controls.

Furthermore, Ring’s overview of its Neighbors system makes no mention of image or facial recognition, and gives no warning that those who use the feature are opting in to have their homes watched by individuals in a Ukrainian R&D lab. Mentions of Ring’s facial recognition practices are buried in its privacy policy, which says merely that “you may choose to use additional functionality in your Ring product that, through video data from your device, can recognize facial characteristics of familiar visitors.” Neither Ring’s terms of service nor its privacy policy mentions any manual video annotation being conducted by humans, nor does either document mention the possibility that Ring staffers could access this video at all. Even with suitably strong policies in place, the question of whether Ring owners should trust a company that ever considered the above permissible will remain an open one.

The post For Owners of Amazon’s Ring Security Cameras, Strangers May Have Been Watching Too appeared first on The Intercept.

December 6, 2018

Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”

Facial recognition has quickly shifted from techno-novelty to fact of life for many, with millions around the world at least willing to put up with having their faces scanned by software at the airport, on their iPhones, or in Facebook’s server farms. But researchers at New York University’s AI Now Institute have issued a strong warning against not only ubiquitous facial recognition, but its more sinister cousin: so-called affect recognition, technology that claims it can find hidden meaning in the shape of your nose, the contours of your mouth, and the way you smile. If that sounds like something dredged up from the 19th century, that’s because it sort of is.

AI Now’s 2018 report is a 56-page record of how “artificial intelligence” — an umbrella term that includes a myriad of both scientific attempts to simulate human judgment and marketing nonsense — continues to spread without oversight, regulation, or meaningful ethical scrutiny. The report covers a wide expanse of uses and abuses, including instances of racial discrimination, police surveillance, and how trade secrecy laws can hide biased code from an AI-surveilled public. But AI Now, which was established last year to grapple with the social implications of artificial intelligence, expresses in the document particular dread over affect recognition, “a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces.” The thought of your boss watching you through a camera that uses machine learning to constantly assess your mental state is bad enough, while the prospect of police using “affect recognition” to deduce your future criminality based on “micro-expressions” is exponentially worse.

“The ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims.”

That’s because “affect recognition,” the report explains, is little more than the computerization of physiognomy, a thoroughly disgraced and debunked strain of pseudoscience from another era that claimed a person’s character could be discerned from their bodies — and their faces, in particular. There was no reason to believe this was true in the 1880s, when figures like the discredited Italian criminologist Cesare Lombroso promoted the theory, and there’s even less reason to believe it today. Still, it’s an attractive idea, despite its lack of grounding in any science, and data-centric firms have leapt at the opportunity to not only put names to faces, but to ascribe entire behavior patterns and predictions to some invisible relationship between your eyebrow and nose that can only be deciphered through the eye of a computer. Two years ago, students at a Shanghai university published a report detailing what they claimed to be a machine learning method for determining criminality based on facial features alone. The paper was widely criticized, including by AI Now’s Kate Crawford, who told The Intercept it constituted “literal phrenology … just using modern tools of supervised machine learning instead of calipers.”

Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads. “The idea that AI systems might be able to tell us what a student, a customer, or a criminal suspect is really feeling or what type of person they intrinsically are is proving attractive to both corporations and governments, even though the scientific justifications for such claims are highly questionable, and the history of their discriminatory purposes well-documented.”

In an email to The Intercept, Crawford, AI Now’s co-founder and distinguished research professor at NYU, along with Meredith Whittaker, co-founder of AI Now and a distinguished research scientist at NYU, explained why affect recognition is more worrying today than ever, referring to two companies that use appearances to draw big conclusions about people. “From Faception claiming they can ‘detect’ if someone is a terrorist from their face to HireVue mass-recording job applicants to predict if they will be a good employee based on their facial ‘micro-expressions,’ the ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims,” said Crawford.

Faception has purported to determine from appearance if someone is “psychologically unbalanced,” anxious, or charismatic, while HireVue has ranked job applicants on the same basis.

As with any computerized system of automatic, invisible judgment and decision-making, the potential to be wrongly classified, flagged, or tagged is immense with affect recognition, particularly given its thin scientific basis: “How would a person profiled by these systems contest the result?” Crawford added. “What happens when we rely on black-boxed AI systems to judge the ‘interior life’ or worthiness of human beings? Some of these products cite deeply controversial theories that are long disputed in the psychological literature, but are being treated by AI startups as fact.”

What’s worse than bad science passing judgment on anyone within camera range is that the algorithms making these decisions are kept private by the firms that develop them, safe from rigorous scrutiny behind a veil of trade secrecy. AI Now’s Whittaker singles out corporate secrecy as confounding the already problematic practices of affect recognition: “Because most of these technologies are being developed by private companies, which operate under corporate secrecy laws, our report makes a strong recommendation for protections for ethical whistleblowers within these companies.” Such whistleblowing will continue to be crucial, wrote Whittaker, because so many data firms treat privacy and transparency as a liability, rather than a virtue: “The justifications vary, but mostly [AI developers] disclaim all responsibility and say it’s up to the customers to decide what to do with it.” Pseudoscience paired with state-of-the-art computer engineering and placed in a void of accountability. What could go wrong?

The post Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History” appeared first on The Intercept.

December 6, 2018

Here’s Facebook’s Former “Privacy Sherpa” Discussing How to Harm Your Facebook Privacy

In 2015, rising star, Stanford University graduate, winner of the 13th season of “Survivor,” and Facebook executive Yul Kwon was profiled by the news outlet Fusion, which described him as “the guy standing between Facebook and its next privacy disaster,” guiding the company’s engineers through the dicey territory of personal data collection. Kwon described himself in the piece as a “privacy sherpa.” But the day it was published, Kwon was apparently chatting with other Facebook staffers about how the company could vacuum up the call logs of its users without the Android operating system getting in the way by asking the user for specific permission, according to confidential Facebook documents released today by the British Parliament.

“This would allow us to upgrade users without subjecting them to an Android permissions dialog.”

The document, part of a larger 250-page parliamentary trove, shows what appears to be a copied-and-pasted recap of an internal chat conversation between various Facebook staffers and Kwon, who was then the company’s deputy chief privacy officer and is currently working as a product management director, according to his LinkedIn profile.

The conversation centered on an internal push to change which data Facebook’s Android app had access to: granting the software the ability to record a user’s text messages and call history, to interact with Bluetooth beacons installed by physical stores, and to offer better-customized friend suggestions and news feed rankings. This would be a momentous decision for any company, to say nothing of one with Facebook’s privacy track record and reputation, even in 2015, of sprinting through ethical minefields. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it,” Michael LeBeau, a Facebook product manager, is quoted in the document as saying of the change.

Crucially, LeBeau commented, according to the document, such a privacy change would require Android users to essentially opt in; Android, he said, would present them with a permissions dialog soliciting their approval to share call logs when they upgraded to a version of the app that collected the logs and texts. Furthermore, the Facebook app itself would prompt users to opt in to the feature, through a notification referred to by LeBeau as “an in-app opt-in NUX,” or new user experience. The Android dialog was especially problematic; such permission dialogs “tank upgrade rates,” LeBeau stated.

But Kwon appeared to later suggest that the company’s engineers might be able to upgrade users to the log-collecting version of the app without any such nagging from the phone’s operating system. He also indicated that the plan to obtain text messages had been dropped, according to the document. “Based on [the growth team’s] initial testing, it seems this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,” he stated. Users would have to click to effect the upgrade, he added, but, he reiterated, “no permissions dialog screen.”

It’s not clear if Kwon’s comment about “no permissions dialog screen” applied to the opt-in notification within the Facebook app. But even if the Facebook app still sought permission to share call logs, such in-app notices are generally designed expressly to get the user to consent and are easy to miss or misinterpret. Android users rely on standard, clear dialogs from the operating system to inform them of serious changes in privacy. There’s good reason Facebook would want to avoid “subjecting” its users to a screen displaying exactly what they’re about to hand over to the company.

It’s not clear how this specific discussion was resolved, but Facebook did eventually begin obtaining call logs and text messages from users of its Messenger and Facebook Lite apps for Android. This proved highly controversial when revealed in press accounts and by individuals posting on Twitter after receiving data Facebook had collected on them; Facebook insisted it had obtained permission for the phone log and text message collection, but some users and journalists said it had not.

It’s Facebook’s corporate stance that the documents released by Parliament “are presented in a way that is very misleading without additional context.” The Intercept has asked both Facebook and Kwon personally about what context is missing here, if any, and will update with their response.

The post Here’s Facebook’s Former “Privacy Sherpa” Discussing How to Harm Your Facebook Privacy appeared first on The Intercept.

December 3, 2018

Homeland Security Will Let Computers Predict Who Might Be a Terrorist ...

You’re rarely allowed to know exactly what’s keeping you safe. When you fly, you’re subject to secret rules, secret watchlists, hidden cameras, and other trappings of a plump, thriving surveillance culture. The Department of Homeland Security is now complicating the picture further by paying a private Virginia firm to build a software algorithm with the power to flag you as someone who might try to blow up the plane.

The new DHS program will give foreign airports around the world free software that teaches itself who the bad guys are, continuing society’s relentless swapping of human judgment for machine learning. DataRobot, a northern Virginia-based automated machine learning firm, won a contract from the department to develop “predictive models to enhance identification of high risk passengers” in software that should “make real-time prediction[s] with a reasonable response time” of less than one second, according to a technical overview that was written for potential contractors and reviewed by The Intercept. The contract assumes the software will produce false positives and requires that the terrorist-predicting algorithm’s accuracy should increase when confronted with such mistakes. DataRobot is currently testing the software, according to a DHS news release.

The contract also stipulates that the software’s predictions must be able to function “solely” using data gleaned from ticket records and demographics — criteria like origin airport, name, birthday, gender, and citizenship. The software can also draw from slightly more complex inputs, like the name of the associated travel agent, seat number, credit card information, and broader travel itinerary. The overview document describes a situation in which the software could “predict if a passenger or a group of passengers is intended to join the terrorist groups overseas, by looking at age, domestic address, destination and/or transit airports, route information (one-way or round trip), duration of the stay, and luggage information, etc., and comparing with known instances.”
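A system matching that description would amount to a tabular classifier trained on labeled itineraries. The sketch below, which uses scikit-learn with invented field names and toy data, shows roughly what such a model looks like; it is an illustration of the general technique, not DataRobot’s or DHS’s actual software.

```python
# Illustrative sketch of the kind of tabular "risk" classifier the overview describes.
# Field names, data, and labels are invented; this is not DataRobot's or DHS's model.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

passengers = pd.DataFrame({
    "origin_airport": ["IST", "LHR", "CDG"],
    "citizenship":    ["US", "FR", "TR"],
    "age":            [34, 58, 23],
    "one_way_trip":   [1, 0, 1],
    "stay_days":      [3, 14, 90],
    "checked_bags":   [0, 2, 1],
})
labels = [0, 0, 1]  # the "known instances" the document says new travelers are compared with

model = make_pipeline(
    ColumnTransformer(
        [("categorical", OneHotEncoder(handle_unknown="ignore"),
          ["origin_airport", "citizenship"])],
        remainder="passthrough",
    ),
    GradientBoostingClassifier(),
)
model.fit(passengers, labels)

# "Real-time prediction": scoring a new itinerary takes well under a second.
print(model.predict_proba(passengers.head(1))[:, 1])  # estimated probability of "high risk"
```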

DataRobot’s bread and butter is turning vast troves of raw data, which all modern businesses accumulate, into predictions of future action, which all modern companies desire. Its clients include Monsanto and the CIA’s venture capital arm, In-Q-Tel. But not all of DataRobot’s clients are looking to pad their revenues; DHS plans to integrate the code into an existing DHS offering called the Global Travel Assessment System, or GTAS, a toolchain that has been released as open source software and which is designed to make it easy for other countries to quickly implement no-fly lists like those used by the U.S.

According to the technical overview, DHS’s predictive software contract would “complement the GTAS rule engine and watch list matching features with predictive models to enhance identification of high risk passengers.” In other words, the government has decided that it’s time for the world to move beyond simply putting names on a list of bad people and then checking passengers against that list. After all, an advanced computer program can identify risky fliers faster than humans could ever dream of and can also operate around the clock, requiring nothing more than electricity. The extent to which GTAS is monitored by humans is unclear. The overview document implies a degree of autonomy, listing as a requirement that the software should “automatically augment Watch List data with confirmed ‘positive’ high risk passengers.”

The document does make repeated references to “targeting analysts” reviewing what the system spits out, but the underlying data-crunching appears to be almost entirely the purview of software, and it’s unknown what ability said analysts would have to check or challenge these predictions. In an email to The Intercept, Daniel Kahn Gillmor, a senior technologist with the American Civil Liberties Union, expressed concern with this lack of human touch: “Aside from the software developers and system administrators themselves (which no one yet knows how to automate away), the things that GTAS aims to do look like they could be run mostly ‘on autopilot’ if the purchasers/deployers choose to operate it in that manner.” But Gillmor cautioned that even including a human in the loop could be a red herring when it comes to accountability: “Even if such a high-quality human oversight scheme were in place by design in the GTAS software and contributed modules (I see no indication that it is), it’s free software, so such a constraint could be removed. Countries where labor is expensive (or controversial, or potentially corrupt, etc) might be tempted to simply edit out any requirement for human intervention before deployment.”

“Countries where labor is expensive might be tempted to simply edit out any requirement for human intervention.”

For the surveillance-averse, consider the following: Would you rather a group of government administrators, who meet in secret and are exempt from disclosure, decide who is unfit to fly? Or would it be better for a computer, accountable only to its own code, to make that call? It’s hard to feel comfortable with the very concept of profiling, a practice that so easily collapses into prejudice rather than vigilance. But at least with uniformed government employees doing the eyeballing, we know who to blame when, say, a woman in a headscarf is needlessly hassled, or a man with dark skin is pulled aside for an extra pat-down.

If you ask DHS, this is a categorical win-win for all parties involved. Foreign governments are able to enjoy a higher standard of security screening; the United States gains some measure of confidence about the millions of foreigners who enter the country each year; and passengers can drink their complimentary beverage knowing that the person next to them wasn’t flagged as a terrorist by DataRobot’s algorithm. But watchlists, among the most notorious features of post-9/11 national security mania, are of questionable efficacy and dubious legality. A 2014 report by The Intercept pegged the U.S. Terrorist Screening Database, an FBI data set from which the no-fly list is excerpted, at roughly 680,000 entries, including some 280,000 individuals with “no recognized terrorist group affiliation.” That same year, a U.S. district court judge ruled in favor of an ACLU lawsuit, declaring the no-fly list unconstitutional. The list could only be used again if the government improved the mechanism through which people could challenge their inclusion on it — a process that, at the very least, involved human government employees, convening and deliberating in secret.


Diagram from a Department of Homeland Security technical document illustrating how GTAS might visualize a potential terrorist onboard during the screening process.

Document: DHS

But what if you’re one of the inevitable false positives? Machine learning and behavioral prediction is already widespread; The Intercept reported earlier this year that Facebook is selling advertisers on its ability to forecast and pre-empt your actions. The consequences of botching consumer surveillance are generally pretty low: If a marketing algorithm mistakenly predicts your interest in fly fishing where there is none, the false positive is an annoying waste of time. The stakes at the airport are orders of magnitude higher.

What happens when DHS’s crystal ball gets it wrong — when the machine creates a prediction with no basis in reality and an innocent person with no plans to “join a terrorist group overseas” is essentially criminally defamed by a robot? Civil liberties advocates not only worry that such false positives are likely, possessing a great potential to upend lives, but also question whether such a profoundly damning prediction is even technologically possible. According to DHS itself, its predictive software would have relatively little information upon which to base a prognosis of impending terrorism.

Even from such mundane data inputs, privacy watchdogs cautioned that prejudice and biases always follow — something only worsened under the auspices of self-teaching artificial intelligence. Faiza Patel, co-director of the Brennan Center’s Liberty and National Security Program, told The Intercept that giving predictive abilities to watchlist software will present only the veneer of impartiality. “Algorithms will both replicate biases and produce biased results,” Patel said, drawing a parallel to situations in which police are algorithmically allocated to “risky” neighborhoods based on racially biased crime data, a process that results in racially biased arrests and a checkmark for the computer. In a self-perpetuating bias machine like this, said Patel, “you have all the data that’s then affirming what the algorithm told you in the first place,” which creates “a kind of cycle of reinforcement just through the data that comes back.” What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

Indeed, DHS’s system stands to deliver a computerized turbocharge to the bias that is already endemic to the American watchlist system. The overview document for the Delphic profiling tool makes repeated references to the fact that it will create a feedback loop of sorts. The new system “shall automatically augment Watch List data with confirmed ‘positive’ high risk passengers,” one page reads, with quotation marks doing some very real work. The software’s predictive abilities “shall be able to improve over time as the system feeds actual disposition results, such as true and false positives,” reads another section. Given that the existing watchlist framework has ensnared countless thousands of innocent people, the notion of “feeding” such “positives” into a machine that will then search even harder for that sort of person is downright dangerous. It also becomes absurd: When the criteria for who is “risky” and who isn’t are kept secret, it’s quite literally impossible for anyone on the outside to tell what is a false positive and what isn’t. Even for those without civil libertarian leanings, the notion of an automatic “bad guy” detector that uses a secret definition of “bad guy” and will learn to better spot “bad guys” with every “bad guy” it catches would be comical were it not endorsed by the federal government.
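Spelled out as code, the loop the document describes looks something like the hypothetical sketch below, in which each batch of “confirmed” hits is appended to the watch list and fed back in as training data. The names and structure are invented for illustration.

```python
# Hypothetical sketch of the feedback loop the overview describes: flagged passengers
# that analysts "confirm" are appended to the watch list and fed back in as training data.
from dataclasses import dataclass
from typing import Callable, List

from sklearn.linear_model import LogisticRegression

@dataclass
class Passenger:
    features: List[float]  # e.g., encoded age, route, itinerary fields

def screening_cycle(model: LogisticRegression,
                    watch_list: List[Passenger],
                    cleared: List[Passenger],
                    arrivals: List[Passenger],
                    analysts_confirm: Callable[[Passenger], bool]) -> None:
    # `model` is assumed to be already fitted on an initial watch list.
    flagged = [p for p in arrivals if model.predict([p.features])[0] == 1]
    # "Automatically augment Watch List data with confirmed 'positive' high risk passengers."
    watch_list += [p for p in flagged if analysts_confirm(p)]
    cleared += [p for p in arrivals if p not in flagged]
    # Retraining on its own confirmed hits teaches the model to look even harder for
    # people who resemble the ones it has already flagged (assumes both lists are non-empty).
    X = [p.features for p in watch_list + cleared]
    y = [1] * len(watch_list) + [0] * len(cleared)
    model.fit(X, y)
```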

For those troubled by the fact that this system is not only real but currently being tested by an American company, the fact that neither the government nor DataRobot will reveal any details of the program is perhaps the most troubling of all. When asked where the predictive watchlist prototype is being tested, the DHS tech directorate spokesperson, John Verrico, told The Intercept, “I don’t believe that has been determined yet,” and stressed that the program was meant for use with foreigners. Verrico referred further questions about test location and which “risk criteria” the algorithm will be trained to look for back to DataRobot. Libby Botsford, a DataRobot spokesperson, initially told The Intercept that she had “been trying to track down the info you requested from the government but haven’t been successful,” and later added, “I’m not authorized to speak about this. Sorry!” Subsequent requests sent to both DHS and DataRobot were ignored.

Verrico’s assurance — that the watchlist software is an outward-aiming tool provided to foreign governments, not a means of domestic surveillance — is an interesting feint given that Americans fly through non-American airports in great numbers every single day. But it obscures ambitions much larger than GTAS itself: The export of opaque, American-style homeland security to the rest of the world and the hope of bringing every destination in every country under a single, uniform, interconnected surveillance framework. Why go through the trouble of sifting through the innumerable bodies entering the United States in search of “risky” ones when you can move the whole haystack to another country entirely? A global network of terrorist-scanning predictive robots at every airport would spare the U.S. a lot of heavy, politically ugly lifting.

“Automation will exacerbate all of the worst aspects of the watchlisting system.”

Predictive screening further shifts responsibility. The ACLU’s Gillmor explained that making these tools available to other countries may mean that those external agencies will prevent people from flying so that they never encounter DHS at all, which makes DHS less accountable for any erroneous or damaging flagging, a system he described as “a quiet way of projecting U.S. power out beyond U.S. borders.” Even at this very early stage, DHS seems eager to wipe its hands of the system it’s trying to spread around the world: When Verrico brushed off questions of what the system would consider “risky” attributes in a person, he added in his email that “the risk criteria is being defined by other entities outside the U.S., not by us. I would imagine they don’t want to tell the bad guys what they are looking for anyway. ;-)” DHS did not answer when asked whether there were any plans to implement GTAS within the United States.

Then there’s the question of appeals. Those on DHS’s current watchlists may seek legal redress; though the appeals system is generally considered inadequate by civil libertarians, it offers at least a theoretical possibility of removal. The documents surrounding DataRobot’s predictive modeling contract make no mention of an appeals system for those deemed risky by an algorithm, nor is there any requirement in the DHS overview document that the software must be able to explain how it came to its conclusions. Accountability remains a fundamental problem in the fields of machine learning and computerized prediction, with some computer scientists adamant that an ethical algorithm must be able to show its work, and others objecting on the grounds that such transparency compromises the accuracy of the predictions.

Gadeir Abbas, an attorney with the Council on American-Islamic Relations, who has spent years fighting the U.S. government in court over watchlists, saw the DHS software as only more bad news for populations already unfairly surveilled. The U.S. government is so far “not able to generate a single set of rules that have any discernible level of effectiveness,” said Abbas, and so “the idea that they’re going to automate the process of evolving those rules is another example of the technology fetish that drives some amount of counterterrorism policy.”

The entire concept of making watchlist software capable of terrorist predictions is mathematically doomed, Abbas added, likening the system to a “crappy Minority Report. … Even if they make a really good robot, and it’s 99 percent accurate,” the fact that terror attacks are “exceedingly rare events” in terms of naked statistics means you’re still looking at “millions of false positives. … Automation will exacerbate all of the worst aspects of the watchlisting system.”

The ACLU’s Gillmor agreed that this mission is simply beyond what computers are even capable of:

For very-low-prevalence outcomes like terrorist activity, predictive systems are simply likely to get it wrong. When a disease is a one-in-a-million likelihood, the surest bet is a negative diagnosis. But that’s not what these systems are designed to do. They need to “diagnose” some instances positively to justify their existence. So, they’ll wrongly flag many passengers who have nothing to do with terrorism, and they’ll do it on the basis of whatever meager data happens to be available to them.
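The arithmetic behind both objections is easy to run. The numbers below are purely illustrative assumptions (a billion screened passenger journeys, a one-in-a-million prevalence of genuine attackers, and the charitably granted “99 percent accurate” classifier); none of them come from DHS or DataRobot.

```python
# Back-of-the-envelope base-rate arithmetic behind Abbas's "millions of false
# positives" and Gillmor's diagnosis analogy. All figures are illustrative
# assumptions, not numbers from DHS or DataRobot.

passengers = 1_000_000_000    # assumed: a billion screened passenger journeys
prevalence = 1 / 1_000_000    # assumed: one genuine attacker per million passengers
sensitivity = 0.99            # the "99 percent accurate" robot catches 99% of them
false_positive_rate = 0.01    # ...and wrongly flags 1% of everyone else

true_attackers = passengers * prevalence                              # 1,000
caught = true_attackers * sensitivity                                 # 990
false_alarms = (passengers - true_attackers) * false_positive_rate   # ~10,000,000

precision = caught / (caught + false_alarms)
print(f"innocent passengers flagged: {false_alarms:,.0f}")
print(f"share of flags that are real: {precision:.4%}")
# Roughly ten million innocent people get flagged, and fewer than one flag
# in every ten thousand points at an actual threat.
```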

Predictive software is not just the future, but the present. Its expansion into the way we shop, the way we’re policed, and the way we fly will soon be commonplace, even if we’re never aware of it. Designating enemies of the state based on a crystal ball locked inside a box represents a grave, fundamental leap in how societies appraise danger. The number of active, credible terrorists-in-waiting is an infinitesimal slice of the world’s population. The number of people placed on watchlists and blacklists is significant. Letting software do the sorting — no matter how smart and efficient we tell ourselves it will be — will likely do much to worsen this inequity.

The post Homeland Security Will Let Computers Predict Who Might Be a Terrorist on Your Plane — Just Don’t Ask How It Works appeared first on The Intercept.

November 2, 2018

Facebook Allowed Advertisers to Target Users Interested in “White Genocide” — Even in Wake of Pittsburgh Massacre

Apparently fueled by anti-Semitism and the bogus narrative that outside forces are scheming to exterminate the white race, Robert Bowers murdered 11 Jewish congregants as they gathered inside their Pittsburgh synagogue, federal prosecutors allege. But despite long-running international efforts to debunk the idea of a “white genocide,” Facebook was still selling advertisers the ability to market to those with an interest in that myth just days after the bloodshed.

Earlier this week, The Intercept was able to select “white genocide conspiracy theory” as a pre-defined “detailed targeting” criterion on the social network to promote two articles to an interest group that Facebook pegged at 168,000 users large and defined as “people who have expressed an interest or like pages related to White genocide conspiracy theory.” The paid promotion was approved by Facebook’s advertising wing. After we contacted the company for comment, Facebook promptly deleted the targeting category, apologized, and said it should have never existed in the first place.
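The mechanics here are worth spelling out. “Detailed targeting” is exposed not only through Facebook’s web ad-buying tools but also programmatically, via the company’s Marketing API, which draws on the same interest catalog. The sketch below is a rough illustration under that assumption: the access token, API version, and interest ID are placeholders, the field names follow Facebook’s publicly documented targeting spec as best we can tell, and nothing in it is drawn from Facebook’s internal systems.

```python
# Illustrative sketch only: roughly how an interest-based "detailed targeting"
# audience is expressed through Facebook's Marketing API. Token, version, and
# interest ID are placeholders; the category itself has since been removed.
import json
import requests

ACCESS_TOKEN = "PLACEHOLDER_TOKEN"
GRAPH = "https://graph.facebook.com/v3.2"

# Advertisers can search the interest catalog by keyword to get a targetable
# ID and an estimated audience size.
resp = requests.get(
    f"{GRAPH}/search",
    params={"type": "adinterest", "q": "genocide", "access_token": ACCESS_TOKEN},
)
print(resp.json())

# The chosen interest then slots into an ad set's targeting spec alongside
# ordinary criteria such as location -- one JSON field among many in the ad buy.
targeting_spec = {
    "geo_locations": {"countries": ["US"]},
    "interests": [
        {"id": "0000000000", "name": "White genocide conspiracy theory"}  # placeholder ID
    ],
}
print(json.dumps(targeting_spec, indent=2))
```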

Our reporting technique was the same as one used by the investigative news outlet ProPublica to report, just over one year ago, that in addition to soccer dads and Ariana Grande fans, “the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of ‘Jew hater,’ ‘How to burn jews,’ or, ‘History of “why jews ruin the world.”’” The report exposed how little Facebook was doing to vet marketers, who pay the company to leverage personal information and inclinations in order to gain users’ attention — and who provide the foundation for its entire business model. At the time, ProPublica noted that Facebook “said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.” Rob Leathern, a Facebook product manager, assured the public, “We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Leathern’s “new guardrails” don’t seem to have prevented Facebook from manually approving our ad buy the same day it was submitted, despite its explicit labeling as “White Supremacy – Test.”

From the outside, it’s impossible to tell exactly how Facebook decides who among its 2 billion users might fit into the “white genocide” interest group or any other cohort available for “detailed targeting.” The company’s own documentation is very light on details, saying only that these groups are based on indicators like “Pages [users] engage with” or “Activities people engage in on and off Facebook related to things like their device usage, purchase behaviors or intents and travel preferences.” It remains entirely possible that some people lumped into the “white genocide conspiracy theory” fandom are not, in fact, true believers, but may have interacted with content critical of this myth, such as a news report, a fact check, or academic research on the topic.

But there are some clues as to who exactly is counted among the 168,000. After selecting “white genocide conspiracy theory” as an ad target, Facebook provided “suggestions” of other, similar criteria, including interest in the far-right-wing news outlets RedState and the Daily Caller — the latter of which, co-founded by right-wing commentator Tucker Carlson, has repeatedly been criticized for cozy connections to white nationalists and those sympathetic to them. Other suggested ad targets included mentions of South Africa; a common trope among advocates of the “white genocide” myth is the so-called plight of white South African farmers, who they falsely claim are being systematically murdered and pushed off their land. The South African hoax is often used as a cautionary tale for American racists — like, by all evidence, Robert Bowers, the Pittsburgh shooter — who fear a similar fate is in store for them, whether from an imagined global Jewish conspiracy or a migrant “caravan.” But the “white genocide” myth appears to have a global appeal, as well: About 157,000 of the accounts with the interest are outside of the U.S., concentrated in Africa and Asia, although it’s not clear how many of these might be bots.

A simple search of Facebook pages also makes plain that there are tens of thousands of users with a very earnest interest in “white genocide,” shown through the long list of groups with names like “Stop White South African Genocide,” “White Genocide Watch,” and “The last days of the white man.” Images with captions like “Don’t Be A Race Traitor” and “STOP WHITE GENOCIDE IN SOUTH AFRICA” are freely shared in such groups, providing a natural target for anyone who might want to pay to promote deliberately divisive and incendiary hate-based content.

A day after Facebook confirmed The Intercept’s “white genocide” ad buy, the company deleted the category and canceled the ads. Facebook spokesperson Joe Osborne provided The Intercept with the following statement, similar to the one he gave ProPublica over a year ago: “This targeting option has been removed, and we’ve taken down these ads. It’s against our advertising principles and never should have been in our system to begin with. We deeply apologize for this error.” Osborne added that the “white genocide conspiracy theory” category had been “generated through a mix of automated and human reviews, but any newly added interests are ultimately approved by people. We are ultimately responsible for the segments we make available in our systems.” Osborne also confirmed that the ad category had been used by marketers, but cited only “reasonable” ad buys targeting “white genocide” enthusiasts, such as news coverage.

Facebook draws a distinction between the hate-based categories ProPublica discovered, which were based on terms users entered into their own profiles, and the “white genocide conspiracy theory” category, which Facebook itself created via algorithm. The company says that it’s taken steps to make sure the former is no longer possible, although this clearly did nothing to deter the latter. Interestingly, Facebook said that technically the white genocide ad buy didn’t violate its ad policies, because it was based on a category Facebook itself created. However, this doesn’t square with the automated email The Intercept received a day after the ad buy was approved, informing us that “We have reviewed some of your ads more closely and have determined they don’t comply with our Advertising Policies.”

Still, the company conceded that such ad buys should have never been possible in the first place. Vice News and Business Insider also bought Facebook ads this week to make a different point about a related problem: that Facebook does not properly verify the identities of people who take out political ads. It’s unclear whether the “guardrails” Leathern spoke of a year ago will simply take more time to construct, or whether Facebook’s heavy reliance on algorithmic judgment simply careened through them.

The post Facebook Allowed Advertisers to Target Users Interested in “White Genocide” — Even in Wake of Pittsburgh Massacre appeared first on The Intercept.

October 30, 2018

Never Trust a Reporter Who Bounces in His Chair With Glee

You wouldn’t trust a music critic who’s buddies with the band, nor should you trust a tech reporter who hoots and hollers whenever Tim Cook takes the stage. And you definitely, absolutely should be suspicious of a political reporter who sits down with President Donald Trump and looks as if he’s meeting his favorite baseball player.

Axios and HBO gave viewers the first look at a new television show by teaming up with the White House to unveil a new entry in its xenophobic domestic policy lineup.

Along these lines, Tuesday morning held a sort of public relations convergence of interests that typifies the worst of political reporting: Axios and HBO gave viewers the first look at a new television show by teaming up with the White House to unveil a new entry in its xenophobic domestic policy lineup.

This sort of journalism is among the most obsequious — perhaps tied with tech coverage, at times — but the new video clip debuted today by Axios may be the ne plus ultra of media toadying. Axios has become a political media sensation in a very short amount of time, excelling both at cranking out access-based White House scoops and at servility, like some sort of 1600 Pennsylvania Avenue-based Roomba.

Today’s video interview snippet, plucked from the upcoming Axios show on HBO, put the website’s bright star Jonathan Swan in a chair across from Trump. Prompted by Swan, Trump announced an innovative plan to bar nonwhite infants from attaining U.S. citizenship. It was, in Swan’s words, an “exciting” moment to behold.

“Excited to share” is usually how one begins a sentence about a pregnancy or a promotion, not the revelation of a plot to deny citizenship to newborns. The families affected by this attempt to subvert the 14th Amendment might have other words for the announcement, but not Swan, who took the news (and his ability to report it first) as a big, shiny win — merely another dose of stellar exclusive digital content to be consumed, a brilliant bit of multimedia cross-promotion.

The video itself, however, is somehow even worse than the tweet. We see firsthand just how pumped up Swan is to discuss Trump’s long-term ethnic exclusion strategies with the big man himself. At one point, Swan cajoles him into explaining just how Trump might actually execute this unilateral change to the Constitution, prompting Trump to speculate that he might use an executive order. “Exactly!” exclaims Swan, so amped up that he is literally unable to stay in his seat. Palpably thrilled, Swan points an eager finger at the president. “Tell me more!” he says next, all too cheerily, as if he’s conducting a Q&A with The Avengers at Comic-Con — and not being given the opportunity to interrogate the president of the United States. Swan is literally grinning throughout: The feeling that a high-five is imminent is hard to shake off.

This is grotesque on the face of it. Politics — particularly the politics of the day — aren’t supposed to be fun, nor exciting, nor any other chipper keywords you might feed into Netflix on a rainy evening. American politics in our present day are anguishing, alienating, bitter, bleak, cynical, and hateful. To take this opportunity to challenge Trump on his immigration policies at this time — when, in just one example, there’s a very good argument to be made that these policies just led to the worst act of anti-Semitic carnage in American history — and not only squander it but enjoy it, that’s something worse than monstrous. “What a revolting display,” Splinter’s Libby Watson remarked. Watson also noted that “when the president says other countries don’t have birthright citizenship, which is a lie, Swan says nothing, and Axios’ story was only updated after publication to reflect that reality.” Revolting, indeed.

It’s not that this is just terrible journalism or that Swan should consider nurturing his gifts for public relations in another sphere. What we’re watching here is a perverse amalgam of news, social media, entertainment, and the White House. It is truly a cross-promotional tour de force, but one that leaves a sour taste: a new and worse version of a familiar genre, the product launch. We’re watching the residue left behind as media industry stability evaporates, an era in which the “scoop” at all costs is one of the few currencies left and shame is a luxury. This is truly the Trump effect at its greatest strength: The president’s lack of shame has always been his biggest selling point for fans, and Axios, in its bid for its own fans, is cribbing not only his style, but his politics as well. Axios wholesale adopted the big right-wing unveil as an audience-building tool.

Perhaps it’s too much to ask that our colleagues in D.C. not count themselves among enthusiasts for this brand of far-right politics, but, please, at least feel ashamed enough to stay in your chair.

The post Never Trust a Reporter Who Bounces in His Chair With Glee appeared first on The Intercept.

October 12, 2018

Some Silicon Valley Superstars Ditching Saudi Advisory Board After Khashoggi Disappearance, Some Stay Silent

While the world is grappling with the apparent grisly murder of Saudi dissident and Washington Post journalist Jamal Khashoggi, the Saudi government decided to announce a new band of influential Western allies, some plucked from the uppermost echelon of Silicon Valley, who would serve on an advisory board for NEOM, the kingdom’s improbable, exorbitant plan to build a “megacity” in the desert.

But almost as soon as his participation was revealed, Sam Altman, head of famed venture capital firm Y Combinator, announced that he is “suspending” his role with NEOM, while two others on the star-studded list denied that they were participating.

Altman, along with legendary tech investor Marc Andreessen, notorious Uber founder (and ousted ex-CEO) Travis Kalanick, IDEO CEO Tim Brown, and Dan Doctoroff of Sidewalk Labs, a subsidiary of Google-owner Alphabet, was among those listed as members of the new board. With the United States itself now forced into a momentarily uncomfortable spot by its longtime affection for and deep political ties to the Saudis, this was a less than ideal time for Americans to come out as friends of the Kingdom.

Despite Saudi Arabia’s vast history of human rights abuses and violent foreign policy, Altman announced in a statement to The Intercept that this reported assassination and dismemberment was a step too far:

“I am suspending my involvement with the NEOM advisory board until the facts regarding Jamal Khashoggi’s disappearance are known. This is well out of my area of expertise, so I don’t plan to comment on the case until the investigation is finished. I remain a huge believer in the importance of building smart cities.”

A source close to members of the advisory board, who spoke on the condition of anonymity, described to The Intercept recent conversations with other board members in which they expressed that they were “inclined to just stay on the board” and continue helping plan the Saudis’ fantastical oasis megacity, despite Khashoggi’s reported assassination. “I’m always surprised by what ends up being a red line for people and what doesn’t,” this source added, though they reserved praise for Saudi Arabia’s crown prince Mohammad bin Salman, commonly known as MBS. Silicon Valley figures “have been cautiously optimistic” about the Saudi government, they explained. “MBS cares about technology, wants to invest in technology in a way other world leaders aren’t, and has a boldness that is exciting to people. People want to believe.” Asked if they thought the murder of a dissident journalist might change this admiration for MBS, the source replied that “if this is true as alleged, it could change many people’s temperature on that.”

For his part, IDEO’s Tim Brown “has chosen not to participate in the advisory board at this time,” according to IDEO spokesperson Sara Blask, who would provide no further comment about why he was listed as an advisory board member or why he is now declining to participate.

Dan Levitan, a spokesperson for Sidewalk Labs’ Dan Doctoroff, told The Intercept that Doctoroff’s “inclusion on that list is incorrect,” and that “he is not a member of the NEOM advisory board,” but would not answer whether Doctoroff was ever a member of the advisory board, or whether he has discussed the NEOM project with the Saudi government in any other capacity in the past.

Requests for comment sent to Kalanick, Andreessen, and fellow NEOM board member Masayoshi Son of Japanese software mammoth SoftBank were not answered.

Top photo: Y Combinator President Sam Altman speaks onstage during TechCrunch Disrupt SF 2017 in San Francisco, Calif., on Sept. 19, 2017.

The post Some Silicon Valley Superstars Ditching Saudi Advisory Board After Khashoggi Disappearance, Some Stay Silent appeared first on The Intercept.

October 9, 2018

Government Report: “An Entire Generation” of American Weapons Is Wide Open to Hackers

A new report from the U.S. Government Accountability Office brings both good and bad news. For governments around the world that might like to sabotage America’s military technology, the good news is that this would be all too easy to do: Testers at the Department of Defense “routinely found mission-critical cyber vulnerabilities in nearly all weapon systems that were under development” over a five-year period, the report said. For Americans, the bad news is that up until very recently, no one seemed to care enough to fix these security holes.

In 1991, the report noted, the U.S. National Research Council warned that “system disruptions will increase” as the use of computers and networks grows and as adversaries attack them. The Pentagon more or less ignored this and at least five subsequent warnings on the subject, according to the GAO, and hasn’t made a serious effort to safeguard the vast patchwork of software that controls planes, ships, missiles, and other advanced ordnance against hackers.

The sweeping report drew on nearly 30 years of published research, including recent assessments of the cybersecurity of specific weapon systems, as well as interviews with personnel from the Department of Defense, the National Security Agency, and weapons-testing bodies. It covered a broad span of American weapons, examining systems at all of the service branches and in space.

The report found that “mission-critical cyber vulnerabilities” cropped up routinely during weapons development and that test teams “easily” took over real systems without detection “using relatively simple tools and techniques,” exploiting “basic issues such as poor password management and unencrypted communications.” Testers could also download and delete data, in one case exfiltrating 100 gigabytes of material, and could tap into operators’ terminals, in one instance popping up computer dialogs asking the operators “to insert two quarters to continue.” But a malicious attacker could pull off much worse than jokes about quarters, warned the GAO: “In one case, the test team took control of the operators’ terminals. They could see, in real-time, what the operators were seeing on their screens and could manipulate the system.”

Posing as surrogates for, say, Russian or Chinese military hackers, testers sometimes found easy victories. “In some cases,” the GAO found, “simply scanning a system caused parts of the system to shut down,” while one “test team was able to guess an administrator password in nine seconds.” The testers found embarrassing, elementary screw-ups of the sort that would get a middle school computer lab administrator in trouble, to say nothing of someone safeguarding lethal weapon systems. For example, “multiple weapon systems used commercial or open source software, but did not change the default password when the software was installed, which allowed test teams to look up the password on the Internet.”

“In some cases, simply scanning a system caused parts of the system to shut down.”

Asked how she thought a culture of cyber-insecurity could flourish at an institution as guarded as the military, Cristina Chaplain, a director at the GAO, explained that the problem may be that the armed services overestimated the value of secrecy. “For the past 20 years, their focus has been on [networking] systems together,” at the expense of connecting them securely, because it was simply assumed that “security by obscurity” would be all that was needed — that, say, a classified bomb designed and built in secret is impervious to outside threats by virtue of being kept hidden. The whole culture of military secrecy, the belief that “they’re so standalone and so stovepiped that they’re almost secure just by virtue of that,” as Chaplain put it, is much to blame.

The findings are all the more disturbing given that the GAO said they “likely represent a fraction of total vulnerabilities” due to limitations in how the Defense Department tests for cybersecurity.

Although the GAO analyzed real weapon systems used by the Pentagon, the report is light on specifics for security and classification purposes. It lacks findings about, say, a particular missile or a particular ship, and Chaplain would not comment on whether vulnerabilities were found in nuclear weapon systems, citing classification issues.

But the document nonetheless reveals colossal negligence in the broader process of building and buying weapons. For years, the Department of Defense did not prioritize cybersecurity when acquiring weapon systems, even as it sought to further automate such systems, the GAO said. Up until about three years ago, some in the department avoided cybersecurity assessments, saying requirements were not clearly spelled out, asserting that they “did not believe cybersecurity applied to weapon systems,” according to the report, complaining that “cybersecurity tests would interfere with operations,” or rejecting the tests as “unrealistic” because the simulated attackers had an unfair amount of insider information — an objection the NSA itself dismissed as unrealistic.

Even when weapons program officials were aware of problems, the issues were often ignored. In one case, an assessment found 19 of 20 vulnerabilities unearthed in a previous assessment had not been fixed. When asked why, “program officials said they had identified a solution, but for some reason it had not been implemented,” the GAO said. In other cases, weapons operators were so used to a broken product that warnings of a simulated breach didn’t even register. “Warnings were so common that operators were desensitized to them,” the report found.

Today, cybersecurity audits of weapons are of increasing importance to the Pentagon, according to the report. But it’s incredibly hard to fix security holes after the fact. “Bolting on cybersecurity late in the development cycle or after a system has been deployed is more difficult and costly than designing it in from the beginning,” the GAO noted. One weapons program needed months to apply patches that were supposed to be applied within three weeks, the report said, because of all the testing required. Other programs are deployed around the world, further slowing the spread of fixes. “Some weapon systems are operating, possibly for extended periods, with known vulnerabilities,” the report stated.

This, then, is the crisis: The U.S. has created a computerized global military using complex, interconnected, and highly vulnerable tools — “an entire generation of systems that were designed and built without adequately considering cybersecurity,” as the GAO put it. And now it must fix it. This is nothing less than an engineering nightmare — but far preferable to what will happen if one of these software flaws is exploited by someone other than a friendly government tester.

Top photo: A U.S. Air Force crew chief conducts preflight checks on Sept. 20, 2018 during Combat Archer, a two-week, air-to-air Weapons System Evaluation Program to prepare and evaluate operational fighter squadrons’ readiness for combat operations, at Tyndall Air Force Base, Fla.

The post Government Report: “An Entire Generation” of American Weapons Is Wide Open to Hackers appeared first on The Intercept.

September 26, 2018

The Government Wants Airlines to Delay Your Flight So They Can Scan Your Face

Omnipresent facial recognition has become a golden goose for law enforcement agencies around the world. In the United States, few are as eager as the Department of Homeland Security. American airports are currently being used as laboratories for a new tool that would automatically scan your face — and confirm your identity with U.S. Customs and Border Protection — as you prepare to board a flight, despite the near-unanimous objections from privacy advocates and civil libertarians, who call such scans invasive and pointless.

According to a new report on the Biometric Entry-Exit Program by DHS itself, we can add another objection: Your flight could be late.

Although the new report, published by Homeland Security’s Office of the Inspector General, is overwhelmingly supportive in its evaluation of airport-based biometric surveillance — the practice of a computer detecting your face and pairing it with everything else in the system — the agency notes some hurdles from a recent test code-named “Sprint 8.” Among them, the report notes with palpable frustration, was that airlines insist on letting their passengers depart on time, rather than subjecting them to a Homeland Security surveillance prototype plagued by technical issues and slowdowns:

Demanding flight departure schedules posed other operational problems that significantly hampered biometric matching of passengers during the pilot in 2017. Typically, when incoming flights arrived behind schedule, the time allotted for boarding departing flights was reduced. In these cases, CBP allowed airlines to bypass biometric processing in order to save time. As such, passengers could proceed with presenting their boarding passes to gate agents without being photographed and biometrically matched by CBP first. We observed this scenario at the Atlanta Hartsfield-Jackson International Airport when an airline suspended the biometric matching process early to avoid a flight delay. This resulted in approximately 120 passengers boarding the flight without biometric confirmation.

“Repeatedly permitting airlines to revert to standard flight-boarding procedures without biometric processing may become a habit that is difficult to break.”

The report goes on to again bemoan “airlines’ recurring tendency to bypass the biometric matching process in favor of boarding flights for an on-time departure.” DHS, apparently, is worried that it could be habit-forming for the airlines: “Repeatedly permitting airlines to revert to standard flight-boarding procedures without biometric processing may become a habit that is difficult to break.”

These concerns, however, are difficult to square with a later assurance that “airline officials we interviewed indicated the processing time was generally acceptable and did not contribute to departure delays.”

The report ends up concluding that this and other logistical issues “pose significant risks to CBP scaling up the biometric program to process 100 percent of all departing passengers by 2021.” And it has some ideas about what to do, namely “enforcement mechanisms or back-up procedures to prevent airlines from bypassing biometric processing prior to flight boarding.”

As the success of biometric-reliant line-skipping services — like TSA Pre-Check and Clear — has shown, many flyers are happy to swap their irreplaceable biometrics in the name of convenience. The prospect of missing a connecting flight, however, could bring out the pitchforks.

Top photo: Station Manager Chad Shane, right, of SAS airlines, ushers a boarding passenger through the process as Dulles airport officials unveil new biometric facial recognition scanners on Sept. 6, 2018.

The post The Government Wants Airlines to Delay Your Flight So They Can Scan Your Face appeared first on The Intercept.

September 22, 2018

Facebook Brushed Off the U.N. Five Separate Times Over Calls For Murder of Human Rights Worker

Facebook’s complete and total inability to keep itself from being a convenient tool for genocidal incitement in Myanmar has been well-covered, now a case study in how a company with such immense global power can so completely fail to use it for good. But a new report released this week by the United Nations fact-finding mission in Myanmar, where calls for the slaughter of Muslims have enjoyed all the convenience of a modern Facebook signal boost, makes clear just how unprepared the company was for its role in an ethnic massacre, and how little interest it took in that role.

In a recent New Yorker profile, Facebook founder and CEO Mark Zuckerberg responds to his company’s role in the crisis — which the U.N. has described as “determining” — with all the urgency and guilt of a botched restaurant order: “I think, fundamentally, we’ve been slow at the same thing in a number of areas, because it’s actually the same problem. But, yeah, I think the situation in Myanmar is terrible.” Zuckerberg added that the company needs to “move from what is fundamentally a reactive model” when it comes to blocking content that’s fueled what the U.N. described last year as a “textbook example of ethnic cleansing.”

The new report reveals just how broken this “reactive model” truly is.

According to the 479-page document, and as flagged in a broader Guardian story this week, “the Mission itself experienced a slow and ineffective response from Facebook when it used the standard reporting mechanism to alert the company to a post targeting a human rights defender for his alleged cooperation with the Mission.” What follows is the most clear-cut imaginable violation of Facebook’s rules, followed by the most abject failure to enforce them when it mattered most:

The post described the individual as a “national traitor”, consistently adding the adjective “Muslim”. It was shared and re-posted over 1,000 times. Numerous comments to the post explicitly called for the person to be killed, in unequivocal terms: “Beggar-dog species. As long as we are feeling sorry for them, our country is not at peace. These dogs need to be completely removed.” “If this animal is still around, find him and kill him. There needs to be government officials in NGOs.” “Wherever they are, Muslim animals don’t know to be faithful to the country.” “He is a Muslim. Muslims are dogs and need to be shot.” “Don’t leave him alive. Remove his whole race. Time is ticking.” The Mission reported this post to Facebook on four occasions; in each instance the response received was that the post was examined but “doesn’t go against one of [Facebook’s] specific Community Standards”. The Mission subsequently sent a message to an official Facebook email account about the matter but did not receive a response. The post was finally removed several weeks later but only through the support of a contact at Facebook, not through the official channel. Several months later, however, the Mission found at least 16 re-posts of the original post still circulating on Facebook. In the weeks and months after the post went online, the human rights defender received multiple death threats from Facebook users, warnings from neighbours, friends, taxi drivers and other contacts that they had seen his photo and the posts on Facebook, and strong suggestions that the post was an early warning. His family members were also threatened. The Mission has seen many similar cases where individuals, usually human rights defenders or journalists, become the target of an online hate campaign that incites or threatens violence.

This is a portrait of a system of rules that is completely broken, not merely flawed, at a company that oversees the online life of roughly two billion people. Had someone at the Mission not had a “contact at Facebook” who could help, it’s easy to imagine that the post in question would have never been taken down — not that it mattered, given that it was soon re-posted and shared with impunity. Facebook’s typical mea culpa talking point has been that it regrets being “too slow” to curb these posts, when in fact it had done something worse by creating the illusion of meaningful rules and regulations.

It says everything about Facebook’s priorities that it would work so hard to penetrate poorer, “emerging” markets while creating conditions under which an “unequivocal” call to murder “Muslim animals” would be considered in compliance with its rules. The company, which reportedly had fewer than five Burmese-speaking moderators in 2015, now says it’s hiring a fleet of new contractors with language skills sufficient to field such reports — perhaps the second or third time, if not the first — but Zuckerberg et al. have done little to convince the world that the company has learned anything from Myanmar. As usual, Facebook will slowly clean up this mess only after it’s been sufficiently yelled at.

Top photo: Facebook’s corporate headquarters in Menlo Park, Calif., on March 21, 2018.

The post Facebook Brushed Off the U.N. Five Separate Times Over Calls For Murder of Human Rights Worker appeared first on The Intercept.