December 6, 2018

Artificial Intelligence Experts Issue Urgent Warning Against Facial Scanning With a “Dangerous History”

Facial recognition has quickly shifted from techno-novelty to fact of life for many, with millions around the world at least willing to put up with having their faces scanned by software at the airport, on their iPhones, or in Facebook’s server farms. But researchers at New York University’s AI Now Institute have issued a strong warning against not only ubiquitous facial recognition, but its more sinister cousin: so-called affect recognition, technology that claims it can find hidden meaning in the shape of your nose, the contours of your mouth, and the way you smile. If that sounds like something dredged up from the 19th century, that’s because it sort of is.

AI Now’s 2018 report is a 56-page record of how “artificial intelligence” — an umbrella term that includes a myriad of both scientific attempts to simulate human judgment and marketing nonsense — continues to spread without oversight, regulation, or meaningful ethical scrutiny. The report covers a wide expanse of uses and abuses, including instances of racial discrimination, police surveillance, and how trade secrecy laws can hide biased code from an AI-surveilled public. But AI Now, which was established last year to grapple with the social implications of artificial intelligence, expresses in the document particular dread over affect recognition, “a subclass of facial recognition that claims to detect things such as personality, inner feelings, mental health, and ‘worker engagement’ based on images or video of faces.” The thought of your boss watching you through a camera that uses machine learning to constantly assess your mental state is bad enough, while the prospect of police using “affect recognition” to deduce your future criminality based on “micro-expressions” is exponentially worse.

“The ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims.”

That’s because “affect recognition,” the report explains, is little more than the computerization of physiognomy, a thoroughly disgraced and debunked strain of pseudoscience from another era that claimed a person’s character could be discerned from their bodies — and their faces, in particular. There was no reason to believe this was true in the 1880s, when figures like the discredited Italian criminologist Cesare Lombroso promoted the theory, and there’s even less reason to believe it today. Still, it’s an attractive idea, despite its lack of grounding in any science, and data-centric firms have leapt at the opportunity to not only put names to faces, but to ascribe entire behavior patterns and predictions to some invisible relationship between your eyebrow and nose that can only be deciphered through the eye of a computer. Two years ago, students at a Shanghai university published a report detailing what they claimed to be a machine learning method for determining criminality based on facial features alone. The paper was widely criticized, including by AI Now’s Kate Crawford, who told The Intercept it constituted “literal phrenology … just using modern tools of supervised machine learning instead of calipers.”

Crawford and her colleagues are now more opposed than ever to the spread of this sort of culturally and scientifically regressive algorithmic prediction: “Although physiognomy fell out of favor following its association with Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect recognition applications,” the report reads. “The idea that AI systems might be able to tell us what a student, a customer, or a criminal suspect is really feeling or what type of person they intrinsically are is proving attractive to both corporations and governments, even though the scientific justifications for such claims are highly questionable, and the history of their discriminatory purposes well-documented.”

In an email to The Intercept, Crawford, AI Now’s co-founder and distinguished research professor at NYU, along with Meredith Whittaker, co-founder of AI Now and a distinguished research scientist at NYU, explained why affect recognition is more worrying today than ever, referring to two companies that use appearances to draw big conclusions about people. “From Faception claiming they can ‘detect’ if someone is a terrorist from their face to HireVue mass-recording job applicants to predict if they will be a good employee based on their facial ‘micro-expressions,’ the ability to use machine vision and massive data analysis to find correlations is leading to some very suspect claims,” said Crawford.

Faception has purported to determine from appearance if someone is “psychologically unbalanced,” anxious, or charismatic, while HireVue has ranked job applicants on the same basis.

As with any computerized system of automatic, invisible judgment and decision-making, the potential to be wrongly classified, flagged, or tagged is immense with affect recognition, particularly given its thin scientific basis: “How would a person profiled by these systems contest the result?” Crawford added. “What happens when we rely on black-boxed AI systems to judge the ‘interior life’ or worthiness of human beings? Some of these products cite deeply controversial theories that are long disputed in the psychological literature, but are being treated by AI startups as fact.”

What’s worse than bad science passing judgment on anyone within camera range is that the algorithms making these decisions are kept private by the firms that develop them, safe from rigorous scrutiny behind a veil of trade secrecy. AI Now’s Whittaker singles out corporate secrecy as confounding the already problematic practices of affect recognition: “Because most of these technologies are being developed by private companies, which operate under corporate secrecy laws, our report makes a strong recommendation for protections for ethical whistleblowers within these companies.” Such whistleblowing will continue to be crucial, wrote Whittaker, because so many data firms treat privacy and transparency as a liability, rather than a virtue: “The justifications vary, but mostly [AI developers] disclaim all responsibility and say it’s up to the customers to decide what to do with it.” Pseudoscience paired with state-of-the-art computer engineering and placed in a void of accountability. What could go wrong?


December 6, 2018

Here’s Facebook’s Former “Privacy Sherpa” Discussing How to Harm Your Facebook Privacy

In 2015, rising star, Stanford University graduate, winner of the 13th season of “Survivor,” and Facebook executive Yul Kwon was profiled by the news outlet Fusion, which described him as “the guy standing between Facebook and its next privacy disaster,” guiding the company’s engineers through the dicey territory of personal data collection. Kwon described himself in the piece as a “privacy sherpa.” But the day it was published, Kwon was apparently chatting with other Facebook staffers about how the company could vacuum up the call logs of its users without the Android operating system getting in the way by asking the user for specific permission, according to confidential Facebook documents released today by the British Parliament.

“This would allow us to upgrade users without subjecting them to an Android permissions dialog.”

The document, part of a larger 250-page parliamentary trove, shows what appears to be a copied-and-pasted recap of an internal chat conversation between various Facebook staffers and Kwon, who was then the company’s deputy chief privacy officer and is currently working as a product management director, according to his LinkedIn profile.

The conversation centered on an internal push to change which data Facebook’s Android app had access to: to grant the software the ability to record a user’s text messages and call history, to interact with Bluetooth beacons installed by physical stores, and to offer better-customized friend suggestions and news feed rankings. This would be a momentous decision for any company, to say nothing of one with Facebook’s privacy track record and reputation, even in 2015, of sprinting through ethical minefields. “This is a pretty high-risk thing to do from a PR perspective but it appears that the growth team will charge ahead and do it,” Michael LeBeau, a Facebook product manager, is quoted in the document as saying of the change.

Crucially, LeBeau commented, according to the document, such a privacy change would require Android users to essentially opt in; Android, he said, would present them with a permissions dialog soliciting their approval to share call logs when they upgraded to a version of the app that collected the logs and texts. Furthermore, the Facebook app itself would prompt users to opt in to the feature, through a notification referred to by LeBeau as “an in-app opt-in NUX,” or new user experience. The Android dialog was especially problematic; such permission dialogs “tank upgrade rates,” LeBeau stated.

But Kwon appeared to later suggest that the company’s engineers might be able to upgrade users to the log-collecting version of the app without any such nagging from the phone’s operating system. He also indicated that the plan to obtain text messages had been dropped, according to the document. “Based on [the growth team’s] initial testing, it seems this would allow us to upgrade users without subjecting them to an Android permissions dialog at all,” he stated. Users would have to click to effect the upgrade, he added, but, he reiterated, “no permissions dialog screen.”

It’s not clear if Kwon’s comment about “no permissions dialog screen” applied to the opt-in notification within the Facebook app. But even if the Facebook app still sought permission to share call logs, such in-app notices are generally designed expressly to get the user to consent and are easy to miss or misinterpret. Android users rely on standard, clear dialogs from the operating system to inform them of serious changes in privacy. There’s good reason Facebook would want to avoid “subjecting” its users to a screen displaying exactly what they’re about to hand over to the company.

It’s not clear how this specific discussion was resolved, but Facebook did eventually begin obtaining call logs and text messages from users of its Messenger and Facebook Lite apps for Android. This proved highly controversial when revealed in press accounts and by individuals posting on Twitter after receiving data Facebook had collected on them; Facebook insisted it had obtained permission for the phone log and text message collection, but some users and journalists said it had not.

It’s Facebook’s corporate stance that the documents released by Parliament “are presented in a way that is very misleading without additional context.” The Intercept has asked both Facebook and Kwon personally about what context is missing here, if any, and will update with their response.


December 3, 2018

Homeland Security Will Let Computers Predict Who Might Be a Terrorist on Your Plane — Just Don’t Ask How It Works

You’re rarely allowed to know exactly what’s keeping you safe. When you fly, you’re subject to secret rules, secret watchlists, hidden cameras, and other trappings of a plump, thriving surveillance culture. The Department of Homeland Security is now complicating the picture further by paying a private Virginia firm to build a software algorithm with the power to flag you as someone who might try to blow up the plane.

The new DHS program will give foreign airports around the world free software that teaches itself who the bad guys are, continuing society’s relentless swapping of human judgment for machine learning. DataRobot, a northern Virginia-based automated machine learning firm, won a contract from the department to develop “predictive models to enhance identification of high risk passengers” in software that should “make real-time prediction[s] with a reasonable response time” of less than one second, according to a technical overview that was written for potential contractors and reviewed by The Intercept. The contract assumes the software will produce false positives and requires that the terrorist-predicting algorithm’s accuracy increase when confronted with such mistakes. DataRobot is currently testing the software, according to a DHS news release.

The contract also stipulates that the software’s predictions must be able to function “solely” using data gleaned from ticket records and demographics — criteria like origin airport, name, birthday, gender, and citizenship. The software can also draw from slightly more complex inputs, like the name of the associated travel agent, seat number, credit card information, and broader travel itinerary. The overview document describes a situation in which the software could “predict if a passenger or a group of passengers is intended to join the terrorist groups overseas, by looking at age, domestic address, destination and/or transit airports, route information (one-way or round trip), duration of the stay, and luggage information, etc., and comparing with known instances.”
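As a purely illustrative aside, a model of the kind the overview document describes (a binary “risk” classifier trained solely on ticket-record fields and scored in real time) might look something like the minimal Python sketch below. Every field name, record, and label here is invented for illustration; none of it comes from DHS or DataRobot, which have not disclosed how their system works.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Invented ticket-record fields of the kind the overview document names.
records = pd.DataFrame({
    "origin_airport":      ["IAD", "LHR", "DXB", "CDG"],
    "destination_airport": ["LHR", "IST", "IAD", "IST"],
    "route_type":          ["round_trip", "one_way", "round_trip", "one_way"],
    "age":                 [34, 22, 51, 27],
    "stay_duration_days":  [7, 90, 3, 60],
    "checked_bags":        [1, 0, 2, 0],
})
# In the contract's terms, labels would come from "actual disposition results"
# fed back into the system; these are placeholders.
labels = [0, 1, 0, 0]

categorical = ["origin_airport", "destination_airport", "route_type"]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough",  # numeric columns pass through unchanged
    )),
    ("classify", GradientBoostingClassifier(random_state=0)),
])
model.fit(records, labels)

# "Real-time prediction with a reasonable response time": scoring one new passenger.
new_passenger = pd.DataFrame([{
    "origin_airport": "IAD", "destination_airport": "IST", "route_type": "one_way",
    "age": 24, "stay_duration_days": 75, "checked_bags": 0,
}])
print(model.predict_proba(new_passenger)[0, 1])  # a "risk" score between 0 and 1
```

The point of the sketch is how little it takes: a handful of booking fields, a label fed back from past screenings, and a score spit out in milliseconds.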

DataRobot’s bread and butter is turning vast troves of raw data, which all modern businesses accumulate, into predictions of future action, which all modern companies desire. Its clients include Monsanto and the CIA’s venture capital arm, In-Q-Tel. But not all of DataRobot’s clients are looking to pad their revenues; DHS plans to integrate the code into an existing DHS offering called the Global Travel Assessment System, or GTAS, a toolchain that has been released as open source software and which is designed to make it easy for other countries to quickly implement no-fly lists like those used by the U.S.

According to the technical overview, DHS’s predictive software contract would “complement the GTAS rule engine and watch list matching features with predictive models to enhance identification of high risk passengers.” In other words, the government has decided that it’s time for the world to move beyond simply putting names on a list of bad people and then checking passengers against that list. After all, an advanced computer program can identify risky fliers faster than humans could ever dream of and can also operate around the clock, requiring nothing more than electricity. The extent to which GTAS is monitored by humans is unclear. The overview document implies a degree of autonomy, listing as a requirement that the software should “automatically augment Watch List data with confirmed ‘positive’ high risk passengers.”

The document does make repeated references to “targeting analysts” reviewing what the system spits out, but the underlying data-crunching appears to be almost entirely the purview of software, and it’s unknown what ability said analysts would have to check or challenge these predictions. In an email to The Intercept, Daniel Kahn Gillmor, a senior technologist with the American Civil Liberties Union, expressed concern with this lack of human touch: “Aside from the software developers and system administrators themselves (which no one yet knows how to automate away), the things that GTAS aims to do look like they could be run mostly ‘on autopilot’ if the purchasers/deployers choose to operate it in that manner.” But Gillmor cautioned that even including a human in the loop could be a red herring when it comes to accountability: “Even if such a high-quality human oversight scheme were in place by design in the GTAS software and contributed modules (I see no indication that it is), it’s free software, so such a constraint could be removed. Countries where labor is expensive (or controversial, or potentially corrupt, etc) might be tempted to simply edit out any requirement for human intervention before deployment.”

“Countries where labor is expensive might be tempted to simply edit out any requirement for human intervention.”

For the surveillance-averse, consider the following: Would you rather a group of government administrators, who meet in secret and are exempt from disclosure, decide who is unfit to fly? Or would it be better for a computer, accountable only to its own code, to make that call? It’s hard to feel comfortable with the very concept of profiling, a practice that so easily collapses into prejudice rather than vigilance. But at least with uniformed government employees doing the eyeballing, we know who to blame when, say, a woman in a headscarf is needlessly hassled, or a man with dark skin is pulled aside for an extra pat-down.

If you ask DHS, this is a categorical win-win for all parties involved. Foreign governments are able to enjoy a higher standard of security screening; the United States gains some measure of confidence about the millions of foreigners who enter the country each year; and passengers can drink their complimentary beverage knowing that the person next to them wasn’t flagged as a terrorist by DataRobot’s algorithm. But watchlists, among the most notorious features of post-9/11 national security mania, are of questionable efficacy and dubious legality. A 2014 report by The Intercept pegged the U.S. Terrorist Screening Database, an FBI data set from which the no-fly list is excerpted, at roughly 680,000 entries, including some 280,000 individuals with “no recognized terrorist group affiliation.” That same year, a U.S. district court judge ruled in favor of an ACLU lawsuit, declaring the no-fly list unconstitutional. The list could only be used again if the government improved the mechanism through which people could challenge their inclusion on it — a process that, at the very least, involved human government employees, convening and deliberating in secret.


Diagram from a Department of Homeland Security technical document illustrating how GTAS might visualize a potential terrorist onboard during the screening process.

Document: DHS

But what if you’re one of the inevitable false positives? Machine learning and behavioral prediction are already widespread; The Intercept reported earlier this year that Facebook is selling advertisers on its ability to forecast and pre-empt your actions. The consequences of botching consumer surveillance are generally pretty low: If a marketing algorithm mistakenly predicts your interest in fly fishing where there is none, the false positive is an annoying waste of time. The stakes at the airport are orders of magnitude higher.

What happens when DHS’s crystal ball gets it wrong — when the machine creates a prediction with no basis in reality and an innocent person with no plans to “join a terrorist group overseas” is essentially criminally defamed by a robot? Civil liberties advocates not only worry that such false positives are likely, possessing a great potential to upend lives, but also question whether such a profoundly damning prediction is even technologically possible. According to DHS itself, its predictive software would have relatively little information upon which to base a prognosis of impending terrorism.

Even with such mundane data inputs, privacy watchdogs cautioned, prejudice and bias will follow — something only worsened by self-teaching artificial intelligence. Faiza Patel, co-director of the Brennan Center’s Liberty and National Security Program, told The Intercept that giving predictive abilities to watchlist software will present only the veneer of impartiality. “Algorithms will both replicate biases and produce biased results,” Patel said, drawing a parallel to situations in which police are algorithmically allocated to “risky” neighborhoods based on racially biased crime data, a process that results in racially biased arrests and a checkmark for the computer. In a self-perpetuating bias machine like this, said Patel, “you have all the data that’s then affirming what the algorithm told you in the first place,” which creates “a kind of cycle of reinforcement just through the data that comes back.” What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

What kind of people should get added to a watchlist? The ones who resemble those on the watchlist.

Indeed, DHS’s system stands to deliver a computerized turbocharge to the bias that is already endemic to the American watchlist system. The overview document for the Delphic profiling tool made repeated references to the fact that it will create a feedback loop of sorts. The new system “shall automatically augment Watch List data with confirmed ‘positive’ high risk passengers,” one page read, with quotation marks doing some very real work. The software’s predictive abilities “shall be able to improve over time as the system feeds actual disposition results, such as true and false positives,” said another section. Given that the existing watchlist framework has ensnared countless thousands of innocent people, the notion of “feeding” such “positives” into a machine that will then search even harder for that sort of person is downright dangerous. It also becomes absurd: When the criteria for who is “risky” and who isn’t are kept secret, it’s quite literally impossible for anyone on the outside to tell what is a false positive and what isn’t. Even for those without civil libertarian leanings, the notion of an automatic “bad guy” detector that uses a secret definition of “bad guy” and will learn to better spot “bad guys” with every “bad guy” it catches would be comical were it not endorsed by the federal government.
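To see why critics describe this as a self-perpetuating loop, consider a toy simulation, entirely invented and not drawn from the DHS documents, in which a screening model learns only from who is already on the watchlist, and the people it flags are then fed back in as confirmed “positives”:

```python
import random

random.seed(0)

# Invented population: each traveler has one visible trait, "A" or "B",
# that has nothing to do with any actual risk.
population = ["A"] * 5000 + ["B"] * 5000

# A starting watchlist that, for historical reasons, already skews toward trait A.
watchlist = ["A"] * 60 + ["B"] * 40

for generation in range(5):
    # "Learn" from the current watchlist: a trait's score is simply how often
    # it appears among the people already listed.
    rate = {t: watchlist.count(t) / len(watchlist) for t in ("A", "B")}

    # Screen a batch of travelers and flag the 50 who most "resemble those on
    # the watchlist" (score plus a little noise): a fixed screening budget.
    travelers = random.sample(population, 1000)
    travelers.sort(key=lambda t: rate[t] + random.gauss(0, 0.05), reverse=True)
    flagged = travelers[:50]

    # "Automatically augment Watch List data with confirmed 'positive' high risk
    # passengers": the flagged travelers become tomorrow's training data.
    watchlist.extend(flagged)
    share_a = watchlist.count("A") / len(watchlist)
    print(f"generation {generation}: watchlist is {share_a:.0%} trait A")
```

Because the flagged travelers are never independently verified, the list drifts further toward whichever trait it already over-represents with every pass: the “cycle of reinforcement” Patel describes.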

For those troubled by the fact that this system is not only real but currently being tested by an American company, the fact that neither the government nor DataRobot will reveal any details of the program is perhaps the most troubling of all. When asked where the predictive watchlist prototype is being tested, the DHS tech directorate spokesperson, John Verrico, told The Intercept, “I don’t believe that has been determined yet,” and stressed that the program was meant for use with foreigners. Verrico referred further questions about test location and which “risk criteria” the algorithm will be trained to look for back to DataRobot. Libby Botsford, a DataRobot spokesperson, initially told The Intercept that she had “been trying to track down the info you requested from the government but haven’t been successful,” and later added, “I’m not authorized to speak about this. Sorry!” Subsequent requests sent to both DHS and DataRobot were ignored.

Verrico’s assurance — that the watchlist software is an outward-aiming tool provided to foreign governments, not a means of domestic surveillance — is an interesting feint given that Americans fly through non-American airports in great numbers every single day. But it obscures ambitions much larger than GTAS itself: The export of opaque, American-style homeland security to the rest of the world and the hope of bringing every destination in every country under a single, uniform, interconnected surveillance framework. Why go through the trouble of sifting through the innumerable bodies entering the United States in search of “risky” ones when you can move the whole haystack to another country entirely? A global network of terrorist-scanning predictive robots at every airport would spare the U.S. a lot of heavy, politically ugly lifting.

“Automation will exacerbate all of the worst aspects of the watchlisting system.”

Predictive screening further shifts responsibility. The ACLU’s Gillmor explained that making these tools available to other countries may mean that those external agencies will prevent people from flying so that they never encounter DHS at all, which makes DHS less accountable for any erroneous or damaging flagging, a system he described as “a quiet way of projecting U.S. power out beyond U.S. borders.” Even at this very early stage, DHS seems eager to wipe its hands of the system it’s trying to spread around the world: When Verrico brushed off questions of what the system would consider “risky” attributes in a person, he added in his email that “the risk criteria is being defined by other entities outside the U.S., not by us. I would imagine they don’t want to tell the bad guys what they are looking for anyway. ;-)” DHS did not answer when asked whether there were any plans to implement GTAS within the United States.

Then there’s the question of appeals. Those on DHS’s current watchlists may seek legal redress; though the appeals system is generally considered inadequate by civil libertarians, it offers at least a theoretical possibility of removal. The documents surrounding DataRobot’s predictive modeling contract make no mention of an appeals system for those deemed risky by an algorithm, nor is there any requirement in the DHS overview document that the software must be able to explain how it came to its conclusions. Accountability remains a fundamental problem in the fields of machine learning and computerized prediction, with some computer scientists adamant that an ethical algorithm must be able to show its work, and others objecting on the grounds that such transparency compromises the accuracy of the predictions.

Gadeir Abbas, an attorney with the Council on American-Islamic Relations, who has spent years fighting the U.S. government in court over watchlists, saw the DHS software as only more bad news for populations already unfairly surveilled. The U.S. government is so far “not able to generate a single set of rules that have any discernible level of effectiveness,” said Abbas, and so “the idea that they’re going to automate the process of evolving those rules is another example of the technology fetish that drives some amount of counterterrorism policy.”

The entire concept of making watchlist software capable of terrorist predictions is mathematically doomed, Abbas added, likening the system to a “crappy Minority report. … Even if they make a really good robot, and it’s 99 percent accurate,” the fact that terror attacks are “exceedingly rare events” in terms of naked statistics means you’re still looking at “millions of false positives. … Automation will exacerbate all of the worst aspects of the watchlisting system.”

The ACLU’s Gillmor agreed that this mission is simply beyond what computers are even capable of:

For very-low-prevalence outcomes like terrorist activity, predictive systems are simply likely to get it wrong. When a disease is a one-in-a-million likelihood, the surest bet is a negative diagnosis. But that’s not what these systems are designed to do. They need to “diagnose” some instances positively to justify their existence. So, they’ll wrongly flag many passengers who have nothing to do with terrorism, and they’ll do it on the basis of whatever meager data happens to be available to them.
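Gillmor’s point is, at bottom, base-rate arithmetic. A back-of-the-envelope sketch with deliberately generous, made-up numbers (a detector that catches 99 percent of real threats, wrongly flags just 1 percent of everyone else, and screens a billion boardings a year) shows why its flags would still be almost entirely false positives:

```python
# Made-up but generous numbers for the arithmetic above.
passengers_screened = 1_000_000_000   # assumed: roughly a year of boardings worldwide
prevalence = 1 / 1_000_000            # assumed: 1 in a million passengers is a real threat
sensitivity = 0.99                    # the detector catches 99% of real threats
false_positive_rate = 0.01            # and wrongly flags 1% of innocent travelers

true_threats = passengers_screened * prevalence
true_positives = true_threats * sensitivity
false_positives = (passengers_screened - true_threats) * false_positive_rate

print(f"correctly flagged:   {true_positives:,.0f}")
print(f"incorrectly flagged: {false_positives:,.0f}")
print(f"share of flags that are real threats: "
      f"{true_positives / (true_positives + false_positives):.4%}")
```

Under those assumptions, roughly 990 genuine threats are flagged alongside some 10 million innocent travelers, about one real hit per 10,000 flags: the “millions of false positives” Abbas warns about.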

Predictive software is not just the future, but the present. Its expansion into the way we shop, the way we’re policed, and the way we fly will soon be commonplace, even if we’re never aware of it. Designating enemies of the state based on a crystal ball locked inside a box represents a grave, fundamental leap in how societies appraise danger. The number of active, credible terrorists-in-waiting is an infinitesimal slice of the world’s population. The number of people placed on watchlists and blacklists is significant. Letting software do the sorting — no matter how smart and efficient we tell ourselves it will be — will likely do much to worsen this inequity.


November 2, 2018

Facebook Allowed Advertisers to Target Users Interested in “White Genocide” — Even in Wake of Pittsburgh Massacre

Apparently fueled by anti-Semitism and the bogus narrative that outside forces are scheming to exterminate the white race, Robert Bowers murdered 11 Jewish congregants as they gathered inside their Pittsburgh synagogue, federal prosecutors allege. But despite long-running international efforts to debunk the idea of a “white genocide,” Facebook was still selling advertisers the ability to market to those with an interest in that myth just days after the bloodshed.

Earlier this week, The Intercept was able to select “white genocide conspiracy theory” as a pre-defined “detailed targeting” criterion on the social network to promote two articles to an interest group that Facebook pegged at 168,000 users large and defined as “people who have expressed an interest or like pages related to White genocide conspiracy theory.” The paid promotion was approved by Facebook’s advertising wing. After we contacted the company for comment, Facebook promptly deleted the targeting category, apologized, and said it should have never existed in the first place.

Our reporting technique was the same as one used by the investigative news outlet ProPublica to report, just over one year ago, that in addition to soccer dads and Ariana Grande fans, “the world’s largest social network enabled advertisers to direct their pitches to the news feeds of almost 2,300 people who expressed interest in the topics of ‘Jew hater,’ ‘How to burn jews,’ or, ‘History of “why jews ruin the world.”’” The report exposed how little Facebook was doing to vet marketers, who pay the company to leverage personal information and inclinations in order to gain users’ attention — and who provide the foundation for its entire business model. At the time, ProPublica noted that Facebook “said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.” Rob Leathern, a Facebook product manager, assured the public, “We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”

Leathern’s “new guardrails” don’t seem to have prevented Facebook from manually approving our ad buy the same day it was submitted, despite its explicit labeling as “White Supremacy – Test.”


From the outside, it’s impossible to tell exactly how Facebook decides who among its 2 billion users might fit into the “white genocide” interest group or any other cohort available for “detailed targeting.” The company’s own documentation is very light on details, saying only that these groups are based on indicators like “Pages [users] engage with” or “Activities people engage in on and off Facebook related to things like their device usage, purchase behaviors or intents and travel preferences.” It remains entirely possible that some people lumped into the “white genocide conspiracy theory” fandom are not, in fact, true believers, but may have interacted with content critical of this myth, such as a news report, a fact check, or academic research on the topic.

But there are some clues as to who exactly is counted among the 168,000. After selecting “white genocide conspiracy theory” as an ad target, Facebook provided “suggestions” of other, similar criteria, including interest in the far-right-wing news outlets RedState and the Daily Caller — the latter of which, co-founded by right-wing commentator Tucker Carlson, has repeatedly been criticized for cozy connections to white nationalists and those sympathetic to them. Other suggested ad targets included mentions of South Africa; a common trope among advocates of the “white genocide” myth is the so-called plight of white South African farmers, who they falsely claim are being systematically murdered and pushed off their land. The South African hoax is often used as a cautionary tale for American racists — like, by all evidence, Robert Bowers, the Pittsburgh shooter — who fear a similar fate is in store for them, whether from an imagined global Jewish conspiracy or a migrant “caravan.” But the “white genocide” myth appears to have a global appeal, as well: About 157,000 of the accounts with the interest are outside of the U.S., concentrated in Africa and Asia, although it’s not clear how many of these might be bots.

A simple search of Facebook pages also makes plain that there are tens of thousands of users with a very earnest interest in “white genocide,” shown through the long list of groups with names like “Stop White South African Genocide,” “White Genocide Watch,” and “The last days of the white man.” Images with captions like “Don’t Be A Race Traitor” and “STOP WHITE GENOCIDE IN SOUTH AFRICA” are freely shared in such groups, providing a natural target for anyone who might want to pay to promote deliberately divisive and incendiary hate-based content.

A day after Facebook confirmed The Intercept’s “white genocide” ad buy, the company deleted the category and canceled the ads. Facebook spokesperson Joe Osborne provided The Intercept with the following statement, similar to the one he gave ProPublica over a year ago: “This targeting option has been removed, and we’ve taken down these ads. It’s against our advertising principles and never should have been in our system to begin with. We deeply apologize for this error.” Osborne added that the “white genocide conspiracy theory” category had been “generated through a mix of automated and human reviews, but any newly added interests are ultimately approved by people. We are ultimately responsible for the segments we make available in our systems.” Osborne also confirmed that the ad category had been used by marketers, but cited only “reasonable” ad buys targeting “white genocide” enthusiasts, such as news coverage.

Facebook draws a distinction between the hate-based categories ProPublica discovered, which were based on terms users entered into their own profiles, and the “white genocide conspiracy theory” category, which Facebook itself created via algorithm. The company says that it’s taken steps to make sure the former is no longer possible, although this clearly did nothing to deter the latter. Interestingly, Facebook said that technically the white genocide ad buy didn’t violate its ad policies, because it was based on a category Facebook itself created. However, this doesn’t square with the automated email The Intercept received a day after the ad buy was approved, informing us that “We have reviewed some of your ads more closely and have determined they don’t comply with our Advertising Policies.”

Still, the company conceded that such ad buys should have never been possible in the first place. Vice News and Business Insider also bought Facebook ads this week to make a different point about a related problem: that Facebook does not properly verify the identities of people who take out political ads. It’s unclear whether the “guardrails” Leathern spoke of a year ago will simply take more time to construct, or whether Facebook’s heavy reliance on algorithmic judgment simply careened through them.


October 30, 2018

Never Trust a Reporter Who Bounces in His Chair With Glee

You wouldn’t trust a music critic who’s buddies with the band, nor should you trust a tech reporter who hoots and hollers whenever Tim Cook takes the stage. And you definitely, absolutely should be suspicious of a political reporter who sits down with President Donald Trump and looks as if he’s meeting his favorite baseball player.

Axios and HBO gave viewers the first look at a new television show by teaming up with the White House to unveil a new entry in its xenophobic domestic policy lineup.

Along these lines, Tuesday morning held a sort of public relations convergence of interests that typifies the worst of political reporting: Axios and HBO gave viewers the first look at a new television show by teaming up with the White House to unveil a new entry in its xenophobic domestic policy lineup.

This sort of journalism is among the most obsequious — perhaps tied with tech coverage, at times — but the new video clip debuted today by Axios may be the ne plus ultra of media toadying. Axios has become a political media sensation in a very short amount of time, excelling both at cranking out access-based White House scoops and at servility, like some sort of 1600 Pennsylvania Avenue-based Roomba.

Today’s video interview snippet, plucked from the upcoming Axios show on HBO, put the website’s bright star Jonathan Swan in a chair across from Trump. Prompted by Swan, Trump announced an innovative plan to bar nonwhite infants from attaining U.S. citizenship. It was, in Swan’s words, an “exciting” moment to behold.

“Excited to share” is usually how one begins a sentence about a pregnancy or a promotion, not the revelation of a plot to deny citizenship to newborns. The families affected by this attempt to subvert the 14th Amendment might have other words for the announcement, but not Swan, who took the news (and his ability to report it first) as a big, shiny win — merely another dose of stellar exclusive digital content to be consumed, a brilliant bit of multimedia cross-promotion.

The video itself, however, is somehow even worse than the tweet. We see firsthand just how pumped up Swan is to discuss Trump’s long-term ethnic exclusion strategies with the big man himself. At one point, Swan cajoles him into explaining just how Trump might actually execute this unilateral change to the Constitution, prompting Trump to speculate that he might use an executive order. “Exactly!” exclaims Swan, so amped up that he is literally unable to stay in his seat. Palpably thrilled, Swan points an eager finger at the president. “Tell me more!” he says next, all too cheerily, as if he’s conducting a Q&A with The Avengers at Comic-Con — and not being given the opportunity to interrogate the president of the United States. Swan is literally grinning throughout: The feeling that a high-five is imminent is hard to shake off.

This is grotesque on the face of it. Politics — particularly the politics of the day — aren’t supposed to be fun, nor exciting, nor any other chipper keywords you might feed into Netflix on a rainy evening. American politics in our present day are anguishing, alienating, bitter, bleak, cynical, and hateful. To take this opportunity to challenge Trump on his immigration policies at this time — when, in just one example, there’s a very good argument to be made that these policies just led to the worst act of anti-Semitic carnage in American history — and not only squander it but enjoy it, that’s something worse than monstrous. “What a revolting display,” Splinter’s Libby Watson remarked. Watson also noted that “when the president says other countries don’t have birthright citizenship, which is a lie, Swan says nothing, and Axios’ story was only updated after publication to reflect that reality.” Revolting, indeed.

It’s not that this is just terrible journalism or that Swan should consider nurturing his gifts for public relations in another sphere. What we’re watching here is a perverse amalgam of news, social media, entertainment, and the White House. It is truly a cross-promotional tour de force, but one that leaves a sour taste, a worse example of a familiar genre: It is a new kind of product launch. We’re watching the residue left behind as media industry stability evaporates, when “scoops” at all costs is one of the few currencies left, where shame is a luxury. This is truly the Trump effect at its greatest strength: The president’s lack of shame has always been his biggest selling point for fans, and Axios, in its bid for its own fans, is cribbing not only his style, but his politics as well. Axios wholesale adopted the big right-wing unveil as an audience-building tool.

Perhaps it’s too much to ask that our colleagues in D.C. not count themselves among enthusiasts for this brand of far-right politics, but, please, at least feel ashamed enough to stay in your chair.


October 12, 2018

Some Silicon Valley Superstars Ditching Saudi Advisory Board After Khashoggi Disappearance, Some Stay Silent

While the world was grappling with the apparent grisly murder of Saudi dissident and Washington Post journalist Jamal Khashoggi, the Saudi government decided to announce a new band of influential Western allies, some plucked from the uppermost echelon of Silicon Valley, who would serve on an advisory board for NEOM, the kingdom’s improbable, exorbitant plan to build a “megacity” in the desert.

But almost as soon as his participation was revealed, Sam Altman, head of famed venture capital firm Y Combinator, announced that he is “suspending” his role with NEOM, while two others on the star-studded list denied that they were participating.

Altman was listed as a member of the new board, along with legendary tech investor Marc Andreessen, notorious Uber founder (and ousted ex-CEO) Travis Kalanick, IDEO CEO Tim Brown, and Dan Doctoroff of Sidewalk Labs, a subsidiary of Google-owner Alphabet. With the United States itself now forced into a momentarily uncomfortable spot by its longtime affection for and deep political ties to the Saudis, this was a less than ideal time for Americans to come out as friends of the Kingdom.

Despite Saudi Arabia’s vast history of human rights abuses and violent foreign policy, in a statement to The Intercept, Altman announced that this reported assassination and dismemberment was a step too far:

“I am suspending my involvement with the NEOM advisory board until the facts regarding Jamal Khashoggi’s disappearance are known. This is well out of my area of expertise, so I don’t plan to comment on the case until the investigation is finished. I remain a huge believer in the importance of building smart cities.”

A source close to members of the advisory board who spoke on the condition of anonymity described to The Intercept recent conversations with other board members in which they expressed that they were “inclined to just stay on the board” and continue helping plan the Saudis’ fantastical oasis megacity, despite Khashoggi’s reported assassination. “I’m always surprised by what ends up being a red line for people and what doesn’t,” this source added, though they reserved praise for Saudi’s crown prince Mohammad bin Salman, commonly known as MBS. Silicon Valley figures “have been cautiously optimistic” about the Saudi government, they explained. “MBS cares about technology, wants to invest in technology in way other world leaders aren’t, and has a boldness that is exciting to people. People want to believe.” Asked if they thought the murder of a dissident journalist might change this admiration for MBS, the source replied that “if this is true as alleged, it could change many peoples’ temperature on that.”

For his part, IDEO’s Tim Brown “has chosen not to participate in the advisory board at this time,” according to IDEO spokesperson Sara Blask, who would provide no further comment about why he was listed as an advisory board member and why he is now declining to participate.

Dan Levitan, a spokesperson for Sidewalk Labs’ Dan Doctoroff, told The Intercept that Doctoroff’s “inclusion on that list is incorrect,” and that “he is not a member of the NEOM advisory board,” but would not answer whether Doctoroff was ever a member of the advisory board, or whether he has discussed the NEOM project with the Saudi government in any other capacity in the past.

Requests for comment sent to Kalanick, Andreessen, and fellow NEOM board member Masayoshi Son of Japanese software mammoth SoftBank were not answered.

Top photo: Y Combinator President Sam Altman speaks onstage during TechCrunch Disrupt SF 2017 in San Francisco, Calif., on Sept. 19, 2017.


October 9, 2018

Government Report: “An Entire Generation” of American Weapons Is Wide Open to Hackers

A new report from the U.S. Government Accountability Office brings both good and bad news. For governments around the world that might like to sabotage America’s military technology, the good news is that this would be all too easy to do: Testers at the Department of Defense “routinely found mission-critical cyber vulnerabilities in nearly all weapon systems that were under development” over a five-year period, the report said. For Americans, the bad news is that up until very recently, no one seemed to care enough to fix these security holes.

In 1991, the report noted, the U.S. National Research Council warned that “system disruptions will increase” as the use of computers and networks grows and as adversaries attack them. The Pentagon more or less ignored this and at least five subsequent warnings on the subject, according to the GAO, and hasn’t made a serious effort to safeguard the vast patchwork of software that controls planes, ships, missiles, and other advanced ordnance against hackers.

The sweeping report drew on nearly 30 years of published research, including recent assessments of the cybersecurity of specific weapon systems, as well as interviews with personnel from the Department of Defense, the National Security Agency, and weapons-testing bodies. It covered a broad span of American weapons, examining systems at all of the service branches and in space.

The report found that “mission-critical cyber vulnerabilities” cropped up routinely during weapons development and that test teams “easily” took over real systems without detection “using relatively simple tools and techniques,” exploiting “basic issues such as poor password management and unencrypted communications.” Testers could also download and delete data, in one case exfiltrating 100 gigabytes of material, and could tap into operators’ terminals, in one instance popping up computer dialogs asking the operators “to insert two quarters to continue.” But a malicious attacker could pull off much worse than jokes about quarters, warns the GAO: “In one case, the test team took control of the operators’ terminals. They could see, in real-time, what the operators were seeing on their screens and could manipulate the system.”

Posing as surrogates for, say, Russian or Chinese military hackers, testers sometimes found easy victories. “In some cases,” the GAO found, “simply scanning a system caused parts of the system to shut down,” while one “test team was able to guess an administrator password in nine seconds.” The testers found embarrassing, elementary screw-ups of the sort that would get a middle school computer lab administrator in trouble, to say nothing of someone safeguarding lethal weapon systems. For example, “multiple weapon systems used commercial or open source software, but did not change the default password when the software was installed, which allowed test teams to look up the password on the Internet.”

“In some cases, simply scanning a system caused parts of the system to shut down.”

Asked how she thought a culture of cyber-insecurity could flourish at an institution as guarded as the military, Cristina Chaplain, a director at the GAO, explained that the problem may be that the armed services overestimated the value of secrecy. “For the past 20 years, their focus has been on [networking] systems together,” at the expense of connecting them securely, because it was simply assumed that “security by obscurity” would be all that was needed — that, say, a classified bomb designed and built in secret is impervious to outside threats by virtue of being kept hidden. The whole culture of military secrecy, the belief that “they’re so standalone and so stovepiped that they’re almost secure just by virtue of that,” as Chaplain put it, is much to blame.

The findings are all the more disturbing given that the GAO said they “likely represent a fraction of total vulnerabilities” due to limitations in how the Defense Department tests for cybersecurity.

Although the GAO analyzed real weapon systems used by the Pentagon, the report is light on specifics for security and classification purposes. It lacks findings about, say, a particular missile or a particular ship, and Chaplain would not comment on whether vulnerabilities were found in nuclear weapon systems, citing classification issues. But the document nonetheless reveals colossal negligence in the broader process of building and buying weapons. For years, the Department of Defense did not prioritize cybersecurity when acquiring weapon systems, even as it sought to further automate such systems, the GAO said. Up until about three years ago, some in the department avoided cybersecurity assessments, saying requirements were not clearly spelled out, asserting that they “did not believe cybersecurity applied to weapon systems,” according to the report, complaining that “cybersecurity tests would interfere with operations,” or rejecting the tests as “unrealistic” because the simulated attackers had an unfair amount of insider information — an objection the NSA itself dismissed as unrealistic.

Even when weapons program officials were aware of problems, the issues were often ignored. In one case, an assessment found 19 of 20 vulnerabilities unearthed in a previous assessment had not been fixed. When asked why, “program officials said they had identified a solution, but for some reason it had not been implemented,” the GAO said. In other cases, weapons operators were so used to a broken product that warnings of a simulated breach didn’t even register. “Warnings were so common that operators were desensitized to them,” the report found.

Today, cybersecurity audits of weapons are of increasing importance to the Pentagon, according to the report. But it’s incredibly hard to fix security holes after the fact. “Bolting on cybersecurity late in the development cycle or after a system has been deployed is more difficult and costly than designing it in from the beginning,” the GAO noted. One weapons program needed months to apply patches that were supposed to be applied within three weeks, the report said, because of all the testing required. Other programs are deployed around the world, further slowing the spread of fixes. “Some weapon systems are operating, possibly for extended periods, with known vulnerabilities,” the report stated.

This, then, is the crisis: The U.S. has created a computerized global military using complex, interconnected, and highly vulnerable tools — “an entire generation of systems that were designed and built without adequately considering cybersecurity,” as the GAO put it. And now it must fix it. This is nothing less than an engineering nightmare — but far preferable to what will happen if one of these software flaws is exploited by someone other than a friendly government tester.

Top photo: A U.S. Air Force crew chief conducts preflight checks on Sept. 20, 2018 during Combat Archer, a two-week, air-to-air Weapons System Evaluation Program to prepare and evaluate operational fighter squadrons’ readiness for combat operations, at Tyndall Air Force Base, Fla.


September 26, 2018

The Government Wants Airlines to Delay Your Flight So They Can Scan Your Face

Omnipresent facial recognition has become a golden goose for law enforcement agencies around the world. In the United States, few are as eager as the Department of Homeland Security. American airports are currently being used as laboratories for a new tool that would automatically scan your face — and confirm your identity with U.S. Customs and Border Protection — as you prepare to board a flight, despite the near-unanimous objections from privacy advocates and civil libertarians, who call such scans invasive and pointless.

According to a new report on the Biometric Entry-Exit Program by DHS itself, we can add another objection: Your flight could be late.

Although the new report, published by Homeland Security’s Office of the Inspector General, is overwhelmingly supportive in its evaluation of airport-based biometric surveillance — the practice of a computer detecting your face and pairing it with everything else in the system — the agency notes some hurdles from a recent test code-named “Sprint 8.” Among them, the report notes with palpable frustration, is that airlines insist on letting their passengers depart on time, rather than subjecting them to a Homeland Security surveillance prototype plagued by technical issues and slowdowns:

Demanding flight departure schedules posed other operational problems that significantly hampered biometric matching of passengers during the pilot in 2017. Typically, when incoming flights arrived behind schedule, the time allotted for boarding departing flights was reduced. In these cases, CBP allowed airlines to bypass biometric processing in order to save time. As such, passengers could proceed with presenting their boarding passes to gate agents without being photographed and biometrically matched by CBP first. We observed this scenario at the Atlanta Hartsfield-Jackson International Airport when an airline suspended the biometric matching process early to avoid a flight delay. This resulted in approximately 120 passengers boarding the flight without biometric confirmation.

“Repeatedly permitting airlines to revert to standard flight-boarding procedures without biometric processing may become a habit that is difficult to break.”

The report goes on to again bemoan “airlines’ recurring tendency to bypass the biometric matching process in favor of boarding flights for an on-time departure.” DHS, apparently, is worried that it could be habit-forming for the airlines: “Repeatedly permitting airlines to revert to standard flight-boarding procedures without biometric processing may become a habit that is difficult to break.”

These concerns, however, are difficult to square with a later assurance that “airline officials we interviewed indicated the processing time was generally acceptable and did not contribute to departure delays.”

The report ends up concluding that this and other logistical issues “pose significant risks to CBP scaling up the biometric program to process 100 percent of all departing passengers by 2021.” And it has some ideas for doing something about it, namely “enforcement mechanisms or back-up procedures to prevent airlines from bypassing biometric processing prior to flight boarding.”

As the success of biometric-reliant line-skipping services — like TSA Pre-Check and Clear — has shown, many flyers are happy to trade their irreplaceable biometrics for convenience. The prospect of missing a connecting flight, however, could bring out the pitchforks.

Top photo: Station Manager Chad Shane, right, of SAS airlines, ushers a boarding passenger through the process as Dulles airport officials unveil new biometric facial recognition scanners on Sept. 6, 2018.


September 22, 2018

Facebook Brushed Off the U.N. Five Separate Times Over Calls For Murde...

Facebook’s complete and total inability to keep itself from being a convenient tool for genocidal incitement in Myanmar has been well-covered, now a case study in how a company with such immense global power can so completely fail to use it for good. But a new report released this week by the United Nations fact-finding mission in Myanmar, where calls for the slaughter of Muslims have enjoyed all the convenience of a modern Facebook signal boost, makes clear just how unprepared and uninterested the company was when it came to its role in an ethnic massacre.

In a recent New Yorker profile of Facebook founder and CEO Mark Zuckerberg, he responds to his company’s role in the crisis — which the U.N. has described as “determining” — with all the urgency and guilt of a botched restaurant order: “I think, fundamentally, we’ve been slow at the same thing in a number of areas, because it’s actually the same problem. But, yeah, I think the situation in Myanmar is terrible.” Zuckerberg added that the company needs to “move from what is fundamentally a reactive model” when it comes to blocking content that’s fueled what the U.N. described last year as a “textbook example of ethnic cleansing.”

The new report reveals just how broken this “reactive model” truly is.

According to the 479-page document, and as flagged in a broader Guardian story this week, “the Mission itself experienced a slow and ineffective response from Facebook when it used the standard reporting mechanism to alert the company to a post targeting a human rights defender for his alleged cooperation with the Mission.” What follows is the most clear-cut imaginable violation of Facebook’s rules, followed by the most abject failure to enforce them when it mattered most:

The post described the individual as a “national traitor”, consistently adding the adjective “Muslim”. It was shared and re-posted over 1,000 times. Numerous comments to the post explicitly called for the person to be killed, in unequivocal terms: “Beggar-dog species. As long as we are feeling sorry for them, our country is not at peace. These dogs need to be completely removed.” “If this animal is still around, find him and kill him. There needs to be government officials in NGOs.” “Wherever they are, Muslim animals don’t know to be faithful to the country.” “He is a Muslim. Muslims are dogs and need to be shot.” “Don’t leave him alive. Remove his whole race. Time is ticking.” The Mission reported this post to Facebook on four occasions; in each instance the response received was that the post was examined but “doesn’t go against one of [Facebook’s] specific Community Standards”. The Mission subsequently sent a message to an official Facebook email account about the matter but did not receive a response. The post was finally removed several weeks later but only through the support of a contact at Facebook, not through the official channel. Several months later, however, the Mission found at least 16 re-posts of the original post still circulating on Facebook. In the weeks and months after the post went online, the human rights defender received multiple death threats from Facebook users, warnings from neighbours, friends, taxi drivers and other contacts that they had seen his photo and the posts on Facebook, and strong suggestions that the post was an early warning. His family members were also threatened. The Mission has seen many similar cases where individuals, usually human rights defenders or journalists, become the target of an online hate campaign that incites or threatens violence.

This is a portrait of a system of rules that is completely broken, not merely flawed — a system run by a company that oversees the online lives of roughly two billion people. Had someone at the Mission not had a “contact at Facebook” who could help, it’s easy to imagine that the post in question would never have been taken down — not that it mattered, given that it was soon re-posted and shared with impunity. Facebook’s typical mea culpa talking point has been that it regrets being “too slow” to curb these posts, when in fact it had done something worse by creating the illusion of meaningful rules and regulations.

It says everything about Facebook’s priorities that it would work so hard to penetrate poorer, “emerging” markets while creating conditions under which an “unequivocal” call to murder “Muslim animals” would be considered in compliance with its rules. The company, which reportedly had fewer than five Burmese-speaking moderators in 2015, now says it’s hiring a fleet of new contractors with language skills sufficient to field such reports — perhaps by the second or third attempt, if not the first — but Zuckerberg et al. have done little to convince the world that the company has learned anything from Myanmar. As usual, Facebook will slowly clean up this mess only after it’s been sufficiently yelled at.

Top photo: Facebook’s corporate headquarters in Menlo Park, Calif., on March 21, 2018.


September 5, 2018

Sheryl Sandberg Misled Congress About Facebook’s Conscience

Facebook chief operating officer Sheryl Sandberg draped herself in the star-spangled banner of American principles before today’s Senate Select Intelligence Committee hearing on social media. Sandberg proclaimed that democratic values of free expression were integral to the company’s conscience. “We would only operate in a country where we could do so in keeping with our values,” she went on. Either this was a lie told under oath, or Facebook has some pretty lousy values.

“We would only operate in a country where we could do so in keeping with our values.”

Sen. Marco Rubio, R-Fla., questioned Sandberg and Twitter CEO Jack Dorsey about the fact that their companies are both ostensibly American, but also firms with users around the world — including in countries with legal systems and values that differ drastically from those of the United States. Rubio cited various governments that crack down on, say, pro-democracy activism and that criminalize such speech. How can a company like Facebook claim that it’s committed to free expression as a global value while maintaining its adherence to the rule of law at the local level? When it comes to democratic values, Rubio asked, “Do you support them only in the United States or are these principles that you feel obligated to support around the world?”

Sandberg, as always, didn’t miss a beat: “We support these principles around the world.” Shortly thereafter she made the claim that Facebook simply would not do business in a country where these values couldn’t be maintained.

Based on the information Facebook itself makes available, this is false. In its latest publicly available “transparency report,” Facebook says it helps block free expression as a matter of policy — so long as it’s technically legal in a given market. For instance, in the United Arab Emirates, a country that Human Rights Watch says “arbitrarily detains and in some cases forcibly disappears individuals who criticize the authorities,” Facebook does its part to help.

According to its most recent update on its compliance with UAE takedown requests — when a government or company requests that the social media giant remove content from its site — Facebook “restricted access to items in the UAE, all reported by the Telecommunications Regulatory Authority, a federal UAE government entity responsible for [information technology] sector in the UAE. The content was reported for hate speech and was attacking members of the royal family, which is against local laws.” It’s hard to imagine even Facebook’s legendary public relations team could construe censoring criticism of “the royal family” as anything resembling a democratic value. A similar entry from the report, on Pakistan, notes that Facebook “restricted access to items that were alleged to violate local laws prohibiting blasphemy and condemnation of the country’s independence.” (Facebook declined to comment on the record for this story.)

Twitter’s Dorsey, to his credit, admitted that his company is essentially trapped between being a business and not wanting to cave to unjust — albeit locally legal — censorship requests. “We would like to fight for every single person being able to speak freely and see everything, but we have to realize that it’s going to take some bridges to get there,” Dorsey told Rubio when asked about takedown requests from the Turkish government.

According to Adrian Shahbaz, who researches internet liberties for Freedom House, Dorsey’s reply was appreciably “more grounded in reality” than Sandberg’s, which seemed to claim that her company didn’t need to compromise. Shahbaz pointed out that there will be a natural, inherent tension for any global company “tasked with regulating the public space for every single country in the world.”

Rather than pointing to local laws against, say, blasphemy, Shahbaz suggested companies like Facebook “should be defending democratic values and abiding by its own terms of service” instead of local frameworks that might stifle political speech. One tack would be for Facebook to hold up its corporate terms of service as something “more like a constitution, [saying] these are the values we believe in around the world,” regardless of jurisdiction.

“Facebook should explain what it means by democratic values if it complies with laws that don’t comply with those values.”

Such a stance would also require the spine to say no to a government whose citizens are potentially lucrative data fodder. Cynthia Wong, a senior internet researcher at Human Rights Watch, said that although it’s heartening that the social media firm has made public human rights commitments, such as joining the Global Network Initiative, “Facebook should explain what it means by democratic values if it complies with laws that don’t comply with those values.” Wong added that with Facebook’s controversial “real names” policy, which forbids the use of pseudonyms on the network, the social media company “creates a lot of danger” for democratic activists “who don’t want to use their real name because they’re facing reprisal.”

For Rubio, these questions are essentially about whether companies like Facebook are truly “built on these core values” or whether they are merely “global companies like all these other companies that come around here, who see their number one obligation to make money.” So, which is it? The easiest way to explain the apparent contradiction between “we would only operate in a country where we could do so in keeping with our values” and helping a royal family stifle criticism is that, yes, Facebook is a global company that sees the generation of profit as its number one obligation. Facebook’s values aren’t so much the promotion of global democracy as the promotion of global Facebook.

Top photo: Facebook COO Sheryl Sandberg testifies before the Senate Intelligence Committee on Capitol Hill in Washington, D.C., on Sept. 5, 2018.


September 4, 2018

Are We Making Elections Less Secure Just to Save Time?

Something strange happens on election night. With polls closing, American supporters of both parties briefly, intensely align as one: We all want to know who’s going to win, and we don’t want to wait one more minute. The ravenous national appetite for an immediate victor, pumped up by frenzied cable news coverage and now Twitter, means delivering hyper-updated results and projections before any official tally is available. But the technologies that help ferry lightning-quick results out of polling places and onto CNN are also some of the riskiest, experts say.

It’s been almost two years since Russian military hackers attempted to hijack computers used by both local election officials and VR Systems, an e-voting company that helps make Election Day possible in several key swing states. Since then, reports detailing the potent duo of inherent technical risk and abject negligence have made election security a national topic. In November, millions of Americans will vote again — but despite hundreds of millions of dollars in federal aid poured into beefing up the security of your local polling station, tension between experts, corporations, and the status quo over what “secure” even means is leaving key questions unanswered: Should every single vote be recorded on paper, so there’s a physical trail to follow? Should every election be audited after the fact, as both a deterrent and check against fraud? And, in an age where basically everything else is online, should election equipment be allowed anywhere near the internet?

The commonsense answer to this last question — that sounds like a terrible idea — belies its complexity. On the one hand, the public now receives regular, uniform warnings from the intelligence community, Congress, and other entities privy to sensitive data: Bad actors abroad have and will continue to try to use computers to penetrate or disrupt our increasingly computerized vote. Just this past March, the Senate Intelligence Committee recommended that “[a]t a minimum, any machine purchased going forward should have a voter-verified paper trail and no WiFi capability.” Given that a hacker on the other side of the planet will have trouble connecting to a box in Virginia that’s not connected to anything at all, it stands to reason that walling off these sensitive systems from the rest of the world will make them safer.

Tammy Patrick, a former Arizona election officer and current senior adviser at the Democracy Fund, which, like The Intercept, is funded by eBay founder Pierre Omidyar, said that although she isn’t aware of a jurisdiction that “connects their voting equipment using Wi-Fi,” other wireless technologies are sometimes built in. Additionally, computers only one degree removed from the digital ballot boxes themselves will often connect to the internet, Patrick explained. “What does happen more frequently is that the vote storage unit may be removed [from the voting machine] and used to modem in results,” she said. Some election workers send vote tallies from tablets using Wi-Fi, while in other jurisdictions, poll workers come to centralized locations that have either hard-wired or wireless internet access. You can think of it as a sort of malware cross contamination, whereby a computer kept segregated from the internet is vulnerable nonetheless because of the internet-connected computers it comes into contact with. It’s the same basic concept that U.S. and Israeli hackers used to attack Iranian centrifuge computers that were technically walled off from the net.

Despite all these warnings, experts worry that wireless features — which could save a skilled hacker or other meddler the trouble of having to get physically close to the systems in question — are being pushed hard for reasons that just aren’t good enough, at a time when many other security issues remain unresolved. “At the local level, it is a serious struggle to get the basics right,” security researcher and cryptographer Kenneth White told The Intercept. “When we add in, for example, cellular or Wi-Fi connectivity to the actual voting equipment, it only makes security that much more difficult and the risk of compromise so much greater.”

According to one former federal election official who spoke to The Intercept on the condition of anonymity because he was not permitted to speak to the press, many states already employ wireless connections in one form or another and are loath to give them up now, even in the name of making the vote harder to hack. “Election officials do understand that it’s a security issue,” this person told The Intercept, “but this capability is already embedded into their election process and they rely upon it. Making that sort of logistical change to their process – during an election year – is arduous. This is especially true for results transmission on election night.”

Some voting machines allow preliminary results to be beamed to a county office using the same kind of modem found in smartphones, rather than being physically carried from each polling station. This means early results can be shared instantly — but it also means that the data is only as secure as the cellular company carrying it. Such connections, which not only transmit data but also receive it, provide yet another potential weak point that hackers could use to pry into a machine and compromise it. Wi-Fi skeptics like George Washington University computer science professor Poorvi Vora have argued that such vulnerabilities must be eliminated. “We have to reduce all opportunities for interference. Our systems are only as secure as their weakest links,” Vora wrote earlier this year on an election security email list maintained by NIST, the National Institute of Standards and Technology.

Modern voting systems — the equipment used to set up a ballot, cast votes, tabulate those votes, report them, and audit the entire process — are essentially just extremely specialized computers that, like your home laptop, run software, store inputs, and send outputs. As with any computer, it’s possible that some clever person can trick the machine into doing something it’s not supposed to, whether for a personal thrill or to serve a more sinister agenda.

Most methods of beefing up a computer’s security are accompanied by minor drawbacks: Putting a password on your phone means having to unlock it; anti-virus software on your computer will eat up some of its memory; and encrypting your email with PGP requires a small seminar on the fundamentals of cryptography. Securing the vote is a tradeoff like any other, and the wireless debate exposes a perennial tension: The easier we make it to run an election, the easier we may make it to meddle in that election.

Additionally, so much of the voting process, from registering voters to counting their ballots, now occurs digitally and across a patchwork of computers that rendering all these computers unable to talk to one another looks increasingly impractical. It’s also the case that many people involved in both the private-sector manufacturing and public-sector administration of elections want wireless connectivity for the same reasons you want it on your iPhone and laptop: It makes life a lot easier. Imagine you’re relying on wireless connections to administer an important vote, where delays and snags on Election Day could make your district the subject of humiliating headlines and local scorn.

“We don’t need to look far to see examples of what happens when a jurisdiction doesn’t report quickly,” Tammy Patrick cautioned. “When there are delays in reporting, it can jeopardize the reputation of the election official, their office, and call into question the legitimacy of the election itself — even when the delays are clearly documented and understood.” The former federal election official agreed, saying that the push for early results is potentially perilous:

“In my opinion our nation is overly concerned with obtaining the results on election night. Election administrators will have already been putting in extreme overtime heading up to a larger general election. And now they must stay and continue to work after a 12-15 hour day to tabulate the results. These conditions can create an environment where corners are sometimes cut and mistakes made – although administrators work hard to prevent that from happening.”

Disagreements over wireless electoral gear can get ugly. On the obscure email list run by NIST, where a diverse crowd of academics, private-sector executives, and voting officials are trying to hammer out voluntary election security guidelines, the wireless question is at an impasse.

In the exchange with Vora earlier this year, an executive at Votem, a company that sells smartphone voting software, scoffed at the demand for a blanket ban on election-related wireless as “lazy,” taking particular issue with “the idea that any of us in this discussion can possibly know enough about the future to say with certainty X technology should be banned or not.” (In a Votem blog post published a month earlier, the executive, David Wallick, wrote that the company’s “greatest challenge” was “pushing the envelope” with regard to technologies that make the public uncomfortable.)

Piling on, Bernie Hirsch, an executive at e-voting firm MicroVote, suggested that just like Wi-Fi, e-voting paper trails could be “hacked” by some malicious mailman — so why should one be forbidden while the other was left alone? Duncan Buell, a professor of computer science at the University of South Carolina, wasn’t amused, calling Hirsch’s response “at least hugely facetious and at worst a genuine troll.”

“Ballot corruption in a paper system involves complicit human actors on-site dealing with physical objects,” Buell noted. “As is well-known to all of us, corruption/disruption of electronic systems (ballot or otherwise) can be done without detection by almost anyone from almost anywhere on the planet.”

It’s not just vendors, loath to ban a feature today that they might be able to market tomorrow, who are pushing for wireless despite emphatic warnings against it. Running an election is an enormous, thankless undertaking, and being able to transmit data through the air means fewer steps required in person. On a recent conference call between NIST email list members, an election administrator in Texas argued that permitting wireless connections to their machines meant that workers could turn them on remotely while en route to the warehouse where they’re stored, saving everyone time spent standing around and waiting for computers to boot up, according to call participants.

Although it’s possible to “harden” a wireless connection against an attacker for applications like this, doing so “is not child’s play and is the kind of thing that can be easily misconfigured,” cautioned Joseph Lorenzo Hall, chief technologist with the Center for Democracy & Technology and a scholar of voting insecurity. As with any kind of computer security, there are many, many opportunities for someone to quietly screw up. “There are stronger wireless protocols that could be used,” added cryptographer Kenneth White, “but they are considerably more difficult to administer and maintain.” Even the best security precautions on paper can be undone instantly by a single error, what White refers to as the “church basement volunteer problem.”

The desire to effortlessly beam unofficial election results “is definitely a real pressure” in the debate over wireless, agrees Hall. “Both voters and the press feel that there should be an almost immediate answer, when in fact the real answer takes 15 to 30 days in many places.” Patrick concurs, adding that “the pressure comes from all sides — media, candidates, parties, voters,” and that “no one is immune from wanting instant gratification, and perhaps catharsis.”

To White and many of his peers, there’s one simple takeaway: Get rid of as many of those screw-up opportunities as possible. “Do we want to assure the integrity of our votes or not? If we do, and we want it at scale, then paper-verifiable, electronic voting systems [are] our best path forward,” White said. “The less complex and connected we can make those systems, the more faith we can have that every citizen’s vote cast is recorded.”


August 20, 2018

Facebook Suspended a Latin American News Network and Gave Three Different Reasons Why

On August 13, Facebook shut down the English-language page of TeleSUR, blocking access for roughly half a million followers of the leftist media network until the page was abruptly reinstated two days later. Facebook has provided three different explanations for the temporary disappearance, all contradicting one another, and not a single one making sense.

TeleSUR was created by Venezuela’s then-president Hugo Chavez in 2005 and co-funded by hemispheric neighbors Cuba, Bolivia, Nicaragua, and Uruguay — Argentina pulled support for the web and cable property in 2016. As a state-owned media property, it exists somewhere on the same continuum as RT and Al Jazeera: Like the former, TeleSUR has been criticized as a nakedly partisan governmental mouthpiece, and like the latter, it does engage in real news reporting. But putting questions of bias and agenda aside, TeleSUR does seem to exist on a separate plane from, say, InfoWars, which exists primarily to peddle its particular, patently false genre of right-wing paranoia fan fiction packaged as news (and brain pills), as opposed to pushing some garden-variety political agenda. Unlike RT, TeleSUR hasn’t been singled out for a role in laundering disinformation for military intelligence purposes, nor is it a hoax factory, a la Alex Jones.

So it was unexpected when TeleSUR English blinked out of existence on the 13th, and even stranger when Facebook struggled to explain its own actions. At the time of its suspension, TeleSUR received this boilerplate message from Facebook:

Hello,

Your Page “teleSUR English” has been removed for violating our Terms of Use. A Facebook Page is a distinct presence used solely for business or promotional purposes. Among other things, Pages that are hateful, threatening or obscene are not allowed. We also take down Pages that attack an individual or group, or that are set up by an unauthorised individual. If your Page was removed for any of the above reasons, it will not be reinstated. Continued misuse of Facebook’s features could result in the permanent loss of your account.

The Facebook Team

Later that day, a Facebook customer support agent told the network that the suspension appeared to be due to a technical glitch — a go-to explanation for the company — rather than a violation of the company’s Terms of Use, adding that the issue was “under analysis by the engineering department.”

We have a mishmash of incompatible justifications.

The next day, Facebook wrote TeleSUR again, this time saying that the company’s engineers had conducted “several tests” and assuring the outlet that “technicians” continued to look for an answer. On Wednesday, after a 48-hour blackout, Facebook wrote once more to say the page had been suspended due to a mysterious “instability on the platform,” which had now been corrected. It’s unclear whether Facebook would have corrected this “instability” had TeleSUR not complained, and equally unclear why the company had initially claimed TeleSUR had violated its terms of service.

But Facebook has offered a third reason for suspending TeleSUR: In an emailed statement to The Intercept, a company spokesperson said, “The Page was temporally unpublished to protect it after we detected suspicious activity.” The term “suspicious activity” does not appear in Facebook’s terms of service. The spokesperson would not explain what “suspicious activity” was observed on TeleSUR’s page or define the term, nor did the spokesperson explain why the suspension was initially blamed on rule-breaking by TeleSUR and then on technical problems on the social network’s end.

Even if you were to assume the worst about TeleSUR — that it exists to parrot the opinions of repressive regimes — and even if you could come up with an argument that TeleSUR in fact ought to be suspended for one reason or another, it’s hard to imagine an argument that Facebook has no obligation to explain its actions in a manner that could be described as even mostly coherent, if not transparent. This is typical behavior for the company, which both touts its use of automated rule enforcement and scapegoats the algorithms when they go awry. In TeleSUR’s case, there’s no word as to whether a human or a string of content-policing computer code “unpublished” the page, mistakenly or not, justifiably or otherwise. Instead, we have a mishmash of incompatible justifications, the latest in a long stream out of a company that’s struggled to create intelligible rules for acceptable content and behavior, let alone enforce them. To its credit, Facebook published a long account of its reasoning behind suspending InfoWars’ Alex Jones, though this likely has more to do with public relations angst than some commitment to consistency and transparency. For a company that has testified before Congress and bought billboards around the country saying it’s working on accountability and earning public trust, this is a problem — it’s difficult to picture anything further from accountability than enforcing rules from behind a curtain.
