Government is using private companies to pry into people’s lives. While some good comes from it, at what cost to privacy are people allowing this to continue?
February 3, 2020
By: Bobby Casey, Managing Director GWP
So much news is surfacing about the various directions from which your privacy is being attacked that I had to break this up into two parts! We cover privacy protection a lot here. The government has made my job more essential, which is sad in the same way it has made lawyers, CPAs, and other bureaucratic navigators essential.
Government aside, protecting your privacy is always important. But the fact that the government is as much of a threat to your security as any private-sector hacker is disconcerting, to say the least.
The hacker never promised to protect and defend your rights. Meanwhile, the government has a rather sly way of getting people’s consent to access and collect any data they want.
We last left off at facial recognition.
Facial Recognition Continued
Clearview AI has a facial recognition app which boasts a database of over 3 billion images scraped from the internet and other apps. With one picture of a person, this app can follow that person wherever their image appears publicly.
Social media sites such as Facebook and Twitter strictly prohibit the scraping of the images on their platforms; and they expressly prohibit the scraping of images for the purposes of facial recognition. Clearview is directly violating those terms, and no one seems to care.
According to the NYT:
“The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and who they knew.”
This isn’t the first technology of its kind, but it is the first to be put to use. Facebook, for example, had the technological capability to release something similar, but refrained due to the potential for it to be misused.
Moreover, law enforcement agencies have been using facial recognition technology to some extent for decades. They were mainly limited, however, to government photos like driver’s licenses and mug shots.
Misuse could be a sick police officer stalking a romantic interest or ex-spouse, but misuse could also be unintentional. Facial recognition technology has a track record of misidentifying people of color especially.
While Clearview won’t release the list, it claims that over 600 law enforcement agencies around the US have started using its services.
It’s important to note that this tool has not been vetted for accuracy by an independent third-party expert, nor has its ability to secure data been tested. These law enforcement agencies are uploading very sensitive images to its servers, ostensibly crossing their fingers that those images won’t get released to the general public.
Law enforcement agencies are closing a lot of open cases involving theft, assault, and identity theft using this tool. Unlike the TSA, this endeavor has produced some strong results.
However, the company admits that matching is more challenging with surveillance footage: the database consists of images taken at roughly eye level, while surveillance cameras shoot from much higher angles, mounted on posts and above building doorways.
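Modern face search systems typically map each photo to an embedding vector and compare vectors for similarity; a steep camera angle shifts the embedding, which is why those matches get harder. Here is a minimal toy sketch of that idea. The vectors, names, and threshold are made-up placeholders for illustration, not Clearview’s actual method:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query, database, threshold=0.85):
    """Return IDs of database entries whose embedding matches `query`.

    `database` is a list of (person_id, embedding) pairs. A real system
    would use learned face embeddings from a neural network; the
    3-dimensional vectors below are placeholders.
    """
    return [pid for pid, emb in database
            if cosine_similarity(query, emb) >= threshold]

# Toy data: a frontal photo and a high-angle photo of the same person
# yield nearby (but not identical) vectors; a stranger's face is far away.
frontal = (0.9, 0.1, 0.4)
high_angle = (0.7, 0.3, 0.5)   # same face, steeper camera angle
stranger = (0.1, 0.9, 0.2)

db = [("alice", frontal), ("bob", stranger)]
print(search(high_angle, db))          # still matches "alice"
print(search(high_angle, db, 0.99))    # too strict: the match is lost
```

The trade-off in the last two lines is the core problem: loosen the threshold and you misidentify strangers; tighten it and angled surveillance shots stop matching.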
Clearview claims that if your online images are set to private, they won’t get scraped. If your images have already been scraped, however, it’s too late: you’re in the database. Enter the latest fight in the EU over the “right to be forgotten”. Do people have a right to request that their information and images be deleted from databases?
In the EU, the answer since 2014 has been yes: you have that right. The US has not passed any such legislation, due in large part to the conflict it poses with free speech rights.
The New York Times article concludes:
“Even if Clearview doesn’t make its app publicly available, a copycat company might, now that the taboo is broken. Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable — and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”
Congress to Give DHS More Invasive Powers
The name of the bill is: Cybersecurity and Vulnerability Identification and Notification Act of 2020. It’s on its way to the House for a vote.
As Reason explains, this bill gives the “Department of Homeland Security (DHS) more power to subpoena information from internet and telecommunications companies, including subscriber names, addresses, and telephone numbers.”
It’s meant to complement the Cybersecurity and Infrastructure Security Agency (CISA), relying on a very fast-and-loose policy of “see something, say something”. It’s the equivalent of red flag laws.
This, like the Clearview technology, is good when used only for good and its limited intended purposes. The thing is, the government knows no bounds. There are no good people restraining themselves and only using these tools for righteous purposes. That’s how these things work their way through the door and become normalized.
Countless agencies and policies were supposed to be temporary and limited, and are now mainstays of the political ecosystem.
Here are the criteria for the DHS to issue a subpoena:
The legislation grants that “the director of Homeland Security may issue a subpoena for the production of information necessary to identify” suspected security vulnerabilities in an “information system connected to the internet,” so long as DHS says the potential risk is connected “to critical infrastructure.”
That’s a pretty long-winded way to say, “DHS can get a subpoena whenever it damn well pleases.”
According to the government’s own website, under the Commercial Facilities Sector section, critical infrastructure could include: casinos, hotels, motels, campgrounds, zoos, shopping malls, self-storage facilities, condominiums, banks, insurance companies, and motion picture studios.
If you look at all the sectors, “critical infrastructure” is pretty much everywhere, which all but guarantees that burden is always going to be met.
And let’s face it, what the government defines as “risk” is shoddy at best. All it needs to do is drum up fear about some virus or terrorist plot and publish it in the news, and the DHS can check that box as well.
The overall burden for this subpoena is quite low. But the subpoena itself can demand IP addresses and subscriber/user information from telecommunications companies.
Police Access DNA Websites
Who didn’t see this coming? It didn’t take a foil hat to know that these companies would soon be in the crosshairs of the surveillance state. Even if the companies themselves were totally naive and never considered using their databases this way, given how all databases are being tapped by the government, this was inevitable.
A warrant is supposed to be very specific in nature: who you are searching, where you are searching, and what you are searching for.
As John Doe Summonses taught us, no longer does the government have to specify WHO they are looking for.
In Florida, Detective Michael Fields solved several cases using the database of GEDmatch, one of the smaller DNA search banks out there, with 1.3 million registrants. Ancestry.com and 23andMe are the two largest, with 15 million and 10 million genetic data registrants respectively.
GEDmatch changed its policy to where its users would need to opt IN to being part of police searches. That cut the searchable database down to 185,000 people. Detective Fields sought the cooperation of a judge and a warrant to override that policy, granting him access once again to the entire database of GEDmatch.
The argument boiled down to the ends justifying the means.
The issue is, he doesn’t know who he’s looking for. This is a John Doe warrant. He didn’t get access to one or two suspects. He got access to the whole database.
The precedent has been set. If police can get access to GEDmatch, they can get Ancestry and 23andMe. The DNA home tests were NEVER designed to be sophisticated enough to be used in criminal investigations. In fact, the margin of error on just determining genealogy is rather staggering.
Ambry Genetics, a company that interprets data from consumer DNA tests, examined the raw data from 49 patients that had already received results from at-home tests. Its re-analysis, recently published in the journal Nature, found that 40 percent of the variants reported to patients were not actually present at all.
Particularly cringe-worthy, MIT Technology Review reports that many of the false-positive calls were related to genes that are related to an increased cancer risk — meaning that tests could have given families a big scare for no reason. And this high error rate is particularly concerning given that the U.S. Food and Drug Administration just approved 23andMe to sell genetic tests for cancer risk.
Large amounts of data and broad scoped warrants are a terrible combination.
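The arithmetic behind that combination is worth spelling out. Even a test with a tiny per-comparison error rate produces a crowd of false hits when run against an entire database, and any single hit is then overwhelmingly likely to point at an innocent person. A back-of-the-envelope sketch, where the false-match rate is an illustrative assumption rather than a measured figure for any real DNA test:

```python
# Base-rate arithmetic for dragnet database searches.
# The false-match rate is an illustrative assumption, not a measured
# property of any real consumer DNA test.

database_size = 1_300_000     # roughly GEDmatch's reported registrant count
false_match_rate = 0.0001     # assumed odds an innocent profile still "matches"

# Expected number of innocent people flagged by one broad search:
expected_false_hits = database_size * false_match_rate   # about 130

# Even if the true perpetrator is in the database and always matches,
# the chance that any single hit is actually the perpetrator:
p_hit_is_guilty = 1 / (1 + expected_false_hits)          # under 1%

print(expected_false_hits, p_hit_is_guilty)
```

Under these assumptions, one warrant-backed sweep would drag roughly 130 innocent people into an investigation to find one suspect, which is exactly the trade the next paragraphs object to.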
People willingly handed their data over for online services like free email, social media, and directions, with the express purpose of receiving a particular service in return. The understanding was that they were giving their information for the purposes of using those tools, not as some form of future self-incrimination.
Understandably, the purpose of collecting the information on the part of these private businesses was not to incriminate their users, or make them vulnerable. Yet, intended use, and actual use are proving to be two wildly different things.
Despite the good that has been done, I’m reluctant to be as Machiavellian as the Florida judge here. Putting innocent people’s rights on the chopping block to possibly catch a guilty person isn’t a trade to be made so lightly, or at all for that matter.
Admittedly, I use social media, I have a credit card, and I have a Gmail account. I’m not going to live under a rock and become a prisoner of all this. I also take a lot of precautions to ensure that my information and identity are protected.
All of this is happening, our indignation notwithstanding. What are you doing to protect yourself?