Does Apple Care More About Your Privacy Than the Government?

Solicitor Kevin Donoghue asks if Apple cares about privacy more than the government.

By Kevin Donoghue, solicitor

 
Timing is everything.
 
On Tuesday Apple showcased its latest smartphone, the iPhone X.
 
The next day Professor Paul Wiles, the Commissioner for the Retention and Use of Biometric Material (‘the biometrics commissioner’), published his annual report.
 
On the face of it, these events are unrelated. But are they?
 

Apple iPhone X Privacy Concerns

 
Tim Cook, the CEO of Apple, thinks the iPhone X is “the biggest leap forward since the original iPhone”. That’s a bold claim. The original iPhone revolutionised the smartphone market, and Apple has sold an estimated 700 million units.
 
One reason Apple is so excited about the iPhone X is “Face ID”. Face ID uses the phone’s front-facing “TrueDepth” camera system to map the contours of your face. The phone will use that biometric data to recognise you. You can then:
 
· Unlock your phone without a password. 
· Use your phone to buy things. 
· Operate third-party applications.
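
For readers who want to see how this works in practice, here is a minimal sketch of how a third-party app might call on Face ID, using Apple’s LocalAuthentication framework (the function name and wiring are illustrative, not Apple’s own code):

    import LocalAuthentication

    // A minimal sketch: gate a feature behind Face ID (or Touch ID on
    // older devices). The app never receives the face data itself; the
    // operating system returns only a pass-or-fail answer.
    func unlockWithBiometrics(completion: @escaping (Bool) -> Void) {
        let context = LAContext()
        var error: NSError?

        // Check that biometric authentication is available on this device.
        guard context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                                        error: &error) else {
            completion(false) // fall back to a passcode, for example
            return
        }

        // Ask the system to authenticate the user. The biometric matching
        // happens inside the device's secure hardware, not in the app.
        context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                               localizedReason: "Unlock your account") { success, _ in
            completion(success)
        }
    }

The privacy-relevant design choice is that the matching happens on the device: the app is told whether you passed, but never sees the biometric data itself.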
 
Apple says, “Face ID is the future of how we unlock our smartphones and protect our sensitive information.” 
 
No more remembering passwords or using thumbprints. Handy, but it raises security and privacy issues.
 
For example, there is a risk that a thief could hold the phone up to your face to unlock it, before taking it away from you. No password; no problem.
 
Also, your face is different to an easily changed password: it is a permanent, public, and unique feature. A bad actor who obtained the biometric data from the phone could abuse it. Imagine the life-changing financial and other harm that could follow.
 
The day after Apple’s announcement, US Senator Al Franken wrote to Apple expressing his concerns. Among other things, he sought answers about how Apple intended to use and share the data, saying,
 
…Apple itself could use the data to benefit other sectors of its business, sell it to third parties for surveillance purposes, or receive law enforcement requests to access it[s] facial recognition system – eventual uses that may not be contemplated by Apple customers.
 
And then there’s the bigger question about where this biometric data is stored or shared. Apple says that biometric data will not go to the cloud. It will stay on the phone. But it’s not clear if the company can extract that data remotely or through physical access to the phone.

Also, the company says it does not have plans to upload biometric data. But will it in future? And how will you know? Will you read the updated terms and conditions or just click “accept”? Chances are it’s the latter. Time reported that it would take you 76 work days to read the privacy policies you come across as an internet user in a year. Even if Apple told you it was going to share your biometric data, would you notice?

Biometric Data Held by Police

 
These are serious issues. Apple must address them to reassure customers and legislators worldwide. Which brings me to the second piece of news this week: the biometrics commissioner’s latest report. For those of us with an interest in privacy concerns, it makes for grim reading.
 
As I previously wrote, police use facial recognition technology without proper oversight. The Home Office has largely ignored the issue, which suggests
 
a wilful disregard of government duties and the democratic process.
 
One reason for this lack of oversight is that DNA and fingerprint biometrics are treated differently to facial images. In his latest report, Paul Wiles noted that the National DNA Database and Fingerprint Strategy Board has statutory powers under the Protection of Freedoms Act 2012. Among other things, it:
  1. Monitors the performance of the National DNA Database.
  2. Gives guidance to the police on the collection and use of DNA.

Facial images held on the Police National Database fall outside its remit. This is concerning, as Professor Wiles notes in his report,
 
The use of facial images by the police has gone far beyond using them for custody purposes. In July 2016 there were 19 million facial images on the Police National Database, 16,644,143 of which had been enrolled in the facial image recognition gallery and were (and remain) searchable using facial recognition software.
 
Even 19 million images is an underestimate, as it does not include all those held by the Metropolitan Police Service, the UK’s largest force. The true number is more than 20 million.
 
And unlike in the National DNA Database, facial images are stored in an “anarchic” way by the various police services. Not all forces upload facial biometrics and images to the Police National Database. Durham, Leicestershire, and the Metropolitan Police Service also hold images in their own databases. These databases use different systems and software, and image quality varies. So, according to Her Majesty’s Inspectorate of Constabulary (Scotland),
 
This means that differing standards are being applied to a common UK database.
 
As Paul Wiles warns,
 
This situation could easily produce differential decision making and potentially runs the risk of false intelligence or wrongful allegations.
 

Facial Recognition Technology Trial

The risk Professor Wiles described was highlighted at this year’s Notting Hill Carnival. As Liberty reported, the Metropolitan Police Service trialled facial recognition technology for the second time at the event, which involved an estimated 2 million carnival-goers. To say the technology has a long way to go would be kind. Silkie Carlo of Liberty found a
 
worryingly inaccurate and painfully crude facial recognition operation where the rules are devised on the spot.
 
She described how the Metropolitan Police,
 
had constructed a “bespoke dataset” for the weekend – more than 500 images of people they were concerned might attend. Some police were seeking to arrest, others they were looking to apprehend if they were banned from attending.

The facial recognition system failed in its task. It couldn’t tell men from women. It produced around 30 false positives. As Ms Carlo explained,
 
At least five of these they had pursued with interventions, stopping innocent members of the public who had, they discovered, been falsely identified.
 
There was no concern about this from the project leaders.
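
Rough arithmetic shows why results like this are predictable. The figures below are assumptions for illustration, not the Met’s actual numbers, but they make the point: when almost nobody in a crowd of two million is on a 500-image watch list, even a very accurate system will mostly flag innocent people.

    // Illustrative arithmetic only: every figure here is an assumption.
    let attendees = 2_000_000.0
    let falseMatchRate = 0.000_01  // assume 1 wrong match per 100,000 faces scanned
    let genuineHits = 10.0         // assume 10 watch-list members attend and are spotted

    let falseAlarms = attendees * falseMatchRate  // 20 innocent people flagged
    let wrongShare = falseAlarms / (falseAlarms + genuineHits)

    print("Expected false alarms: \(Int(falseAlarms))")     // 20
    print("Share of alerts that are wrong: \(wrongShare)")  // ≈ 0.67, i.e. two alerts in three

On these assumptions, two out of every three alerts point at an innocent person, and that is before allowing for the “worryingly inaccurate” performance Liberty actually observed.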
 

Racial Bias

A serious issue with facial recognition technology is racial bias. As The Atlantic, an American magazine, explains,
 
Facial-recognition systems are more likely either to misidentify or fail to identify African Americans than other races, errors that could result in innocent citizens being marked as suspects in crimes. And though this technology is being rolled out by law enforcement across the country, little is being done to explore—or correct—for the bias.
 
This can happen for many reasons, including:
  • The engineer developing the system designs it to focus on facial features that are more easily seen in some races than others.
  • The engineer’s own race may influence them when designing the system to distinguish faces.

The software may not be designed to be “racist”, but that doesn’t lessen its effect. Despite this, Ms Carlo found that the Metropolitan Police,
 
had no intention of independently testing for racial bias. They had not asked the vendor if they had tested the algorithm for bias. It wasn’t a concern.
 
Similarly, they were wilfully ignorant of the demographic data in their Carnival dataset. They didn’t know the ethnicities, ages or gender of those on their watch list – nor did they want to.
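
Testing for this sort of bias is not hard in principle. Here is a minimal sketch of the kind of independent check Liberty describes; the data structure and field names are hypothetical, and a real audit would use the force’s own trial records:

    // Hypothetical audit: compare false-match rates across demographic groups.
    struct TrialRecord {
        let group: String      // self-reported demographic group
        let flagged: Bool      // did the system raise an alert on this person?
        let onWatchList: Bool  // was this person actually on the watch list?
    }

    // The false-match rate for a group is the share of innocent people
    // in that group whom the system wrongly flagged.
    func falseMatchRate(for group: String, in records: [TrialRecord]) -> Double? {
        let innocent = records.filter { $0.group == group && !$0.onWatchList }
        guard !innocent.isEmpty else { return nil }
        let falseAlarms = innocent.filter { $0.flagged }.count
        return Double(falseAlarms) / Double(innocent.count)
    }

If the rate differs markedly between groups, the software is producing exactly the racially skewed errors The Atlantic describes, and a police service could discover that before deploying it on two million carnival-goers, not after.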
 

Public Confidence

 
In 2012 Lord Justice Richards found that the police’s policy of keeping facial images was unlawful. He said the government should revise its policy “within months”. It took five years for the Home Office to come up with a review. But, as Prof Wiles notes, even that was not good enough:
 
The recent Review proposes leaving all these issues solely in the hands of the police without any independent oversight or assurance to reassure the public, especially those individuals whom the 2012 Court judgment described as “entitled to the presumption of innocence”.
 
It is now almost five years since the Court held that the police retention of facial images was unlawful, yet we still do not have a clear policy in operation to correct that situation.
 
And he warned,
 
Facial images are a powerful new biometric but the acceptance by the public of their use for crime control purposes may depend on the extent to which the governance arrangements provide assurance that their use will be in the public interest and intrusion into individual privacy is controlled and proportionate.
 

Response Request

Senator Franken gave Apple a month to answer his questions about its Face ID facial recognition technology. The company has already addressed some of the issues. I expect it will go further and seek to reassure the public and regulators that its new technology is safe and will be managed responsibly. The Home Office and police should do the same.
 
Kevin Donoghue is a solicitor who specialises in civil actions against the police.