Uber Eats courier’s fight against AI bias shows justice under UK law is hard won
On Tuesday, the BBC reported that Uber Eats courier Pa Edrissa Manjang, who is Black, had received a payout from Uber after “racially discriminatory” facial recognition checks prevented him from accessing the app, which he had been using since November 2019 to pick up jobs delivering food on Uber’s platform.
The news raises questions about how fit UK law is to deal with the rising use of AI systems. In particular, there is little transparency around automated systems that are rushed to market with promises of boosting user safety and/or service efficiency but that risk blitz-scaling individual harms, while achieving redress for those affected by AI-driven bias can take years.
The lawsuit followed a number of complaints about failed facial recognition checks since Uber implemented the Real Time ID Check system in the U.K. in April 2020. Uber’s facial recognition system, based on Microsoft’s facial recognition technology, requires the account holder to submit a live selfie to be checked against a photo of them held on file to verify their identity.
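At a high level, verification systems of this kind reduce each photo to an embedding vector and declare a match if the similarity between selfie and file photo clears a threshold. The sketch below is purely illustrative, with hypothetical function names, toy vectors and a made-up threshold; it is not Uber’s or Microsoft’s actual system, but it shows why a fixed threshold can produce more false mismatches for groups the underlying model represents less accurately:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_identity(selfie_embedding, reference_embedding, threshold=0.8):
    """Declare a match if similarity clears a fixed threshold.

    If the embedding model systematically produces lower similarity
    scores for some demographic groups, this fixed cutoff yields more
    false "mismatches" for those groups -- the kind of bias at issue.
    """
    score = cosine_similarity(selfie_embedding, reference_embedding)
    return score >= threshold, score

# Toy vectors standing in for model embeddings of two photos
matched, score = verify_identity([0.9, 0.1, 0.4], [0.85, 0.15, 0.38])
```

The point is that “mismatch” here is a statistical judgment, not a ground truth, which is why a genuinely robust human review layer matters.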
Failed ID checks
Per Manjang’s complaint, Uber suspended and then terminated his account following a failed ID check and subsequent automated process, claiming to find “continued mismatches” in the photos of his face he had taken for the purpose of accessing the platform. Manjang filed legal claims against Uber in October 2021, supported by the Equality and Human Rights Commission (EHRC) and the App Drivers & Couriers Union (ADCU).
Years of litigation followed, with Uber unsuccessfully seeking to have Manjang’s claim struck out or a deposit ordered for continuing with the case. These tactics appear to have strung out the litigation, with the EHRC describing the case as still in “preliminary stages” in fall 2023, and noting that it shows “the complexity of a claim dealing with AI technology”. A final hearing had been scheduled for 17 days in November 2024.
That hearing won’t now take place after Uber offered, and Manjang accepted, a payment to settle, meaning fuller details of what exactly went wrong and why won’t be made public. Terms of the financial settlement have not been disclosed, either. Uber did not provide details when we asked, nor did it offer comment on exactly what went wrong.
We also contacted Microsoft for a response to the case outcome, but the company declined comment.
Despite settling with Manjang, Uber is not publicly accepting that its systems or processes were at fault. Its statement about the settlement denies courier accounts can be terminated as a result of AI assessments alone, as it claims facial recognition checks are back-stopped with “robust human review.”
“Our Real Time ID check is designed to help keep everyone who uses our app safe, and includes robust human review to make sure that we’re not making decisions about someone’s livelihood in a vacuum, without oversight,” the company said in a statement. “Automated facial verification was not the reason for Mr Manjang’s temporary loss of access to his courier account.”
Clearly, though, something went very wrong with Uber’s ID checks in Manjang’s case.
Worker Info Exchange (WIE), a platform workers’ digital rights advocacy organization that also supported Manjang’s complaint, managed to obtain all his selfies from Uber via a Subject Access Request under UK data protection law, and was able to show that all the photos he had submitted to its facial recognition check were indeed photos of himself.
“Following his dismissal, Pa sent numerous messages to Uber to rectify the problem, specifically asking for a human to review his submissions. Each time Pa was told ‘we were not able to confirm that the provided photos were actually of you and because of continued mismatches, we have made the final decision on ending our partnership with you’,” WIE recounts in discussion of his case in a wider report looking at “data-driven exploitation in the gig economy”.
Based on details of Manjang’s complaint that have been made public, it looks clear that both Uber’s facial recognition checks and the system of human review it had set up as a claimed safety net for automated decisions failed in this case.
Equality law plus data protection
The case calls into question how fit for purpose UK law is when it comes to governing the use of AI.
Manjang was finally able to get a settlement from Uber via a legal process based on equality law, specifically, a discrimination claim under the UK’s Equality Act 2010, which lists race as a protected characteristic.
In a statement, Baroness Kishwer Falkner, chairwoman of the EHRC, was critical of the fact that the Uber Eats courier had to bring a legal claim “in order to understand the opaque processes that affected his work.”
“AI is complex, and presents unique challenges for employers, lawyers and regulators. It is important to understand that as AI usage increases, the technology can lead to discrimination and human rights abuses,” she wrote. “We are particularly concerned that Mr Manjang was not made aware that his account was in the process of deactivation, nor provided any clear and effective route to challenge the technology. More needs to be done to ensure employers are transparent and open with their workforces about when and how they use AI.”
UK data protection law is the other relevant piece of legislation here. On paper, it should provide powerful protections against opaque AI processes.
The selfie data relevant to Manjang’s claim was obtained using data access rights contained in the UK GDPR. If he had not been able to obtain such clear evidence that Uber’s ID checks had failed, the company might not have opted to settle at all. Having to prove a proprietary system is flawed without letting individuals access their own relevant personal data would further stack the odds in favor of the much better resourced platforms.
Enforcement gaps
Beyond data access rights, powers in the UK GDPR are supposed to provide individuals with additional safeguards, including against automated decisions with a legal or similarly significant effect. The law also demands a lawful basis for processing personal data, and encourages system deployers to be proactive in assessing potential harms by conducting a data protection impact assessment. That should force further checks against harmful AI systems.
However, enforcement is needed for these protections to have effect â including a deterrent effect against the rollout of biased AIs.
In the UK’s case, the relevant enforcer, the Information Commissioner’s Office (ICO), failed to step in and investigate complaints against Uber, despite its misfiring ID checks being flagged as far back as 2021.
Jon Baines, a senior data protection specialist at the law firm Mishcon de Reya, suggests “a lack of proper enforcement” by the ICO has undermined legal protections for individuals.
“We shouldn’t assume that existing legal and regulatory frameworks are incapable of dealing with some of the potential harms from AI systems,” he tells TechCrunch. “In this example, it strikes me … that the Information Commissioner would certainly have jurisdiction to consider both in the individual case, but also more broadly, whether the processing being undertaken was lawful under the UK GDPR.
“Things like: is the processing fair? Is there a lawful basis? Is there an Article 9 condition (given that special categories of personal data are being processed)? But also, and crucially, was there a solid Data Protection Impact Assessment prior to the implementation of the verification app?”
“So, yes, the ICO should absolutely be more proactive,” he adds, querying the lack of intervention by the regulator.
We contacted the ICO about Manjang’s case, asking it to confirm whether or not it’s looking into Uber’s use of AI for ID checks in light of complaints. A spokesperson for the watchdog did not directly respond to our questions but sent a general statement emphasizing the need for organizations to “know how to use biometric technology in a way that doesn’t interfere with people’s rights”.
“Our latest biometric guidance is clear that organisations must mitigate risks that come with using biometric data, such as errors identifying people accurately and bias within the system,” its statement also said, adding: “If anyone has concerns about how their data has been handled, they can report these concerns to the ICO.”
Meanwhile, the government is in the process of diluting data protection law via a post-Brexit data reform bill.
The government also confirmed earlier this year that it will not introduce dedicated AI safety legislation at this time, despite prime minister Rishi Sunak making eye-catching claims about AI safety being a priority area for his administration.
Instead, it affirmed a proposal, set out in its March 2023 whitepaper on AI, to rely on existing laws and regulatory bodies extending oversight activity to cover AI risks that might arise on their patch. One tweak to the approach, announced in February, was a tiny amount of extra funding (£10 million) for regulators, which the government suggested could be used to research AI risks and develop tools to help them examine AI systems.
No timeline was provided for disbursing this small pot of extra funds. Multiple regulators are in the frame here: last month the UK secretary of state wrote to 13 regulators and departments asking them to publish an update on their “strategic approach to AI”. If the cash were split equally between bodies such as the ICO, the EHRC and the Medicines and Healthcare products Regulatory Agency, to name just three, each could receive less than £1 million to top up budgets to tackle fast-scaling AI risks.
Frankly, it looks like an incredibly low level of additional resource for already overstretched regulators if AI safety is actually a government priority. It also means there’s still zero cash or active oversight for AI harms that fall between the cracks of the UK’s existing regulatory patchwork, as critics of the government’s approach have pointed out before.
A new AI safety law might send a stronger signal of priority, akin to the EU’s risk-based AI harms framework that’s speeding towards being adopted as hard law by the bloc. But there would also need to be a will to actually enforce it. And that signal must come from the top.