Imagine that you are a longstanding local postmaster. You’ve always prided yourself on your job and have been very careful as a custodian of public resources. That’s why you can’t believe it when you are arrested for stealing tens of thousands of dollars from the postal service.
When the charges are brought, you find out that the evidence against you isn’t from a fellow employee. It’s from a software program called “Horizon.” This is not a sci-fi story, but an actual situation faced by postal employees in the United Kingdom.
British postal employees were arrested, tried in court, found guilty, and imprisoned for crimes based solely on evidence from the Horizon system. This went on for decades. Those accused by the postal service often lost their marriages and could not find new jobs. One employee committed suicide. In total, 736 employees were prosecuted between 2000 and 2014.
Horizon was a flawed system. Postmasters reported bugs that led to inaccurate reports of shortfalls—shortfalls that some postmasters made up out of their own pockets. The British Post Office refused to acknowledge their complaints and continued to argue that the system was not flawed, even after its legal department knew of the flaws prior to many of the convictions.
After years of legal expenses, a number of employees have had their convictions overturned. Employees who were falsely accused are being compensated for their arrests and convictions. Some had even remortgaged their homes to repay the money they were accused of stealing.
What are the dangers when we allow a piece of technology to stand, in effect, as an accuser, providing evidence against a person? We have a saying that “the numbers don’t lie.” Numbers may carry even more persuasive power when they are gathered by a computerized system presumed to be free of typical human errors or biases. But what if the numbers do lie—or were gathered by flawed systems?
What should be the evidentiary standards for such technology if it is going to play such a key role in our judicial system (think, for example, of DNA evidence)? How can a jury of peers be expected to adequately judge highly technical evidence? In the Horizon case, the errors produced by the program were unintentional, the result of bugs in the system. But what of situations where such errors are intentional? We are fast approaching a time when “deepfakes” will provide realistic video and audio recordings that will defy our capacity to tell the difference between what is true and what is fake.
Just imagine how the use of technology might evolve in our judicial system and in our democracy more broadly. What role might the evolution of AI play? How might we validate evidence that’s based upon technology—whether it’s used in our judicial system or more broadly for policy making in our democracy? What happens if there’s a time when we determine that such technology is in conflict with our democratic ideals?
* * *
“Technology could benefit or hurt people, so the usage of tech is the responsibility of humanity as a whole, not just the discoverer. I am a person before I’m an AI technologist.” – Fei-Fei Li, Chinese-born American computer scientist
This is part of our “Just Imagine” series of occasional posts, inviting you to join us in imagining positive possibilities for a citizen-centered democracy.