
When technology of the future traps people in the past
But would you think it’s fair to be denied life insurance based on your Zip code, online shopping behavior or social media posts? Or to pay a higher rate on a student loan because you majored in history rather than science? What if you were passed over for a job interview or an apartment because of where you grew up? How would you feel about an insurance company using the data from your Fitbit or Apple Watch to figure out how much you should pay for your health-care plan?
Political leaders in the United States have largely ignored such questions of fairness that arise from insurers, lenders, employers, hospitals and landlords using predictive algorithms to make decisions that profoundly affect people’s lives. Consumers have been forced to accept automated systems that today scrape the internet and our personal devices for artifacts of life that were once private — from genealogy records to what we do on weekends — and that might unwittingly and unfairly deprive us of medical care, or keep us from finding jobs or homes.
With Congress thus far failing to pass an algorithmic accountability law, some state and local leaders are now stepping up to fill the void. Draft regulations issued last month by Colorado’s insurance commissioner, as well as recently proposed reforms in DC and California, point to what policymakers might do to bring us a future where algorithms better serve the public good.
The promise of predictive algorithms is that they make better decisions than humans — freed from our whims and biases. Yet today’s decision-making algorithms too often use the past to predict — and thus create — people’s destinies. They assume we will follow in the footsteps of others who look like us and grew up where we grew up, or who studied where we studied — that we will do the same work and earn the same salary.
Predictive algorithms might serve you well if you grew up in an affluent neighborhood, enjoyed good nutrition and health care, attended an elite college, and always behaved like a model citizen. But anyone stumbling through life, learning and growing and changing along the way, can be steered toward an unwanted future. Overly simplistic algorithms reduce us to stereotypes, denying us our individuality and the agency to shape our own futures.
For companies trying to pool risk, offer services or match people to jobs or housing, automated decision-making systems create efficiencies. The use of algorithms creates the impression that their decisions are based on an unbiased, neutral rationale. But too often, automated systems reinforce existing biases and long-standing inequities.
Consider, for example, the research that showed an algorithm had kept several Massachusetts hospitals from putting Black patients with severe kidney disease on transplant waitlists; it scored their conditions as less serious than those of White patients with the same symptoms. A ProPublica investigation revealed that criminal offenders in Broward County, Fla., were being scored for risk — and therefore sentenced — based on faulty predictors of their likelihood to commit future violent crime. And Consumer Reports recently found that poorer and less-educated people are charged more for car insurance.
Because many companies shield their algorithms and data sources from scrutiny, people can’t see how such decisions are made. Any individual who is quoted a high insurance premium or denied a mortgage can’t tell whether it has to do with anything other than their underlying risk or ability to pay. Intentional discrimination based on race, gender and ability is illegal in the United States. But it is legal in many cases for companies to discriminate based on socioeconomic status, and algorithms can unintentionally reinforce disparities along racial and gender lines.
The new regulations being proposed in several localities would require companies that rely on automated decision-making tools to monitor them for bias against protected groups — and to adjust them if they are creating outcomes that most of us would deem unfair.
In February, Colorado advanced the most ambitious of these reforms. The state insurance commissioner issued draft rules that would require life insurers to test their predictive models for unfair bias in setting prices and plan eligibility, and to disclose the data they use. The proposal builds on a groundbreaking 2021 state law — passed despite intense insurance industry lobbying against it — meant to protect all kinds of insurance consumers from unfair discrimination by algorithms and other AI technologies.
In DC, five city council members last month reintroduced a bill that would require companies using algorithms to audit their technologies for patterns of bias — and make it illegal to use algorithms to discriminate in education, employment, housing, credit, health care and insurance. And just a few weeks ago in California, the state’s privacy protection agency initiated an effort to prevent bias in the use of consumer data and algorithmic tools.
Although such policies still lack clear provisions for how they will work in practice, they deserve public support as a first step toward a future with fair algorithmic decision-making. Trying these reforms at the state and local level might also give federal lawmakers the insight to make better national policies on emerging technologies.
“Algorithms don’t have to project human bias into the future,” said Cathy O’Neil, who runs an algorithm auditing firm that is advising the Colorado insurance regulators. “We can actually project the best human ideals onto future algorithms. And if you want to be optimistic, it’s going to be better because it’s going to be human values, but leveled up to uphold our ideals.”
I do want to be optimistic — but also vigilant. Rather than dread a dystopian future where artificial intelligence overpowers us, we can prevent predictive models from treating us unfairly today. Technology of the future should not keep haunting us with ghosts from the past.