March 16, 2023 at 1:53 pm EDT
But would you think it’s fair to be denied life insurance based on your ZIP code, online shopping behavior or social media posts? Or to pay a higher rate on a student loan because you majored in history rather than science? What if you were passed over for a job interview or an apartment because of where you grew up? How would you feel about an insurance company using the data from your Fitbit or Apple Watch to figure out how much you should pay for your health-care plan?
Political leaders in the United States have largely ignored such questions of fairness that arise from insurers, lenders, employers, hospitals and landlords using predictive algorithms to make decisions that profoundly affect people’s lives. Consumers have been forced to accept automated systems that today scrape the internet and our personal devices for artifacts of life that were once private — from genealogy records to what we do on weekends — and that might unwittingly and unfairly deprive us of medical care, or keep us from finding jobs or homes.
With Congress thus far failing to pass an algorithmic accountability law, some state and local leaders are now stepping up to fill the void. Draft regulations issued last month by Colorado’s insurance commissioner, as well as recently proposed reforms in D.C. and California, point to what policymakers might do to bring us a future where algorithms better serve the public good.
The promise of predictive algorithms is that they make better decisions than humans — freed from our whims and biases. Yet today’s decision-making algorithms too often use the past to predict — and thus create — people’s destinies. They assume we will follow in the footsteps of others who look like us and grew up where we grew up, or who studied where we studied — that we will do the same work and earn the same salary.
Predictive algorithms might serve you well