Artificial intelligence, or AI (the ability of computer systems to perform tasks that normally require human intelligence), is estimated to reach human-level intelligence by 2029. It is already being used in industries from government and health care to transportation.
The New York Police Department is currently using crime-forecasting software to predict crimes, like in the 2002 sci-fi movie “Minority Report.” Autonomous robots are performing surgery with more precision than expert human surgeons. Biometric software is allowing retailers to identify shoplifters and VIP customers within minutes of walking into a store. And a self-driving semi truck just safely made its first cross-country trip.
While AI boosts productivity and economic growth, it isn’t neutral. Risks related to privacy, safety and bias are real.
Consider MIT’s recent study finding that facial recognition software contains programming bias resulting in error rates of up to 35 percent for women of color.
Similarly, the first study of self-driving car crashes found that while self-driving cars are safer than human-piloted cars, their crash rate was 3.2 per million miles, only one fewer per million miles than the national rate.
This dark side of AI is why Harvard University and the Massachusetts Institute of Technology are rushing to implement courses on the ethical and legal implications of artificial intelligence, and why scholars are pushing federal lawmakers to adapt existing regulations to account for AI.
While there’s even AI software that seeks to automate the swamp by analyzing and predicting the likelihood that a bill will become a law, there’s not much actual law covering AI on the books.
That’s why in this year’s Super Bowl, when Coca-Cola used image recognition software to identify people who posted photos in which they appeared happy with products of Coca-Cola’s competitors, and then targeted those people with ads for Coca-Cola products, people asked, “Can they do that?”
Facial recognition software, like geolocation software, wearables and the Internet of Things, emerged after 2009, when Congress stopped passing consumer privacy laws.
On the state level, only Illinois, Texas, and Washington have enacted specific laws regulating the collection of biometric information such as retina and iris patterns, fingerprints, and voice waves.
Across the pond, once the General Data Protection Regulation (GDPR) goes into effect on May 25, U.S. companies that handle the personal data of European Union individuals must comply with the privacy regulation or risk fines of up to €20 million or 4 percent of annual global turnover.
Self-driving car safety legislation addressing privacy, security, and data access concerns recently stalled out in the U.S. Senate. And signaling the coming clash between automation and labor, unions are already seeking to ban UPS from using drones or driverless vehicles.
In Microsoft’s 2018 book, “The Future Computed,” the company’s chief counsel wrote that the “real question is not whether AI law will emerge, but how it can best come together.”
Understanding AI in the context of privacy, safety, fairness and the value of human labor – and channeling it in the public interest – is a good place to start.
Lisa McGrath is a Boise attorney specializing in social media, privacy and technology law.