The following editorial originally appeared in The Seattle Times:
Ask an AI model like Microsoft Copilot to name the dangers of artificial intelligence, and it will offer a startling assessment of itself and others like it: An ability to hoover up an internet’s worth of data could lead to “uncontrolled self-improvement,” in which humans lose control, conjuring the dystopian future of which science fiction has long forewarned.
Yet other hazards Copilot lists are pressing today: Deepfakes, or doctored images and videos, can distort public opinion, ruin reputations and throw elections. AI models can also spread misinformation, absorb private data and copyrighted content, and discriminate against job seekers during the hiring process.
Governments around the world are proposing guardrails around this rapidly advancing technology. Lawmakers’ paramount goal should be to protect the privacy and dignity of all Americans as AI changes many facets of society. Already, President Joe Biden and Washington Gov. Jay Inslee have issued executive orders that will guide AI development and use within the federal and state governments, respectively. But what about the private sector?
State lawmakers around the country this year introduced over 400 AI-related bills in 41 states, according to the Software Alliance. Washington’s Legislature attempted to tackle deepfakes in 2023 with a law that empowers victims to sue for damages, and during this year’s session it passed a bill giving recourse to victims whose faces are used to make pornographic content.