The governors of Oregon and Idaho have signed new laws strictly regulating artificial intelligence systems. The legislation sets clear operational rules for companies developing conversational agents.
Companies are now legally required to disclose the use of algorithms during user interactions; the law penalizes concealing a software program's non-human identity.
The state of Tennessee created a specific legal framework for healthcare technology. The new rules control the use of decision-making algorithms during the diagnostic process.
The legislation demands full transparency for source code used in hospitals, and automated systems must pass preliminary evaluations before handling real patient data.
U.S. states are adopting local policies to mitigate risks in the absence of a unified federal law. Legislative fragmentation forces companies to adapt their products to regional requirements.
AI modules now integrate geo-restriction features. Legal experts foresee the extension of these regulations into the financial services sector.
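In practice, a geo-restriction feature of this kind amounts to gating product features on the user's jurisdiction. The sketch below is a minimal illustration of that idea; the state codes and feature names are assumptions for demonstration, not taken from any specific statute or product.

```python
# Hypothetical geo-restriction gate for a conversational agent.
# The mapping of states to blocked features is illustrative only.

RESTRICTED_FEATURES = {
    "OR": {"undisclosed_bot_chat"},   # assumed: disclosure rules in Oregon
    "ID": {"undisclosed_bot_chat"},   # assumed: disclosure rules in Idaho
    "TN": {"autonomous_diagnosis"},   # assumed: diagnostic-AI controls in Tennessee
}

def feature_allowed(feature: str, user_state: str) -> bool:
    """Return False when a feature is blocked in the user's state."""
    return feature not in RESTRICTED_FEATURES.get(user_state, set())

print(feature_allowed("undisclosed_bot_chat", "OR"))  # False
print(feature_allowed("undisclosed_bot_chat", "TX"))  # True
```

A lookup table like this is what legislative fragmentation tends to produce: each new state law adds another entry rather than a single nationwide rule.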
Technical audits conducted by third parties are becoming mandatory for maintaining operational licenses. The legal framework shifts the responsibility for algorithmic errors to the executive management of tech firms.
Government agencies will verify the datasets used to train language models. The laws prohibit the use of copyrighted material without explicit financial compensation.
Developers must build emergency shutdown protocols into all autonomous agents. Human oversight remains a mandatory requirement for high-risk algorithms to operate.
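An emergency shutdown protocol of this sort is commonly implemented as a "kill switch" that a human overseer can trigger to halt an agent's loop. The following is a minimal sketch under assumed names and behavior, not a reference implementation of any mandated design; the self-trigger after three steps exists only to demonstrate the mechanism.

```python
# Illustrative kill-switch pattern for an autonomous agent loop.
import threading

class KillSwitch:
    """Thread-safe flag a human overseer can set to stop the agent."""
    def __init__(self) -> None:
        self._stop = threading.Event()

    def trigger(self) -> None:
        self._stop.set()

    def engaged(self) -> bool:
        return self._stop.is_set()

def run_agent(kill_switch: KillSwitch, max_steps: int = 1000) -> int:
    """Run the agent until the kill switch engages; return steps taken."""
    steps = 0
    for _ in range(max_steps):
        if kill_switch.engaged():   # check before every action (human oversight)
            break
        steps += 1                  # placeholder for one autonomous action
        if steps == 3:              # demo only: overseer intervenes after 3 steps
            kill_switch.trigger()
    return steps

ks = KillSwitch()
print(run_agent(ks))  # 3
```

Checking the switch before every action, rather than once at startup, is what makes the human override effective while the agent is running.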
Courts gain new powers to penalize algorithmic discrimination. The Civil Code now covers damages caused by software-driven decisions.
Source:
Cover Photo by Tingey Injury Law Firm

