Landscape of AI Regulation in IDV, Part 1: Federal & State

TERRY BRENNER, LL.M.

The rise of artificial intelligence (AI) has revolutionized the field of identity verification (IDV), offering unprecedented speed, accuracy, and scalability. However, with great power comes great responsibility—and increasing regulatory scrutiny. As AI-powered identity verification systems become more prevalent, policymakers are scrambling to keep pace, introducing new regulations to protect consumer privacy and ensure ethical use of this powerful technology.

In this first part of our three-part series, we’ll explore the evolving regulatory landscape surrounding AI in identity verification, examining key federal- and state-level developments that are shaping the future of our industry.

The current regulatory environment

The regulatory environment for AI in identity verification is complex and rapidly evolving. Currently, there is no comprehensive federal law in the United States specifically governing AI or biometric data use. Instead, we see a patchwork of state laws, industry-specific regulations, and general consumer protection statutes being applied to AI technologies.

This fragmented approach has led to challenges for IDV providers operating across multiple jurisdictions. Recent developments, however, suggest a move towards more cohesive and AI-specific regulations at both the federal and state levels. (Note to reader: For the purpose of this article, reference to an “IDV provider” includes both the developer and deployer/distributor of IDV technology.)

Key federal developments

White House Executive Order on AI

In a landmark move, the White House issued Executive Order 14110 on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” in October 2023. This order sets forth a comprehensive national strategy to advance American leadership in AI while protecting citizens’ rights and safety.

For the identity verification industry, key provisions include:

  1. Mandating the development of guidelines for AI system security and testing
  2. Emphasizing the importance of privacy-preserving technologies in AI development
  3. Directing federal agencies to address AI-related risks in critical infrastructure sectors, which could impact identity verification in finance and healthcare

While not legally binding on private companies, this executive order signals the direction of future federal regulation and sets expectations for responsible AI development and use.

Potential National Biometric Privacy Act

Congress has made multi-year efforts to pass a federal law specifically addressing biometric data privacy (for example, the National Biometric Information Privacy Act of 2020, sponsored by Senator Merkley). While no such bill has yet become law, future legislation could be modeled after state laws like Illinois’ BIPA but with national scope.

A National Biometric Privacy Act could potentially:

  1. Establish uniform standards for collecting, storing, and using biometric data across the country
  2. Require explicit consent from individuals before collecting their biometric information
  3. Mandate secure storage and timely deletion of biometric data
  4. Provide a private right of action for individuals whose biometric privacy rights are violated

For identity verification providers, a federal law could simplify compliance by creating a single standard, but it would also likely impose stricter requirements on data handling and user consent.

Spotlight on state-level regulations

Colorado Consumer Protections for Artificial Intelligence Act

In May 2024, Colorado enacted the Consumer Protections for Artificial Intelligence Act, a first-of-its-kind law in the United States. The law will regulate the development and deployment of AI starting February 2026. Its primary focus is regulating and limiting “algorithmic discrimination,” defined as using an AI system to discriminate based on a class protected under either Colorado or federal law. 

A related concept is the “consequential decision”: a decision with a material legal or similarly significant effect on a consumer, stemming from a denial of education enrollment or opportunity; employment or an employment opportunity; financial or lending services; essential government services; healthcare services; housing; insurance; or a legal service. AI systems that make or assist in making consequential decisions are deemed “high-risk.”

The law regulates two groups: those that deploy AI systems (“deployers”) and those that develop or intentionally and substantially modify AI systems (“developers”). Both deployers and developers have a duty of reasonable care (with their own obligations) to protect consumers from “known or reasonably foreseeable” risks of algorithmic discrimination by high-risk AI systems. 

Deployer obligations include:

  • Implementing an AI risk management policy and maintaining regular audits of AI systems for bias and discrimination
  • Conducting impact assessments
  • Disclosing to consumers when AI is used to make decisions that significantly affect them (including via website disclosure)
  • Granting consumers the right to opt out of AI-driven profiling in certain circumstances

For IDV providers operating in Colorado, this law necessitates clear communication about AI use and robust testing protocols to ensure fairness and accuracy.

Developer obligations include:

  • Providing other developers and deployers of the AI system with a statement describing the reasonably foreseeable uses of the system and its known harmful or inappropriate uses
  • Providing documentation covering the AI’s purpose, benefits, and intended uses; its reasonably foreseeable limitations, such as risks of algorithmic discrimination arising from its intended uses; and the type of data used to train it
  • Producing documents that describe (1) how the AI’s performance was measured and the steps taken to mitigate algorithmic discrimination before the AI was made available, and (2) how the training data was reviewed for potential bias and suitability for training, among other details

In IDVerse’s opinion, and subject to any Colorado state agency regulations deeming otherwise, the Colorado act is unlikely to apply where facial biometric IDV technology is used only to verify the identity of the presenting end user and not as part of a decisioning process, because the technology is not then used to make (or assist in making) a decision.


Illinois Biometric Information Privacy Act (BIPA)

BIPA remains one of the most stringent and influential state laws governing biometric data.

Its key requirements include:

  1. Obtaining consent before collecting or processing biometric data
  2. Providing disclosures about the purpose of the biometric data collection and use
  3. Establishing a retention schedule and guidelines for permanently destroying biometric data

BIPA’s private right of action has led to numerous high-profile lawsuits, making compliance a critical concern for any company handling biometric data.

New York City’s Biometric Identifier Information Law

New York City’s law, effective since 2021, focuses on commercial establishments’ use of biometric identification technology. 

Its main provisions include:

  1. Requiring conspicuous notices at business entries if biometric identifying technology is in use
  2. Prohibiting the sale or sharing of biometric identifier information
  3. Mandating safeguards for storing and accessing biometric data

While primarily aimed at physical businesses, this law sets a precedent for transparent biometric data use that could influence future regulations affecting digital identity verification.

Impact on the use of identity verification

Compliance challenges

The evolving regulatory landscape presents several challenges for IDV offerings that incorporate biometric processing and generative AI (GenAI):

  1. Navigating a patchwork of state and local laws while preparing for potential federal regulation
  2. Implementing robust consent mechanisms and clear user communications about AI and biometric data use
  3. Ensuring AI systems are free from bias and discrimination, with regular audits and testing
  4. Balancing data retention for security and compliance purposes with privacy requirements for data deletion
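To make the fourth challenge concrete, here is a minimal Python sketch of a jurisdiction-aware consent and retention check. The jurisdiction codes, the non-BIPA retention limits, and all helper names are illustrative assumptions (BIPA’s three-year ceiling is the one statutory figure reflected); this is an engineering sketch, not legal guidance.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical per-jurisdiction retention ceilings. Real limits must come
# from counsel and the applicable statutes; only the Illinois figure below
# reflects an actual rule (BIPA: destroy within 3 years of last interaction).
RETENTION_LIMITS = {
    "IL": timedelta(days=3 * 365),   # BIPA three-year ceiling
    "NYC": timedelta(days=365),      # placeholder value, not from the statute
    "DEFAULT": timedelta(days=365),  # conservative fallback
}

@dataclass
class BiometricRecord:
    subject_id: str
    jurisdiction: str
    consent_obtained: bool
    last_interaction: datetime

def must_delete(record: BiometricRecord, now: datetime) -> bool:
    """Flag records due for deletion under the applicable retention limit."""
    limit = RETENTION_LIMITS.get(record.jurisdiction, RETENTION_LIMITS["DEFAULT"])
    return now - record.last_interaction > limit

def may_process(record: BiometricRecord) -> bool:
    """Block biometric processing when documented consent is absent."""
    return record.consent_obtained
```

A scheduled job could sweep stored records through `must_delete` and purge anything flagged, while `may_process` gates each new verification request; the point of the sketch is that retention and consent rules vary by jurisdiction and so belong in data, not hard-coded logic.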

Opportunities for innovation

However, these challenges also present opportunities:

  1. Adopting privacy-enhancing technologies that enable effective identity verification while minimizing data collection and storage
  2. Creating more transparent and explainable AI models to meet growing regulatory demands for AI accountability (or, if you deploy rather than develop IDV technology, asking your IDV developer for AI model explainability documentation that supports your compliance)
  3. Implementing adaptive compliance systems that can quickly adjust to new regulatory requirements across different jurisdictions
  4. Positioning ethical, unbiased, and compliant AI use as a competitive advantage in the marketplace

Preparing for a more regulated future

As AI continues to transform identity verification, regulatory oversight will inevitably increase. Forward-thinking companies in this space (both developers and deployers) must stay ahead of the curve, not just reacting to new regulations but anticipating them and shaping industry best practices.

In part two of our series, we will take a deeper look into specific compliance strategies and navigate the complexities at the intersection of AI, biometrics, and privacy concerns.

About the post:
Images and videos are generative AI-created. Prompt: A woman asleep underneath a quilt made to look like a map of the United States, soft bedroom lighting, intricate stitching details. Overhead view, focus on sleeper and entire quilt. Tools: Midjourney, Luma.

About the author:
Terry Brenner is the Head of Legal, Risk, and Compliance for IDVerse Americas. He oversees the company’s foray into this market, heeding the sensitivities around data protection, inclusivity, biometrics, and privacy. With over two decades of legal experience, Brenner has served in a variety of roles across a diverse range of sectors.
