Blog

Landscape of AI Regulation in IDV, Part 2: Biometrics & Privacy

TERRY BRENNER, LL.M.

As artificial intelligence (AI) and biometric technologies continue to advance, their application in identity verification (IDV) has become increasingly sophisticated. These innovations offer unprecedented accuracy and efficiency, but they also raise significant privacy concerns. 

The unique nature of biometric data—being both highly personal and irreplaceable—combined with the often opaque nature of AI algorithms, creates a complex regulatory landscape that IDV providers, at both the developer and deployer levels, must navigate carefully.

In this second part of our series, we’ll explore regulator activity shaping AI policy, cross-border considerations, and industry self-regulation efforts.

Deep dive into regulator activity

As described in part one of this series, federal- and state-level legislation and executive orders are two ways AI can be regulated in the United States. Another pillar of AI policy is regulator action, which includes enforcement activity as well as regulators’ interpretations of federal agency guidelines.

FTC guidelines on facial recognition tech

In the enforcement arena, the Federal Trade Commission (FTC) has been especially active in applying existing consumer protection laws to AI, including addressing privacy concerns related to facial recognition technology.

The FTC has demonstrated its willingness to enforce these principles through actions against companies that misuse facial recognition technology or fail to adequately protect biometric data. This message was communicated forcefully in FTC v. Rite Aid, where the Commission cited Rite Aid’s failure to (among other things):

  • Consider and mitigate potential risks to consumers from misidentifying them, including heightened risks to certain consumers because of their race or gender. For example, Rite Aid’s facial recognition technology was more likely to generate false positives in stores located in plurality Black and Asian communities than in plurality White communities.
  • Test, assess, measure, document, or inquire about the accuracy of its facial recognition technology before deploying it.
  • Regularly monitor or test the accuracy of the technology after deployment, or implement or enforce any procedure for tracking the rate of false positive matches or the actions taken based on those false positive matches.
  • Adequately train employees tasked with operating facial recognition technology in its stores and flag that the technology could generate false positives.

While not legally binding, and though arising in a retail setting, the FTC’s action sends a strong message and provides a framework of best practices for IDV AI systems:

  1. No bias: Product accuracy and performance should show no variability across demographic groups such as age, ethnicity, and gender (a minimal measurement sketch follows this list).
  2. Privacy by design: Companies should build privacy considerations into every stage of product development.
  3. Transparency: Clear notice should be provided to consumers before collecting or using biometric data.
  4. Data minimization: Only collect biometric data necessary for the specific purpose and retain it only as long as necessary.
  5. Security: Implement reasonable security protections for biometric data.
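
To make the first principle measurable, here is a minimal Python sketch of the kind of per-group false positive tracking the FTC faulted Rite Aid for lacking. The group labels, sample data, and the 1.25x disparity threshold are illustrative assumptions, not regulatory standards.

```python
from collections import defaultdict

def false_positive_rates(results):
    """Compute the false positive rate per demographic group.

    `results` holds (group, predicted_match, actual_match) tuples, e.g.
    ("group_a", True, False) is a false positive for group_a.
    """
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual non-matches per group
    for group, predicted, actual in results:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

def disparity_ratio(rates):
    """Ratio of worst to best group FPR; closer to 1.0 is more equitable."""
    worst, best = max(rates.values()), min(rates.values())
    return worst / best if best else float("inf")

# Flag the system for review if one group's FPR is more than 1.25x
# another's -- the threshold and data here are illustrative only.
rates = false_positive_rates([
    ("group_a", True, False), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
])
if disparity_ratio(rates) > 1.25:
    print("FPR disparity exceeds threshold; investigate before deployment.")
```

Running this kind of check both before deployment and on an ongoing basis addresses two of the specific failures the Commission cited.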

Directives on AI-powered IDV in finance

The US Department of the Treasury—with the Financial Crimes Enforcement Network (FinCEN) as its authorized bureau for managing anti-money laundering (AML) efforts in the US—issued a report in March 2024 on Managing AI-Specific Cybersecurity Risks in the Financial Services Sector. The report highlights specific model risks that arise in this most sensitive of potential AI application arenas, including (i) ensuring protection of financial institution (FI) personnel and customers and their data, and (ii) potential data poisoning, data leakage, and data integrity attacks.

In response, the department points to examples of risk management and control principles common across financial sector laws, regulations, and supervisory guidance that also apply to the use of AI for cybersecurity and fraud issues. These include:

  • Risk management with assessments and due diligence prior to implementation of AI technologies, and determining which are appropriate for the intended business purpose
  • Model risk management, including validation and testing and ongoing performance monitoring (a minimal monitoring sketch follows this list)
  • Technology risk assessment, touching on assessment of the risk level associated with each AI use case; issue and incident tracking; and effective information security, cybersecurity, resilience, privacy, and operational and fraud-related controls
  • Data management, including best practices for information sharing and safeguarding of sensitive data
  • Third-party risk management, applying the principles listed above to vendors and suppliers
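
As one illustration of “ongoing performance monitoring” and “issue and incident tracking,” here is a minimal sketch of a rolling-window monitor for a deployed matching model. The window size, threshold, and class name are assumptions for illustration; a real program would be defined by the institution’s model risk management policy.

```python
from collections import deque

class MatchRateMonitor:
    """Rolling-window monitor for a deployed face-matching model.

    Tracks the observed false positive rate over the most recent
    `window` reviewed decisions and flags drift past `threshold`.
    """

    def __init__(self, window=1000, threshold=0.02):
        self.decisions = deque(maxlen=window)  # (predicted, confirmed) pairs
        self.threshold = threshold

    def record(self, predicted_match, confirmed_match):
        """Log one decision; `confirmed_match` comes from later human review."""
        self.decisions.append((predicted_match, confirmed_match))

    def false_positive_rate(self):
        negatives = [pred for pred, actual in self.decisions if not actual]
        return sum(negatives) / len(negatives) if negatives else 0.0

    def needs_review(self):
        return self.false_positive_rate() > self.threshold

# Usage: record outcomes as reviews come in, alert when drift appears.
monitor = MatchRateMonitor(window=500, threshold=0.02)
monitor.record(predicted_match=True, confirmed_match=False)  # a false positive
if monitor.needs_review():
    print("FPR above threshold; escalate per the incident-tracking process.")
```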

These directives underscore the need for financial institutions to balance the benefits of AI-powered identity verification with regulatory compliance and risk management.


Cross-border considerations

Influence of EU AI Act on US regulations

The European Union’s AI Act, while not directly applicable to US companies, is likely to influence US regulations due to its comprehensive and risk-based approach.

Risk categories: The Act sorts AI systems into four tiers based on their potential risk (a simple triage sketch follows the list):

  1. Minimal Risk: Allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.
  2. Limited Risk: Refers to the risks associated with a lack of transparency in AI usage. Users must be informed when they are interacting with AI such as chatbots. AI-generated content, especially content published to inform the public, must be labeled, and audio and video deepfakes must be disclosed as artificially created.
  3. High Risk: Includes AI used in critical infrastructure, education, employment, essential public services, law enforcement, and justice. These systems must meet strict requirements including risk assessment, quality datasets, activity logging, documentation, transparency, human oversight, and robust security. Remote biometric identification falls into this tier, and its real-time use in publicly accessible spaces for law enforcement is generally prohibited, with limited exceptions subject to authorization.
  4. Unacceptable Risk: Prohibited under the Act. Examples include social scoring systems and manipulative AI. 
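
For teams inventorying their AI systems, a first-pass triage of use cases against these four tiers can be automated, though real classification turns on the specific deployment context and requires legal analysis. The mapping below is a hypothetical sketch, not legal guidance; note that it fails closed by defaulting unknown use cases to high risk.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Illustrative mapping only: real classification under the Act depends
# on the deployment context, not a keyword lookup.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,      # must disclose AI interaction
    "remote_biometric_id": RiskTier.HIGH,      # strict requirements apply
    "social_scoring": RiskTier.UNACCEPTABLE,   # prohibited outright
}

def triage(use_case: str) -> RiskTier:
    """Unknown use cases default to HIGH so they get human/legal review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

assert triage("social_scoring") is RiskTier.UNACCEPTABLE
assert triage("novel_feature") is RiskTier.HIGH  # fail closed
```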

Transparency requirements: Providers must ensure their AI systems are sufficiently transparent, a principle already echoed in some US regulations.

Data governance: Strict data quality and management requirements, which align with existing US data protection laws.

US companies operating globally or serving EU customers will need to comply with the AI Act, potentially driving changes in their domestic operations as well. The Act entered into force on August 1, 2024, with a phased rollout of compliance obligations.

Data transfer limitations

The transfer of personal data across borders presents unique challenges. The best known of the regulations governing these transfers is Chapter V of the EU GDPR, which permits transfers through the following mechanisms (a simple gating sketch follows the list):

  1. Adequacy decisions: The EU has deemed only a handful of countries as providing adequate data protection. The adequacy decision for the EU-US Data Privacy Framework was adopted in July 2023, with various safeguards to be applied before personal data is transferred.
  2. Standard Contractual Clauses (SCCs): These EU-approved contract terms can facilitate data transfers but require careful implementation for biometric data.
  3. Binding Corporate Rules (BCRs): For multinational companies, BCRs offer a comprehensive approach to data protection across the organization.
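
A minimal sketch of how an IDV pipeline might gate outbound transfers against these three mechanisms appears below. The adequacy list shown is a partial placeholder, not the official EU list, and `transfer_permitted` is a hypothetical helper; actual transfers require legal review of the safeguards in place.

```python
# Placeholder adequacy list -- NOT the official EU list; check the current
# European Commission decisions before relying on any destination.
ADEQUATE_DESTINATIONS = {"CH", "JP", "KR", "NZ", "UK"}

def transfer_permitted(destination: str, has_sccs: bool, has_bcrs: bool) -> bool:
    """Gate an outbound personal-data transfer on a Chapter V basis."""
    if destination in ADEQUATE_DESTINATIONS:
        return True              # covered by an adequacy decision
    return has_sccs or has_bcrs  # otherwise require SCCs or BCRs

# A biometric payload bound for a non-adequate country with no SCCs or
# BCRs in place should be blocked before it leaves the region.
assert transfer_permitted("BR", has_sccs=False, has_bcrs=False) is False
assert transfer_permitted("JP", has_sccs=False, has_bcrs=False) is True
```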

These international data transfer limitations are another consideration identity verification providers must weigh carefully when designing global systems, ensuring compliance with both domestic and international regulations.

Industry self-regulation efforts

Voluntary standards and best practices

In response to the evolving regulatory landscape, several industry initiatives have emerged to offer guidance on the use of AI systems. Notable among these are the NIST AI Risk Management Framework, the OECD AI Principles, and the OWASP AI Security and Privacy Guide.

These voluntary standards can help companies demonstrate a commitment to ethical practices and potentially influence future regulations.

Balancing innovation with regulatory compliance

Navigating the complex world of AI and biometric data regulations requires a delicate balance between innovation and compliance. As regulations continue to come into focus, IDV developers and deployers must remain agile, adopting a proactive approach to compliance while continuing to push the boundaries of technology.

Until the ink is dry on final regulations, key strategies for success include:

  • Staying informed about regulatory developments across different jurisdictions
  • Incorporating privacy and ethical considerations into every stage of product development and rollout
  • Participating in industry self-regulation efforts
  • Maintaining transparency with users about data collection and use
  • Investing in flexible, modular systems that can adapt to changing regulatory requirements (sketched after this list)
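
On the last point, one way to keep systems modular is to externalize jurisdiction-specific rules into configuration rather than hard-coding them. The sketch below assumes hypothetical retention periods and jurisdiction codes; the actual values must come from counsel and the governing statutes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RetentionPolicy:
    jurisdiction: str
    biometric_retention_days: int  # maximum retention for raw biometric data
    requires_explicit_consent: bool

# Hypothetical defaults: the periods and codes below are placeholders,
# not statutory values.
POLICIES = {
    "EU": RetentionPolicy("EU", 30, True),
    "US-IL": RetentionPolicy("US-IL", 365, True),  # BIPA-style consent regime
}
STRICTEST = RetentionPolicy("DEFAULT", 30, True)   # fail closed

def policy_for(jurisdiction: str) -> RetentionPolicy:
    """Unknown jurisdictions fall back to the most restrictive default."""
    return POLICIES.get(jurisdiction, STRICTEST)

print(policy_for("US-IL").biometric_retention_days)  # -> 365
```

When a regulation changes, only the policy table needs updating, not the verification pipeline itself.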

By embracing these strategies, identity verification providers can both meet regulatory requirements and build trust with users, positioning themselves as leaders in responsible AI and biometric technology use.

In our final installment of this series, we’ll look ahead to future trends in AI regulation and provide strategic recommendations for identity verification leaders.

About the post:
Images and videos are generative AI-created.
Prompt: Majestic futuristic legislative building resembling the US Capitol building set in a sprawling high-tech metropolis, gleaming skyscrapers with vertical gardens, flying vehicles weaving between buildings, iridescent energy fields, hyper-realistic style, golden hour lighting, ultra-wide angle view. Tools: Midjourney, Luma.

About the author:
Terry Brenner is the Head of Legal, Risk, and Compliance for IDVerse Americas. He oversees the company’s foray into this market, heeding the sensitivities around data protection, inclusivity, biometrics, and privacy. With over two decades of legal experience, Brenner has served in a variety of roles across a diverse range of sectors.
