Privacy Regulation Roundup

Author(s): Safayat Moahamad, Carlos Rivera, Ahmad Jowhar, Mike Brown, John Donovan

This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you with becoming compliant.

Navigating Privacy and AI Compliance for Digital Clones

Type: Article

Published: May 2025

Affected Region: USA

Summary: Digital clones are AI-generated replicas such as lifelike avatars, voice mimics, or interactive chatbots. These offer creators transformative opportunities to engage with fans, scale personalized interactions, and preserve their legacies online. However, for companies developing and deploying these technologies, the legal and ethical landscape is filled with complexity.

Key compliance areas include the collection of biometric information, which underpins the creation of digital clones through the analysis of photos, videos, or audio recordings. Laws like Illinois’ Biometric Information Privacy Act (BIPA) classify such data as biometric identifiers (voiceprints or facial geometry) and impose strict obligations, including obtaining explicit consent and adhering to retention limits.

Noncompliance risks severe penalties, with BIPA allowing statutory damages of $1,000 per negligent violation and up to $5,000 per intentional or reckless violation, amplified by an active class action environment. States like Texas and Washington have similar regulations, while comprehensive privacy laws, such as the California Consumer Privacy Act (CCPA) and Washington’s My Health My Data Act, add further layers of requirements for handling sensitive data, often mandating opt-in or opt-out mechanisms and impact assessments.

Beyond data collection, the commercialization of an individual’s likeness raises significant concerns, particularly with the rise of sophisticated AI tools enabling unauthorized deepfakes. State legislatures have responded by strengthening right-of-publicity laws, with California’s Assembly Bill 2602 mandating specific contractual clauses detailing the intended use of digital replicas to ensure enforceability. Transparency is equally critical: California’s Bolstering Online Transparency Act and the EU’s AI Act require companies to disclose when consumers interact with AI, preventing deception in commercial or electoral settings.

Meanwhile, the Federal Trade Commission (FTC) has flagged the fraud potential of digital clones, citing $1.1 billion in impersonation scam losses in 2023, and urges safeguards like watermarking to trace AI-generated content and mitigate misuse.
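The FTC’s call for watermarking can take several forms. Robust in-band watermarks that survive re-encoding require specialized techniques, and dedicated provenance standards such as C2PA exist for production use. As a minimal illustration of the traceability idea only, the hedged Python sketch below signs a provenance record for generated content with an HMAC so a downstream party holding the key can verify origin and detect tampering; the key, model label, and record fields are all hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # hypothetical; in practice, a securely managed signing key

def tag_content(content: bytes, model: str) -> dict:
    """Attach a signed provenance record (metadata 'watermark') to generated content."""
    record = {"model": model, "sha256": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": signature}

def verify_tag(content: bytes, tag: dict) -> bool:
    """Check both the record's signature and that the content hash still matches."""
    payload = json.dumps(tag["record"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, tag["signature"])
            and tag["record"]["sha256"] == hashlib.sha256(content).hexdigest())

tag = tag_content(b"generated audio bytes", "demo-clone-v1")
```

Note this is provenance metadata that travels alongside the content, not an imperceptible watermark embedded in the media itself; it breaks if the metadata is stripped, which is why the FTC and standards bodies focus on more durable techniques.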

Analyst Perspective: The quick rise of digital clones showcases the double-edged nature of technological innovation – unlocking creative and commercial potential while thrusting companies into a maze of legal, ethical, and security challenges. Navigating this space demands more than reactive compliance; it requires a proactive integration of privacy and cybersecurity principles into the core of product design and deployment.

Transparency is also key. It must be a non-negotiable standard, ensuring consumers and creators alike understand the artificial nature of these interactions, while robust identity verification and authentication (e.g., watermarking or biometric checks) can serve as a barricade against fraud and abuse. If there is one key takeaway here, it is this: to build the trust necessary to thrive in this AI-driven market, you must embed ethical foresight and adaptive resilience into your strategy.

Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy

More Reading:


Compliance Challenges: Copyright and the EU AI Act

Type: Article

Published: April 2025

Affected Region: EU

Summary: The evolving regulatory landscape in the EU is placing greater responsibility on developers of general-purpose AI models to ensure their training processes respect copyright law. With the rapid rise of large language models (LLMs) like ChatGPT and Claude, which rely on vast datasets often sourced through web scraping, the need for clear compliance mechanisms has become more critical.

The EU AI Act reinforces existing copyright protections and requires AI providers to implement policies that detect and honor opt-out signals from content creators. These signals, often delivered through machine-readable protocols like robots.txt or metadata tags, allow creators to restrict the use of their content for AI training. In addition, AI developers are expected to publish transparent summaries of the content used to train their models, providing rightsholders with a way to verify if their works have been included – and whether restrictions have been respected.
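The robots.txt mechanism mentioned above can be exercised with Python’s standard `urllib.robotparser`. The sketch below checks whether a site’s robots.txt permits given crawler user-agents to fetch a URL; the agent strings are illustrative examples of AI-training crawlers, and real opt-out policies vary by vendor and are not limited to robots.txt.

```python
from urllib import robotparser

# Illustrative AI-training crawler user-agents; actual strings vary by vendor.
AI_AGENTS = ["GPTBot", "ClaudeBot", "CCBot"]

def training_allowed(robots_txt: str, url: str) -> dict:
    """Return, per AI user-agent, whether this robots.txt permits fetching `url`."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return {agent: rp.can_fetch(agent, url) for agent in AI_AGENTS}

# A publisher opting out of one AI crawler while allowing everything else:
robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(training_allowed(robots, "https://example.com/article"))
```

As the article notes, this signal is only advisory: nothing in the protocol forces a crawler to call a check like this, which is why the Act pairs opt-out honoring with transparency obligations that let rightsholders verify compliance after the fact.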

A proposed industry code of practice further outlines how AI developers and providers should approach compliance, including commitments to adopt and support standardization efforts for opt-out protocols. However, challenges remain. Not all web crawlers respect these signals, and there is currently no single, unified standard that governs how rights are reserved across the web. While the move toward harmonized frameworks is promising, concerns persist that favoring only widely adopted protocols could marginalize innovative or creator-friendly alternatives.

Importantly, these requirements are not limited to developers and providers based in Europe. Any organization offering AI models in the EU must comply, regardless of where training occurred. This extraterritorial scope underscores a broader shift toward global accountability in AI development and responsible data use.

Analyst Perspective: The EU AI Act represents a meaningful advancement in AI regulation, especially when it comes to addressing long-standing copyright concerns. From a technology leadership perspective, it’s encouraging to see regulation that not only demands transparency in training data but also pushes for industry-aligned, machine-readable standards to manage rights reservations. That said, leaning heavily on outdated mechanisms like robots.txt highlights how far the ecosystem still has to go.

If we want AI to scale responsibly, we need smarter, enforceable, and future-ready protocols – not stopgaps. Striking the right balance between innovation and the rights of content creators is complex, but it’s essential. This isn’t about slowing down progress; it’s about doing it right.

Analyst: John Donovan, Principal Research Director – Infrastructure and Operations

More Reading:


SAG-AFTRA Calls Out ‘Fortnite’ Over Darth Vader AI Voice

Type: Article

Published: May 2025

Affected Region: USA

Summary: Epic Games recently introduced an AI-powered Darth Vader NPC in Fortnite, featuring the voice of the late James Earl Jones, which thrilled players but soon posed challenges for the developer. Within hours, players exploited its AI to produce inappropriate language, forcing Epic to deploy a rapid update to tighten its restrictions. Despite this fix, the incident sparked widespread sharing of the problematic clips online, marking the start of broader issues tied to this feature.

The situation intensified when SAG-AFTRA (the Screen Actors Guild – American Federation of Television and Radio Artists) filed an unfair labor practice charge with the National Labor Relations Board against Epic Games and Llama Productions, accusing them of bypassing union negotiations by implementing AI-generated voices without prior notice or bargaining. While SAG-AFTRA acknowledges that permission was granted to use Jones’ voice, it argues that Epic was still obligated to negotiate, given that other actors who are active and available have previously portrayed Darth Vader in video games. The dispute unfolds amid Epic’s ongoing legal clash with Apple over Fortnite’s iOS presence and amid waning support for SAG-AFTRA’s strike against AI exploitation in entertainment.

Analyst Perspective: Once again, this situation showcases how difficult it can be to weave AI into creative domains, particularly when replicating human performances tied to iconic legacies like Vader’s. Although AI has tremendous potential for innovation, it also opens doors to misuse and labor disputes. The legal and ethical frameworks governing AI in entertainment remain fluid, requiring companies to tread carefully to sidestep conflicts with stakeholders like unions. In my experience advising organizations, I’ve consistently advocated preemptive risk assessments and inclusive dialogue before deploying emerging tech. Epic’s current challenge reinforces the necessity of robust AI governance – balancing innovation with ethical accountability and labor rights to prevent this type of fallout from overshadowing technological promise.

Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy

More Reading:


Uncertainty Is Shaping the Canadian Privacy and AI Landscape

Type: Article

Published: April 2025

Affected Region: Canada

Summary: Uncertainty around the current and future relationship between Canada and the US has prompted many Canadian privacy and data protection practitioners to reevaluate existing approaches to cross-border data transfer and the associated privacy safeguards. Unlike European data protection laws, Canadian privacy laws do not restrict cross-border transfers of personal data to jurisdictions whose privacy laws have been deemed adequate. This has contributed to large quantities of Canadians’ personally identifiable information (PII) being stored in the US.

Given these growing concerns, privacy practitioners with legal, policy, and technology perspectives feel a responsibility to formulate solutions that further protect the PII of Canadians. Some of the solutions being considered are:

  • Legally forbidding companies operating in Canada from disclosing PII to foreign governments without specific consent, and strengthening domestic privacy laws.
  • Creating a public cloud in Canada for storage and AI platforms that mitigates the risks posed by the US CLOUD Act and similar legislation.
  • Modernizing cross-border data agreements, which permit countries to share information and cooperate with criminal investigations.
  • Demanding AI safety standards for sectors that control and process the most sensitive personal data, and developing ethical AI models.
  • Ensuring privacy by design in all publicly led AI initiatives.

Analyst Perspective: Uncertainty spans a wide range of domains – social, cultural, economic, political, legal, health, technological, ethical, and environmental – which undoubtedly adds complexity to how each relates to the others. However, in times like this, those who shape policy and enact change in these areas have a real opportunity to identify the risks associated with these “uncertain times” and impart positive and lasting change.

Notably, the key points identified align with necessary and overdue reforms to Canadian privacy and AI legislation (political), contractual agreements (legal), and information management (technological).

It seems irresponsible that a US company can operate in Canada with no controls around the US government’s ability to access Canadian PII without consent. Hopefully, tabling new legislation that addresses gaps in cross-border data transfers will be a priority for the new government.

Sovereign cloud infrastructure would facilitate an environment free from foreign oversight and legislation, providing assurance and protection to Canadian businesses that store and process the PII of Canadians. It would also foster the rapid growth of technologies like AI within a Canadian-owned and operated space.

Analyst: Mike Brown, Advisory Director – Security & Privacy

More Reading:


A Look Into the Updates to the NIST Privacy Framework

Type: Article

Published: April 2025

Affected Region: USA

Summary: The widely adopted NIST Privacy Framework has recently been updated to address emerging privacy needs and to align with NIST’s other trusted frameworks. The update, announced this past April, is the first since the framework was published in 2020. The framework, also known as the PFW, provides organizations with a roadmap for building effective governance processes for privacy risks throughout the data lifecycle.

Changes include the addition of new privacy roles, underscoring the importance of establishing responsibility and accountability among leadership. The update also advocates adequately resourcing staff, funding, and technology to support privacy initiatives and build an improvement roadmap.

AI has also influenced changes to the framework through the integration of privacy and AI governance, including new guidance on establishing roles, responsibilities, and accountability for AI-related privacy concerns. Furthermore, the PFW was updated to align with the recently updated NIST Cybersecurity Framework (CSF) 2.0, particularly its Govern and Protect functions. Given the overlap between privacy and security, integrating the two frameworks enables organizational efficiency by addressing enterprise risks together.

Analyst Perspective: Emerging privacy needs and the impact of AI have driven the updates to NIST’s PFW, with the aim of giving organizations effective, relevant guidance for building their privacy operations. Because organizations sit at different levels of privacy maturity, a roadmap that aligns with best practices while fostering continuous improvement lets them build a holistic, integrated privacy program through a phased approach. Such a program not only meets compliance requirements but can also be a key driver of business efficiency, promoting competitiveness within industries.

Analyst: Ahmad Jowhar, Research Analyst – Security & Privacy

More Reading:


Biometrics: An Intersection of the GDPR and the EU AI Act

Type: Article

Published: April 2025

Affected Region: EU

Summary: Once limited to law enforcement, biometrics are now used in various sectors like retail, HR, education, and online platforms, often to infer traits and emotions using AI. The EU’s General Data Protection Regulation (GDPR) governs biometric data processing as a special category of data. However, the European regulatory landscape is evolving. By layering on the new EU AI Act, the EU now categorizes biometric AI systems based on risk and mandates lifecycle governance of these systems.

Notably, the AI Act prohibits the creation of facial recognition data sets from indiscriminate scraping of web or CCTV images. This prohibition is absolute. Other prohibitions include:

  • Real-time remote biometric identification (RBI) for law enforcement is banned except in narrow cases. Other uses are deemed high risk, requiring extensive compliance such as data governance, risk & quality management, and certification & registration.
  • Systems able to infer emotions are prohibited in workplaces and schools, unless medically necessary.
  • Biometric categorization that leverages a list of sensitive traits (e.g., race, religion) is prohibited.

Organizations looking to leverage biometric technologies powered by AI may face significant compliance complexity due to overlapping regulations.

Analyst Perspective: The EU's strategy emphasizes proactive governance and ethical foresight, moving beyond traditional consent-based compliance to a lifecycle governance model. Although this approach may significantly increase operational and compliance costs for organizations, it can safeguard fundamental rights and foster trustworthy innovation. That said, distinguishing between prohibited, high-risk, and limited-risk applications – as well as interpreting where the organization operates as a controller, deployer, provider, etc. – may pose challenges, particularly for small and medium-sized enterprises. Effective navigation of this landscape will require clear, practical guidance from regulators and active collaboration within industries.

Analyst: Safayat Moahamad, Research Director – Security & Privacy

More Reading:


If you have a question or would like to receive these monthly briefings via email, submit a request here.

