
Governance Briefing — NYDFS Circular Letter No. 2 (2024) on insurance AI governance

NYDFS Circular Letter No. 2 (2024) obliges insurers to implement board-approved AI governance, fairness testing, vendor oversight, and annual attestations covering their use of external consumer data and AI systems.


New York’s Department of Financial Services (NYDFS) issued Circular Letter No. 2 (2024) on 17 January 2024, establishing the most prescriptive governance expectations yet for insurers that deploy external consumer data and artificial intelligence (AI) systems. The letter applies to all insurers authorized in New York—including life, property and casualty, and health carriers—and covers both proprietary and third-party models used across underwriting, pricing, marketing, claims, and fraud detection. Boards and senior management must implement documented frameworks that ensure AI-enabled decisions are lawful, transparent, and free of unfair discrimination. The guidance takes immediate effect, with the first annual attestation due 1 April 2025 for the 2024 reporting year, creating a compressed timeline for insurers to overhaul governance and control environments.

Establishing board-level accountability and governance policies

NYDFS expects insurer boards to approve an enterprise-wide governance program for external consumer data and information sources (ECDIS) and AI systems. The program must assign clear roles to the board, senior management, business units, compliance, risk management, model risk management, and internal audit. Boards should amend charters to capture oversight of ECDIS and AI strategy, risk appetite, and policy exceptions. Senior management must implement policies covering model development, acquisition, deployment, monitoring, and retirement. Insurers should draft a dedicated ECDIS policy that references related standards—information security, third-party risk, fair lending, privacy, and model risk—ensuring consistent expectations across the enterprise.

NYDFS also requires insurers to maintain an enterprise inventory of all AI systems and external data sources, including third-party models embedded in vendor platforms. The inventory should capture use cases, decisioning authority (automated versus human-in-the-loop), input variables, training data provenance, and validation status. Governance committees need to review the inventory at least annually, flagging high-risk use cases for enhanced scrutiny, such as predictive models that affect eligibility, pricing, or claims outcomes.
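To make the inventory concrete, the sketch below shows one way an inventory record could be structured; the field names and example values are illustrative assumptions, not a schema prescribed by NYDFS.

```python
from dataclasses import dataclass
from enum import Enum


class DecisionMode(Enum):
    """Level of automation for decisions the system influences."""
    AUTOMATED = "automated"
    HUMAN_IN_THE_LOOP = "human_in_the_loop"


@dataclass
class EcdisInventoryRecord:
    """One entry in the enterprise ECDIS/AI inventory (illustrative fields)."""
    system_name: str
    business_use_case: str              # e.g. underwriting, pricing, claims triage
    decision_mode: DecisionMode
    input_variables: list[str]
    training_data_provenance: str       # internal, vendor-supplied, public, mixed
    third_party_vendor: str | None = None
    validation_status: str = "pending"  # pending, validated, remediation required
    high_risk: bool = False             # affects eligibility, pricing, or claims outcomes
    last_governance_review: str = ""    # ISO date of the most recent annual review


# Hypothetical record for a vendor-embedded pricing model
record = EcdisInventoryRecord(
    system_name="AcmeRate Score v3",
    business_use_case="personal auto pricing",
    decision_mode=DecisionMode.HUMAN_IN_THE_LOOP,
    input_variables=["credit_based_insurance_score", "territory", "vehicle_age"],
    training_data_provenance="vendor-supplied",
    third_party_vendor="AcmeRate Inc.",
    high_risk=True,
)
```

Capturing a high-risk flag at the record level lets governance committees filter the inventory quickly during the annual review.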

Risk assessment, model development, and validation expectations

Insurers must conduct documented risk assessments before deploying or materially changing any ECDIS use case. Assessments should evaluate compliance, fairness, cybersecurity, privacy, and operational risks, considering factors such as consumer impact, reliance on protected class proxies, and explainability limitations. The circular letter mandates pre-deployment testing to confirm models do not produce unfair discrimination; insurers should design statistical fairness tests (e.g., adverse impact ratios, disparity analyses) and qualitative reviews of feature importance. Validation teams must evaluate training data quality, ensure representativeness, and confirm that data transformations do not introduce bias.
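As an illustration of the statistical fairness tests contemplated here, the sketch below computes adverse impact ratios: each group's favourable-outcome rate divided by a reference group's rate. The 0.8 flag is the conventional four-fifths heuristic, assumed for illustration; it is not a threshold set by the circular letter.

```python
def adverse_impact_ratio(favorable: dict[str, int],
                         totals: dict[str, int],
                         reference_group: str) -> dict[str, float]:
    """Ratio of each group's favorable-outcome rate to the reference group's rate."""
    ref_rate = favorable[reference_group] / totals[reference_group]
    return {g: (favorable[g] / totals[g]) / ref_rate for g in totals}


# Hypothetical underwriting approvals by group (illustrative counts only)
approved = {"group_a": 720, "group_b": 480}
applied = {"group_a": 900, "group_b": 800}

for group, ratio in adverse_impact_ratio(approved, applied, "group_a").items():
    flag = "review for disparity" if ratio < 0.8 else "within heuristic tolerance"
    print(f"{group}: AIR={ratio:.2f} ({flag})")
```

Statistical screens like this are only a first pass; the letter also expects qualitative review of feature importance and proxy risk.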

Model risk management frameworks should be updated to reflect AI-specific characteristics. Independent validators—separate from model developers—must review methodology, documentation, and performance metrics. Validation documentation should include conceptual soundness assessments, benchmark comparisons, back-testing results, stress testing, and limitations. Monitoring plans must specify performance thresholds, drift indicators, and triggers for recalibration or retirement. NYDFS emphasises human oversight: even when AI systems automate decisions, insurers should retain personnel capable of explaining outcomes, challenging anomalies, and overriding model outputs when warranted.
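One way to make drift indicators and recalibration triggers concrete is a population stability index (PSI) check on a score or input distribution, sketched below. The 0.10 and 0.25 thresholds are common industry rules of thumb assumed for illustration, not values drawn from the circular letter.

```python
import math


def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (each list of bin shares sums to 1.0)."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))


# Hypothetical score distributions: validation baseline vs. current quarter
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current = [0.05, 0.15, 0.35, 0.25, 0.20]

psi = population_stability_index(baseline, current)
if psi > 0.25:
    print(f"PSI={psi:.3f}: trigger recalibration or retirement review")
elif psi > 0.10:
    print(f"PSI={psi:.3f}: heightened monitoring and human review")
else:
    print(f"PSI={psi:.3f}: within tolerance")
```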

Consumer fairness, transparency, and disclosure obligations

NYDFS reiterates that insurers remain responsible for compliance with New York Insurance Law § 2303, § 2403, and federal anti-discrimination statutes. Insurers must demonstrate that ECDIS-driven decisions do not rely on prohibited factors or proxies for protected classes. When adverse actions occur, insurers must provide specific, plain-language reasons referencing variables actually used in the decision. Circular Letter No. 2 encourages insurers to furnish additional educational materials, such as FAQs or model fact sheets, describing how AI systems influence underwriting or claims, the data sources involved, and consumer remediation options.

Complaints related to AI decisions must be logged, investigated, and analysed for systemic issues. Insurers should integrate complaint analytics into model monitoring dashboards, tracking themes that may signal bias or data quality problems. When consumers request corrections to external data, insurers must coordinate with vendors to validate and update records promptly, documenting the resolution.
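A minimal sketch of how complaint themes could feed model monitoring follows; the tagging taxonomy, model names, and escalation threshold are assumptions an insurer would define for itself.

```python
from collections import Counter

# Hypothetical complaint log entries tagged by model and theme
complaints = [
    {"model": "AcmeRate Score v3", "theme": "unexplained_premium_increase"},
    {"model": "AcmeRate Score v3", "theme": "disputed_external_data"},
    {"model": "ClaimsTriage v1", "theme": "delayed_payout"},
    {"model": "AcmeRate Score v3", "theme": "unexplained_premium_increase"},
]

ALERT_THRESHOLD = 2  # illustrative: escalate when a theme recurs for the same model

theme_counts = Counter((c["model"], c["theme"]) for c in complaints)
for (model, theme), count in theme_counts.items():
    if count >= ALERT_THRESHOLD:
        print(f"Escalate to model monitoring: {model} / {theme} ({count} complaints)")
```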

Third-party vendor management and contractual safeguards

Many insurers rely on vendors for rating engines, marketing platforms, and claims analytics. NYDFS makes clear that outsourcing does not transfer accountability. Insurers must conduct due diligence on vendors’ governance frameworks, data provenance, fairness testing methodologies, and security controls. Contracts should mandate transparency into model features and training data, provide audit and access rights, require notification of model changes, and stipulate remedies for compliance failures. Insurers should incorporate vendor models into their inventory, risk assessments, and validation schedules, and ensure vendors support regulatory inquiries.

Where vendors assert trade secret protections, insurers must negotiate mechanisms—such as secure data rooms, independent validation arrangements, or escrowed documentation—that allow the insurer to meet NYDFS’s documentation requirements without exposing proprietary code. Third-party risk management teams should conduct periodic reviews of vendors’ control environments, including site visits, SOC reports, and cybersecurity assessments, tailored to AI-related risks.

Documentation, recordkeeping, and attestation requirements

NYDFS expects comprehensive documentation for every ECDIS use case, including policies, risk assessments, validation reports, monitoring results, incident logs, consumer disclosures, and board minutes. Records must be retained for at least six years and made available to examiners upon request. Insurers should centralise documentation in secure repositories with access controls, version history, and metadata tagging to support retrieval during examinations.
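As a sketch of the metadata tagging and retention logic described above, the snippet below attaches illustrative tags to one repository record and computes the earliest purge date; the field names are assumptions, and only the six-year minimum comes from this briefing.

```python
from datetime import date, timedelta

# Illustrative metadata tags for one document in the ECDIS repository
validation_report = {
    "use_case": "personal auto pricing",
    "doc_type": "validation_report",  # policy, risk_assessment, monitoring_result, ...
    "version": "2.1",
    "created": date(2024, 9, 30),
    "access_roles": ["model_risk", "compliance", "internal_audit"],
}

RETENTION_YEARS = 6  # minimum retention period noted above


def retention_expiry(created: date, years: int = RETENTION_YEARS) -> date:
    """Earliest date the record may be purged (approximated with 365-day years)."""
    return created + timedelta(days=365 * years)


print(retention_expiry(validation_report["created"]))  # roughly six years after creation
```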

Beginning 1 April 2025, the chief executive officer and the board (or an authorised senior officer) must attest annually that the insurer has adopted and is maintaining a program that complies with the circular letter. To support the attestation, compliance functions should design readiness assessments, internal audits, and testing schedules that cover all business lines. Attestation packages should summarise program maturity, incident reports, outstanding remediation items, and the plans to close them.

Incident response and regulatory engagement

NYDFS requires insurers to notify the department within 30 days of identifying any material issues—such as systemic unfair discrimination, unauthorized data use, or security breaches—related to ECDIS. Incident response plans should include decision trees for determining materiality, escalation pathways to legal and compliance teams, and communication templates for regulators and consumers. Post-incident reviews must document root causes, corrective actions, and enhancements to controls.
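A simplified sketch of materiality triage for such a plan appears below; the trigger list mirrors the examples in this briefing, while encoding the 30-day clock alongside the triggers is an assumption about how a plan might operationalise the requirement.

```python
from datetime import date, timedelta

# Materiality triggers taken from the briefing's examples; a real plan would
# define these with legal and compliance input and capture edge cases.
MATERIAL_ISSUE_TYPES = {
    "systemic_unfair_discrimination",
    "unauthorized_data_use",
    "security_breach",
}


def notification_deadline(issue_type: str, identified_on: date) -> date | None:
    """Return the NYDFS notification deadline, or None if the issue is not material."""
    if issue_type in MATERIAL_ISSUE_TYPES:
        return identified_on + timedelta(days=30)  # 30-day window described above
    return None


print(notification_deadline("security_breach", date(2025, 3, 3)))  # 2025-04-02
```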

Insurers should expect NYDFS to incorporate ECDIS oversight into regular examinations. Preparing examination binders, training examination liaisons, and running mock supervisory requests can reduce disruption. Industry associations may coordinate with NYDFS on interpretive questions; insurers should monitor FAQs, webinars, and enforcement trends to refine their programs.

Implementation roadmap

Given the aggressive timeline, insurers should immediately launch multi-phase implementation programs. Phase one should focus on governance: appointing accountable executives, creating policies, compiling the model inventory, and performing gap analyses against existing model risk frameworks. Phase two should address high-risk use cases, executing risk assessments, validation, and fairness testing while negotiating updated vendor contracts. Phase three should institutionalise monitoring, consumer disclosures, training, and audit schedules, culminating in attestation readiness testing early in 2025.

By operationalising NYDFS’s guidance with disciplined governance, rigorous testing, and transparent consumer engagement, insurers can harness AI innovations while protecting policyholders and satisfying heightened regulatory expectations.

