
NIST Releases Preliminary Cyber AI Profile Integrating CSF 2.0 with AI

NIST released the preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (NIST IR 8596) in December 2025, with public comment open until January 30, 2026. The profile integrates CSF 2.0 with the AI Risk Management Framework to address three focus areas: securing AI systems, using AI for cyber defense, and countering AI-enabled attacks. Organizations can use this framework to align AI governance with cybersecurity risk management practices.

Fact-checked and reviewed — Kodi C.


The National Institute of Standards and Technology (NIST) released the preliminary draft of the Cybersecurity Framework Profile for Artificial Intelligence (NIST IR 8596) on December 16, 2025, with public comments accepted through January 30, 2026. The profile integrates the Cybersecurity Framework 2.0 (CSF 2.0) with the AI Risk Management Framework (AI RMF) to provide comprehensive guidance for managing AI-related cybersecurity risks. It addresses three focus areas: securing AI systems, using AI for cyber defense, and countering AI-enabled attacks. Organizations developing or deploying AI systems should evaluate the framework for integration into their governance structures.

Profile structure and scope

The Cyber AI Profile provides a structured approach to applying CSF 2.0 outcomes to AI-specific scenarios. The profile maps CSF 2.0's six functions—Govern, Identify, Protect, Detect, Respond, and Recover—to AI system lifecycle considerations. This mapping enables organizations already using CSF 2.0 to extend their cybersecurity programs to AI systems consistently.

Notably, the profile does not prescriptively define "artificial intelligence." NIST deliberately chose this approach to accommodate the fast-changing nature of AI technology and varying organizational definitions. This flexibility allows organizations to apply the profile to their own AI scope determinations rather than conforming to a potentially outdated technical definition.

The profile integrates with the AI RMF's risk management lifecycle, creating alignment between cybersecurity and AI governance practices. Organizations already implementing the AI RMF can use the Cyber AI Profile to strengthen the cybersecurity aspects of their AI governance. Conversely, organizations strong in cybersecurity can use the profile to extend their programs to AI-specific considerations.

Community-specific implementations are encouraged through the profile's adaptable structure. Different sectors—healthcare, financial services, critical infrastructure—face varying AI risk profiles. The framework provides a common foundation while supporting sector-specific adaptations reflecting unique regulatory and operational requirements.

Secure focus area

The Secure focus area addresses protecting AI systems themselves from cyber threats. AI systems present unique attack surfaces including training data, model architectures, inference endpoints, and supporting infrastructure. The profile provides guidance for identifying and mitigating risks specific to these AI components.

Training data security receives particular attention. Poisoning attacks that corrupt training data can cause models to behave incorrectly or maliciously. The profile recommends controls for training data provenance, integrity verification, and access management. Organizations should treat training datasets as critical assets requiring protection comparable to production systems.
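The provenance and integrity controls described above can be sketched with a hash manifest: record a cryptographic digest for every dataset file at ingestion, then re-verify before each training run. This is an illustrative sketch, not guidance from the profile itself; the function names and manifest format are assumptions.

```python
# Hypothetical training-data integrity check using SHA-256 manifests.
# Function names and the manifest layout are illustrative assumptions.
import hashlib
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large dataset files fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(dataset_dir: Path) -> dict:
    """Record a baseline hash for every file at ingestion time."""
    return {
        str(p.relative_to(dataset_dir)): file_sha256(p)
        for p in sorted(dataset_dir.rglob("*"))
        if p.is_file()
    }


def verify_manifest(dataset_dir: Path, manifest: dict) -> list:
    """Return files added or modified since the baseline was recorded."""
    current = build_manifest(dataset_dir)
    changed = [k for k in manifest if current.get(k) != manifest[k]]
    added = [k for k in current if k not in manifest]
    return sorted(changed + added)
```

In practice the baseline manifest would be stored and signed separately from the dataset, so an attacker who can modify training files cannot also rewrite the record of what those files should be.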

Model integrity protection addresses threats to the AI models themselves. Adversarial attacks can manipulate model behavior through crafted inputs. Model theft through extraction attacks can compromise intellectual property. The profile maps CSF 2.0 controls to these AI-specific threats, providing actionable guidance for model protection.

Inference endpoint security ensures that deployed AI systems resist attacks during operation. Rate limiting, input validation, output filtering, and monitoring recommendations help organizations protect operational AI systems. These controls address both availability threats (denial of service) and integrity threats (adversarial manipulation).
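Two of the controls named above, rate limiting and input validation, can be illustrated with a minimal sketch. The token-bucket parameters and the prompt-length limit are assumptions for demonstration, not values from the profile.

```python
# Illustrative inference-endpoint controls: a per-client token-bucket
# rate limiter and a pre-inference input validator. All limits are
# assumed values, not NIST recommendations.
import time


class TokenBucket:
    """Refills at `rate` tokens per second, up to `capacity` tokens."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; deny the request otherwise."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


def validate_prompt(payload: object, max_chars: int = 4096) -> bool:
    """Reject non-string or oversized inputs before they reach the model."""
    return isinstance(payload, str) and 0 < len(payload) <= max_chars
```

A deployment would keep one bucket per client identity (API key or source address) and log denials, since a burst of rejected requests is itself a useful detection signal.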

Defend focus area

The Defend focus area explores how organizations can use AI to enhance their cybersecurity posture. AI capabilities offer significant opportunities for threat detection, incident response, and security operations automation. The profile provides guidance for responsible integration of AI into security programs.

AI-enhanced threat detection can identify patterns and anomalies that human analysts or rule-based systems might miss. Machine learning models trained on network traffic, user behavior, and system logs can detect novel attacks. The profile recommends practices for developing, validating, and maintaining AI detection capabilities.

Security operations automation through AI can improve response times and reduce analyst burden. AI systems can triage alerts, enrich incident data, and suggest response actions. The profile addresses both the opportunities and risks of security automation, including the need for human oversight of automated decisions.
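The oversight pattern described above can be sketched as a triage router: the model scores each alert, only clear-cut cases are handled automatically, and everything ambiguous is queued for an analyst. The thresholds and scoring function here are illustrative assumptions.

```python
# Minimal triage-routing sketch: automated handling only at the extremes,
# human review for everything in between. Thresholds are assumed values.
def triage(alert: dict, score_fn, auto_close_below: float = 0.1,
           escalate_above: float = 0.9) -> str:
    """Route an alert using a model-assigned suspicion score in [0, 1]."""
    score = score_fn(alert)
    if score >= escalate_above:
        return "escalate"       # high confidence: page the on-call analyst
    if score < auto_close_below:
        return "auto_close"     # low confidence: close, but keep an audit trail
    return "human_review"       # ambiguous cases get human oversight
```

Keeping the thresholds configurable matters: as the model drifts or attackers adapt, widening the human-review band is a one-line mitigation that preserves oversight without retraining.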

The Defend focus area acknowledges that AI security tools introduce their own risks. AI models used for defense can be attacked, manipulated, or evaded. Organizations must secure their AI security tools with the same rigor applied to other security infrastructure. The profile recommends defense-in-depth approaches that do not rely solely on AI capabilities.

Thwart focus area

The Thwart focus area addresses countering AI-enabled attacks by adversaries. Threat actors now use AI for phishing content generation, malware development, vulnerability discovery, and attack automation. Organizations must understand and prepare for AI-enhanced threats.

AI-generated phishing content poses growing challenges for traditional detection methods. Machine-generated text can be grammatically correct, contextually appropriate, and highly personalized. The profile recommends detection strategies that account for AI-generated content characteristics and behavioral indicators beyond content analysis.

Automated attack tools powered by AI can operate faster and more adaptively than human-directed attacks. The profile addresses defensive strategies for AI-accelerated attacks, including automated response capabilities and resilience measures that limit damage even when initial defenses are breached.

Deepfakes and synthetic media create risks for identity verification and social engineering. AI-generated audio and video can impersonate individuals convincingly. The profile recommends verification procedures and technical controls that resist synthetic media attacks, including multi-factor authentication and out-of-band verification.

Integration with existing frameworks

The Cyber AI Profile is designed for integration with existing organizational frameworks rather than replacement. Organizations using CSF 2.0 can incorporate AI-specific considerations into their current structures. The profile provides mapping tables showing how CSF 2.0 outcomes apply to AI scenarios.

Integration with the AI RMF creates comprehensive AI governance coverage. The AI RMF addresses broader AI risks including fairness, transparency, and accountability, while the Cyber AI Profile focuses specifically on cybersecurity. Together, these frameworks provide holistic AI risk management guidance.

Sector-specific frameworks can incorporate the Cyber AI Profile as appropriate. Organizations in regulated industries may need to map the profile to industry-specific requirements. The flexible structure supports this mapping while maintaining consistency with the broader CSF and AI RMF ecosystems.

International alignment considerations are addressed in the profile. While developed by NIST for U.S. organizations, the profile acknowledges international AI governance developments. Organizations operating globally can use the profile alongside international frameworks like the EU AI Act and ISO/IEC 42001.

Public comment and finalization

NIST is accepting public comments on the preliminary draft through January 30, 2026. Organizations should review the profile and provide feedback on applicability, clarity, and practical implementation considerations. This input shapes the final profile content and ensures it addresses real-world organizational needs.

A virtual workshop scheduled for January 14, 2026, provides an opportunity for stakeholder engagement, including direct discussion with the NIST staff developing the profile. Organizations planning significant AI security investments should consider participating to understand the evolving guidance.

The finalization timeline has not been announced, but NIST typically incorporates public feedback over several months. Organizations should not wait for final publication to begin implementation planning. The preliminary draft provides sufficient guidance for initial assessment and program development.

Future updates to the profile are expected as AI technology and threats evolve. NIST has committed to maintaining the profile's relevance through periodic updates. Organizations should build adaptable AI security programs that can incorporate future guidance refinements.

Near-term action plan

  • Review the preliminary Cyber AI Profile draft and assess applicability to organizational AI systems.
  • Submit public comments to NIST by January 30, 2026, addressing implementation concerns.
  • Attend the January 14 virtual workshop for direct engagement with NIST.
  • Assess current AI systems against the Secure, Defend, and Thwart focus areas.
  • Identify gaps between current practices and profile recommendations.
  • Evaluate integration opportunities with existing CSF 2.0 and AI RMF implementations.
  • Brief leadership on the profile's implications for AI governance investments.
  • Begin planning for profile adoption once finalized.

Analysis summary

The NIST Cyber AI Profile represents a significant advancement in AI governance guidance, providing structured approaches for managing AI-related cybersecurity risks. The integration of CSF 2.0 with AI-specific considerations creates actionable guidance for organizations already familiar with NIST frameworks. The three focus areas comprehensively address both AI system protection and AI-enabled threat response.

Organizations developing or deploying AI systems should engage with this profile during the public comment period. The preliminary draft provides sufficient detail for initial assessment and planning, while stakeholder feedback will shape the final guidance. Early engagement positions organizations to adopt the finalized profile efficiently.

The profile's flexibility regarding AI definitions accommodates the fast-changing technology environment. Rather than anchoring to specific technical definitions that may quickly become outdated, the framework provides principles and practices applicable across AI implementations. This approach ensures longer-term relevance for organizational planning.

This analysis recommends that organizations begin Cyber AI Profile assessment and integration planning promptly. The guidance addresses critical gaps in AI cybersecurity practice, and early adoption supports competitive positioning as AI security expectations mature. The public comment period offers an opportunity to shape the guidance around organization-specific concerns.


Source material

  1. NIST Issues Preliminary Draft of Cyber AI Profile — wilsonelser.com
  2. NIST Publishes Preliminary Draft of Cybersecurity Framework Profile for AI — insideprivacy.com
  3. NIST issues draft AI cybersecurity framework profile for AI era — infosecbulletin.com
