Verification Workshop
San Francisco
May 17, 2026

In partnership with the Center for AI Safety, FAR.AI is convening a workshop on infrastructure for secure and verifiable AI development, colocated with IEEE S&P in San Francisco on May 17, 2026. The workshop will bring together researchers, engineers, and funders working across machine learning, hardware security, systems, cryptography, and computer security to discuss recent work and emerging technical approaches to enable third parties such as auditors to verify the safety and security of AI systems during development and deployment.
A growing body of research already explores mechanisms that could support verifiable claims about AI systems, including work on confidential computing, hardware-backed attestation, side-channel measurement, zero-knowledge proofs, proof-of-training and proof-of-learning, model fingerprinting, and formal verification. However, many of these efforts have developed within separate research communities, and there has been relatively limited opportunity to examine how these approaches might interact or be combined in practice to support auditing and oversight of AI systems.
As AI capabilities improve and the potential consequences of failure grow, stakeholders including AI users, policymakers, investors, and insurers may require stronger forms of assurance about how advanced systems are trained, evaluated, and deployed. Enabling such assurance may require technical progress that allows auditors to independently verify certain claims about AI development and operation — while preserving confidentiality for sensitive assets such as model weights. In certain high-stakes settings, continual monitoring may also be necessary, along with solutions that remain adversarially robust against determined attempts at subversion. This workshop aims to provide a forum for researchers to share ongoing work, examine open technical challenges, and explore how advances in hardware, systems, and cryptography could contribute to higher-assurance auditing and verification mechanisms for AI development and deployment.
The workshop will explore research directions such as:
- Confidential evaluations and auditing of AI development processes: How can hardware- and systems-level mechanisms (e.g. secure enclaves, attestation, telemetry) and cryptographic approaches such as zero-knowledge proofs and proof-of-training enable auditors to verify claims about a model’s training process and safety-relevant properties — such as computation used or evaluation results — while protecting model weights and other sensitive IP against theft or unintended disclosure? How could monitoring technologies be used to generate auditable signals when systems change in ways that could invalidate prior assessments, while minimizing the risk of sensitive information leakage? (A toy sketch of one relevant building block, a hash-chained training transcript, appears after this list.)
- Compute accounting: What technical approaches could strengthen assurance that reported claims about AI development are complete, including by helping detect or rule out unreported large-scale training activity?
- Verifiable inference and change detection: How might mechanisms such as confidential computing, cryptographic proofs, or external measurement support verifiable claims about model capabilities, safety properties, or continued validity of evaluation results at inference time? To what extent can black-box or limited-access techniques detect meaningful changes to models or AI systems—for example, changes to model weights, system prompts, inference optimizations, or auxiliary classifiers—and what are the limits and reliability of such fingerprinting approaches? (A minimal fingerprinting sketch also appears after this list.)
- Formal verification applications: How might advances in formal methods—including automated specification generation, theorem proving, and AI-assisted verification—support the development of high-assurance audit infrastructure? What near-term applications of formal verification could help increase confidence in specific components of AI verification systems?
- Adversarial testing of verification mechanisms: How should verification and monitoring systems be evaluated against adversarial threat models, including attempts to spoof attestations, bypass monitoring infrastructure, or operate shadow systems? What kinds of red-teaming or adversarial testing frameworks are needed to ensure these mechanisms are robust in practice?
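To make the first direction above more concrete, here is a deliberately simplified Python sketch of one building block that several proof-of-training proposals rely on: a hash-chained transcript of training commitments, which an auditor can verify without ever seeing the weights or data. Everything here is illustrative (the function names, the genesis value, the use of SHA-256 over string labels are all assumptions for the sketch); real schemes must additionally bind these commitments to the actual gradient computation, for example via trusted hardware or cryptographic proofs.

```python
# Deliberately simplified sketch of a hash-chained training transcript,
# loosely inspired by proof-of-training proposals. All names and values
# here are illustrative, not part of any real protocol.
import hashlib

def h(*parts: bytes) -> bytes:
    """SHA-256 over the concatenation of the inputs."""
    return hashlib.sha256(b"".join(parts)).digest()

def extend_transcript(prev: bytes, batch_hash: bytes, ckpt_hash: bytes) -> bytes:
    """Fold one training step's commitments into the running chain digest."""
    return h(prev, batch_hash, ckpt_hash)

# Trainer side: commit to each data batch and checkpoint as training runs.
digest = b"\x00" * 32  # arbitrary genesis value
log = []
for step in range(3):
    batch_hash = h(f"batch-{step}".encode())      # stand-in for a real data commitment
    ckpt_hash = h(f"checkpoint-{step}".encode())  # stand-in for a weights commitment
    digest = extend_transcript(digest, batch_hash, ckpt_hash)
    log.append((batch_hash, ckpt_hash))

# Auditor side: given only the logged commitments (not the weights or data),
# recompute the chain and check it matches the attested final digest.
check = b"\x00" * 32
for batch_hash, ckpt_hash in log:
    check = extend_transcript(check, batch_hash, ckpt_hash)
assert check == digest
```

Because the chain digest depends on every prior step, a trainer cannot later insert, drop, or reorder steps without the auditor's recomputation failing — though, as the questions above note, binding these commitments to the computation that actually produced the checkpoints is the hard open problem.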
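The change-detection direction asks how far black-box techniques can go. The following minimal sketch shows the basic idea under strong assumptions: `query_model` is a hypothetical stand-in for a deterministic (temperature-0) text-generation endpoint, and the probe prompts are arbitrary examples, not a vetted probe set.

```python
# Minimal sketch of black-box change detection via output fingerprinting.
# `query_model` is a hypothetical stand-in for a deterministic
# (temperature-0) text-generation endpoint; the probes are arbitrary.
import hashlib
import json
from typing import Callable, List

PROBES: List[str] = [
    "Repeat exactly: 7f3a-verification-probe",
    "What is 17 * 23?",
    "Complete this sentence: The quick brown fox",
]

def fingerprint(query_model: Callable[[str], str], probes: List[str]) -> str:
    """Hash the model's outputs on a fixed set of probe prompts."""
    outputs = [query_model(p) for p in probes]
    blob = json.dumps(outputs, ensure_ascii=False).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def unchanged(query_model: Callable[[str], str], probes: List[str], reference: str) -> bool:
    """Re-run the probes; a mismatch suggests weights, system prompt,
    or the serving stack changed since the reference hash was taken."""
    return fingerprint(query_model, probes) == reference

# Usage with a stub standing in for a real model endpoint:
stub = lambda prompt: prompt.upper()
reference = fingerprint(stub, PROBES)
assert unchanged(stub, PROBES, reference)
```

In practice, sampling nondeterminism, inference-time optimizations, and benign serving changes can all flip the hash without any change to model weights — which is precisely why the reliability and limits of such fingerprinting approaches are open research questions for this workshop.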
We aim to support exploration of novel research directions that can enable auditing and verification of claims about AI safety and security made by AI developers, and the questions listed above are not intended to be exhaustive. This reading list provides further detail on motivations and research directions in this space.
The workshop will be closed-door and operate under the Chatham House Rule.
Goals:
- Provide a forum for presenting and discussing recent and ongoing work on relevant research topics across ML, cryptography, systems, and hardware security.
- Clarify promising technical directions and open questions for future research, including through comparison of existing proposals and discussion of their assumptions, threat models, and implementation challenges.
- Catalyze new research, demonstrations, and collaborations by connecting attendees with potential collaborators, funders, and follow-on opportunities.
- Foster an interdisciplinary community of researchers and practitioners interested in these topics.
Hybrid event: The event will be primarily in-person. In exceptional cases, we may allow remote participants to join presentations and other parts of the program.
Target Audience:
- Researchers and engineers with expertise in machine learning, hardware security, cryptography, computer security, and AI policy
- AI start-ups, independent evaluators, and AI industry representatives working on relevant projects
- Grantmakers at philanthropic foundations and other institutions
Travel Support: If financial constraints would prevent you from attending in person, contact verificationworkshop@far.ai to inquire about travel support. Our budget for this is limited, but please reach out if you need it.
Questions: If you have any questions about the event, please email verificationworkshop@far.ai.
Speakers
Media & attribution
The Verification Workshop will be a closed-door event, will operate under the Chatham House Rule for participants, and will be off the record for press.
Event Partners