The Seville Declaration on the Professionalisation of AI Auditing
Adopted by participants in the June 2025 AI Audit Workshops, Seville, Spain.
A-LIGN’s VP of Innovation and Strategy, Patrick Sullivan, leads the Certification Working Group that created the IAAA Audit Body of Knowledge. This group met in the Sevilla Sessions to bring together global experts in AI auditing to identify and address the gaps in current practice, structure, language, and expectations for the profession. The intent was to lay the groundwork for formalizing AI auditing as a recognized, standards-aligned discipline that ensures transparency, accountability, and safety in AI systems.
As the influence of artificial intelligence (AI) systems on decisions affecting individuals, institutions, and societies increases, so does the need for tools that ensure automated algorithmic systems and AI products are safe and accountable. If AI innovation is to revolutionize jobs, processes, and relationships, it can only do so when it is safe for, and trusted by, users and society. Adequate safety and accountability cannot be achieved without the capacity to independently and continuously evaluate AI systems, models, and their impacts through audits.
We, the undersigned participants of the Seville AI Audit Workshop, are AI experts from different corners of the world and from diverse fields including regulation, law, ethics, engineering, standards development, and frontline audit practice. We are committed to developing and shaping the AI auditing profession as a structured practice that draws on established IT and security auditing processes; captures the new risks and opportunities that AI poses; and brings transparency, accountability, and assurance to how AI systems operate and evolve over time.
During our first meeting in Seville (26–27 June 2025), we identified significant gaps in the current state of AI auditing practice. These gaps include:
- a lack of consistent terminology and definitions,
- undefined scope boundaries,
- missing professional standards,
- limited agreement on the qualifications required to conduct thorough and sufficient audits,
- an absence of consensus methods, metrics, benchmarks, and auditing procedures.
Without an established AI auditing profession, we believe that general commitments to AI safety and governance will fail both to identify and mitigate the many risks of AI and to realize its potential.
As representatives of an emerging profession, we gathered to listen to one another, challenge assumptions, and find alignment in our purpose. We acknowledged the need for interdisciplinary collaboration going forward. And that shared understanding became our foundation. As a result of this collaboration, we declare our collective intent to:
- Work towards the development of a professional discipline of AI auditing that is credible, interdisciplinary, grounded in evidence, and committed to advancing trust and safety in AI.
- Align the practice of AI auditing with recognized international regulations, best practices, and standards, including, where applicable but not limited to, ISO/IEC 42001, 42006, 27001, 27006, and related assessment frameworks, to ensure that AI audits are thorough and of sufficient depth to assure trust and safety.
- Define and share core competencies, methodologies, and ethical and accountability principles that guide AI auditors in their responsibilities to the public, to clients, and to the systems they assess.
- Offer our experience, tools, and networks to developers and policymakers to facilitate regulatory testing, improve AI policy, and promote best practices.
- Promote global collaboration so that AI audits remain responsive to cultural, legal, and regional differences while remaining repeatable and providing sufficient and appropriate assurance.
- Prioritize the meaningful involvement of impacted communities in audit design, scoping, execution, and follow-up, recognizing that legitimacy cannot exist without stakeholder voices.
As a first step in our shared journey, we will work together to define and test a set of “minimum viable audit” frameworks and metrics for different use cases, incorporating both established auditing procedures and the specific demands introduced by advanced AI technologies.
This declaration is not the final word. It is the beginning of an organized effort to formalize what many of us have already been practicing informally. In the months ahead, we will work together to advance this vision, convene others, and build the infrastructure needed to sustain a professional field of AI audit that brings cohesion and direction to the many efforts of people working around the world on AI evaluations, assurance, and trustworthiness.
Signed in good faith by the participants of the Seville Workshops,
July 2025