Independent Audit and Assessment Framework for AI Systems

AI systems are increasingly being used in critical applications where safety and reliability are paramount, such as health care, autonomous driving, and financial services. Many of these systems operate as “black boxes”: they process inputs and produce outputs without revealing how they reach their conclusions, which makes their decisions difficult to understand, trust, and verify. Existing assessment approaches each have limitations, such as a lack of standardization, intrusiveness, and potential inconsistency. Addressing this problem matters for several reasons, including ensuring that AI systems make decisions in a fair, unbiased, and accurate manner and that they meet legal and regulatory requirements for transparency and explainability. There is therefore a need for a standardized, transparent method to externally assess the safety and performance of AI systems without access to their internal mechanics.

Researchers at Arizona State University have developed an AI Interface Specification (AIIS) and an AI Assessment Tool (AIAT) that together provide an efficient way to evaluate the safety and performance of AI systems without needing to understand their internal workings. The AIIS specifies the set of queries that an AI system should be able to accept, and the types of outputs it must yield, to support audits in this paradigm. The specification places no restrictions on the internal design of AI systems and does not require access to their source code. The second component, the AIAT, uses a well-defined algorithmic process to generate inputs for the target AI system that conform to the AIIS. The tool uses logic-based and natural-language representations so that the derived models of an AI system’s capabilities are readily understandable by human users and auditors.
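To make the paradigm concrete, the sketch below illustrates one way such a query interface and black-box assessor could fit together. It is a minimal, hypothetical Python rendering, not the actual ASU implementation: the names (Query, Response, QueryableAI, Assessor) and the query structure are illustrative assumptions.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass


    @dataclass
    class Query:
        """A structured query the audited system must accept (AIIS-style)."""
        state: dict   # description of the situation posed to the system
        action: str   # the capability being probed


    @dataclass
    class Response:
        """A structured output the audited system must yield."""
        feasible: bool  # whether the system reports it can perform the action
        outcome: dict   # the state the system predicts would result


    class QueryableAI(ABC):
        """Interface contract in the spirit of the AIIS: any internal design
        is allowed, as long as the system answers queries of this form."""

        @abstractmethod
        def answer(self, query: Query) -> Response: ...


    class Assessor:
        """A minimal AIAT-style auditor: it generates interface-conformant
        queries, records the black-box responses, and accumulates a
        human-readable model of the system's apparent capabilities."""

        def __init__(self, system: QueryableAI):
            self.system = system
            self.capability_model: list[str] = []  # natural-language summaries

        def probe(self, queries: list[Query]) -> None:
            for q in queries:
                r = self.system.answer(q)
                verdict = "can" if r.feasible else "cannot"
                # Logic-/language-based summary a non-expert auditor can read.
                self.capability_model.append(
                    f"In state {q.state}, the system reports it {verdict} "
                    f"perform '{q.action}' (predicted outcome: {r.outcome})."
                )

        def report(self) -> str:
            return "\n".join(self.capability_model)

Note that in this sketch the assessor depends only on the interface, never on the audited system’s internals, mirroring the claim that no source-code access is required.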

Potential Applications:

  • Tech companies  
  • AI development companies
  • Regulatory and compliance agencies (e.g., AI safety regulatory agencies)

Benefits and Advantages:

  • Risk Mitigation: reduces the risk of AI failures or unintended behaviors in critical applications through thorough evaluation
  • Accessibility: makes AI assessment more accessible to a broader audience, including non-technical stakeholders
  • Adaptability: enables evaluation of AI systems’ ability to adapt to different tasks and environments

Patent Information: