Facilitating Security Verification of System Designs Using Adversarial Machine Learning

Advantages:

  • Automated security verification drastically reduces reliance on specialized human expertise
  • Hidden flaws from complex multi-component interactions are consistently identified and exposed
  • Novel attack scenarios are continuously generated, tested, and refined for better coverage
  • The solution scales across diverse hardware and software architectures with minimal friction

Business Summary:

Modern computing devices, from IoT sensors to cloud infrastructure, rely on System-on-Chip (SoC) architectures that integrate components from multiple third-party vendors. These black-box components introduce serious security risks. Current verification methods are not built for this complexity: they are labor-intensive, dependent on specialized human expertise, and consistently fail to catch the subtle vulnerabilities that emerge when multiple untrusted components interact on the same chip.

This technology automates SoC security verification using adversarial machine learning, where one AI model learns normal communication patterns while another generates novel attack scenarios to probe for weaknesses. The system continuously refines both models using a feedback loop, enabling it to uncover cross-component vulnerabilities that manual testing routinely misses. Unlike conventional approaches that evaluate components in isolation, this solution scales dynamically and reduces reliance on human oversight, offering a more systematic and efficient path to securing complex chip architectures.
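The two-model loop described above can be sketched in miniature. The toy Python example below is an illustrative assumption, not the patented method: a "defender" learns a simple bound on normal transaction payload sizes, while an "attacker" mutates payloads and uses detection feedback to search for probes that evade the defender. The threshold detector, the halving search, and all names here are hypothetical stand-ins for the learned models in the actual system.

```python
import random

random.seed(0)

# Toy model of one verification round. A "transaction" is reduced to a
# payload size; real on-chip traffic would carry far richer features.

def normal_traffic(n=200):
    """Benign on-chip transactions with small payloads (assumed range)."""
    return [random.randint(1, 64) for _ in range(n)]

class Defender:
    """Stand-in for the model that learns normal communication patterns."""
    def fit(self, traffic):
        self.max_payload = max(traffic)  # crude learned bound on "normal"
    def flags(self, payload):
        return payload > self.max_payload

class Attacker:
    """Stand-in for the model that generates and refines attack scenarios."""
    def __init__(self, start=512):
        self.best = start                # begin with an obvious anomaly
    def propose(self):
        return max(1, self.best + random.randint(-64, 64))
    def feedback(self, payload, detected):
        if detected:
            self.best = max(1, self.best // 2)  # retreat toward normal range
        else:
            self.best = payload          # keep refining an evading probe

defender = Defender()
defender.fit(normal_traffic())

attacker = Attacker()
evasions = []
for _ in range(100):                     # the adversarial feedback loop
    p = attacker.propose()
    caught = defender.flags(p)
    attacker.feedback(p, caught)
    if not caught:
        evasions.append(p)               # an attack the defender missed

print(f"learned normal bound: {defender.max_payload}")
print(f"undetected attack payloads found: {len(evasions)}")
```

In the actual system both sides would be learned models updated jointly; the halving search here merely stands in for the attacker's refinement step, and each undetected probe corresponds to a candidate cross-component vulnerability worth investigating.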


Figure: An overview of surrogate model learning.

Desired Partnerships:

  • License
  • Sponsored Research
  • Co-Development

Patent Information:
Title: Facilitating Security Verification Of System Designs Using Adversarial Machine Learning
App Type: Utility
Country: United States
Serial No.: 18/750,135
Patent No.: 12,579,283
File Date: 6/21/2024
Issued Date: 3/17/2026
Expire Date: 7/26/2044