Modern computing devices, from IoT sensors to cloud infrastructure, rely on system-on-chip (SoC) architectures that integrate components from multiple third-party vendors. Because these components arrive as black boxes whose internal logic cannot be audited, they introduce serious security risks. Current verification methods are not built for this complexity: they are labor-intensive, depend on specialized human expertise, and consistently miss the subtle vulnerabilities that emerge when multiple untrusted components interact on the same chip.
This technology automates SoC security verification using adversarial machine learning, where one AI model learns normal communication patterns while another generates novel attack scenarios to probe for weaknesses. The system continuously refines both models using a feedback loop, enabling it to uncover cross-component vulnerabilities that manual testing routinely misses. Unlike conventional approaches that evaluate components in isolation, this solution scales dynamically and reduces reliance on human oversight, offering a more systematic and efficient path to securing complex chip architectures.
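The adversarial feedback loop described above can be illustrated with a deliberately simplified sketch. Here, a one-class "surrogate" model learns a picture of normal on-chip traffic, an "attacker" generates candidate transactions probing a protected address range, and any harmful transaction that evades detection is fed back so the surrogate learns to flag similar traffic. All names, address ranges, and the distance-based detector are hypothetical toys for illustration, not the system's actual models:

```python
import math
import random

random.seed(0)

# Hypothetical setup: a bus transaction is an (address, payload_size) pair,
# and addresses 900-999 belong to a protected region that benign traffic
# from an untrusted component should never reach.
PROTECTED = range(900, 1000)

def normal_traffic(n):
    """Benign transactions: unprotected addresses, small payloads."""
    return [(random.randint(0, 899), random.randint(1, 64)) for _ in range(n)]

class Surrogate:
    """Toy one-class model of normal traffic plus examples of known attacks."""
    def __init__(self, normal):
        n = len(normal)
        self.mean = tuple(sum(t[i] for t in normal) / n for i in (0, 1))
        self.radius = max(self._dist(t) for t in normal) * 1.1
        self.known_bad = []  # evasions fed back by the loop

    def _dist(self, t):
        return math.hypot(t[0] - self.mean[0], t[1] - self.mean[1])

    def flags(self, t):
        """Suspicious if far from normal traffic or near a known attack."""
        if self._dist(t) > self.radius:
            return True
        return any(math.hypot(t[0] - b[0], t[1] - b[1]) < 50
                   for b in self.known_bad)

    def learn_attack(self, t):
        self.known_bad.append(t)

class Attacker:
    """Generates candidate transactions that may touch the protected region."""
    def propose(self):
        return (random.randint(0, 999), random.randint(1, 256))

def adversarial_loop(rounds=2000):
    surrogate = Surrogate(normal_traffic(200))
    attacker = Attacker()
    evasions = []
    for _ in range(rounds):
        t = attacker.propose()
        harmful = t[0] in PROTECTED
        if harmful and not surrogate.flags(t):
            evasions.append(t)          # vulnerability: harmful yet looks normal
            surrogate.learn_attack(t)   # feedback: detector now covers this region
    return evasions, surrogate

evasions, surrogate = adversarial_loop()
print(f"{len(evasions)} evasive attack transactions found")
```

Real systems would replace the centroid detector and random mutation with learned models on both sides, but the control flow is the same: the attacker's successes become the defender's training data, so each round of the loop shrinks the space of undetected cross-component attacks.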
An overview of surrogate model learning