Functional verification ensures that the register transfer level (RTL) implementation of a semiconductor design operates according to its specified requirements. Electronic engineers typically perform functional verification using hardware verification languages (HVLs) such as SystemVerilog paired with the Universal Verification Methodology (UVM). Other languages, such as the VHSIC Hardware Description Language (VHDL) and the Property Specification Language (PSL), may also be used for specific applications.
This article discusses how artificial intelligence (AI) and machine learning (ML) accelerate functional verification processes, including coverage, debugging, and regression. It also highlights the expanding capabilities of large language models (LLMs) across electronic design automation (EDA) workflows and explores advances in functional verification using convolutional neural network (CNN) architectures. Lastly, the article reviews the challenges of integrating AI-driven tools into verification workflows.
Accelerating coverage, debugging, and regression
As shown in Figure 1, functional verification is a time-intensive process. Engineers must analyze documentation, create a verification plan, build the verification environment, execute defined tests, and achieve complete test coverage.

AI and ML technologies, including LLMs, can significantly optimize functional verification workflows by automating or accelerating key tasks, such as:
- Converting natural language specifications into SystemVerilog Assertions (SVA) or other verification languages — achieving a 9.29% success rate under typical design conditions and 80% accuracy in optimal scenarios.
- Generating verification code directly from design specifications and producing Verilog code with success rates ranging from 59.9% to 98.7%, depending on complexity.
- Using advanced ML algorithms to generate test stimuli — achieving up to 98.94% coverage on simpler designs and 86.19% on more complex ones.
- Predicting code coverage with accuracy rates of 20–30% at the line level and 84–90% at the statement level.
- Automating and expediting the selection and implementation of formal proof engines.
- Analyzing failed test reports to reproduce 33.5% of reported bugs, streamlining debugging workflows and expediting root cause analysis.
- Automating the identification of high-priority test cases and predicting failure scenarios to significantly improve regression efficiency (see the sketch after this list).
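To make the last point concrete, the sketch below shows one common framing of ML-based test prioritization: train a classifier on historical pass/fail results and rank the next regression's tests by predicted failure probability. The features, test names, and data here are hypothetical placeholders, not taken from any vendor flow.

```python
# Hypothetical sketch: rank regression tests by predicted failure probability.
# Feature names, test names, and labels are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic history: one row per past test run.
lines_changed = rng.integers(0, 200, size=500)   # lines changed in the DUT
runtime_min   = rng.integers(1, 240, size=500)   # test runtime in minutes
recent_fails  = rng.integers(0, 10, size=500)    # failures in the last 10 runs
X_history = np.column_stack([lines_changed, runtime_min, recent_fails])

# Toy labels standing in for real pass/fail history: larger code changes and
# more recent failures make a new failure more likely.
y_history = ((recent_fails > 2) | (lines_changed > 150)).astype(int)

model = GradientBoostingClassifier().fit(X_history, y_history)

# Candidate tests for the next regression, same feature layout as above.
candidates = {
    "uart_tx_smoke":   [5, 12, 0],
    "axi_burst_rand":  [120, 95, 6],
    "pcie_link_train": [180, 240, 1],
}
features = np.array(list(candidates.values()))
fail_prob = model.predict_proba(features)[:, 1]

# Run the tests most likely to fail first.
for name, p in sorted(zip(candidates, fail_prob), key=lambda t: -t[1]):
    print(f"{name:16s} predicted failure probability = {p:.2f}")
```

Ordering tests this way surfaces likely failures earlier in the regression, which is the basic mechanism behind the efficiency gains described above.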
Expanding LLM capabilities across EDA workflows
Recent advances in commercial and open-source LLMs optimize EDA workflows beyond functional verification, spanning front-end, back-end, and production test phases. These models automate critical tasks such as generating code, responding to engineering queries, and assisting with documentation, including report generation and bug triage. Another key application is automating script generation to integrate EDA tools, reference methodologies, and proprietary logic.
Foundation models like Code Llama excel at generating Python scripts and can be fine-tuned for other scripting languages, such as Perl and Tcl, which are widely used with GUI-based EDA tools. By enabling AI-driven engineering assistants to create and explain scripts and facilitate natural language interactions, LLMs bridge the gap between engineers and design interfaces. This capability optimizes efficiency while addressing the increasing complexity of chip design workflows.
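As a concrete illustration, the script below is the kind of small utility such an assistant might generate on request, for example "summarize the failing tests in this regression log." The log format, file name, and failure syntax are assumptions made for the example rather than the output of any particular simulator.

```python
# Hypothetical example of assistant-generated glue code: summarize failures
# from a plain-text regression log. The log format shown is illustrative only.
import re
from collections import Counter

LOG_FILE = "regression.log"  # assumed file name
# Assumed line format, e.g. "FAIL axi_burst_rand (assertion error)"
FAIL_RE = re.compile(r"^FAIL\s+(\S+)\s+\((.+)\)$")

failures = Counter()
with open(LOG_FILE, encoding="utf-8") as log:
    for line in log:
        match = FAIL_RE.match(line.strip())
        if match:
            test_name, reason = match.groups()
            failures[(test_name, reason)] += 1

print(f"{len(failures)} distinct failures found")
for (test_name, reason), count in failures.most_common():
    print(f"  {test_name}: {reason} (x{count})")
```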
Advancing functional verification with CNN architectures
To further optimize functional verification, researchers recently developed two convolutional neural network (CNN) architectures that efficiently classify information across diverse documentation. Tested on Apple M1 hardware with a 10-core GPU in an Anaconda environment, these models use a custom dataset augmented with synonym substitution to bolster diversity and robustness.
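Synonym substitution itself is a simple augmentation step. The sketch below illustrates the idea with a small hand-written synonym table so it runs stand-alone; the actual dataset, vocabulary, and synonym source used by the researchers are not reproduced here.

```python
# Minimal sketch of synonym-substitution augmentation for text samples.
# The synonym table is a toy stand-in for a real lexical resource.
import random

SYNONYMS = {
    "verify": ["check", "validate"],
    "signal": ["wire", "net"],
    "error":  ["fault", "bug"],
}

def augment(sentence: str, rng: random.Random) -> str:
    """Return a copy of the sentence with known words swapped for a synonym."""
    words = []
    for word in sentence.split():
        options = SYNONYMS.get(word.lower())
        words.append(rng.choice(options) if options else word)
    return " ".join(words)

rng = random.Random(42)
sample = "verify the error signal on reset"
print(augment(sample, rng))  # e.g. "check the fault wire on reset"
```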
The first CNN architecture employs a sequential model consisting of an embedding layer, two convolutional layers, two activation layers using the Rectified Linear Unit (ReLU) function, a MaxPooling layer, a Flatten layer, and a dense layer. This model achieved high test accuracy (98.33%), robust precision, recall, and F1-scores, and low validation loss (10.25%). Although researchers plan to further refine the model using hyperparameter tuning, these results highlight its reliability and effectiveness for document classification in the functional verification process.
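For readers who want to see the structure, a minimal Keras sketch of this first architecture follows. The vocabulary size, sequence length, embedding width, filter counts, and kernel sizes are illustrative assumptions, as the exact hyperparameters are not listed here.

```python
# Sketch of the first CNN document classifier: Embedding -> Conv -> ReLU ->
# Conv -> ReLU -> MaxPooling -> Flatten -> Dense. All sizes below are assumed.
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import (Activation, Conv1D, Dense, Embedding,
                                     Flatten, MaxPooling1D)

VOCAB_SIZE = 10_000   # assumed tokenizer vocabulary size
SEQ_LEN = 200         # assumed padded document length (tokens)
NUM_CLASSES = 4       # assumed number of document categories

model = Sequential([
    Input(shape=(SEQ_LEN,)),
    Embedding(VOCAB_SIZE, 128),          # embedding layer
    Conv1D(64, kernel_size=5),           # first convolutional layer
    Activation("relu"),
    Conv1D(64, kernel_size=5),           # second convolutional layer
    Activation("relu"),
    MaxPooling1D(pool_size=2),
    Flatten(),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```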
The second CNN architecture builds on the first by adding a third convolutional layer to improve performance. It maintains a similar sequential structure, comprising three convolutional layers, three activation layers, and the same embedding, MaxPooling, Flatten, and dense layers. As shown in Figure 2, this model achieved a validation accuracy of 96.67%, with favorable precision, recall, and F1-scores, and an even lower validation loss (5%).
Overcoming challenges in AI-driven functional verification
Integrating AI and ML into functional verification workflows isn’t without challenges. For example, the scarcity of open-source hardware description language (HDL) datasets limits model training and may result in inaccuracies or hallucinations during prediction. Additionally, models that perform well on smaller, simpler semiconductor designs often struggle to maintain efficiency and accuracy when applied to complex, large-scale projects.
Ensuring generalization across hardware architectures is a significant challenge for electronic engineers, as ML models trained on specific datasets often require extensive fine-tuning and retraining. Additionally, integrating AI and ML into verification workflows may necessitate upgrading computing resources to handle large datasets or redesigning workflows to support AI-driven tools.
Despite these challenges, ML-powered EDA formal verification tools have achieved significant improvements, with some vendors reporting up to 10x speedups. Using ML-based clustering and root cause analysis (RCA), these tools can also deliver up to 10x debug-efficiency gains for static verification in typical chip project cycles.
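To give a rough sense of what ML-based clustering for RCA involves, the toy sketch below groups failure messages by textual similarity so that each cluster can be debugged once via a representative. The messages are invented, and commercial tools draw on far richer signals (waveforms, call stacks, design hierarchy) than raw log text.

```python
# Toy sketch of failure clustering for root-cause triage: group similar
# failure messages so each cluster can be debugged once. Messages are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

failure_messages = [
    "assertion failed: axi_wvalid without awvalid at 1200ns",
    "assertion failed: axi_wvalid without awvalid at 3400ns",
    "scoreboard mismatch: expected 0xFF got 0x00 on uart_rx",
    "scoreboard mismatch: expected 0xA5 got 0x00 on uart_rx",
    "timeout waiting for pcie link training to complete",
]

vectors = TfidfVectorizer().fit_transform(failure_messages)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    members = [m for m, l in zip(failure_messages, labels) if l == cluster]
    print(f"cluster {cluster}: {len(members)} failure(s), e.g. '{members[0]}'")
```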
Summary
Functional verification is a time-intensive process involving multiple steps, such as analyzing documentation, building the verification environment, and executing defined tests. AI and ML technologies, including LLMs and CNNs, can significantly accelerate functional verification workflows and optimize front-end, back-end, and production test phases.
References
Applications of AI/ML in Functional Verification, Siemens EDA
Better, Faster, and More Efficient Verification With the Power of AI, Synopsys
Verisium AI-Driven Verification Platform, Cadence
Artificial Intelligence Application in the Field of Functional Verification, MDPI
Generative AI for Semiconductor Design and Verification, Amazon AWS
Related EE World content
How Do Generative AI, Deep Reinforcement Learning, and LLMs Optimize EDA?
How Hardware-Assisted Verification (HAV) Transforms EDA Workflows
The ABCs of Functional Verification Techniques
The Nuts and Bolts of Verification: Recasting SystemVerilog for Portable Stimulus
Empowering Innovation: OpenROAD and the Future of Open-Source EDA