Analog IC Tips

How do generative AI, deep reinforcement learning, and large language models optimize EDA?

December 4, 2024 By Aharon Etengoff

Artificial intelligence (AI) and machine learning (ML) play an increasingly crucial role in optimizing electronic design automation (EDA) across the semiconductor industry. This article explores the rising complexity and costs of designing chips at advanced nodes. It highlights how generative AI (GenAI) and deep reinforcement learning (DRL) help semiconductor companies accelerate time to market (TTM), improve yields, and lower costs over time. It also reviews how customizing large language models (LLMs) with retrieval-augmented generation (RAG) and fine-tuning significantly expands GenAI’s core capabilities for EDA processes and tasks.

EDA challenges: increasing chip complexity and costs

For decades, EDA has supported Moore’s Law by accelerating the development of advanced central processing units (CPUs), graphics processing units (GPUs), and various types of memory. Despite the impressive capabilities of EDA tools, a recent NVIDIA study found that up to 60% of a chip designer’s time is spent debugging or performing checklist tasks, such as tool usage, design specification, testbench creation, and root cause analysis. Additionally, technical documentation, processes, and design methodologies are often outdated or not fully shared across teams, further extending the design cycle.

The effects of these inefficiencies are worsened by the increasing complexity and costs of chip design at advanced nodes (5nm to 3nm). According to McKinsey & Company, designing a 5nm chip costs an average of $540 million and requires 864 engineer days — two to three times more than previous nodes. The rise of vertically stacked 3D multi-die systems (chiplets), devices with billions of transistors, and angstrom-scale structures further drives up design costs. Consequently, developing advanced-node chips for AI accelerators, high-performance computing (HPC), and autonomous vehicles could soon exceed $1 billion per design.

Accelerating time-to-market with GenAI and DRL

EDA companies integrate DRL and GenAI throughout their solution stacks to address these challenges (Figure 1). DRL trains agents to make sequential decisions autonomously through trial and error in complex EDA environments. GenAI, in contrast, learns from extensive EDA datasets to create new content such as text, images, or designs.

Figure 1. Detailed illustration of Synopsys’ AI-driven EDA suite, spanning system architecture, design capture, and verification to signoff, test, and silicon manufacturing. (Image: Synopsys)

Many AI-based EDA tools leverage DRL to help chip design teams evaluate floorplans, review existing design libraries, and explore millions of potential design alternatives to optimize power, performance, and area (PPA). These DRL-based tools significantly shorten back-end processes from months to weeks, enabling small teams of engineers to complete complex design tasks efficiently. DRL improves yields and boosts overall production efficiency by identifying optimal configurations and reducing design errors.
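The explore-and-reward loop these DRL tools automate can be illustrated with a drastically simplified sketch. The grid, netlist, and search strategy below are invented for illustration and stand in for the far richer state spaces and learned policies of commercial tools: an agent proposes macro placements on a tiny floorplan and uses a wirelength-based reward to keep the best configuration it has seen.

```python
import random

# Toy stand-in for DRL-based design-space exploration (illustrative only,
# not any vendor's algorithm): an agent repeatedly proposes macro placements
# on a small floorplan grid and uses a wirelength reward to keep improving.

SLOTS = [(x, y) for x in range(3) for y in range(3)]  # 3x3 floorplan grid
NETS = [(0, 1), (1, 2), (0, 2)]  # all three macros are interconnected

def wirelength(placement):
    """Total Manhattan wirelength over all nets (lower is better)."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def explore(episodes=500, seed=0):
    """Reward-driven search: sample placements, keep the best one seen."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(episodes):
        placement = rng.sample(SLOTS, 3)  # action: pick 3 distinct slots
        cost = wirelength(placement)      # reward signal: negative wirelength
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

best, cost = explore()
print(best, cost)
```

Real DRL-based tools replace the random proposals with a learned policy network and evaluate full PPA metrics rather than a single wirelength proxy, but the feedback loop is the same.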

Alongside DRL, GenAI EDA tools accelerate TTM by optimizing system design processes and improving engineering and manufacturing methodologies. With advanced natural language capabilities, these GenAI-based tools integrate inference-based conversational intelligence into workflows. Engineers can automate the design of chips, electronic subsystems, and other semiconductor components using generative methodologies and specific prompts. Additionally, generating register transfer level (RTL) code from natural language specifications helps cross-functional teams efficiently navigate increasing chip design complexity.
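As a hedged illustration of natural-language-to-RTL generation, the sketch below pairs the kind of one-line specification an engineer might type with the kind of Verilog a GenAI assistant could return. No real LLM is called; `generate_rtl` is a hypothetical stub, and the module name and port list are invented for the example.

```python
# Hypothetical spec-to-RTL example; generate_rtl stands in for an LLM call.

SPEC = "8-bit synchronous up-counter with active-high reset and enable"

def generate_rtl(spec: str) -> str:
    """Stub standing in for an LLM; returns RTL matching the spec above."""
    return """\
module counter8 (
    input  wire       clk,
    input  wire       rst,    // active-high synchronous reset
    input  wire       en,
    output reg  [7:0] count
);
    always @(posedge clk) begin
        if (rst)
            count <= 8'd0;
        else if (en)
            count <= count + 8'd1;
    end
endmodule
"""

rtl = generate_rtl(SPEC)
print(rtl)
```

In practice, the generated RTL still passes through the normal lint, simulation, and verification flow before use; the productivity gain comes from skipping the boilerplate-writing step, not the checking steps.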

Integrating GenAI throughout the EDA stack streamlines the creation and application of actionable insights from large datasets. It also automates the creation of shared product datasheets, technical manuals, and customized documentation. The continual expansion of internal datasets and optimization of inference capabilities allow GenAI tools to provide crucial contextual recommendations during all EDA chip design process stages.

Optimizing AI-based EDA with LLMs

LLMs significantly expand GenAI’s core capabilities for EDA by improving accuracy in natural language processing (NLP) and generation. Recent commercial and open-source LLM advancements streamline natural and programming language tasks across front-end, back-end, and production test phases. For example, LLMs automate EDA tasks such as code generation, responding to engineering queries, and assisting with documentation, including report generation and bug triage.

Figure 2. The overview of the Llama Stack illustrates the flow between developers, APIs, models, and distribution to facilitate model customization and deployment. (Image: Meta)

Another common use case for LLMs in chip design workflows is writing automation scripts to integrate EDA tools, reference methodologies, and proprietary logic. Foundation models like Code Llama (Figure 2) efficiently generate and explain code in natural language. These models excel at Python code generation and can be fine-tuned for other scripting languages like Perl and Tcl, which are often used by EDA tools to interact with designs in a graphical user interface (GUI) environment. LLMs empower AI-driven engineering assistants to generate these scripts and facilitate natural language interactions with EDA tools, bridging the gap between engineers and design interfaces.
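A minimal sketch of how such an assistant might be prompted is shown below. The prompt template and the described task are invented for illustration and do not reflect any specific EDA tool's command set; the point is that the engineer states the automation task in natural language and the assistant returns a Tcl script.

```python
# Illustrative prompt construction for an LLM-based engineering assistant
# that writes Tcl automation scripts (template and task are hypothetical).

PROMPT_TEMPLATE = (
    "You are an EDA automation assistant. Write a Tcl script that {task}. "
    "Target tool: {tool}. Output only the script."
)

def build_prompt(task: str, tool: str) -> str:
    """Fill the template with a task description and a target tool."""
    return PROMPT_TEMPLATE.format(task=task, tool=tool)

prompt = build_prompt(
    task="reads a gate-level netlist, applies timing constraints from "
         "constraints.sdc, and reports the ten worst setup paths",
    tool="a generic static timing analyzer",
)
print(prompt)
```

The returned script would then be reviewed and run inside the tool's Tcl shell, keeping the engineer in the loop for anything that modifies the design.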

Customizing EDA LLMs with RAG and fine-tuning

LLM accuracy depends on the quality and breadth of the data used for training. Due to the limited availability of semiconductor-specific data, LLMs can’t be effectively deployed out of the box in a production environment. Two customization approaches (Figure 3) are common for EDA and semiconductor applications: RAG and fine-tuning.

Figure 3. Overview of model customization levels in AI, from prompt engineering to pretraining, highlighting increasing complexity and methods for optimizing model performance. (Image: Amazon AWS)

RAG relies on external data sources, such as document repositories or databases, to enrich prompts by converting documents and queries into numerical embeddings. This process enables relevancy searches within a knowledge library before appending relevant context to the foundation model (FM). In contrast, fine-tuning involves supplemental training of a pre-trained model on domain-specific data, modifying the model weights to improve its performance for specific tasks. Techniques like parameter-efficient fine-tuning and low-rank adaptation (LoRA) limit the extent of these modifications while adapting the model for specific EDA tasks.
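The embed-retrieve-append flow of RAG can be sketched end to end in a few lines. The toy bag-of-words embedding below stands in for a real embedding model, and the three sample documents are invented for illustration: documents and the query are converted to vectors, a relevancy search picks the best match, and that context is prepended to the prompt sent to the foundation model.

```python
import math
from collections import Counter

# Minimal RAG sketch: toy bag-of-words embeddings stand in for a real
# embedding model; the document snippets below are hypothetical.

DOCS = [
    "The DRC deck for the 3nm node requires minimum metal spacing of 24nm.",
    "Testbench creation guidelines: instantiate the DUT and drive reset first.",
    "LoRA fine-tuning adapts model weights with low-rank update matrices.",
]

def embed(text):
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    """Relevancy search: return the document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

query = "What is the minimum metal spacing in the 3nm DRC deck?"
context = retrieve(query, DOCS)
prompt = f"Context: {context}\n\nQuestion: {query}"
print(prompt)
```

Production RAG systems replace the word counts with dense embeddings from a trained model and the linear scan with a vector database, but the pipeline shape is the same.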

A fully managed service such as Amazon Bedrock helps EDA engineers simplify the development of generative AI applications by providing access to a range of high-performing FMs from AI21 Labs, Anthropic, Cohere, Meta, Mistral, and Stability AI. Bedrock allows EDA engineers to experiment with multiple FMs using a single API for inference and switch models with minimal code changes. It also enables easy model customization through a visual interface, supports RAG with integrated knowledge bases for seamless querying, and provides a powerful toolset for building applications like EDA engineering assistants.

Conclusion

Many EDA companies integrate GenAI and DRL capabilities throughout their solution stacks to help semiconductor companies efficiently design chips at advanced nodes. LLMs further expand GenAI’s reach by improving NLP accuracy and automating tasks such as code generation, engineering queries, report generation, and bug triage. Together, these AI-driven technologies accelerate TTM, improve yields, and lower costs over time.

Related EE World content

Collaboration Aims to Bring AI to PCB Design
How Does the Open Domain-Specific Architecture Relate to Chiplets and Generative AI?
What Are the Different Types of Circuit Simulation?
What Are the Challenges When Testing Chiplets?
How do Heterogeneous Integration and Chiplets Support Generative AI?

References

Generative AI for Semiconductor Design and Verification, Amazon AWS
Meet Synopsys.ai Copilot, Industry’s First GenAI Capability for Chip Design, Synopsys
Cadence Generative AI Solution: A Comprehensive Suite for Chip-to-System Design, Cadence
The Role of AI-infused EDA Solutions for Semiconductor-Enabled Products and Systems, Siemens
AI Is Reshaping Chip Design, But Where Will It End?, Forbes


Copyright © 2025 · WTWH Media LLC and its licensors. All rights reserved.