Cadence expands AI and robotic partnerships with Nvidia, Google Cloud

Cadence Design Systems announced two AI-related collaborations at its CadenceLIVE event this week, expanding its work with Nvidia and introducing new integrations with Google Cloud. The Nvidia partnership focuses on combining AI with physics-based simulation and accelerated computing for robotic systems and system-level design.

The companies said the approach targets modelling and deployment in semiconductors and large-scale AI infrastructure, including robotic systems that Nvidia describes as physical AI.

Cadence is integrating its multi-physics simulation and system design tools with Nvidia’s CUDA-X libraries, AI models, and Omniverse-based simulation environment. The tools model thermal and mechanical interactions so engineers can assess how systems behave under real-world operating conditions. They also extend beyond chip design to cover infrastructure components like networking and power systems. The combined platform lets engineers simulate system behaviour before physical deployment. The companies said system performance depends on how compute, networking and power systems operate together.

The collaboration also includes robotics development. Cadence’s physics engines, which model how real-world materials interact, are being linked with Nvidia’s AI models used to train AI-driven robotic systems in simulated environments.

“We’re working with you across the board on robotic systems,” said Nvidia CEO Jensen Huang during the event.

Training robots in simulation reduces the need for real-world data collection. The companies said these datasets must be generated with physics-based models, not gathered from physical systems. Simulation-generated datasets are used to train models, with outcomes dependent on the accuracy of the underlying physics models.

“The more accurate (generated training data) is, the better the model will be,” said Cadence CEO Anirudh Devgan.

Nvidia said industrial robotics companies are using its Isaac simulation frameworks and Omniverse-based digital twin tools to test robotic systems before deployment. Companies including ABB Robotics, FANUC, YASKAWA, and KUKA are integrating these simulation tools into virtual commissioning workflows to test production systems in software prior to physical rollout.

Nvidia said these systems are used to model complex robot operations and entire production lines using physically accurate digital environments.

Chip design automation on cloud

Separately, Cadence introduced a new AI agent designed to automate later-stage chip design tasks, focusing on physical layout: translating circuit designs into silicon implementations. The release builds on an agent introduced earlier this year for front-end chip design, where circuits are defined in code-like descriptions; the new agent picks up where that system leaves off, turning those circuit designs into physical layouts on silicon.

The system will be available through Google Cloud. Cadence said the integration combines its electronic design automation tools with Google’s Gemini models for automated design and verification workflows. The cloud deployment allows teams to run those workloads without relying on on-premise compute infrastructure.

Cadence’s ChipStack AI Super Agent platform combines model-based reasoning with native design tools, interpreting design requirements and automatically executing tasks across multiple stages of the design process.

Cadence reported productivity gains of up to 10 times across design and verification tasks in early deployments, but did not disclose specific customer implementations.

“We help build AI systems, and then those AI systems can help improve the design process,” Devgan said.

The companies said simulation tools are used to validate systems in virtual environments before physical deployment. Digital twin models allow engineers to test design trade-offs, evaluate performance scenarios, and optimise configurations in software.

They added that the cost and complexity of large-scale data centre infrastructure limit the use of trial-and-error deployment methods.

Quantum models announcement

In a separate announcement, Nvidia introduced a family of open-source quantum AI models called NVIDIA Ising. The models are named after the Ising model, a mathematical framework used to represent interactions in physical systems.
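For readers unfamiliar with the reference: in its simplest form the Ising model assigns each site a spin of +1 or -1, and the system’s energy depends on whether neighbouring spins align. A minimal illustrative sketch (not Nvidia’s implementation; the 1-D chain and parameter names here are assumptions for clarity):

```python
def ising_energy(spins, J=1.0, h=0.0):
    """Energy of a 1-D Ising chain of +1/-1 spins.

    Neighbouring spins lower the energy when aligned
    (coupling J); an external field h biases each spin.
    """
    interaction = -J * sum(a * b for a, b in zip(spins, spins[1:]))
    field = -h * sum(spins)
    return interaction + field

# Aligned spins minimise the energy; alternating spins maximise it.
print(ising_energy([1, 1, 1, 1]))    # -3.0
print(ising_energy([1, -1, 1, -1]))  # 3.0
```

The same energy-minimisation structure is what makes the framework useful for representing interactions in physical systems, including the coupled-qubit settings the models target.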

The models are designed to support quantum processor calibration and quantum error correction. Nvidia said the models deliver up to 2.5 times faster performance and three times higher accuracy in decoding processes used for error correction.

“AI is essential to making quantum computing practical,” Huang said. “With Ising, AI becomes the control plane – the operating system of quantum machines – transforming fragile qubits into scalable and reliable quantum-GPU systems.”

(Photo by Homa Appliances)

See also: Hyundai expands into robotics and physical AI systems


The post Cadence expands AI and robotic partnerships with Nvidia, Google Cloud appeared first on AI News.
