Unlocking the Future of 3D Design: Generative AI Models for Solid Modeling
With Karl D.D. Willis of Autodesk Research
The realm of artificial intelligence has seen groundbreaking advancements in text and image generation over the past few years. Tools like ChatGPT can craft haikus on machine learning, while image generators like Midjourney and Stable Diffusion can create impressive visuals from simple prompts.
These achievements have been largely fueled by massive datasets of text and images scraped from the internet. However, the world of 3D design, particularly manufacturable 3D models, hasn't experienced the same level of success due to a lack of useful data.
At the recent CDFAM Computational Design Symposium in New York City, Karl D.D. Willis from Autodesk's AI Lab shared insights into the challenges and breakthroughs in generative models for 3D design.
Willis and his team have been working to develop generative models trained on Boundary Representation (B-rep) solid models, the foundational elements of most computer-aided design (CAD) systems.
Why 3D is Harder Than 2D
Unlike 2D images or text, 3D models require precise surfaces, tolerances, and components that fit together seamlessly—factors that are critical in manufacturing. The complexity of B-reps, which involve various curve and surface types and intricate topologies, makes them challenging for machine learning models to interpret and generate.
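To give a sense of what a learning model has to cope with, here is a minimal, hypothetical Python sketch of the kind of heterogeneous structure a B-rep carries: faces referencing different surface types, edges referencing different curve types, and topology tying them together. The class and field names are illustrative only, not any particular CAD kernel's API.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: real B-rep kernels track far more, including trimming
# loops, orientation flags, and geometric tolerances.

@dataclass
class Edge:
    curve_type: str             # "line", "circle", "bspline", ...
    curve_params: List[float]   # endpoints, radius, control points, etc.

@dataclass
class Face:
    surface_type: str           # "plane", "cylinder", "nurbs", ...
    surface_params: List[float]
    edge_ids: List[int] = field(default_factory=list)  # bounding loop of edges

@dataclass
class Solid:
    faces: List[Face]
    edges: List[Edge]

# A trivial "solid": one planar face bounded by four line edges.
# Unlike a fixed pixel grid, the entity counts, types, and connectivity all
# vary from model to model, which is part of what makes B-reps hard to learn.
edges = [Edge("line", [0, 0, 1, 0]), Edge("line", [1, 0, 1, 1]),
         Edge("line", [1, 1, 0, 1]), Edge("line", [0, 1, 0, 0])]
solid = Solid(faces=[Face("plane", [0, 0, 1, 0], edge_ids=[0, 1, 2, 3])],
              edges=edges)
print(len(solid.faces), "face,", len(solid.edges), "edges")
```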
Two Approaches to Generative Modeling
Modeling Sequence Generation: This approach focuses on learning how a model was created—the sequence of operations or steps taken during the design process. By analyzing parametric histories from datasets like the Fusion 360 Gallery, models can learn to generate editable designs that mimic human actions in CAD software.
Direct Shape Generation: Here, the emphasis is on generating the final shape without considering the steps taken to create it. This method treats the B-rep as a "dumb" STEP file, focusing solely on the geometry.
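As a rough illustration of the difference between the two approaches, the toy Python snippet below contrasts a sequence-style representation (an ordered list of sketch and extrude operations, similar in spirit to the parametric histories in the Fusion 360 Gallery dataset) with a direct-shape representation (points sampled from the final geometry). Operation names and parameters are made up for illustration.

```python
import numpy as np

# Approach 1: modeling sequence generation -- the design is an ordered list
# of operations, so a generative model predicts the next step given the
# history, and the result remains editable. Operation names are illustrative.
modeling_sequence = [
    {"op": "sketch",  "plane": "XY", "profile": "rectangle", "size": (40.0, 20.0)},
    {"op": "extrude", "distance": 10.0, "operation": "new_body"},
    {"op": "sketch",  "plane": "XY", "profile": "circle", "radius": 4.0},
    {"op": "extrude", "distance": 10.0, "operation": "cut"},
]

# Approach 2: direct shape generation -- only the final geometry matters.
# Here the "shape" is just points sampled from one planar face of the part;
# a real pipeline would sample every face and edge of the B-rep.
rng = np.random.default_rng(0)
u, v = rng.uniform(0, 1, (2, 256))
face_points = np.stack([40.0 * u, 20.0 * v, np.full(256, 10.0)], axis=1)  # (256, 3)

print(len(modeling_sequence), "operations;", face_points.shape, "sampled points")
```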
Introducing B-rep Diffusion Models
Building on the successes of image diffusion models, Willis's team developed BrepGen, the first diffusion model for B-reps. By adding noise to points sampled from B-rep surfaces and edges, and training the model to reverse that process, it learns to generate coherent and manufacturable 3D shapes. The results are promising, showcasing the model's ability to create both furniture designs and mechanical parts.
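The core idea can be sketched with the standard denoising-diffusion recipe applied to point samples. The snippet below is a minimal, generic DDPM-style forward-noising step in Python/NumPy, not the actual BrepGen implementation: points sampled from B-rep geometry are progressively noised during training, and a network (stubbed out here) is trained to predict and remove that noise so it can later generate new geometry starting from pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training sample: N points on B-rep faces (x, y, z). In practice each
# face and edge would carry its own structured sample grid plus type labels.
x0 = rng.uniform(-1, 1, (256, 3))

# Standard DDPM noise schedule (values are illustrative).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def add_noise(x0, t):
    """Forward process: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Training objective (conceptually): a denoising network predicts eps from
# (xt, t) and is fit with a mean-squared-error loss. A real model would be a
# neural network; this stub just shows where it sits in the loop.
def denoiser_stub(xt, t):
    return np.zeros_like(xt)  # placeholder for the learned noise-prediction network

t = rng.integers(0, T)
xt, eps = add_noise(x0, t)
loss = np.mean((denoiser_stub(xt, t) - eps) ** 2)
print(f"t={t}, toy loss={loss:.3f}")
```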
Challenges and the Road Ahead
Despite these advancements, several challenges remain:
Generation Quality: Issues like missing faces, self-intersections, and imperfect surfaces need refinement.
Complexity Management: Generating complex parts and assemblies is still a hurdle.
Physics Integration: Incorporating physical properties to ensure designs are viable in the real world is an underexplored area.
User Interfaces: Developing intuitive interfaces for controlling generative models is essential. Text prompts may not suffice for intricate CAD designs, and alternative modalities like sketch-based inputs could be more effective.
The work presented by Willis at CDFAM marks a significant step forward in bringing AI-driven generative models to the world of 3D design and manufacturing.
As these models continue to evolve, they hold the potential to change how we approach design, automating repetitive tasks so engineers and designers can concentrate on where they add the most value.
The CDFAM Computational Design Symposium brings together leading experts in computational design from industry, academia, and software development for two days of knowledge sharing and networking.
Submissions are now open to present at CDFAM Amsterdam, July 9-10, 2025.