GET3D is NVIDIA’s AI model that generates detailed, textured 3D meshes directly from 2D images. Ideal for gaming, animation, and virtual world creation—no 3D scanning required.
GET3D is an advanced generative model developed by NVIDIA that creates high-quality, textured 3D meshes directly from 2D image collections. Unlike traditional 3D modeling pipelines that require scans, sensors, or CAD tools, GET3D leverages deep learning to generate complex 3D objects—ready to use in animation, games, and virtual production.
Trained using adversarial learning and differentiable rendering, GET3D can produce diverse objects with realistic textures and geometry. It outputs meshes with high fidelity, arbitrary topology, and intricate material details, bridging the gap between AI and production-ready 3D assets.
GET3D samples two distinct latent codes: one for shape (geometry) and another for texture. These condition a signed distance field (SDF) and a texture field that together define the 3D mesh and its surface appearance.
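The two-code design can be illustrated with a toy sketch: two small MLP "fields" that each take a 3D query point plus a latent code and return an SDF value or an RGB color. This is a simplification for illustration only; the actual GET3D generator uses tri-plane features and StyleGAN-style mapping networks rather than plain MLPs.

```python
import torch
import torch.nn as nn

class ConditionalField(nn.Module):
    """Toy MLP field: maps a 3D point plus a latent code to an output.

    Stand-in for GET3D's SDF and texture fields; the real model uses
    tri-plane features and mapping networks, not a plain MLP.
    """
    def __init__(self, latent_dim=64, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, xyz, z):
        # Broadcast the single latent code to every query point.
        z = z.expand(xyz.shape[0], -1)
        return self.net(torch.cat([xyz, z], dim=-1))

# Two independent latent codes: one for geometry, one for texture.
z_geo = torch.randn(1, 64)
z_tex = torch.randn(1, 64)

sdf_field = ConditionalField(out_dim=1)   # signed distance per point
tex_field = ConditionalField(out_dim=3)   # RGB per surface point

points = torch.rand(1024, 3) * 2 - 1      # query points in [-1, 1]^3
sdf_values = sdf_field(points, z_geo)     # one distance per point
colors = tex_field(points, z_tex)         # one color per point
```

Because the two fields consume separate codes, geometry and appearance can later be manipulated independently.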
Using DMTet (Deep Marching Tetrahedra), GET3D converts the SDF into a triangular mesh. Then, it queries the texture field to paint the mesh with detailed color and material features.
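The core geometric step in marching-tetrahedra-style extraction is placing a mesh vertex on each tetrahedron edge where the SDF changes sign, by linear interpolation of the endpoint distances. A minimal sketch of that single step (the full DMTet algorithm also enumerates tet configurations and connects these vertices into triangles):

```python
import numpy as np

def edge_crossing(p0, p1, s0, s1):
    """Interpolate the SDF zero crossing along one tetrahedron edge.

    p0, p1: endpoint coordinates; s0, s1: SDF values of opposite sign.
    Returns the surface vertex that marching would emit on this edge.
    """
    t = s0 / (s0 - s1)          # fraction of the way from p0 to p1
    return p0 + t * (p1 - p0)

# Two tet corners straddling a surface: one inside, one outside.
p0 = np.array([0.0, 0.0, 0.0])   # inside  (sdf = -1)
p1 = np.array([2.0, 0.0, 0.0])   # outside (sdf = +1)

v = edge_crossing(p0, p1, -1.0, 1.0)
# With symmetric distances the vertex lands halfway along the edge.
```

In DMTet the SDF values (and small vertex offsets) are themselves network outputs, so this interpolation stays differentiable end to end.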
GET3D is trained using 2D images and silhouettes with adversarial losses. Differentiable rendering allows the model to backpropagate errors from image space into 3D space, enabling learning without explicit 3D supervision.
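The key idea, gradients flowing from image space back into 3D parameters, can be sketched with toy stand-ins: a flat parameter vector for the 3D representation, a linear map as a "differentiable renderer", and a linear discriminator. The non-saturating generator loss shown here matches the StyleGAN family that GET3D builds on; the real pipeline rasterizes the extracted mesh and uses separate RGB and silhouette discriminators.

```python
import torch
import torch.nn.functional as F

# Toy stand-ins (assumptions, not GET3D's architecture):
shape_params = torch.randn(256, requires_grad=True)  # "3D representation"
render = torch.nn.Linear(256, 64 * 64)               # differentiable "renderer"
disc = torch.nn.Linear(64 * 64, 1)                   # image discriminator

fake_image = render(shape_params)   # image-space output of the renderer
d_fake = disc(fake_image)           # discriminator score

# Non-saturating GAN loss for the generator.
g_loss = F.softplus(-d_fake).mean()
g_loss.backward()

# Because every step is differentiable, the image-space adversarial loss
# produces gradients on the underlying 3D parameters.
has_3d_grad = shape_params.grad is not None
```

This is exactly what removes the need for 3D ground truth: supervision lives entirely in 2D image space, yet optimization updates the 3D representation.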
GET3D generates textured 3D objects with fine details such as headlights, seams, fur, and reflections—making it suitable for animation and simulation tasks.
Unlike many earlier models, GET3D can generate complex, non-rigid shapes across a wide range of categories including animals, vehicles, furniture, shoes, and human avatars.
GET3D separates geometry and texture into distinct latent codes. Users can independently manipulate either aspect to achieve greater control in asset generation.
By interpolating between latent vectors, GET3D enables smooth transitions and morphing between shapes and textures. This feature is useful for animation, asset variation, and design iteration.
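Both controls above, holding one code fixed while varying the other, and blending between codes, reduce to simple vector arithmetic in latent space. A minimal sketch with placeholder codes (real codes would be sampled for, or inverted from, actual GET3D objects):

```python
import numpy as np

def lerp(z_a, z_b, t):
    """Linear interpolation between two latent codes at fraction t."""
    return (1.0 - t) * z_a + t * z_b

rng = np.random.default_rng(0)
z_shape_a = rng.normal(size=64)   # geometry code of object A
z_shape_b = rng.normal(size=64)   # geometry code of object B
z_tex_a = rng.normal(size=64)     # texture code of object A

# A short morph: five shape codes between A and B, each paired with
# A's texture code, so only the geometry varies across the sequence.
steps = [(lerp(z_shape_a, z_shape_b, t), z_tex_a)
         for t in np.linspace(0.0, 1.0, 5)]
shape_steps = [s for s, _ in steps]
```

Feeding each pair to the generator yields a smooth geometric morph under a constant texture; swapping in interpolated texture codes instead would morph appearance over fixed geometry.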
Incorporating CLIP-based directional loss (as seen in StyleGAN-NADA), GET3D supports text-guided shape generation. Users can fine-tune outputs using natural language prompts for creative control.
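The directional loss from StyleGAN-NADA aligns the *direction of change* between source and generated images in CLIP embedding space with the direction between source and target text prompts. A hedged sketch using random placeholder embeddings in place of a real CLIP encoder:

```python
import torch
import torch.nn.functional as F

def directional_loss(img_src, img_gen, txt_src, txt_tgt):
    """StyleGAN-NADA-style CLIP directional loss.

    Pushes the image-embedding shift (source -> generated) to align with
    the text-embedding shift (source prompt -> target prompt). All inputs
    are (batch, dim) embeddings; here they are random placeholders, not
    outputs of an actual CLIP model.
    """
    d_img = F.normalize(img_gen - img_src, dim=-1)
    d_txt = F.normalize(txt_tgt - txt_src, dim=-1)
    return (1.0 - F.cosine_similarity(d_img, d_txt, dim=-1)).mean()

# Placeholder 512-d "CLIP" embeddings.
img_src, img_gen = torch.randn(4, 512), torch.randn(4, 512)
txt_src, txt_tgt = torch.randn(1, 512), torch.randn(1, 512)

loss = directional_loss(img_src, img_gen, txt_src, txt_tgt)

# Sanity check: when the image shift exactly equals the text shift,
# the directions coincide and the loss vanishes.
perfect = directional_loss(img_src, img_src + (txt_tgt - txt_src),
                           txt_src, txt_tgt)
```

Minimizing this loss during fine-tuning steers generated objects toward the target prompt without requiring any paired 3D data.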
When combined with DIB-R++ (a hybrid differentiable renderer), GET3D can also decompose materials and lighting effects in an unsupervised fashion, enhancing realism in relit renders.
Game developers can rapidly generate character models, props, and environments with consistent geometry and texture, significantly reducing manual modeling time.
GET3D enables fast prototyping of stylized or photorealistic assets with flexible design variation and direct export into rendering pipelines.
Ideal for VR creators, GET3D provides a scalable way to populate virtual spaces with high-quality 3D content—without the need for traditional scanning or modeling.
Retailers and industrial designers can use GET3D to visualize products in 3D from catalog images, enhancing interactive shopping and simulation workflows.
GET3D is the result of collaborative research between NVIDIA, the University of Toronto, and the Vector Institute, presented at NeurIPS 2022. It builds on prior work such as DMTet, EG3D, and DIB-R++, further advancing 3D generative modeling.