
Design 3D models using AI.
You can now design 3D models using AI. Text-to-3D generation has arrived: give the AI a text prompt describing what you want, run it, and it outputs a rendered 3D model based on that description. The technology behind this is called DreamFusion.
This is made possible by combining a Neural Radiance Field (NeRF) with a pretrained diffusion model. The diffusion model, known as Imagen, is responsible for generating very high-quality 2D images, while the NeRF, a neural network, handles reconstructing a synthetic 3D scene that looks consistent from many different viewing angles. Optimized together, these two components can turn ordinary 2D images into a rendered 3D model. “We optimize a NeRF from scratch using a pre-trained text-to-image diffusion model. No 3D data needed!” — Ben Poole (research scientist, Google Brain).
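To make the NeRF half of that combination a little more concrete, here is a minimal sketch (plain NumPy, not Google's code) of how a NeRF turns a scene into a picture: a small network maps every 3D point to a colour and a density, and each pixel is formed by blending the samples along its camera ray. The nerf_mlp function below is a placeholder for the learned network.

```python
import numpy as np

def nerf_mlp(points):
    # Placeholder for the learned network; in DreamFusion this is the model being
    # optimized. Here it just returns black, fully transparent space.
    rgb = np.zeros(points.shape[:-1] + (3,))
    density = np.zeros(points.shape[:-1])
    return rgb, density

def render_rays(ray_origins, ray_dirs, near=2.0, far=6.0, n_samples=64):
    # Sample points along each camera ray between the near and far planes.
    t = np.linspace(near, far, n_samples)
    points = ray_origins[:, None, :] + t[None, :, None] * ray_dirs[:, None, :]
    rgb, density = nerf_mlp(points)                    # colour and opacity of each sample
    delta = (far - near) / n_samples
    alpha = 1.0 - np.exp(-density * delta)             # how opaque each sample is
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=1)    # light surviving up to each sample
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=1)
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(axis=1)      # composited pixel colours

# Render four rays from the same origin, looking roughly along -z.
origins = np.zeros((4, 3))
dirs = np.array([[x, y, -1.0] for x in (-0.1, 0.1) for y in (-0.1, 0.1)])
print(render_rays(origins, dirs))
```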
In simple terms, this is how the technology works: the user enters a text prompt describing the image they want the AI to generate. The AI then uses two mechanisms, one for generating images and one for building the 3D scene, rendering that scene from different angles and adjusting it until every view matches the description well enough to be rendered in 3D. All of this is done without using any 3D images or models to train the AI, which also makes creating scenes a bit faster.
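Here is a rough sketch of that loop. The renderer and the diffusion model are replaced by tiny stand-in stubs, and every function name below is hypothetical; the point is only to show the order of the steps, not DreamFusion's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def render_nerf(params, camera, size=64):
    # Stub: render the current 3D scene from a camera pose as a 64x64 RGB image.
    return np.tanh(params["colour"]) * np.ones((size, size, 3))

def predict_noise(noisy_image, t, prompt_embedding):
    # Stub for the frozen text-to-image diffusion model's noise prediction.
    return noisy_image * 0.01

def dreamfusion_step(params, prompt_embedding, lr=1e-2):
    camera = rng.uniform(0, 2 * np.pi)                 # 1. pick a random viewing angle
    image = render_nerf(params, camera)                # 2. render the scene (a plain 2D image)
    t = rng.integers(1, 1000)                          # 3. pick a random noise level...
    noise = rng.standard_normal(image.shape)
    noisy = image + noise * (t / 1000)                 #    ...and corrupt the render with it
    guess = predict_noise(noisy, t, prompt_embedding)  # 4. what the diffusion model expects to see
    grad = (guess - noise).mean()                      # 5. gap between expectation and reality
    params["colour"] -= lr * grad                      #    nudge the scene toward the prompt
    return params

params = {"colour": np.zeros(())}          # stand-in for the NeRF's parameters
prompt_embedding = np.zeros(128)           # stand-in for the encoded text prompt
for _ in range(100):
    params = dreamfusion_step(params, prompt_embedding)
```

Notice that no 3D data appears anywhere in the loop: the only supervision is the 2D diffusion model's opinion of each rendered view.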
Video: How DreamFusion works (Google)
The good part is that you can import the AI-generated 3D model into 3D modeling software for further engineering and enhancement.
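For example, one plausible hand-off (a sketch, not DreamFusion's official export path) is to sample the learned density field on a grid, extract a surface mesh with marching cubes, and save it as an .obj file that Blender, Maya or similar software can open. The density_at function below is a hypothetical stand-in for querying the trained model.

```python
import numpy as np
from skimage import measure   # scikit-image's marching cubes
import trimesh

def density_at(points):
    # Stub for the trained model's density field; here, a solid sphere of radius 0.5.
    return (0.5 - np.linalg.norm(points, axis=-1)).clip(min=0)

# Evaluate the density on a regular 3D grid covering the scene.
n = 96
axis = np.linspace(-1, 1, n)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
density = density_at(grid)

# Marching cubes turns the density volume into a triangle mesh.
verts, faces, normals, _ = measure.marching_cubes(density, level=0.1)
mesh = trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals)
mesh.export("dreamfusion_model.obj")   # open this file in Blender, Maya, etc.
```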
Check out some examples here