That turquoise stuff and the skybox definitely aren’t real polygons. The trees on the left are kind of transparent. This is just Midjourney or similar.
However, I reckon existing transformer architectures would manage to output 3D models just fine given sufficient training data and compute power. Generating grammatically correct files is what GitHub Copilot already does with other languages (I tried; it does not seem to support .obj, though). It would definitely be interesting for procedural game generation; in fact, I would be surprised to learn that nobody is working on something like this.
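To be concrete about what "grammatically correct" means here: Wavefront .obj is just line-oriented plain text (vertex lines and face lines), so emitting it token by token is not fundamentally different from emitting source code. A minimal hand-written sketch of the format, in plain Python and not tied to any model:

```python
# Minimal sketch of the Wavefront .obj format: "v x y z" lines define vertices,
# "f i j k" lines define triangular faces by 1-based vertex index.
# This just writes a tetrahedron by hand to show how simple the grammar is.

vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]

faces = [
    (1, 2, 3),
    (1, 2, 4),
    (1, 3, 4),
    (2, 3, 4),
]

with open("tetrahedron.obj", "w") as f:
    f.write("# hand-written example mesh\n")
    for x, y, z in vertices:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:
        f.write(f"f {a} {b} {c}\n")
```

A language model that can produce syntactically valid code could plausibly learn to produce valid vertex/face lines like these; whether the resulting geometry is any good is a separate question.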
There are models out there that can create 3D models, but you would want to manually re-topo them as they’re pretty jank. Another year or two and it’ll probably be much better.
Do these AI image generators actually render 3D models? It would be cool if they could output a model or CAD file.