While many developers are figuring out how generative AI might be used to produce entire 3D objects from scratch, Adobe is already using its Firefly AI model to streamline existing 3D workflows. At the Game Developers Conference on Monday, Adobe debuted two new integrations for its Substance 3D design software suite that let 3D artists quickly produce creative assets for their projects using text descriptions.
The first is a “Text to Texture” feature for Substance 3D Sampler that Adobe says can generate “photorealistic or stylized textures” from prompt descriptions, such as scaled skin or woven fabric. These textures can then be applied directly to 3D models, sparing designers from needing to find appropriate reference materials.
The second feature is a new “Generative Background” tool for Substance 3D Stager. This allows designers to use text prompts to generate background images for objects they’re composing into 3D scenes. The clever thing here is that both of these features actually use 2D imaging technology, just like Adobe’s earlier Firefly-powered tools in Photoshop and Illustrator. Firefly isn’t producing 3D models or files; instead, Substance takes 2D images generated from text descriptions and applies them in ways that appear 3D.
The new Text to Texture and Generative Background features are available in the beta versions of Substance 3D Sampler 4.4 and Stager 3.0, respectively. Adobe’s head of 3D and metaverse, Sébastien Deguy, told The Verge that both features are free during the beta and were trained on Adobe-owned assets, including reference materials produced by the company and licensed Adobe stock.