and other impressive results by Alexey Choy with Phygital+.
project timelines reduced
instead of multiple platforms
high-quality illustrations per concert
cohesive visual style maintained
Alexey Choy is a media artist and stage designer working on large-scale projects for theaters, operas, and philharmonic performances. His challenge was to create dozens of cohesive illustrations per show, around 60 visuals for each concert, that fit seamlessly into the narrative and stage design.
Despite using AI tools like Midjourney, Google Colab, and ComfyUI, Alexey faced constant slowdowns. Switching between disconnected platforms, ensuring consistency, and handling high volumes of content made the process complex and time-consuming.
Too many platforms for one workflow.
60+ visuals per performance.
Maintaining a unified style across multiple tales and scenes.
Phygital+ provided a single environment where Alexey could integrate multiple AI models, simplify production, and focus on creativity instead of technical hurdles.
— Alexey Choy
With its node-based interface and cloud processing, Phygital+ enables Alexey to iterate quickly and collaborate easily, significantly accelerating the creative process. This also allows for the seamless integration of AI-generated elements with other design software.
In his work on children’s fairy tales for the philharmonic, Alexey uses Phygital+ to create dynamic, generative graphics that are projected onto the stage. These graphics bring stories like The Cat That Walked by Himself, Rikki-Tikki-Tavi, The Jungle Book, and Mary Poppins to life, ensuring that the visuals are not just static images but integral parts of the performance’s narrative structure. The platform allows Alexey to generate entire landscapes, refine them, and maintain stylistic consistency across various elements, even when transitioning between different stories.
Alexey begins by generating a landscape or a character using Stable Diffusion. Sometimes, he creates the entire scene at once, while other times, he generates smaller parts and then refines them.
He then moves to Photoshop, where he adds the different parts of the location as layers. These parts are created with Inpainting XL in Phygital+, which helps fill in gaps and add details. For example, you can see here how Rikki-Tikki-Tavi sits in the same place but on a different landscape.
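The layering idea behind this step is tool-agnostic: the same foreground layer (a character) is composited over different backgrounds through an alpha mask. The toy example below illustrates the principle with grayscale values; all names and numbers are invented for illustration and are not Phygital+ or Photoshop APIs:

```python
# Toy alpha compositing: the same foreground layer (e.g. a character)
# placed over two different backgrounds, as in layered Photoshop work.
# Each pixel is a single grayscale value; alpha=1.0 keeps the foreground.

def composite(fg, alpha, bg):
    # Standard "over" blend: result = alpha*fg + (1-alpha)*bg per pixel.
    return [a * f + (1 - a) * b for f, a, b in zip(fg, alpha, bg)]

character = [0.9, 0.9, 0.0, 0.0]   # foreground pixels
mask      = [1.0, 1.0, 0.0, 0.0]   # opaque where the character is
jungle    = [0.2, 0.2, 0.2, 0.2]   # first background "landscape"
desert    = [0.7, 0.7, 0.7, 0.7]   # second background "landscape"

print(composite(character, mask, jungle))  # [0.9, 0.9, 0.2, 0.2]
print(composite(character, mask, desert))  # [0.9, 0.9, 0.7, 0.7]
```

The character pixels survive unchanged under both backgrounds, which is exactly why the character can "sit in the same place" while the landscape behind it swaps out.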
For characters and their elements, he either collaborates with an illustrator or generates them using AI tools. Most of the time, both characters and landscapes are created in Stable Diffusion and Midjourney within Phygital+.
The advantage of the Phygital+ interface is that all available neural networks can be used side by side, chained together into complex, sequential pipelines with special blocks called nodes. There is no need to register separately for every AI service or to switch tediously between applications and windows: all work happens in one place.
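Conceptually, such a node chain is just a sequence of processing steps applied to a shared working state. The sketch below models that idea in plain Python; the node names are hypothetical stand-ins for illustration, not Phygital+ API calls:

```python
# Minimal sketch of a node-based pipeline: each node is a function,
# and a chain applies the nodes in order to a shared "canvas" state.
# Node names are illustrative placeholders, not the Phygital+ API.

def generate_landscape(state):
    state["layers"].append("landscape")
    return state

def inpaint_details(state):
    state["layers"].append("inpainted details")
    return state

def place_character(state):
    state["layers"].append("character")
    return state

def run_chain(nodes, state):
    # Feed the output of each node into the next one.
    for node in nodes:
        state = node(state)
    return state

chain = [generate_landscape, inpaint_details, place_character]
result = run_chain(chain, {"layers": []})
print(result["layers"])  # ['landscape', 'inpainted details', 'character']
```

Reordering or swapping nodes in the list changes the pipeline without touching any node's internals, which is what makes a node graph convenient for iterating on visuals.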
To ensure consistency, he generates characters via Midjourney in Phygital+, which makes the process smoother.
For a consistent artistic style across different tales, Alexey relies heavily on Inpainting XL. With Phygital+, he can move seamlessly from one tale to another, ensuring that each visual element aligns with the story’s dramatic arc while fitting naturally into the physical space of the performance.
Phygital+ became an essential part of Alexey’s workflow, particularly for prototyping, sketching, and texturing. To keep the style consistent, Alexey trains custom AI models with DreamBooth in Phygital+ and generates multiple textures, then imports them into TouchDesigner, where they become audio-reactive: they change in response to sound, creating unique visual effects.
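The audio-reactive behavior boils down to mapping an audio level to a texture parameter each frame. The sketch below shows one such mapping in generic Python; it is an assumption-laden illustration, not TouchDesigner code (in TouchDesigner the audio level would come from a CHOP, and the function name, base, and gain values here are invented):

```python
# Toy audio-reactive mapping: a per-frame audio loudness envelope (0..1)
# drives a texture parameter such as brightness. Illustrative only;
# in TouchDesigner this signal would be wired through CHOPs.

def brightness_from_audio(levels, base=0.3, gain=0.7):
    # Brightness rises with loudness, clamped to 1.0,
    # and never drops below the quiet-scene base level.
    return [min(1.0, base + gain * lvl) for lvl in levels]

envelope = [0.0, 0.5, 1.0, 0.2]        # per-frame audio loudness
print(brightness_from_audio(envelope))  # brighter frames for louder audio
```

Any texture parameter (scale, displacement, color shift) can be driven the same way, which is how a single set of generated textures yields visuals that move with the music.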
Phygital+ is your AI design pipeline workspace. Build creative workflows with 30+ neural networks and go from idea to final design, faster than ever.