
Digital Models

The Challenge: Studios don’t have the flexibility they need in post-production to effectively change detailed aspects of an image, such as a model’s hair, makeup, style, or even the model themselves.

The Solution: Digitized models, built on Stable Diffusion and an in-house talent roster with full rights management and support, allowing for flexible image customization directly within the workflow.

Project stage

Underway

Swapping out human models raises a lot of potential issues, and we know that. Knowing our industry, we take care to do things ethically. At the same time, being able to actually swap a model in post-processing matters: it might be needed because of a reshoot, or to localize content without wasting resources on flying crew and models around the world to repeatedly reshoot the same pictures with new models.

While there are many tools that let you do face swaps, we have to live up to a higher standard: the images are usually very high resolution, and we need to match that quality end to end. We also knew from the beginning that users would need a lot of flexibility, so they can change not just the face but also the makeup, the hair, and much more.

The initial back-end for our system is based on Stable Diffusion models running through ComfyUI, which lets us iterate on our workflow very quickly and ship new features and better quality. But for all the flexibility that gives us, we also run into issues with model support, rights management, and much more, which we are addressing by creating our own models and nodes.

[Image] An initial workflow to allow for rapid development and improvement of models
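For readers unfamiliar with how ComfyUI is extended, custom nodes like the ones mentioned above are plain Python classes that declare their inputs, return types, and a worker method, and are registered via a mappings dict. The sketch below is a minimal, hypothetical skeleton (the node name, parameters, and pass-through body are illustrative, not Dreem's actual implementation):

```python
# Minimal sketch of a ComfyUI custom-node skeleton (names hypothetical).
# ComfyUI discovers nodes through NODE_CLASS_MAPPINGS; each node class
# declares its inputs via INPUT_TYPES and names the method that does the work.

class FaceReplaceNode:
    """Illustrative placeholder for an in-house face-replacement node."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "reference_face": ("IMAGE",),
                "blend_strength": ("FLOAT", {"default": 0.8,
                                             "min": 0.0, "max": 1.0}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "replace"
    CATEGORY = "inhouse/face"

    def replace(self, image, reference_face, blend_strength):
        # The real node would run the swap model here; this stub passes
        # the input image straight through.
        return (image,)


# ComfyUI scans for this dict when loading a custom-node package.
NODE_CLASS_MAPPINGS = {"FaceReplaceNode": FaceReplaceNode}
```

Writing nodes against this interface is what makes the rapid-iteration workflow possible: a new swap method is just a new class dropped into the graph.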

For example, after trying out all the available nodes for face replacement, we ended up implementing our own, and the same goes for hair styling and many other parts. We also have our own methods for fixing antialiasing and edges where we replace skin or backgrounds, and for in-painting clothes where hair used to be when changing a style.

And then there’s the non-techy part: we need to be aware of image quality at all times. While some systems just change the face, we quickly realized we need to be not only technically correct, but also aesthetically correct. So on top of solving the technical parts, we also check that image quality, sharpness, grain, and more are on par with the original. This adds a new level of complexity, but also of technical satisfaction, for our team.
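One standard way to automate a sharpness comparison like this is the variance of a Laplacian response: blurrier images produce flatter Laplacians and hence lower variance. The check below is a hypothetical illustration of that idea (the function names and tolerance are ours, not Dreem's actual QC pipeline):

```python
# Illustrative sharpness check (hypothetical, not the production QC):
# variance of a 4-neighbor Laplacian is a common proxy for sharpness,
# so an edited image can be compared against the original before sign-off.
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness proxy: variance of a 4-neighbor Laplacian response."""
    p = np.pad(gray.astype(np.float64), 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * p[1:-1, 1:-1])
    return float(lap.var())

def sharpness_ok(original: np.ndarray, edited: np.ndarray,
                 tolerance: float = 0.25) -> bool:
    """Flag edits whose sharpness drifts more than `tolerance` (relative)."""
    ref = laplacian_variance(original)
    if ref == 0.0:          # featureless original: nothing to compare against
        return True
    return abs(laplacian_variance(edited) - ref) / ref <= tolerance
```

Similar single-number proxies exist for grain (e.g. high-frequency noise energy), which is why matching grain after a swap can be checked automatically rather than by eye alone.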

The technology also opens up a host of new opportunities down the line that keep spawning exciting projects, dealing with everything from AI-based QC (Quality Control) to versioning and even replacing things other than “just” humans.

[Image] Digital versions of models generated by Dreem


As with our Copywriting tool, we are not trying to be the be-all-end-all generative AI solution; we focus on the uses that provide value for our customers. With that in mind, we can be very specific about what we do and how we solve it, allowing much higher quality than any other solution while making sure it works at scale. This requires us to be extremely aware of our infrastructure, and to create the foundational models needed to be truly flexible. It’s a big task, but one we are really enjoying.