Note: This page has proprietary content, which is part of the development work for the project "SPIRE".
Please keep it private. © Eon Sounds 2023

Spire Dev Log 1

Neema's Look Development

The appearance of the main character in a film is always crucial. We experience most of the story through their eyes and expressions, sharing their moments and emotions. This often determines a film's success or failure, since it is what allows the audience to accompany the character through a two-hour storyline.

Authenticity and a distinct sensibility are therefore what let us form a connection with the character. Finding a talented cast and fostering close collaboration between director and actors are essential in both live-action and 3D animation. In 3D animation, however, there is the additional challenge of translating an actor's natural abilities into animated motion, transforming a static model into a vibrant, lifelike performance.

The most significant hurdle in 3D animation has always been achieving a sense of realism that transcends the “uncanny valley,” where the audience forgets they are viewing a 3D representation of a human or creature. This is likely why Pixar often designs stylized characters with large eyes and rounded faces. Such designs make it easier for the characters to evoke a feeling of authenticity, which the audience readily accepts.

The Realism Challenge

The 3D Model Approach

A fully 3D model offers the most flexibility overall, but also poses the greatest challenge in making the character look authentic.
In recent years, there have been several attempts to achieve human realism and bridge the uncanny valley.

From the early endeavor to portray realistic humans in “Final Fantasy: The Spirits Within”…

…to the intro sequence of the latest Terminator film, “Dark Fate”…

…to the more stylized approach of “Alita: Battle Angel.”

And in the past year, we have witnessed remarkable leaps in technology, showcased in films like Avatar 2. Although those characters are not human, the advancements hint at the possibility of fully realized 3D models. With the dawn of AI and its boundless potential for film production, we stand on the verge of closing that gap entirely and delivering truly believable characters.

The Deepfakes

Back in December 2021, when we initiated the Spire project, AI applications were still in their early stages. MidJourney and Stable Diffusion were being released in beta versions, and we were intrigued by the potential of leveraging AI-assisted concept art and development to enhance our creativity.

Furthermore, Deepfakes were a recent phenomenon, and we were witnessing their first uses in production. Notably, The Mandalorian showcased a memorable scene featuring Luke Skywalker in Season 2, which sparked both interest and controversy. Subsequently, we observed improved and more convincing uses of Deepfakes, such as in an episode of The Book of Boba Fett.

First version from The Mandalorian

Updated version from The Book of Boba Fett

We found that Deepfakes serve as a compelling alternative for achieving a highly detailed and precise rendition of a 3D character's face.

 

Our First Artistic Tests

Given this potential, we promptly decided to explore Deepfake technology in our quest for greater character realism. Our initial tests leaned toward artistic interpretations, but even at that stage they served as a promising proof of concept.

Over the course of two years, we tried and tested numerous versions and iterations.

Moving and Acting

The natural next step was to explore creating a moving version of the character, to test the limits of this technology and its potential for authenticity.

To accomplish this, we conducted initial tests using various scenes from different movies. In these tests, we applied the same face to all the scenes, experimenting with different appearances, actors, and emotional expressions.

This experiment provided sufficient evidence to persuade us to explore the practical application of this technology in a real production environment.

 

Motivated by this initial success, we proceeded to conduct other stress tests.

This time, we aimed to push the technology further by simulating a more demanding scenario: a live 3D animator test.

Followed by a more intentional animated sequence.

Takeaway

At this stage, we believe that both Deepfakes and fully modeled 3D versions offer valuable options, each with its own advantages and challenges.

As we enter the full research and development phase, we will continue to explore and extensively test both approaches. This ongoing exploration will enable us to make an informed decision about the most suitable method for our production pipeline.