Posted on 27 Apr 2022 at 16:20:00 by Phil
A quick disclaimer: I don’t think I’ve seen any computer-animated films made since the late 2000s, so it’s very possible that the state of the art has greatly improved in the last dozen years or so.
Computer-generated imagery (computer animation) has come a long way in the last few decades, but it still has far to go. Here we’re talking about fully animated 3D models (rendered in 2D), such as Pixar produces, and not just computer-assisted inking and stacking of hand-drawn cels (think of a show such as The Simpsons or Futurama). The results can be quite beautiful, but they’re still lacking something.
This is not so much a quality issue in the finished product (unless the filmmaker chooses to take inappropriate shortcuts) as something that greatly slows production and raises the cost. The temptation in animation is to take shortcuts, such as keeping as many characters as possible in the dark, or at least in dim lighting, so that they don’t have to be as detailed or even move. Watch a weekly animated show (such as The Simpsons) and see how many background characters stand there frozen, even when they’re supposed to be engaged in conversation with another background character.
In the first Shrek movie, one shortcut they took was to randomly generate whole forests based on a few basic tree models. Minor variations were “grown” from tree to tree, so that large numbers of 3D tree models could be quickly created. This appears to have been quite successful, and something similar was done to generate whole crowds of people, but the filmmakers said they weren’t terribly pleased with those results. Nevertheless, it was the only way to fill in large numbers of acting extras without busting the time and money budget.
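To make the idea concrete, here is a minimal sketch (in Python, and not anything the Shrek filmmakers actually used) of that kind of procedural variation: a hypothetical base tree described by a few parameters, with each “grown” copy randomly perturbed from it, and a fixed seed so the same forest comes back on every render.

    import random

    # Hypothetical base tree parameters; a real pipeline would vary the
    # actual 3D mesh, textures, and branch structure, not just a few numbers.
    BASE_TREE = {"height": 12.0, "trunk_radius": 0.4, "branch_count": 30, "lean_degrees": 0.0}

    def grow_variant(base, rng):
        """Return a slightly varied copy of the base tree description."""
        return {
            "height":       base["height"]       * rng.uniform(0.8, 1.2),
            "trunk_radius": base["trunk_radius"] * rng.uniform(0.85, 1.15),
            "branch_count": max(5, base["branch_count"] + rng.randint(-8, 8)),
            "lean_degrees": rng.uniform(-5.0, 5.0),
        }

    # A fixed seed makes the "forest" reproducible from one render to the next.
    rng = random.Random(42)
    forest = [grow_variant(BASE_TREE, rng) for _ in range(1000)]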
Good CGI takes forever to do right. A full-length animated film can take three or four years (sometimes more) to create from start to finish, from starting the script to final release. Hand-animated films take less time, provided you can afford an army of animators to do the drawings and ink and paint the cels (whether computer-assisted or old-fashioned ink-and-paint). A comparable live-action film can be completed even faster.
The basic problem with CGI is that every pose has to be modeled on the computer, rather than key poses simply being sketched and someone else (actually, many artists) drawing the “in-betweens” to create smooth action. Each character or object has to be moved the proper amount from frame to frame, and this simply takes time. Lots of time. This doesn’t even cover populating the background with scenery and props, which have to be fully modeled in 3D so characters can interact with the scenery and the camera (audience viewpoint) can move around somewhat realistically. In live-action filming, all this comes pretty much for free — actors know how to move their arms and legs to walk, horses know how to run, and so on. The animated model has to be extensively reviewed (manually checked) to ensure that everything is working properly, nothing is intersecting with anything else, and the like.
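For contrast, the hand-animation workflow mentioned above boils down to interpolating between key poses. A tiny, hypothetical sketch of linear in-betweening over a made-up three-joint rig:

    def inbetween(pose_a, pose_b, t):
        """Linearly interpolate joint angles between two key poses (0 <= t <= 1)."""
        return {joint: (1.0 - t) * a + t * pose_b[joint] for joint, a in pose_a.items()}

    # Two hand-set key poses, as joint angles in degrees (hypothetical rig).
    key_0 = {"shoulder": 10.0, "elbow": 90.0, "wrist": 0.0}
    key_1 = {"shoulder": 45.0, "elbow": 20.0, "wrist": 15.0}

    # Generate the in-between frames the way in-betweeners would draw them by hand.
    frames = [inbetween(key_0, key_1, i / 10) for i in range(11)]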
With every minute change in a character or object model having to be explicitly entered into the computer, for each frame, it’s no wonder that this takes forever. And every director dreads the thought of having to discard thousands of man-hours of work when a scene is cut, or even just modified in any way.
What is needed is a way to fully describe (model) a character or object (e.g., a car), and then be able to simply tell it at a very high level to move from A to B in a certain general way (e.g., walk, jog, run, skip). You would just give the program a command to have Joe standing in a certain position (with default attributes), and then tell his model to walk forward three full strides and come to a stop. The computer would know how a human normally walks, and could adjust this to reflect fatigue or various injuries or deformities. The speed (within reasonable limits for a walk cycle) could be adjusted, and some slight randomness could even be introduced in order to keep it from being “too perfect”. The randomness would have to be saved in the scene once a visually pleasing gait is found.
There are already languages to model and set up a scene for a static image (frame); this would extend those languages to describe at a high level how all the participants will move (“walk three steps”, not limb-by-limb, frame-by-frame orders to the model). The result would still have to be manually reviewed to ensure there are no collisions or intersections (software can help with this) and that the scene is how the writer and director envisioned it, but it should be much quicker to generate the action in the scene than with today’s process. Note that this does not affect full rendering and shading for the final product; it is just about getting characters into place and moving them around. The full render can be handled by throwing more and faster computers at the problem, and doesn’t necessarily need more people.
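Roughly what I have in mind might look like the following sketch. The scene format, actor names, and numbers are all made up for illustration; the point is that the director-facing description says “walk three strides”, and the software expands that into per-frame motion, with the random jitter tied to a saved seed so a pleasing gait can be kept.

    import random

    # A hypothetical high-level scene description: directives, not frame-by-frame poses.
    scene = [
        {"actor": "Joe", "action": "stand", "at": (0.0, 0.0)},
        {"actor": "Joe", "action": "walk", "strides": 3, "style": "tired", "seed": 7},
        {"actor": "Joe", "action": "stop"},
    ]

    def expand_walk(directive, stride_length=0.75, frames_per_stride=24):
        """Expand one 'walk' directive into per-frame positions with slight, seeded jitter."""
        rng = random.Random(directive["seed"])   # saved seed => the same gait on every render
        frames = []
        x = 0.0
        for _ in range(directive["strides"] * frames_per_stride):
            x += stride_length / frames_per_stride * rng.uniform(0.95, 1.05)
            frames.append({"actor": directive["actor"], "x": x})
        return frames

    walk_frames = expand_walk(scene[1])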
One thing that would be more difficult, though not impossible, is matching facial features and mouth and lip movement to speech. The actors’ voices are usually recorded first, and the animation is made to match the voice track. This is currently a very time-consuming, laborious process. Perhaps some day the voices and the animation of mouth movements can be generated from the script, but would such results be satisfactory? After all, audiences usually want to hear big-name actors in the roles, not some anonymous computer-generated voice. It might be more realistic to automate mouth movements to the sound track and script, but that will still take considerable work.
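Automating mouth movement from a transcribed, time-aligned voice track could start as simply as mapping phonemes to mouth shapes (visemes). The mapping and timings below are hypothetical and far cruder than production lip sync, which also has to account for co-articulation between neighboring sounds:

    # A very rough phoneme-to-viseme lookup; real lip sync uses far finer
    # categories and blends between adjacent shapes.
    VISEME = {
        "AA": "open", "IY": "wide", "UW": "round",
        "M": "closed", "B": "closed", "P": "closed",
        "F": "lip-bite", "V": "lip-bite",
    }

    def mouth_keys(timed_phonemes):
        """Turn (phoneme, start_seconds) pairs into mouth-shape keyframes."""
        return [(start, VISEME.get(ph, "neutral")) for ph, start in timed_phonemes]

    # Hypothetical timing pulled from a recorded voice track.
    print(mouth_keys([("M", 0.00), ("AA", 0.12), ("P", 0.30), ("IY", 0.41)]))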
In much CGI animation, clothing is cheated as the skin of the characters. It doesn’t move in a realistic way, except in a few special cases where extra effort has been expended. Note that in a movie like Shrek, the villagers’ clothing is simply their skin, and even on major characters only limited cloth modeling is done (Shrek’s tunic below the belt, Fiona’s dress).
Real clothing drapes over the body of the person wearing it; it has varying degrees of stiffness and bulk (compare a silk scarf to a down-filled parka); it has varying degrees of inertia and reaction to air currents. Imagine a woman wearing a long dress doing a pirouette — as she starts her turn, the cloth lags behind and wraps around her; at speed, centrifugal force extends it a bit (depending on the fabric weight and other factors); and as she slows, its inertia keeps it going and it wraps around her in the opposite direction from before. The clothing’s weight and stiffness affect how air currents and wind make it move.
There have long been adequate models for cloth (a grid of weighted nodes with springs and dampers connecting them), such as a flag waving in a breeze, but clothing is more complicated than that. As the body underneath it changes shape, it will drape differently. Clothing is often made with cloth cut and sewn in different directions, which affects its behavior when it drapes or interacts with the body or with a breeze. Also affecting it are the stiffness and elasticity of the cloth, how fast it is being moved, whether there are layers of cloth interacting with each other, and probably other factors. Compressed cloth should bunch up in folds or wrinkles.
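The classic flag-in-a-breeze model mentioned above is easy to sketch; the hard part, as the paragraph says, is everything clothing adds on top of it. A bare-bones version of the grid-of-nodes-with-springs idea (structural springs only, no collisions, and all parameters invented for illustration):

    # Minimal mass-spring cloth patch: a grid of point masses joined by
    # structural springs, stepped with damped semi-implicit Euler. Real cloth
    # solvers add shear and bend springs, collisions, and self-contact.
    N = 8                                  # grid is N x N nodes
    REST = 0.1                             # rest length between neighbors (meters)
    K, DAMP, MASS, DT = 400.0, 0.02, 0.05, 1.0 / 240.0
    GRAVITY = (0.0, -9.8, 0.0)             # acceleration, m/s^2

    pos = [[(i * REST, 0.0, j * REST) for j in range(N)] for i in range(N)]
    vel = [[(0.0, 0.0, 0.0)] * N for _ in range(N)]
    pinned = {(0, 0), (0, N - 1)}          # pin two corners, like a hanging flag

    def spring_accel(p, q):
        """Acceleration on the mass at p from the spring connecting it to q."""
        d = [q[k] - p[k] for k in range(3)]
        length = max(1e-9, sum(c * c for c in d) ** 0.5)
        scale = K * (length - REST) / (length * MASS)
        return [scale * c for c in d]

    def step():
        for i in range(N):
            for j in range(N):
                if (i, j) in pinned:
                    continue
                acc = list(GRAVITY)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < N and 0 <= nj < N:
                        for k, a in enumerate(spring_accel(pos[i][j], pos[ni][nj])):
                            acc[k] += a
                v = tuple((vel[i][j][k] + DT * acc[k]) * (1.0 - DAMP) for k in range(3))
                vel[i][j] = v
                pos[i][j] = tuple(pos[i][j][k] + DT * v[k] for k in range(3))

    for _ in range(2400):                  # ten seconds of settling at 240 steps per second
        step()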
I think the basic modeling of cloth is advanced enough that clothing could be successfully modeled, but the amount of model setup effort and computer time needed to do this well may presently simply be prohibitive. As computer prices drop and speeds increase, we may well see more realistic modeling of clothing on all characters. However, it will take some modeling improvements to keep setup time reasonable.
Hair on characters often behaves as though it’s been glued down with full cans of hair spray, or some gel (think of the infamous scene from There’s Something About Mary). No matter how windy it is, or how violently the character moves, every hair stays in place. The same holds for fur and feathers. Hair, etc. just becomes another skin (over an oddly-shaped skull).
It should be possible to model individual hairs in a manner similar to cloth. Naturally, the sheer number of hairs on a person’s head makes this a heavy computational and modeling load, so something would have to be done to streamline the process. Perhaps only a few sample hairs could be fully modeled, and the rest of the hairs play “follow-the-leader”, with their behavior averaged over that of their nearest sampled neighbors?
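A sketch of that “follow-the-leader” idea: only the guide hairs would be fully simulated, and every other strand is shaped by blending its nearest guides, weighted by distance from their roots. The data and weighting scheme here are hypothetical.

    import math

    def blend_from_guides(root, guides, power=2.0):
        """Interpolate a hair strand from nearby guide strands.

        `guides` is a list of (root_xy, strand_points) pairs; the returned strand
        is an inverse-distance-weighted average of the guide strands' points.
        """
        weights = [1.0 / max(math.dist(root, g_root), 1e-6) ** power for g_root, _ in guides]
        total = sum(weights)
        n_points = len(guides[0][1])
        strand = []
        for p in range(n_points):
            point = tuple(sum(w * g[1][p][axis] for w, g in zip(weights, guides)) / total
                          for axis in range(3))
            strand.append(point)
        return strand

    # Two simulated guide strands (hypothetical data), one in-between hair.
    guides = [((0.0, 0.0), [(0.0, 0.0, 0.0), (0.1, -0.2, 0.0)]),
              ((1.0, 0.0), [(1.0, 0.0, 0.0), (1.1, -0.3, 0.1)])]
    print(blend_from_guides((0.5, 0.0), guides))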
Some CGI-animated movies I’ve seen have credits listed for people concerned with modeling “fur and feathers”, so apparently some movie makers have addressed this issue, at least in part.
Most movement of living characters in CGI seems to simply swing the limbs through various allowable angles, doing nothing about the muscles beneath. While a bone skeleton is fairly rigid (not perfectly so, but close enough), and most joints can be modeled reasonably well as either hinges or ball-and-sockets, as we all know, a contracting muscle bulges out because it keeps a nearly constant volume as it shortens. Very few models seem to handle this at all. There are also fat and skin, which tend to wrinkle into folds when compressed instead of simply vanishing when the joint moves.
One very unrealistic aspect of modeling the human body in CGI animation is parts that are mostly fat, with no skeletal structure. Yes, I’m talking about breasts, and yes, I’m always looking closely! :-) You’ll notice that women (and fat men) are always either not well endowed, so to speak, or wearing some sort of nearly invisible, super-supporting sports bra. Nothing ever jiggles or bounces, yet in real life you’ll see plenty of such action. Women in CGI are incredibly hard-bodied, I guess. What kind of movie you’re making, and the rating you want, will affect how much bounce and jiggle you portray — the amount will differ between a Disney G-rated family film and something X-rated. But even a small amount of movement in the former would make it so much more realistic! Or at least, not draw attention to itself for failing to be lifelike.
There are other places where jiggle and bounce are needed for realism. Not only will abdominal fat jiggle like “a bowl full of jelly”, but fat and loose skin can build up on the thighs, especially. Poorly toned muscles, often seen in the elderly, can be very fatty (though not enlarged) and just hang there and jiggle. Finally, men have their own “swingers” :-), but unless you’re making a porn film or animating kilt-clad Scotsmen, such parts are usually fairly well restrained and don’t need extra modeling.
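For what it’s worth, the jiggle itself is not hard to describe: soft tissue can be treated as a mass on a damped spring that trails the bone it is attached to, overshoots when the bone stops, and settles. A toy one-dimensional sketch, with made-up stiffness and damping numbers:

    # Secondary motion ("jiggle") as a damped spring: a soft mass trails the
    # bone it is attached to and oscillates briefly after the bone stops.
    def jiggle(bone_positions, stiffness=60.0, damping=6.0, dt=1.0 / 24.0):
        """Given the bone's position per frame, return the soft mass's position per frame."""
        x, v = bone_positions[0], 0.0
        out = []
        for target in bone_positions:
            accel = stiffness * (target - x) - damping * v
            v += accel * dt
            x += v * dt
            out.append(x)
        return out

    # The bone lurches from 0 to 1 and stops; the soft tissue overshoots and settles.
    bone = [0.0] * 5 + [1.0] * 40
    print([round(p, 2) for p in jiggle(bone)])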
If you want realistic-looking humans (as well as other animals), something needs to be added to CGI models to portray the underlying musculature and fat, so that clothing will drape well over it and movement will be lifelike. It could be that an extended (relaxed) muscle is modeled in 3D, as would be a contracted muscle, and some (possibly nonlinear) interpolation made between the two extremes. Different people, with different levels of conditioning, will have different shapes and sizes of muscles, which would be set at the modeling stage. Deposits of fat (such as breasts) will keep the same volume in different positions, but will respond somewhat to gravity (constrained by their own cohesion and that of the enclosing skin). Modeling how they would respond to various impulses and body movements is an open question, but I’m sure it could be done.
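One way to realize that interpolation is the blend-shape approach: sculpt the relaxed and contracted shapes once, then mix them by an activation value, with a nonlinear ease standing in for the “possibly nonlinear” part. The vertex data below is invented for illustration:

    # Blend a muscle between its relaxed and fully contracted shapes using an
    # activation value in [0, 1], with a nonlinear ease so the bulge comes on
    # late, roughly suggesting constant volume.
    def blend_muscle(relaxed_verts, contracted_verts, activation):
        """Interpolate vertex positions between two sculpted muscle shapes."""
        t = activation * activation * (3.0 - 2.0 * activation)   # smoothstep easing
        return [tuple((1.0 - t) * r + t * c for r, c in zip(rv, cv))
                for rv, cv in zip(relaxed_verts, contracted_verts)]

    # Two hypothetical sculpted shapes for the same three vertices of a biceps.
    relaxed    = [(0.0, 0.0, 0.0), (0.5, 0.10, 0.0), (1.0, 0.0, 0.0)]
    contracted = [(0.1, 0.0, 0.0), (0.5, 0.25, 0.0), (0.9, 0.0, 0.0)]
    print(blend_muscle(relaxed, contracted, 0.7))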
The term “Uncanny Valley” was coined by the Japanese roboticist Masahiro Mori to describe how we are increasingly comfortable with increasingly lifelike robot appearances, but only up to a point. When a robot’s face and body get a bit too close to human appearance, we suddenly become very uncomfortable with it. Many people report being “creeped out” by very lifelike artificial constructs.
While some dispute the very existence of the Uncanny Valley (so named for the sharp dip in comfort with the robot), there seems to be at least general agreement that it’s real, and a problem. It even seems to exist in CGI animation. Humans can be portrayed very realistically, but beyond a certain point, they creep out most real people. That may be why we’re quite comfortable with caricatures of people, such as the exaggerated necks and skinny limbs of most animated people, or the massive overbite and googly eyes of Matt Groening’s beloved Simpsons characters. They’re recognizable as humans, but not so close to real that it makes us uncomfortable.
So, can we create animation so lifelike that it is accepted as real people, or is it not worth putting in so much effort, and better to just leave it as a caricature? Some workers in the field have created extremely lifelike faces, but even those were not 100 percent acceptable to viewers, and the computing and modeling cost to make a movie with them would be enormous.
A real human face (the part most important to get right, as we spend so much brainpower processing faces) has imperfections. There are very slight asymmetries, differences in coloring and skin texture across it, subsurface effects of bone, fat, muscle, nerves, and blood vessels; and more. A great artist can paint a superb portrait, but so far, getting a computer to do it acceptably has been a problem. I suppose all the imperfections listed above could somehow be put into a model, with some random variation from person to person. Don’t forget the intricate musculature around the lips, critical for speaking.
Finally, no accurate portrait of a human could be complete without their proper movements. This includes not only gross motion and talking, but other, often involuntary movements that others will notice are missing or incorrect. The eyes blink at varying rates (dry and dusty conditions can increase the rate, as can nervousness), lips are licked, the face is touched by fingers many times an hour, the eyes move constantly (saccades), we swallow saliva once in a while, and of course we breathe, which affects the face (as well as much of the rest of the body). All of these need to be modeled correctly and within a reasonable range of timings, or people will find that something is “off” for some reason they can’t quite put their finger on.
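Even the timing of these involuntary movements can be roughed out procedurally: draw the gaps between blinks and saccades from a distribution around a sensible mean rate, and seed the generator so the shot is reproducible. A sketch, with made-up rates:

    import random

    # Schedule involuntary eye behavior over a shot: blinks arrive at random
    # intervals around a mean gap, and small saccades happen far more often.
    def schedule_events(duration_s, mean_gap_s, rng):
        """Return event times spaced by exponentially distributed gaps (a Poisson-like process)."""
        times, t = [], rng.expovariate(1.0 / mean_gap_s)
        while t < duration_s:
            times.append(round(t, 2))
            t += rng.expovariate(1.0 / mean_gap_s)
        return times

    rng = random.Random(3)                       # seeded so the shot renders the same every time
    blinks   = schedule_events(30.0, 4.0, rng)   # a blink roughly every 4 s; dust or nerves would shorten the gap
    saccades = schedule_events(30.0, 0.8, rng)   # small eye darts, much more often
    print(blinks, saccades[:5])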
It may not be worth trying to cross the Uncanny Valley, since an incredible amount of modeling work would have to be done to get super-realistic people. However, even accepting a small amount of artistic license (or caricature), there are still many things that could be done to make an animated character more lifelike in appearance and behavior, and therefore contribute to the desired suspension of disbelief among viewers. Fortunately, most people are fairly similar in their anatomy, so a model could be built once and reused widely (with minor random variations introduced). It will still be expensive in rendering time, but once the modeling is done, very little extra modeling will be needed per character.