04-03 Facial Expressions

Location: Samples/04 Layers/03 Facial Expressions

Recommended After: Dynamic Layers

Learning Outcomes: In this sample you will learn:

How to control Blend Shapes using animations.

How to manage facial expression animations.

How to implement a Custom Transition Type.

Summary

This sample uses Blend Shapes and another Layer to add facial expressions on top of the same character and behaviour from the Dynamic Layers sample.

Overview

The scene is a full copy of Dynamic Layers, with the addition of a FacialExpressionManager to create some Sample Buttons which play the expression animations.

FacialExpressionManager initializes a Facial Expressions layer on top of the Base Layer and Action Layer created by the LayeredAnimationManager:

using Animancer;
using Animancer.Samples;
using UnityEngine;

[DefaultExecutionOrder(1000)]
public class FacialExpressionManager : MonoBehaviour
{
    [SerializeField] private AnimancerComponent _Animancer;
    [SerializeField] private SampleButton _Button;
    [SerializeField] private NamedClipTransition[] _Expressions;

    private AnimancerLayer _Layer;

    protected virtual void Awake()
    {
        _Layer = _Animancer.Layers.Add();
        _Layer.SetDebugName("Facial Expressions");

        _Layer.Play(_Expressions[0]);

        for (int i = 0; i < _Expressions.Length; i++)
        {
            NamedClipTransition expression = _Expressions[i];

            _Button.AddButton(
                i,
                expression.Name,
                () => _Layer.Play(expression));
        }
    }
}

It uses Sample Buttons to create a list of buttons which play each of the animations.

Those buttons are defined by an array of NamedClipTransitions, which are just ClipTransitions with an additional Name field to use for the text on each button:

using Animancer;
using UnityEngine;
using System;

[Serializable]
public class NamedClipTransition : ClipTransition
{
    [SerializeField]
    private string _Name;

    public override string Name
        => _Name;

    public void SetName(string name)
        => _Name = name;
}

Animations

Unlike regular bone-based animations, facial expressions are often implemented using Blend Shapes, which appear as sliders in the character's SkinnedMeshRenderer component. Those sliders can be controlled in code or by animations.
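For example, a Blend Shape slider can be set directly in code through the SkinnedMeshRenderer API. This is a minimal sketch; the Blend Shape name used here is hypothetical, so check your own mesh for its real names:

```csharp
using UnityEngine;

public class BlendShapeExample : MonoBehaviour
{
    [SerializeField] private SkinnedMeshRenderer _Renderer;

    protected virtual void Start()
    {
        // Look up the slider's index by name, then set it to fully applied (100).
        // "Fcl_ALL_Joy" is a hypothetical name for illustration only.
        int index = _Renderer.sharedMesh.GetBlendShapeIndex("Fcl_ALL_Joy");
        if (index >= 0)
            _Renderer.SetBlendShapeWeight(index, 100);
    }
}
```

Driving the sliders with animations instead (as this sample does) lets Animancer handle the blending for you.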

The character used in these samples was created using VRoid Studio, so it automatically comes with the common expressions shown here as well as various others for things like mouth shapes and movements for specific parts of the face. These expressions are all quite simple, so each animation only contains a single keyframe which sets its corresponding Blend Shape value to 100:

The Transitions used to play those animations each have the default Fade Duration of 0.25 seconds, so when Animancer plays them, the fade creates a smooth blend between the expressions.
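The transition's own Fade Duration is used automatically, but a duration can also be passed to Play explicitly. A brief sketch, assuming the fields from FacialExpressionManager above:

```csharp
// Uses the transition's serialized Fade Duration (0.25 seconds by default):
_Layer.Play(expression);

// Plays the raw clip with an explicit half-second fade instead:
_Layer.Play(expression.Clip, 0.5f);
```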

Initialization

FacialExpressionManager is designed to be usable on top of any other system without interfering with the other animations.

It does this by using a [DefaultExecutionOrder] attribute set to 1000 so that its Awake will run after the other scripts, which use the default execution order of 0.

[DefaultExecutionOrder(1000)]
public class FacialExpressionManager : MonoBehaviour
{
    [SerializeField] private AnimancerComponent _Animancer;
    [SerializeField] private SampleButton _Button;
    [SerializeField] private NamedClipTransition[] _Expressions;

    private AnimancerLayer _Layer;

Then it simply calls Layers.Add instead of getting a specific layer number:

    protected virtual void Awake()
    {
        _Layer = _Animancer.Layers.Add();
        _Layer.SetDebugName("Facial Expressions");

        _Layer.Play(_Expressions[0]);

That allows this script to be used in any other sample, regardless of how many layers the other scripts use.

Custom Transition Type

The animations used in this sample have long names like AnimancerHumanoid-Face-Neutral, so it would be nice to give each of them a shorter display name. We could write code to find the last - character and use everything after it as the name, or simply add a string[] field to match up with the array of animations, but this is a good opportunity to demonstrate how you can create your own Custom Transition Type.

In order to be usable in Serialized Fields, your class must have the [System.Serializable] attribute:

using Animancer;
using UnityEngine;
using System;

[Serializable]

Then you can simply Inherit from one of the existing Transition Types, or if you don't want all their features, you can implement ITransition directly:

public class NamedClipTransition : ClipTransition

And that's it; now you can put whatever you want in it.

In this case, we want a Serialized Field for the name which we expose publicly by overriding the Name property Inherited from the base Transition<TState> class:

{
    [SerializeField]
    private string _Name;

    public override string Name
        => _Name;

Unfortunately, C# doesn't allow a property override to add a setter if the base property didn't have one, so we also need a SetName method:

    public void SetName(string name)
        => _Name = name;
}

We won't actually use that method since we want to set the names in the Inspector, but it's there just in case.

Now the Inspector shows a Name field for each of those transitions:

The Platformer Game Kit uses its own custom transition type for melee attacks to store their hit area and damage as well as one for ranged attacks to reference their projectile prefab and details about how to launch it.
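A game-specific transition might bundle gameplay data alongside its animation details in the same way. This is a hypothetical sketch of the idea, not the Platformer Game Kit's actual implementation:

```csharp
using System;
using Animancer;
using UnityEngine;

// A hypothetical attack transition which carries a damage value
// in addition to everything inherited from ClipTransition,
// so the animation and its gameplay data stay together in the Inspector.
[Serializable]
public class AttackTransition : ClipTransition
{
    [SerializeField] private int _Damage;

    public int Damage => _Damage;
}
```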

Buttons

Now we can go back to the FacialExpressionManager.Awake method we were looking at in the Initialization section.

To create a button for each expression, it simply calls AddButton on the Sample Button script for each of them:

protected virtual void Awake()
{
    ...

    for (int i = 0; i < _Expressions.Length; i++)
    {
        NamedClipTransition expression = _Expressions[i];

        _Button.AddButton(
            i,
            expression.Name,
            () => _Layer.Play(expression));
    }
}

Manual Mixers

The animations used in this sample are all simply poses with 0 length which set the values of Blend Shapes on the character's face renderer, and they are able to blend together smoothly by Fading like any other animation. In this case, playing each one as an individual animation is quite convenient.

In an Animator Controller, facial expressions are often managed inside a Direct Blend Tree which allows you to control parameters which correspond to the weight of each animation.

Animancer allows you to directly control the Weight of a state whenever you want without any extra setup or grouping:

AnimancerState state = _Layer.Play(expression);
state.Weight = 0.5f;

But it also has Manual Mixers which allow you to group multiple manually controlled states together, similar to a Direct Blend Tree.

[SerializeField] private ManualMixerTransition _Expressions;

...

AnimancerState state = _Layer.Play(_Expressions);
state.GetChild(x).Weight = 0.5f;

That wouldn't simplify this particular use case, but they can be useful in certain situations.