This article explains how facial expressions are handled when exporting avatars as VRM (in ver. 2.0.2 and later).
When exporting VRChat avatar data as VRM using the dress-up feature, expressions are automatically converted to conform to VRM standards.
In VRM, facial expressions are categorized under "Expressions" and are divided into the following four types used by most VRM applications: Emotion, LipSync, Blink, and LookAt.
VRM-compatible applications also support Custom Clips specific to individual avatars.
For details on VRM Expressions, see the UniVRM tutorial: https://vrm.dev/en/univrm1/vrm1_tutorial/expression/
During export, the dress-up feature estimates and maps the avatar's original facial data to the corresponding VRM Expressions.
The estimation methods for each expression type are summarized in the table below.
For avatars where the base model is in VRM format, the original Expressions data is used.
Emotion
Emotion expressions are estimated based on the names of AnimationClips set in the Playable Layers of the AnimatorController.
Expression | Estimation method |
---|---|
Happy | Derived from AnimationClip names, or from the animation assigned to the "peace" hand sign |
Angry | Derived from AnimationClip names |
Sad | Derived from AnimationClip names |
Relaxed | Derived from AnimationClip names |
Surprised | Derived from AnimationClip names |
- If no clear match is found for an expression, it will not be assigned. This is common with the Sad and Angry expressions.
- AnimationClips that aren't assigned to any expression are added as Custom Clips.
LipSync
LipSync expressions are first determined from the contents of the VRCAvatarDescriptor. For any unassigned elements, the system estimates matches using BlendShape names.
Expression | Estimation method |
---|---|
Aa | Uses the "aa" entry in the AvatarDescriptor. If unavailable, estimates from BlendShapes like fcl_mth_a or "あ" |
Ih | Uses the "ih" entry in the AvatarDescriptor. If unavailable, estimates from BlendShapes like fcl_mth_ih or "い" |
Ou | Uses the "ou" entry in the AvatarDescriptor. If unavailable, estimates from BlendShapes like fcl_mth_ou or "う" |
Ee | Uses the "ee" entry in the AvatarDescriptor. If unavailable, estimates from BlendShapes like fcl_mth_ee or "え" |
Oh | Uses the "oh" entry in the AvatarDescriptor. If unavailable, estimates from BlendShapes like fcl_mth_oh or "お" |
Blink
Blink expressions are first determined from the contents of the VRCAvatarDescriptor. For any unassigned elements, the system estimates matches using BlendShape names.
Expression | Estimation method |
---|---|
Blink | Uses the Blink entry in the AvatarDescriptor. If unavailable, estimates from BlendShapes like fcl_eye_close or "まばたき" |
Blink Left | Estimates from BlendShape names. If unavailable, splits the Blink into separate BlendShapes for each eye* |
Blink Right | Estimates from BlendShape names. If unavailable, splits the Blink into separate BlendShapes for each eye* |
*Enabled only if the "Adjust blinking for each eye" option is selected.
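The per-eye split mentioned above can be pictured as dividing one Blink BlendShape's vertex offsets by which side of the face each vertex sits on. This is a simplified sketch of the idea, not the exporter's actual implementation; the sign convention for left/right is an assumption:

```python
# Hypothetical sketch of splitting a single "Blink" BlendShape into
# left/right variants: vertices on one side of the mesh's X axis keep
# their offsets, the rest are zeroed out. The +X = left convention is
# an assumption; the actual exporter logic may differ.
Vec3 = tuple[float, float, float]
ZERO: Vec3 = (0.0, 0.0, 0.0)

def split_blink(positions: list[Vec3], deltas: list[Vec3]):
    """positions: rest-pose vertex positions; deltas: Blink offsets.
    Returns (blink_left_deltas, blink_right_deltas)."""
    left, right = [], []
    for (x, _, _), d in zip(positions, deltas):
        if x > 0:                 # assumed +X side of the face
            left.append(d)
            right.append(ZERO)
        else:
            left.append(ZERO)
            right.append(d)
    return left, right
```

Each derived BlendShape closes only one eye, which is what Blink Left and Blink Right need when the source model has only a combined blink.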
LookAt
Since LookAt is not included in the VRCAvatarDescriptor, it is not assigned to VRM Expressions.