FACSvatar v0.2.7-Alpha

Affective computing and avatar animation share a common premise: a person's facial expression contains useful information. Until now, these fields have used different processes to obtain and work with this data. FACSvatar combines both purposes in a single framework. Empower your Embodied Conversational Agents (ECAs)!

  • Affective computing: Facial expressions can not only be analyzed, but also be used to generate animation, purely from data.
  • Animators: Capture facial expressions with just a camera and use them to animate any compatible avatar.

This interoperability is possible because FACSvatar uses the Facial Action Coding System (FACS) by Paul Ekman as an intermediate data representation. FACS describes facial expressions in terms of muscle groups, called Action Units (AUs). By giving each AU a value between 0 and 1, we can describe the contraction/relaxation of facial muscles.
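For example, a (Duchenne) smile is coded as AU6 (cheek raiser) plus AU12 (lip corner puller). As a rough sketch, one frame of such data could look like the following; the exact key names and message format used by the modules are an assumption here.

    # One frame of FACS data: Action Unit -> intensity,
    # where 0 = fully relaxed and 1 = fully contracted.
    smile_frame = {
        "AU06": 0.8,  # cheek raiser
        "AU12": 0.9,  # lip corner puller
    }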

[Demo video: FACSvatar demo 2018-02]

Documentation & quick start

Open three terminals and open the project unity_FACSvatar in Unity 3D (2017.3):

  1. Press 'Play' in the Unity editor
  2. Install the PyZMQ library (ZeroMQ bindings for Python)
  3. Terminal 1: python N_proxy_M_bus.py (in /modules/)
  4. Terminal 2: python pub_blend.py (in /modules/02_facs-to-blendshapes/)
  5. Terminal 3: python pub_facs.py (in /modules/01_facs-from-csv/)
  6. See the avatar move its head and make facial expressions! (To inspect the data on the bus, see the subscriber sketch below this list.)
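To check that data is actually flowing over the bus, you can attach a throwaway subscriber to the proxy. This is a minimal sketch: the port number and the multipart message layout are assumptions for illustration, so check the module source for the actual values.

    # Debugging subscriber (sketch): print everything passing through the proxy.
    # The port (5571) and message layout are assumptions; see the FACSvatar
    # module source for the real conventions.
    import zmq

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:5571")       # hypothetical proxy publish port
    sub.setsockopt_string(zmq.SUBSCRIBE, "")  # subscribe to all topics

    while True:
        parts = sub.recv_multipart()
        print([p.decode() for p in parts])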

For more detailed instructions, see the FACSvatar documentation.

Modules & cross-platform

This framework is tested on both Windows and Linux (Ubuntu).

Everything in this framework is modular! Do the models look low-quality? Use different models that can be animated by FACS (or convert FACS to matching Blend Shapes). Have you made a better FACS extractor (e.g. with a depth camera)? Use that instead! Want more intelligence? Add your own modules for extended functionality!

This modularity is made possible by ZeroMQ, a brokerless messaging library. Data is transferred between sockets in a Publisher-Subscriber pattern, so modules don't need to know where their data comes from or who consumes it. This makes it easy to add or remove modules, no matter the programming language.
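As a sketch of how a module could publish onto this bus (the port, topic, and payload layout are illustrative assumptions, not FACSvatar's actual conventions):

    # Minimal publisher (sketch): push one frame of AU values as JSON.
    import json
    import time
    import zmq

    ctx = zmq.Context()
    pub = ctx.socket(zmq.PUB)
    pub.connect("tcp://127.0.0.1:5570")  # hypothetical proxy subscribe port
    time.sleep(0.5)                      # give subscriptions time to propagate

    frame = {"AU06": 0.7, "AU12": 0.9}   # cheek raiser + lip corner puller
    pub.send_multipart([b"facs.example", json.dumps(frame).encode()])

Because the publisher only talks to the proxy, any number of subscribers (Unity, Blender, a logger) can consume the same stream without the publisher knowing about them.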

Functionality

  • Stream your facial expressions in real time into Unity 3D
  • Set Shape Keys in Blender with your facial expressions for high-quality rendering, and/or export your facial animation for classic trigger-based animation in e.g. games (see Manuel Bastioni FACS expressions, and the Blender sketch after this list).
  • [near-future] Deep Neural Network generation of facial expressions for Human-Agent Interaction.
  • [your modules] Please add your own modules, release your code, and let's expand the functionality of this framework :) More details are in the documentation.
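For the Blender route mentioned above, driving a Shape Key from an AU value comes down to a few lines of bpy. A minimal sketch, assuming a mesh object named "Face" with a Shape Key named "AU12" (both names are hypothetical; match them to your own model):

    # Run inside Blender: set a Shape Key from an AU value and keyframe it.
    import bpy

    face = bpy.data.objects["Face"]                # hypothetical object name
    key = face.data.shape_keys.key_blocks["AU12"]  # hypothetical key name
    key.value = 0.9                                # lip corner puller intensity
    key.keyframe_insert(data_path="value")         # bake for later export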

Detailed workings (English & Japanese)

FACSvatar details are available in English and Japanese (日本語).

More information can be found on the project's website: the FACSvatar homepage.

Note: The poster still shows Crossbar.io, but this has been replaced with ZeroMQ.
