MHC Talker

AI tools for digital humans that bring MetaHuman to life

  • One-click MetaHuman role configuration
  • Dual-channel text/voice input
  • Real-time lip-sync inference
  • Speech-stress-driven head micro-movements
  • Post-processing expression fine-tuning
  • Conversation-driven body-movement libraries

One-Click MetaHuman Configuration

Through a visual interface, individual artists or teams can bind a MetaHuman character model, voice library, and AI service interfaces without any complex programming, quickly completing the character's "intelligent" configuration.

Dual-Channel Text/Voice Input

Users can interact with MetaHuman characters through text input or real-time voice. Characters automatically recognize the semantics and generate contextualized voice replies, enabling a truly two-way conversation.

Real-Time Lip-Sync Inference

Based on deep-learning algorithms, the system accurately parses voice content and drives mouth shapes in real time to closely match pronunciation. Say goodbye to mechanical lip animation and enjoy natural, lifelike dialogue performance. All languages are supported.

Speech-Stress-Driven Head Micro-Movements

Using speech-stress recognition, the plug-in automatically triggers the character's natural micro-movements such as head bobbing, nodding, and side tilting, simulating the subconscious reactions of human conversation and dramatically improving the realism of interaction.

Post-Processing Expression Fine-Tuning

A refined expression control panel supports detailed adjustment of AI-generated expressions (e.g., mouth curvature, eye focus, frown intensity), enabling accurate portrayal from basic emotions to subtle ones.
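As an illustration only, this kind of post-processing can be pictured as additive offsets applied on top of AI-generated expression channels. The channel names (`mouth_curvature`, `eye_focus`, `frown_intensity`) and the [0, 1] range are assumptions for this sketch, not MHC Talker's actual parameters.

```python
def fine_tune(base_expression: dict, offsets: dict) -> dict:
    """Add artist offsets to AI-generated expression channels, clamped to [0, 1]."""
    tuned = dict(base_expression)
    for channel, delta in offsets.items():
        value = tuned.get(channel, 0.0) + delta
        tuned[channel] = min(1.0, max(0.0, value))
    return tuned

# AI output for one frame (illustrative channel names).
base = {"mouth_curvature": 0.4, "eye_focus": 0.8, "frown_intensity": 0.1}
# The artist nudges the smile up and removes the frown entirely.
tuned = fine_tune(base, {"mouth_curvature": 0.2, "frown_intensity": -0.5})
```

Because the offsets are applied after generation, the same AI output can be re-tuned repeatedly without re-running inference.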

Conversation-Driven Body-Movement Libraries

Characters automatically trigger matching body movements (e.g., hand gestures, shrugs, open-palm spreads) based on the conversation's content, and custom logic rules are supported. For example: lean the head forward when asking a question, wave the hands when emphasizing, shrug when puzzled. Actions are deeply bound to semantics.
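A minimal sketch of what such custom logic rules might look like. The intent names, gesture identifiers, and keyword-based classifier below are illustrative assumptions, not the plug-in's real rule format; a production system would derive intent from the LLM's semantic output.

```python
# Hypothetical rule table binding conversational intent to body movements.
GESTURE_RULES = {
    "question": "head_lean_forward",
    "emphasis": "hand_wave",
    "confusion": "shoulder_shrug",
}

def classify_intent(text: str) -> str:
    """Toy intent classifier; stands in for real semantic analysis."""
    if text.rstrip().endswith("?"):
        return "question"
    if "!" in text:
        return "emphasis"
    if "not sure" in text.lower():
        return "confusion"
    return "neutral"

def gesture_for(text: str) -> str:
    """Look up the movement for a line of dialogue, falling back to idle."""
    return GESTURE_RULES.get(classify_intent(text), "idle")
```

For example, `gesture_for("Where is the exit?")` resolves to the question rule and returns `"head_lean_forward"`, while unmatched lines fall back to `"idle"`.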

Excellent Language Generalization

Full language support
Real-time dialogue interaction

Unreal Engine One-Click Driver

Select a character and right-click to create an Intelligent Dialogue from a preset program, configuring an intelligent avatar in one click.


Maya Facial Animation Generation

A Maya expression-animation generator tool that produces high-quality facial animation from voice, enabling efficient character-animation production.

Private Deployment

Project-level deployment support: private deployment with modular management, free replacement of TTS, ASR, and LLM services, and multi-client inference, meeting the needs of medium and large commercial projects. Animation-inference response times are as low as milliseconds.
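The modular replacement described above can be pictured as a configuration in which each service slot is independently swappable. The keys, provider names, and endpoints below are hypothetical, chosen only to illustrate the idea, and are not MHC Talker's actual deployment schema.

```python
# Hypothetical modular deployment config: each AI service is one slot.
DEPLOY_CONFIG = {
    "tts": {"provider": "local-tts", "endpoint": "http://127.0.0.1:8001"},
    "asr": {"provider": "local-asr", "endpoint": "http://127.0.0.1:8002"},
    "llm": {"provider": "local-llm", "endpoint": "http://127.0.0.1:8003"},
    "max_clients": 16,  # multi-client inference
}

def swap_module(config: dict, module: str, provider: str, endpoint: str) -> dict:
    """Replace one service module without touching the others."""
    if module not in ("tts", "asr", "llm"):
        raise ValueError(f"unknown module: {module}")
    new_config = dict(config)
    new_config[module] = {"provider": provider, "endpoint": endpoint}
    return new_config

# Swap the LLM backend; TTS and ASR stay untouched.
updated = swap_module(DEPLOY_CONFIG, "llm", "my-llm", "http://10.0.0.5:9000")
```

Keeping each service behind its own slot is what makes "free replacement" possible: a project can move from one LLM vendor to another without reconfiguring speech input or output.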

F.A.Q.


Q: Which character rigs are supported?
A: At present, the MetaHuman standard rig is the best choice for general-purpose binding. Only the MetaHuman standard binding scheme is supported at this stage; support for more rigs will be launched later.

Q: Does MHC Talker support both text and voice input?
A: Yes. MHC Talker can receive both text and voice; simply send text or voice to the plug-in interface. See the relevant documentation for specifics.

Q: Which DCC tools are supported?
A: Currently, MHC Talker supports Unreal Engine for real-time interaction and Maya for offline animation generation. Unity and Blender support will follow in later updates.

Q: Which Unreal Engine versions are supported?
A: Unreal Engine 5.3, 5.4, and 5.5, with continuous updates.

Q: Which operating systems are supported?
A: Currently both Windows and Linux.

Q: Which platforms are supported in packaged builds?
A: Packaged builds for Windows, Linux, and Android are currently supported.

Q: Which Maya versions are supported?
A: Maya 2022, 2023, 2024, and 2025, with continuous updates.
