Frequently Asked Questions
Long-Term Support?
Which Platforms Are Supported?
Our technology is agnostic to the client development environment. Core components run on our servers and are accessed through our Cloud API.
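Because the API is plain HTTPS, any environment with an HTTP client can use it. The sketch below builds (but does not send) a request that would submit a photo for generation; the base URL, route, auth scheme, and content type are all hypothetical placeholders, not the real API surface (see the API docs for the actual routes).

```python
import urllib.request

# Hypothetical base URL: a placeholder for illustration, not the real endpoint.
API_BASE = "https://api.example.com/v1"

def build_generation_request(photo_bytes: bytes, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an HTTPS request that would submit a photo
    for didimo generation. Route, auth scheme, and content type are assumptions."""
    return urllib.request.Request(
        url=f"{API_BASE}/didimos",
        data=photo_bytes,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/octet-stream",
        },
    )

req = build_generation_request(b"<photo bytes>", "YOUR_API_KEY")
print(req.full_url, req.get_method())
```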
What file format does a didimo come in?
We support the FBX and glTF file formats, with accompanying texture maps in PNG and JPG. We also have a glTF extension that allows extra maps to be specified. Standard FBX and glTF loaders will be able to load, render, and play the animations of your didimos, but the extra maps are required for higher rendering quality. Our Unity SDK handles all of this automatically.
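Since a `.gltf` file is JSON, a loader can check whether extra maps are available by reading the file's `extensionsUsed` list. The snippet below is a minimal sketch; the sample document and the `VENDOR_extra_maps` extension name are made-up placeholders, not the actual didimo extension name.

```python
import json

# Minimal illustrative glTF document; "VENDOR_extra_maps" is a placeholder,
# not the actual name of the didimo glTF extension.
gltf_text = """
{
  "asset": {"version": "2.0"},
  "extensionsUsed": ["KHR_materials_unlit", "VENDOR_extra_maps"],
  "materials": [{"name": "skin"}]
}
"""

def list_extensions(gltf_json: str) -> list:
    """Return the extensions a glTF file declares, so a loader can decide
    whether the extra didimo texture maps are available."""
    doc = json.loads(gltf_json)
    return doc.get("extensionsUsed", [])

print(list_extensions(gltf_text))  # prints both extensions from the sample
```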
Do You Provide an SDK?
Yes. We have built a Unity SDK that provides you with tools and examples to get you up and running at speed.
Do You Support Unreal?
Sort of. We will be building an SDK to make it easy; in the meantime, developers can connect directly to our API (see the API docs) and import a didimo using a glTF importer. Sign up for our newsletter to find out when new features launch.
Do you support PBR materials?
Didimos come in FBX and glTF formats. The FBX format doesn't support PBR materials, and although glTF does, the core spec isn't sufficient to properly render a character's skin or hair. We assign the standard texture maps where we can on the FBX and glTF files, and for glTF we created a didimo extension that feeds our shaders the extra information required to render didimos with as much detail and realism as possible.
How does it animate?
Animation is composed of individual animation tracks for different poses. These are mostly the poses defined by ARKit's facial features, plus visemes and a few others such as basic facial expressions (smile, surprise, etc.). To animate your character, simply blend between these poses. For more information, see the Facial Animations page.
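Blending between poses can be sketched as standard blend-shape animation: each pose stores per-vertex offsets from the neutral mesh, and the animated mesh is the neutral mesh plus the weighted sum of the active pose offsets. The pose names, vertex data, and weights below are illustrative, not the didimo rig's actual values.

```python
# Minimal blend-shape sketch: each pose stores per-vertex offsets from the
# neutral mesh; the animated mesh is neutral + sum(weight * offset).
# Pose names and data are illustrative, not the didimo rig's actual names.

neutral = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]

poses = {
    "smile":   [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0)],
    "jawOpen": [(0.0, -0.2, 0.0), (0.0, -0.2, 0.0)],
}

def blend(weights: dict) -> list:
    """Return vertex positions after applying weighted pose offsets."""
    out = []
    for i, (x, y, z) in enumerate(neutral):
        dx = dy = dz = 0.0
        for name, w in weights.items():
            ox, oy, oz = poses[name][i]
            dx += w * ox
            dy += w * oy
            dz += w * oz
        out.append((x + dx, y + dy, z + dz))
    return out

print(blend({"smile": 1.0, "jawOpen": 0.5}))
```

Driving the weights over time (e.g. from ARKit capture data) produces the animation.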
Can I Create a Full-Body Digital Human?
Not yet, but soon. At the moment, our publicly released tools support the generation of human heads. We will expand public support to full-body generation soon. If you need to attach pre-existing bodies to our head mesh, contact us so we can help you set it up and automate the process.
Do You Include Viseme Support?
Yes. Visemes are a specific set of facial poses that mimic the distinct shapes the mouth makes when producing vocal phonemes, giving you detailed mouth control that can match specific speech sounds. Many TTS solutions, such as Amazon Polly, include a data stream that can drive visemes on a 3D model, generating accurate lip-sync and believable visual speech. The visemes option provides your didimo with 21 viseme-specific shapes.
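As a sketch of how such a stream can be consumed: when Amazon Polly is asked for viseme speech marks, it returns newline-delimited JSON events with a time (in milliseconds) and a viseme symbol. The mapping from Polly's symbols to pose names below is hypothetical; the real pose names come from your didimo's rig.

```python
import json

# Sample Polly viseme speech marks (newline-delimited JSON, as returned when
# requesting viseme speech marks). Times and values here are illustrative.
speech_marks = """\
{"time": 0, "type": "viseme", "value": "p"}
{"time": 125, "type": "viseme", "value": "E"}
{"time": 280, "type": "viseme", "value": "sil"}
"""

# Hypothetical mapping from Polly viseme symbols to a didimo's viseme poses;
# the real pose names come from the didimo rig, not from this table.
POLLY_TO_POSE = {"p": "viseme_PP", "E": "viseme_E", "sil": None}

def viseme_track(marks: str) -> list:
    """Turn a viseme speech-mark stream into (time_ms, pose_name) keyframes."""
    track = []
    for line in marks.splitlines():
        if not line.strip():
            continue
        mark = json.loads(line)
        if mark["type"] == "viseme":
            track.append((mark["time"], POLLY_TO_POSE.get(mark["value"])))
    return track

print(viseme_track(speech_marks))
```

Each keyframe can then drive the corresponding viseme pose weight at playback time.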
How Much Does It Cost?
What Is the Technical Specification of the Digital Human?
Digital Human Specification
Customer Asset Libraries
What Integrations Do You Support?
Speech: Amazon AWS Polly
MoCap: ARKit
VR: Oculus Lipsync
What Level of Performance Can I Expect?
Realtime Benchmarks
Generation Time
How Do I Submit a Feature Request or Bug Report?
Service Level Agreement?
Service Level Agreements (SLA)
Do You Store The Photos From People Who Create Digital Humans?
Do You Store My Customers' Data?