Bad image in = bad didimo out :-(
Sending the API a selfie image that meets the input requirements outlined below is key to getting the best output.
We suggest building a step-by-step in-app capture UX influenced by this guidance, additionally supported with ARKit "validation locks" that prevent the user from capturing a selfie until the criteria listed below are met. This is particularly useful for assessing the facial expression, facial orientation, facial position, scene illumination level, and whether the eyes are open or closed. Note that this only allows validation of images captured in-app, not of images selected from the user's gallery / photo library.
Didimo provides a simple validation step in the pipeline to find and recognise a face. If this step fails, the pipeline will not process the image and the generation request will be marked as failed, along with a suitable error.
We've generated tens of thousands of didimos, and we've come to know exactly what will get the best output.
Below is a summary of the guidance we would be eternally grateful if you passed on to your customers. We also have some recommendations regarding validation solutions at the bottom of this page.
Uniform Illumination
The face should be well lit, with uniform illumination.
Facing a large open window that does not have strong sunlight typically creates optimal results.
Clear View of Face
The face should be completely within the camera frame (a selfie from arm's length using a front-facing camera is spot on).
Remove all hair from the front of your face. If you have long hair, it is better to tie it back or wear it up so it does not cover your forehead, ears and shoulders.
Remove Glasses and Accessories
Remove any glasses and other facial accessories such as nose rings, face masks, etc.
Directly Facing the Camera
Ensure you are facing directly on to the camera with your head not tilted up or down, and with your eyes open and looking straight ahead.
Neutral Facial Expression
Keep your facial expression as neutral as possible. Avoid smiling, showing teeth, frowning, etc.
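The capture criteria above can be sketched as a simple client-side "validation lock" gate. The `CaptureCheck` fields and `capture_allowed` function below are illustrative names only, not part of the Didimo API; in a real app each flag would come from your face-analysis tooling (e.g. ARKit):

```python
from dataclasses import dataclass

@dataclass
class CaptureCheck:
    """Hypothetical results of client-side analysis of one selfie frame."""
    face_fully_in_frame: bool
    hair_clear_of_face: bool
    no_glasses_or_accessories: bool
    facing_camera: bool
    eyes_open: bool
    neutral_expression: bool
    well_lit: bool

def capture_allowed(check: CaptureCheck) -> bool:
    """A 'validation lock': enable the shutter only when every criterion passes."""
    return all(vars(check).values())

# Example: everything passes except the neutral-expression check.
check = CaptureCheck(True, True, True, True, True,
                     neutral_expression=False, well_lit=True)
print(capture_allowed(check))  # False
```

Gating the shutter this way means failures are caught before the image is ever uploaded, rather than being rejected by the pipeline's own validation step.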
The following highlights some features that can help you get the best out of the platform.
Maintain the EXIF data in the image that is sent, as this gives our platform a lot of information about the image that it uses to positively influence likeness levels.
Most notable is the 35mm-equivalent focal length value (exif:FocalLengthIn35mmFilm). This attribute is key to determining the field of view (FOV), which plays a major part in achieving high likeness scores.
Cropping the image to reduce the file size is generally not recommended as it may invalidate the EXIF data.
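To illustrate why the 35mm-equivalent focal length matters, the standard optics formula below relates it to horizontal FOV (a full-frame "35mm" sensor is 36 mm wide). This is general camera geometry, not Didimo-specific code:

```python
import math

def horizontal_fov_deg(focal_length_35mm_equiv: float) -> float:
    """Horizontal field of view implied by a 35mm-equivalent focal length.

    A full-frame (35mm) sensor is 36 mm wide, so:
        FOV = 2 * atan(sensor_width / (2 * focal_length))
    """
    return math.degrees(2 * math.atan(36.0 / (2.0 * focal_length_35mm_equiv)))

# A typical front-facing phone camera (~26 mm equivalent) is fairly wide-angle:
print(round(horizontal_fov_deg(26.0), 1))  # ~69.4 degrees
```

Without the EXIF focal length, the platform cannot recover this FOV, which is why stripping or invalidating EXIF data hurts likeness.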
The pipeline can process input images up to a maximum resolution of 16 MP.
If a device captures at a higher resolution than this, we recommend adding a resizing step before sending the image to the API; otherwise it will be rejected.
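A minimal sketch of such a resizing step, assuming the 16 MP cap described above. It computes only the target dimensions; the actual resampling would be done with your imaging library of choice (in a way that preserves EXIF data):

```python
import math

MAX_PIXELS = 16_000_000  # assumed 16 MP pipeline cap from the guidance above

def fit_within_max(width: int, height: int, max_pixels: int = MAX_PIXELS) -> tuple[int, int]:
    """Return dimensions scaled down (if needed) so width * height <= max_pixels,
    preserving aspect ratio. Images already under the cap are returned unchanged."""
    pixels = width * height
    if pixels <= max_pixels:
        return width, height
    scale = math.sqrt(max_pixels / pixels)
    return int(width * scale), int(height * scale)

# A 48 MP capture (8000 x 6000) gets scaled down below the cap:
w, h = fit_within_max(8000, 6000)
print(w, h, w * h <= MAX_PIXELS)
```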
As an experimental feature, you can optionally send a depth image acquired simultaneously with your photo.
This depth image should be in PNG format. Provided that our pipeline is able to parse the depth image data, this can significantly improve likeness.
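As a cheap client-side sanity check before attaching a depth image to the request, you can verify the PNG file signature (the fixed first 8 bytes of every valid PNG file). `looks_like_png` is an illustrative helper, not part of the Didimo API:

```python
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # fixed first 8 bytes of every valid PNG

def looks_like_png(data: bytes) -> bool:
    """Quick check that a depth image buffer really is PNG-encoded
    before attaching it to the generation request."""
    return data[:8] == PNG_SIGNATURE

print(looks_like_png(PNG_SIGNATURE + b"..."))       # True
print(looks_like_png(b"\xff\xd8\xff\xe0JPEGdata"))  # False (JPEG magic bytes)
```

This only checks the container format, not whether the depth data itself is parseable, but it catches the common mistake of sending a JPEG depth map.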
You can read more about the API for generating a didimo here.
Mobile Device Image Capture
Mobile devices are ideal for capturing a selfie.
Mobile Device Depth Image Capture
Many mobile devices now have depth cameras, and including a depth image really makes a difference to likeness.
ARKit and similar depth tools are great for building an in-app user experience that validates the input by assessing facial expression, facial orientation, facial position, scene illumination level, and whether the eyes are open or closed. We highly recommend doing this to keep the quality of likeness as high as possible.