Meta’s prototype photorealistic Codec Avatars now support changeable hairstyles, modeling the head and hair separately.
For around a decade now, Meta has been researching and developing the technology it calls Codec Avatars, photorealistic digital representations of people driven in real time by the face and eye tracking of VR headsets. In our experience, the highest-quality prototype achieves the remarkable feat of crossing the uncanny valley.
The goal of Codec Avatars is to deliver social presence, the subconscious feeling that you're really with another person despite them not physically being there. No shipping technology today can do this. Video calls don't even come close.
In this interview, the avatars were likely being decoded and rendered by a high-end PC, after each participant underwent a lengthy scan in a multi-camera array.
To eventually ship Codec Avatars, Meta has been working on increasing the system's realism and flexibility, reducing the real-time rendering requirements, and making it possible to generate them with a smartphone scan.
Generating a Codec Avatar originally required a huge custom capture array of more than 100 cameras and hundreds of lights, but last year Meta moved to using this array only to train a 'universal model'. Once that model is trained, new Codec Avatars can be generated from a selfie video in which you rotate your head. However, for the full-quality Codec Avatars, this capture takes around an hour to be processed by a high-end server GPU.
A Universal Relightable Gaussian Codec Avatar generated from a phone scan, rendered in real time on PC VR last year.
While Meta had shown off lower-quality Codec Avatars generated from a smartphone scan as early as 2022, last year's work brought this advantage to the higher-quality Codec Avatars by moving to a Gaussian splatting approach.
In recent years, Gaussian splatting has done for realistic volumetric rendering what large language models (LLMs) did for chatbots, propelling the technology from an expensive niche to shipping products like Varjo Teleport and Niantic's Scaniverse.
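For readers unfamiliar with the technique: a Gaussian splat scene is a large set of soft, semi-transparent Gaussian blobs, each with a position, size, color, and opacity, which are sorted by depth and alpha-composited onto the image. The following is a minimal toy sketch of that compositing idea in 2D (not Meta's or any shipping renderer; all names and values here are illustrative):

```python
import numpy as np

# Toy 2D "splats": each has a screen position, per-axis scale, color, opacity,
# and a depth used to sort them front to back before compositing.
rng = np.random.default_rng(0)
H = W = 64
num_splats = 200

centers = rng.uniform(0, W, size=(num_splats, 2))
scales = rng.uniform(1.0, 4.0, size=(num_splats, 2))
colors = rng.uniform(0, 1, size=(num_splats, 3))
opacities = rng.uniform(0.2, 0.8, size=num_splats)
depths = rng.uniform(0, 1, size=num_splats)

ys, xs = np.mgrid[0:H, 0:W]
image = np.zeros((H, W, 3))
transmittance = np.ones((H, W))  # how much light still passes through each pixel

# Front-to-back alpha compositing: each splat contributes its color weighted
# by its Gaussian falloff, its opacity, and the remaining transmittance.
for i in np.argsort(depths):
    dx = (xs - centers[i, 0]) / scales[i, 0]
    dy = (ys - centers[i, 1]) / scales[i, 1]
    falloff = np.exp(-0.5 * (dx**2 + dy**2))
    alpha = opacities[i] * falloff
    image += (transmittance * alpha)[..., None] * colors[i]
    transmittance *= 1.0 - alpha

print(image.shape)  # (64, 64, 3)
```

Real 3D Gaussian splatting additionally projects anisotropic 3D covariances into screen space and runs this compositing per tile on the GPU, which is what makes it fast enough for real-time rendering on consumer hardware.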
These newer Gaussian Codec Avatars are also inherently relightable, making them far more suitable for practical use in VR and mixed reality.
Apple is also using Gaussian splatting for its new Personas in visionOS 26, which aren't quite at the same quality as Meta's research, but are actually available in a shipping product.
Meta's latest research, presented in a paper called "HairCUP: Hair Compositional Universal Prior for 3D Gaussian Avatars", builds on last year's Gaussian Codec Avatars work by adding a compositional split between the head and hair.
In a shipping system, this could let the user swap out their hairstyle from a library of options, or from their own prior scans, without needing to perform a new face scan.
By its nature, the new approach also improves the seam between the hair and face, such as at the fringe, and could better support hats in the future.
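The practical consequence of the compositional split can be sketched as follows. Assuming the avatar is represented by separate latent codes for the face and the hair that are decoded into one combined set of Gaussians (the structure and names below are hypothetical, not taken from the paper), swapping hairstyles means replacing only the hair code:

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    face_code: list  # latent code for the face/head component
    hair_code: list  # latent code for the hairstyle component

def decode(avatar: Avatar) -> list:
    # A real decoder would output 3D Gaussian parameters for each component;
    # here we just tag which component each entry came from, then merge them
    # into one list to be composited at render time.
    face_gaussians = [("face", f) for f in avatar.face_code]
    hair_gaussians = [("hair", h) for h in avatar.hair_code]
    return face_gaussians + hair_gaussians

# Swapping hairstyles reuses the existing face code; no new face scan needed.
me = Avatar(face_code=[0.1, 0.2], hair_code=[0.9])
new_style = Avatar(face_code=me.face_code, hair_code=[0.3, 0.4])

print(len(decode(me)), len(decode(new_style)))  # 3 4
```

Because the two components are modeled independently, the decoder can also learn a cleaner boundary where hair meets skin, which is where the improved fringe seam comes from.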
Meta is closer than ever to shipping Codec Avatars as an actual feature of its Horizon OS headsets. However, there are still a number of roadblocks.
For starters, neither Quest 3 nor Quest 3S has eye tracking or face tracking, and there's no indication that Meta plans to imminently release another headset with these capabilities. Quest Pro had both, but was discontinued at the start of this year.
The other challenge is in the rendering requirements. While Meta showed off lower-quality Codec Avatars rendered on a Quest 2 years ago, the higher-quality versions have to date been rendered by PC graphics cards. Apple Vision Pro proves that it's possible to render Gaussian avatars on-device, but Quest 3 is slightly less powerful, and Meta lacks Apple's full end-to-end control of the hardware and software stack.
One possibility is that Meta launches a rudimentary flatscreen version of Codec Avatars first, letting you join WhatsApp and Messenger video calls with a more realistic likeness than your Meta Avatar.
Meta Connect 2025 takes place from September 17, and the company could share more about its progress on Codec Avatars then.