Introduction
Besides realistic 3D modeling and manipulation, my major technical research is about generating multiple avatars of the people who experience the work, all sharing the same movements through real-time motion capture from a DSLR webcam.
This research lets the audience experience an infinity of fractal selves moving as they move in real time.
Latest progress
This is a demo of the multiple real-time avatars in my recent research. I used the URP render pipeline, which can transfer the real-time mesh changes of the prefab directly through the material and draw them on the screen.
It is fast enough for real-time motion capture, even with over 3,000 avatars. However, only the avatars close enough to the camera work, and the material can't be customized. My approach is to attach a SkinnedMeshRenderer component to one avatar, pass its mesh to the other avatars, and render them with plain MeshRenderers. It only works if I use the Universal Render Pipeline's Lit shader, and when it does work, it works even without any scripts. I really wonder why, and I plan to figure it out in my later development.
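As a rough illustration of the idea (not my exact project code), here is a minimal C# sketch of the mesh-sharing approach: one SkinnedMeshRenderer bakes its deformed mesh into a shared Mesh each frame, and any number of clones that only carry a MeshFilter and MeshRenderer display it. The class name and the clones array are placeholders of mine.

```csharp
using UnityEngine;

// Minimal sketch: bake the deformed mesh of one animated source avatar
// each frame and share it with clones that only have a MeshFilter/MeshRenderer.
public class SharedSkinBaker : MonoBehaviour
{
    public SkinnedMeshRenderer source;   // the one motion-captured avatar
    public MeshFilter[] clones;          // the static copies to drive

    Mesh baked;

    void Awake()
    {
        baked = new Mesh();
        foreach (var clone in clones)
            clone.sharedMesh = baked;    // all clones reference the same mesh
    }

    void LateUpdate()
    {
        // Snapshot the current pose into the shared mesh; every clone
        // referencing it updates automatically.
        source.BakeMesh(baked);
    }
}
```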
 

Origin of the research
I came across the limits of CPU and GPU capacity when I tried to make multiple copies of the same character follow the movements of one character. I tried many approaches and, during this research, learned about shaders step by step through the website Catlikecoding.
ECS and the Job System
I learned a little about the HDRP render pipeline and its Shader Graph, with which I could make a better effect, but it caused flickering problems.

I learned about ECS and the Job System, which help accelerate the CPU-side work, but animation doesn't work in them.
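To show what the Job System buys here, this is a minimal, hypothetical sketch of a Burst-compiled parallel job that computes positions for many avatar copies on worker threads; the job name and grid layout are just illustrations.

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Minimal sketch: compute positions for many avatar copies in parallel
// on worker threads instead of the main thread.
[BurstCompile]
struct OffsetCopiesJob : IJobParallelFor
{
    [ReadOnly] public float3 sourcePosition;   // the captured avatar's position
    public NativeArray<float3> copyPositions;  // one slot per copy

    public void Execute(int i)
    {
        // Every copy mirrors the source, spread out on a simple grid.
        copyPositions[i] = sourcePosition + new float3(i % 50, 0, i / 50);
    }
}
```

Scheduling it with `new OffsetCopiesJob { ... }.Schedule(copyPositions.Length, 64).Complete();` spreads the loop across cores, but it only moves positions around; skinned animation still has to be handled separately, which matches the limitation above.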

 
Custom blend mode
In the meanwhile, I found out that a custom blend mode can create the right effect I need.
I learned how to use the built-in shaders and the Amplify Shader Editor. However, the light addition adds the back of the object too if I set it to transparent.
So I need to rewrite the GrabPass and grab the screen a little bit further than the object.
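For reference, here is a minimal sketch of switching a material to additive blending from C# at runtime. It assumes a shader that exposes _SrcBlend, _DstBlend, and _ZWrite properties, as the built-in Standard and URP Lit shaders do; the GrabPass rewrite itself lives in shader code and is not shown here.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: configure a material for additive blending at runtime.
public class AdditiveBlend : MonoBehaviour
{
    void Start()
    {
        var mat = GetComponent<Renderer>().material;
        mat.SetOverrideTag("RenderType", "Transparent");
        mat.SetInt("_SrcBlend", (int)BlendMode.One);
        mat.SetInt("_DstBlend", (int)BlendMode.One);  // One + One = additive
        mat.SetInt("_ZWrite", 0);                     // transparent pass, no depth write
        mat.renderQueue = (int)RenderQueue.Transparent;
    }
}
```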
Animation instancing
With instancing I can reduce draw calls and lighten the CPU load, but it seems it doesn't work for animation. I looked it up further and found that there are animation instancing and compute shaders.
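As a point of comparison, here is a minimal sketch of plain GPU instancing with Graphics.DrawMeshInstanced. It draws many copies of one static mesh in a single call, which also shows why it does not cover skinned animation by itself: the mesh is fixed.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of GPU instancing: draw many copies of one static mesh
// in one instanced call, reducing draw calls.
public class InstancedCopies : MonoBehaviour
{
    public Mesh mesh;
    public Material material;   // must have "Enable GPU Instancing" ticked

    readonly List<Matrix4x4> matrices = new List<Matrix4x4>();

    void Start()
    {
        for (int i = 0; i < 500; i++)   // DrawMeshInstanced caps at 1023 per call
            matrices.Add(Matrix4x4.Translate(new Vector3(i % 25, 0, i / 25)));
    }

    void Update()
    {
        Graphics.DrawMeshInstanced(mesh, 0, material, matrices);
    }
}
```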
 
I learned that baking the animation information into a texture and reading it back in the shader would help accelerate things on the CPU side. But this requires rendering the texture first, and I need to do it in real time.
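The baking step could look roughly like this hypothetical C# sketch, which samples a legacy-marked clip, bakes each frame's vertex positions into one texture row, and leaves the shader-side lookup out; all names are placeholders.

```csharp
using UnityEngine;

// Minimal sketch of baking vertex positions into a texture:
// one row per animation frame, one texel per vertex, position in RGB.
public static class AnimationTextureBaker
{
    public static Texture2D Bake(SkinnedMeshRenderer skin, AnimationClip clip,
                                 int frameCount)
    {
        var mesh = new Mesh();
        skin.BakeMesh(mesh);

        var tex = new Texture2D(mesh.vertexCount, frameCount,
                                TextureFormat.RGBAHalf, false);

        for (int f = 0; f < frameCount; f++)
        {
            // SampleAnimation requires the clip to be marked Legacy.
            clip.SampleAnimation(skin.transform.root.gameObject,
                                 f / (float)frameCount * clip.length);
            skin.BakeMesh(mesh);
            Vector3[] verts = mesh.vertices;
            for (int v = 0; v < verts.Length; v++)
                tex.SetPixel(v, f, new Color(verts[v].x, verts[v].y, verts[v].z, 1f));
        }
        tex.Apply();
        return tex;
    }
}
```

This precomputation is exactly what clashes with my requirement: the texture describes one fixed clip, while my avatars have to follow live captured motion.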
Barracuda real-time motion capture
In the meanwhile, in order to motion-capture the people who experience my work in real time, I found that Unity Barracuda motion capture might work well for it. So I began to learn Python, TensorFlow, and Unity's animation system in order to customize it. I am still not capable of training a model myself so far, but I can understand what the scripts are about and am willing to study them more.
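The Barracuda side of such a pipeline could look roughly like this sketch, which loads an imported ONNX model and runs inference on a camera frame; the class name, model asset, and output interpretation are placeholders, not my final setup.

```csharp
using UnityEngine;
using Unity.Barracuda;

// Minimal sketch: run a pose-estimation network with Barracuda.
// A real capture setup feeds webcam frames in and decodes joints out.
public class PoseInference : MonoBehaviour
{
    public NNModel modelAsset;   // ONNX model imported into Unity
    IWorker worker;

    void Start()
    {
        var model = ModelLoader.Load(modelAsset);
        worker = WorkerFactory.CreateWorker(WorkerFactory.Type.ComputePrecompiled, model);
    }

    public Tensor Estimate(Texture2D frame)
    {
        // Convert the camera frame to a 3-channel (RGB) tensor.
        using (var input = new Tensor(frame, 3))
        {
            worker.Execute(input);
            return worker.PeekOutput();  // joint heatmaps / coordinates
        }
    }

    void OnDestroy() => worker?.Dispose();
}
```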
Compute Shader
Then I found out that compute shaders can use the GPU's multithreading to do the calculation instead of doing it on the CPU. Following some tutorials, I began trying to pass the joint positions of the avatar and use a compute shader to generate the vertex positions of the mesh.
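On the C# side, dispatching such a kernel could look like this sketch; the shader asset, the "SkinVertices" kernel name, and the buffer layout are placeholders for illustration, and the HLSL kernel itself is not shown.

```csharp
using UnityEngine;

// Minimal sketch: upload joint positions, let a compute kernel (not shown)
// generate skinned vertex positions on the GPU into a buffer a material
// could read back.
public class GpuSkinning : MonoBehaviour
{
    public ComputeShader skinShader;
    public int vertexCount = 4096;

    ComputeBuffer jointBuffer;
    ComputeBuffer vertexBuffer;
    int kernel;

    void Start()
    {
        kernel = skinShader.FindKernel("SkinVertices");
        jointBuffer = new ComputeBuffer(32, sizeof(float) * 3);        // 32 joints
        vertexBuffer = new ComputeBuffer(vertexCount, sizeof(float) * 3);
        skinShader.SetBuffer(kernel, "Joints", jointBuffer);
        skinShader.SetBuffer(kernel, "Vertices", vertexBuffer);
    }

    public void UpdatePose(Vector3[] jointPositions)
    {
        jointBuffer.SetData(jointPositions);
        // One thread per vertex, 64 threads per group (matches [numthreads(64,1,1)]).
        skinShader.Dispatch(kernel, (vertexCount + 63) / 64, 1, 1);
    }

    void OnDestroy()
    {
        jointBuffer?.Release();
        vertexBuffer?.Release();
    }
}
```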
 
But I found this approach not to be smart, because I think I can only do this once and then pass the mesh positions to the material. And that leads to the latest progress above.