How Can We Implement OpenPose/PoseNet for Blender Motion Capture?

Re-posting this as a new topic:


+1 for OpenPose
It only needs a camera and gives full-body tracking.

I think it’s a Blender task, not an Armory one:

  • Take the OpenPose data and create an action animation with an armature

The work left to do is to rig and adapt a 3D character model to the armature.
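For anyone wondering what that pipeline could look like, here’s a minimal bpy sketch that reads OpenPose’s per-frame JSON output and keyframes a few bones. Everything in it is a placeholder you’d have to adapt: the bone mapping, the scale, and especially the naive 2D-to-3D conversion (it just drops the keypoints onto a plane).

```python
# Rough sketch: drive an armature from OpenPose JSON output in Blender.
# Assumptions (not from this thread): one JSON file per frame, BODY_25
# keypoint order, an armature object named "Armature", and the bone names
# in BONE_MAP. The 2D -> 3D mapping here is naive (x/y only, depth = 0).
import json
import glob
import bpy

# Hypothetical mapping from OpenPose keypoint index to pose bone name.
BONE_MAP = {4: "hand.R", 7: "hand.L", 11: "foot.R", 14: "foot.L"}
SCALE = 0.01  # pixels -> Blender units, tune for your footage

arm = bpy.data.objects["Armature"]

for frame, path in enumerate(sorted(glob.glob("/tmp/openpose/*_keypoints.json"))):
    with open(path) as f:
        data = json.load(f)
    if not data["people"]:
        continue  # no person detected in this frame
    kp = data["people"][0]["pose_keypoints_2d"]  # flat list: [x, y, conf, ...]
    for idx, bone_name in BONE_MAP.items():
        x, y, conf = kp[3 * idx : 3 * idx + 3]
        if conf < 0.3:
            continue  # skip low-confidence keypoints
        bone = arm.pose.bones[bone_name]
        bone.location = (x * SCALE, 0.0, -y * SCALE)  # image y points down
        bone.keyframe_insert(data_path="location", frame=frame)
```

In practice you’d want to drive IK targets rather than set bone locations directly, but that’s the rough shape of it.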

Another animation feature to consider in Armory is animation retargeting, like UE4 can do; it’s very useful.

Animation retargeting is a must-have; it would really help those who can’t animate themselves.
About OpenPose, I’m not a great coder, so I can’t really help much, but if somebody is working on a Blender plugin, then I think we should wait. I’ve never gotten my hands on machine learning.

What is animation retargeting? YouTube doesn’t work for me, so would somebody mind giving me a quick description of what it is? :slight_smile:

Animation retargeting is a feature in UE4 where you can use the animation of another rig on your own.
I.e., let’s say character X is already rigged and animated, while character Y is rigged but not animated; you can retarget character X’s animation onto character Y, provided both characters share the same rig structure.
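To make that concrete, here’s the crude version of the idea as a Blender Python sketch. This is just an illustration of the concept, not how UE4 actually does it; the object names are made up, and it assumes both armatures share bone names and roughly similar proportions.

```python
# Illustration only: make every bone of the target rig follow the
# same-named bone of the source rig with a Copy Rotation constraint.
import bpy

source = bpy.data.objects["rig_x"]  # hypothetical: the animated character
target = bpy.data.objects["rig_y"]  # hypothetical: the un-animated character

for bone in target.pose.bones:
    if bone.name not in source.pose.bones:
        continue  # no matching bone on the source rig
    con = bone.constraints.new("COPY_ROTATION")
    con.target = source
    con.subtarget = bone.name  # bone on the source armature to copy from
```

Real retargeting systems also remap bone names and compensate for different rest poses and limb lengths, which is the hard part this sketch skips.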
WHY THE HELL doesn’t YouTube work for you?

OK, that is great. I need that for my game for sure. :slightly_smiling_face:

About YouTube, I have strict network restrictions where I am. :lock: :grinning:

Just use a VPN bruh :man_facepalming:

So OpenPose apparently requires 4 cameras to do 3D pose generation, and that is not very practical for a home-grown solution. Better than expensive equipment, but still not the best solution possible.

@Didier I think you posted a link at one point about something being able to mimic motions found in YouTube videos. I wonder if we could use Armory with animations and models that look enough like real people to train a network to recognize a 3D pose from only a 2D video.

So we would essentially have a bunch of procedurally generated animations with lots of combinations of camera angle, focal length, lighting, environment, and people models, then render videos of them from Armory alongside the ground-truth 3D pose data. That data would then be fed to a neural network trained to predict a 3D pose from a 2D video.
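As a sketch of what the data-generation half could look like in bpy (all the object names, paths, and ranges here are made up, and you’d still need to aim the camera at the subject, e.g. with a Track To constraint):

```python
# Sketch of the data-generation idea: randomize camera and lighting, render
# a frame, and save the armature's world-space bone positions as ground
# truth. Everything here (object names, paths, ranges) is hypothetical.
import json
import random
import bpy

cam = bpy.data.objects["Camera"]
sun = bpy.data.objects["Sun"]
arm = bpy.data.objects["Armature"]

for i in range(1000):  # one (image, 3D pose) training pair per iteration
    # Random camera position, focal length, and light intensity.
    cam.location = (random.uniform(-5, 5), random.uniform(-8, -4), random.uniform(1, 3))
    cam.data.lens = random.uniform(24, 85)      # focal length in mm
    sun.data.energy = random.uniform(1, 10)

    bpy.context.scene.render.filepath = f"/tmp/dataset/img_{i:05d}.png"
    bpy.ops.render.render(write_still=True)

    # Ground-truth 3D pose: world-space head position of every bone.
    mw = arm.matrix_world
    pose = {b.name: list(mw @ b.head) for b in arm.pose.bones}
    with open(f"/tmp/dataset/pose_{i:05d}.json", "w") as f:
        json.dump(pose, f)
```

You’d also cycle through different animations, environments, and character models per iteration; the point is just that Blender can emit perfectly labeled 2D-image/3D-pose pairs for free, which is exactly what the network would need.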

Any thoughts? :slight_smile: