Armory ignoring the Pose Mode?

Do you know if there is some link between what we set up in Pose Mode and then do in that mode (like registering animations/actions with IK on armatures), and Armory?

For those discovering it: the animation system in Armory is composed of the animation tools and editors provided in Blender (see Dope Sheet, Action Editor, Timeline, Pose Mode, Parenting, Bone Constraints, …), which mix the skeletal deformation of “Meshes” with the morphological deformation of an “Armature” composed of “Bones” to allow complex animation.

This system can be used by mixing animation sequences, which creates customized special movements such as those in the Armory animation_xxx examples here.

The question here is being able to directly control bone transformations through logic nodes such as “Bone IK”, or by creating “Traits” or “Events” that define which animation an object in the scene, such as a character or robot, should use in a given situation.

This relates to the capabilities of the public function solveIK(effector:TObj, goal:Vec4, precission = 0.1, maxIterations = 6) inside the BoneAnimation Haxe file.
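For context, a signature like this typically drives an iterative solver: the loop repeats until the end effector is within `precision` of the goal or `maxIterations` is exhausted. Below is a minimal, self-contained sketch of that pattern using CCD (Cyclic Coordinate Descent) in 2D. It is written in Python purely for illustration and is not Armory's actual implementation:

```python
import math

def rotate(p, center, angle):
    """Rotate point p around center by angle (2D)."""
    s, c = math.sin(angle), math.cos(angle)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (center[0] + c * dx - s * dy, center[1] + s * dx + c * dy)

def solve_ik_ccd(joints, goal, precision=0.1, max_iterations=6):
    """CCD: rotate each joint so the end effector swings toward the
    goal; stop once the effector is within `precision` of the goal
    or `max_iterations` is reached."""
    joints = list(joints)
    for _ in range(max_iterations):
        # Walk from the joint nearest the effector back to the root.
        for i in range(len(joints) - 2, -1, -1):
            eff = joints[-1]
            a1 = math.atan2(eff[1] - joints[i][1], eff[0] - joints[i][0])
            a2 = math.atan2(goal[1] - joints[i][1], goal[0] - joints[i][0])
            # Rotate the whole sub-chain after joint i around joint i.
            for j in range(i + 1, len(joints)):
                joints[j] = rotate(joints[j], joints[i], a2 - a1)
        eff = joints[-1]
        if math.hypot(goal[0] - eff[0], goal[1] - eff[1]) < precision:
            break
    return joints
```

Lowering `precision` or raising `maxIterations` trades accuracy against per-frame cost, which is exactly the knob that signature exposes.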

Having Blender’s Pose Mode tool is essential to quickly test IK (Inverse Kinematics) and constraints on an armature, for example. Finding the same behaviour in Armory afterwards when you move a bone, and particularly a control bone, also seems essential, and that’s the question :slight_smile: : to be or not to be like in Pose Mode.

So, what exactly is your question? Are you wondering if Armory is going to pull the bone constraints like IK from Blender? It doesn’t look like Armory currently exports the bone constraints from Blender. That would definitely be something that would make sense to do.

If Armory had “Bone Traits” or something similar, and then implemented a corresponding Bone Trait for the majority of Blender bone constraints, then it could work similarly to how physics constraints work right now.


@zicklag I share @MagicLord’s point of view here (How to make a simple robot gripper?) and his answer to my question: we don’t need an extra step in Blender’s Pose Mode if there is a real-time possibility in Armory.

For now (though this can change), what often seems unclear to me in Armory is the boundary between what should be done in Blender and what should not, in the sense of what is reusable in Armory.

Yeah, I think that is somewhat of a case-by-case basis. I mean, you want a whole lot of stuff to just translate straight from Blender, and a lot of it already does, but you have to balance what makes sense to do. It can’t always work the same, because it is a completely separate implementation. I don’t know how easy it would be to make an exact copy of Blender’s IK and rigging behavior, but I think a best effort to make it like Blender is a good approach.

I think that, even if it weren’t exactly like Blender, bone constraint functionality similar to Blender’s is definitely a feature to work toward.


What is great about Armory is that we are never really stuck on a problem, because there is always the possibility to create our own Haxe code on top of the existing Armory source code, as is the case for example with solveIK, or to modify/create logic nodes (cf. Bone IK).


What I hope for is a suite of logic nodes, without heavy resource requirements, allowing us to constrain the angle and direction in which joints bend, and allowing systems of bones with multiple targets for different bone chains.

I suppose we already have all the necessary ingredients in Armory: we have access to the position of each joint/bone, we have the lengths between joints, and we can set a target goal for the end effector, or for multiple end effectors.
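Indeed, those ingredients (joint positions, segment lengths, a goal for one or more end effectors) are exactly what a solver like FABRIK consumes, and angle limits can be added by clamping between its passes. A hedged 2D sketch in Python, purely illustrative and not Armory code:

```python
import math

def fabrik(joints, lengths, goal, precision=0.01, max_iterations=20):
    """FABRIK: alternate a backward pass (snap the effector to the
    goal) and a forward pass (snap the root back to its anchor),
    re-imposing the segment lengths after every move."""
    joints = [list(j) for j in joints]
    root = list(joints[0])
    for _ in range(max_iterations):
        # Backward pass: place the effector exactly at the goal.
        joints[-1] = list(goal)
        for i in range(len(joints) - 2, -1, -1):
            t = lengths[i] / math.dist(joints[i], joints[i + 1])
            joints[i] = [joints[i + 1][k] + (joints[i][k] - joints[i + 1][k]) * t
                         for k in range(2)]
        # Forward pass: put the root back where it belongs.
        joints[0] = list(root)
        for i in range(len(joints) - 1):
            t = lengths[i] / math.dist(joints[i], joints[i + 1])
            joints[i + 1] = [joints[i][k] + (joints[i + 1][k] - joints[i][k]) * t
                             for k in range(2)]
        if math.dist(joints[-1], goal) < precision:
            break
    return joints
```

The same loop generalizes to 3D and to multiple chains by running one backward pass per end effector before the shared forward pass.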

I’m not sure it’s really performant to implement all the IK code in Haxe?
Would you also implement animation retargeting in Haxe?

Maybe you could run some tests yourself to verify whether Haxe is performant or not :pray:

Implementing all the missing features in Haxe instead of native C++ adds a layer; it’s a workaround.

This won’t impact small games, and will perhaps work well enough, but the performance impact will be visible in bigger games.
Once the game has more than 10 enemies with scripts for AI, animations, effects, level scripts for game flow, and a complex graphics level, you will see the difference.

Very interesting to know, but not very clear. Tell us more, please.

Read the post above; unlike some other users, I didn’t delete it.

It’s always better when it’s the engine developers who implement a feature (there have been some examples in Godot where what users tried to make turned into a performance disaster and took too much time, while it turned out great when the main developer made it).

I also don’t like workarounds that look bad when a feature should instead be implemented the right way.
For example, some grass-system workarounds using particles in Godot look bad and lack the right tools, yet look fantastic to those who have never used a more complete 3D engine :joy:

I really don’t like workarounds, and I just skip them :laughing:

You are perhaps right: making it in Haxe or not should have no impact, because most people won’t try to make a huge game. People making open-world or vast games will instead use proven and stable engines with the right editor features.

I don’t understand your thread. Is it about an Armory bug, or about you coding some features in Haxe?

Sure, it would be surprising to learn that big projects are underway on Armory, which is not even at v1 :sunglasses:

However, it would be clearer to have concrete performance test results from you to share with us, for example a benchmark between something in C++/UE4 and the same thing in Haxe/Armory.

It reminds me of the old doctrinal debates between Assembler vs C, then C vs C++, then C++ vs Python…

To perform an impartial test, it’s necessary to identify and rework some parts first. In the case of a Haxe/Logic Nodes/Armory test, one could rework areas like excess object allocations, CPU-intensive bottlenecks and timer nodes, and look at the C++ code generated by Haxe, etc., all of which can lead to large performance gaps.
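As an illustration of the method rather than a result: a fair micro-benchmark needs a warm-up, repeated runs, and the same workload with and without the suspect pattern (here, per-iteration allocations). A Python sketch of that shape; for a real Haxe-vs-C++ comparison the same harness would be ported to both targets:

```python
import time

def time_one(fn):
    """Time a single call of fn."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def bench(fn, warmup=3, repeats=10):
    """Best-of-N timing after a warm-up, so JIT/cache effects
    do not skew the comparison."""
    for _ in range(warmup):
        fn()
    return min(time_one(fn) for _ in range(repeats))

# Two versions of the SAME workload: one allocating per iteration,
# one reusing a preallocated buffer (the kind of "excess object
# allocation" worth reworking before comparing languages).
def with_alloc():
    total = 0.0
    for i in range(10_000):
        v = [i * 0.5, i * 0.25, i * 0.125]  # fresh list every iteration
        total += v[0] + v[1] + v[2]
    return total

buf = [0.0, 0.0, 0.0]
def with_reuse():
    total = 0.0
    for i in range(10_000):
        buf[0], buf[1], buf[2] = i * 0.5, i * 0.25, i * 0.125
        total += buf[0] + buf[1] + buf[2]
    return total
```

Only once both versions provably compute the same result does the timing difference say anything about the allocation pattern rather than about a changed workload.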

But to return to the Blender/Armory/Haxe production line, what is extremely promising with Armory is the continuity of the production line. All in one.

For example, you know as well as I do that currently exporting a mesh/skeleton between Blender and Unreal can become a nightmare.

Furthermore, the approach of current game engines is via a series of animation sequences… that’s good, but I’m not sure it’s the best approach in the long run. It looks too much like an evolution of the sprite approach of the past, and we have GPUs that could benefit from other, better approaches.
I think Armory’s chance is to be able to keep away from these legacies and rethink things from top to bottom with new, innovative and simpler techniques, more in phase with the progress of AI and the power of GPU cards to massively parallelize things.

Thus I think the Armory principle can lead to a real disruption relative to the current offerings of the majors in this sector.

Although the Blender open-source community supports this initiative, Lubos must first deliver a proper v1 on a solid base, to create an ecosystem that can attract talent around him and Armory for the future. History repeats itself.

But this leads us too far from the original question, which remains: “relations between what we set in Pose Mode in Blender and then can do in it, and Armory (in terms of reusability)?”


I agree we must create benchmark demo levels. They are useful for comparing whether performance is better or not with each new Armory update.

One interesting benchmark would be Haxe vs nodes: some demo with many sufficiently long scripts, or duplicated ones.

Armory’s production flow is the best: everything you create stays in Blender.

About animations, I like Actions in Blender; they keep animations organized, instead of having one huge animation track where you must declare start and end frames for each animation in the track.
New animation systems for Armory could be:

  • state machines
  • procedural animations
  • animation matching

But you will always need to tell the 3D engine when a character should walk or run; this is animation, whether you use pre-made animations or procedural ones.
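That point can be made concrete with a minimal animation state machine: whatever produces the clips, something still has to decide when to switch between them. A Python sketch for illustration; the state names, clip names and context keys are hypothetical placeholders, not an Armory API:

```python
class AnimState:
    """One node of a tiny animation state machine."""
    def __init__(self, clip, transitions):
        self.clip = clip                # clip to play while in this state
        self.transitions = transitions  # list of (condition, next_state)

def step(states, current, ctx):
    """Advance one tick: fire the first transition whose condition
    holds for the current gameplay context (speed, grounded, ...)."""
    for cond, nxt in states[current].transitions:
        if cond(ctx):
            return nxt
    return current

# The conditions read gameplay data: the engine still has to be told
# what "walking" and "running" mean in terms of that data.
states = {
    "idle": AnimState("Idle", [(lambda c: c["speed"] > 0.1, "walk")]),
    "walk": AnimState("Walk", [(lambda c: c["speed"] > 3.0, "run"),
                               (lambda c: c["speed"] <= 0.1, "idle")]),
    "run":  AnimState("Run",  [(lambda c: c["speed"] <= 3.0, "walk")]),
}
```

Procedural animation or animation matching would replace what each state plays, but not the decision layer itself.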

Pose Mode is only a Blender feature to create keyframe animations.
The IK solver in Blender is used to make animations in an easy way, but it’s also a Blender process, not a general library intended to be used in game engines.
Each 3D engine creates its own IK solvers; perhaps there are some open-source ones that could be included in Armory for in-game use (for example, automatic foot placement).
Unreal 4, for example, has IK and animation retargeting features implemented by its teams, and lately they implemented physics animations. Those are not Blender or Maya features, but in-game real-time solvers that must be created for the 3D engine.
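A classic example of such an in-engine solver is analytic two-bone IK, often used for foot placement: with the thigh and shin lengths known, the knee angle follows directly from the law of cosines. A hedged Python sketch of just the angle math, for illustration only:

```python
import math

def two_bone_ik(l1, l2, d):
    """Analytic two-bone IK via the law of cosines.
    l1, l2: bone lengths (e.g. thigh and shin); d: distance from the
    root joint (hip) to the target (foot). Returns (knee, hip): the
    interior knee angle and the angle between the root->target line
    and the first bone. The target distance is clamped to the
    reachable range so acos stays defined."""
    d = max(1e-6, max(abs(l1 - l2), min(l1 + l2, d)))
    knee = math.acos((l1 * l1 + l2 * l2 - d * d) / (2.0 * l1 * l2))
    hip = math.acos((l1 * l1 + d * d - l2 * l2) / (2.0 * l1 * d))
    return knee, hip
```

For equal bone lengths of 1 and a target at distance √2, the knee bends to 90° and the first bone leans 45° off the hip-to-target line; an out-of-reach target simply yields a fully extended leg.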

I moved my reply to a new topic.


@MagicLord I would like to take advantage of your interesting, detailed answer to explain a little more the idea I have for an “Armory++ game engine” that could become a real disruption compared to what currently exists in Unreal or Unity.

I can easily imagine an innovative logic system in which our logic nodes would no longer appear as the current “logic-node spaghetti”, but rather as a set of statements about “the game world”, forming a knowledge base.

Our 3D applications would then be built by querying this knowledge base, with a powerful new “Armory++ game engine” able to deduce further facts from those contained in its knowledge base.

Thus there would be no more need to tell the 3D engine when a character should walk or run.

And I think the fastest, simplest and easiest way to do it is through deep learning / neural networks. The ATRAP approach is thus a good way to train the neural network and build the necessary knowledge base.

Someone would already have made it.

This is what Unity ECS is: data separated from object processing.

You would run deep learning to bake data, creating the same AI that would otherwise have been coded.
You don’t want the long learning process happening during the player’s game.
For example, a game enemy character learning for some days while the player is playing the game,
getting stuck on obstacles, unable to attack or defend :joy:

Deep learning will be more appropriate for systems like advanced dialog, simulation of some environment, and other kinds of applications.
For fast action games, coded AI will be faster and more efficient; it doesn’t need data, it knows when it must perform some action and which action.

This is what we call AI.
You still need to declare some data for the AI to know when it must run or walk.
For example :

and you need to declare possible actions :

  • run animations
  • walk animations
  • navmesh to be able to navigate
  • raycasts to detect moving objects around

You will need a process that uses that data and launches walking or running.

This is not a revolution; you are simply avoiding the AI coding and letting some deep learning process bake data to create the same AI as a coded one.
But you’ll have the same amount of work: you will need to declare the same events or conditions (hit by player, obstacle in front, defend after taking some damage).

Level data is another important feature. For example, the game Warframe collected where players wall jump and run, and used that data for AI, while some game engines have tools to bake that level data.

A game, or a custom engine version? :sweat_smile:
Is this a specific game you want to make?

Armory is alpha, without good physics collisions and behaviours, and without navigation working well; a top layer like deep learning will not work because the base is not working.

Sorry @MagicLord, but we’re not talking about the same AI. Rather, we are discussing here a system that is able to acquire and apply knowledge, and thus has a capacity for logical reasoning. It must be linked to understanding, resolution, planning and many other concepts and tasks.

Thus Unity ECS has nothing to do with AI; it is rather a software architecture model, comparable to MVC (model-view-controller, an architecture or design method for software development that separates the data model, the user interface and the control logic).

The same goes for Unreal: what is actually called AI there also seems to be the wrong track for making comparisons.

But maybe things have changed recently, and maybe you could shed some light on that?

… top layer like deep learning will not work because the base is not working.

It’s just a matter of time, and of thinking big while doing little things every second with our little eyes and little hands, like Lubos is currently doing. :sunglasses:

Reader, for the relaxation of your intelligence :smiley:

@Didier I really like that you think as big as you do! I have thought of drastic uses of machine learning before and how interesting it could be to try out. I love brainstorming and coming up with crazy, different, and potentially game changing software and techniques. :sunglasses:

For Armory, right now, as someone who is planning on making a game in Armory, I am probably going to look more into the more straightforward techniques for game programming, but that doesn’t mean that I think your AI system is a bad idea in any respect. I think that if you have the time and ability to work on this kind of stuff, it would be great to try out.

To @MagicLord’s point, Armory does need some work on its base, but that is stuff that I am going to be working on as I go. If you have a handle on how to implement AI, then you can work on that while Lubos and others work on what they are good at for the core of Armory. We’ll see where my time goes, but I also might want to work with you on a smarter animation system sometime. :smile:

Speaking of animation, I had what I thought was a very interesting idea earlier: imagine what it could mean to have “Bone Shaders”. I’m not exactly sure what that would look like, but just as vertex shaders run on the GPU and calculate massive amounts of transformations on vertices, bone shaders would run on the GPU to calculate the transformations of bones. You could potentially use a node-based shader builder to create constraints for bones. Procedural, AI, and keyframe animation would also have to be worked into the bone shaders somehow. The potential of such a system could be great, and because it is implemented in shaders, it would be GPU-accelerated for ultra-high performance.
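The property that would make such “bone shaders” GPU-friendly is that each bone’s constraint is a pure function of that bone’s inputs, so all bones can be evaluated in parallel, exactly like a vertex shader over vertices. An illustrative CPU sketch of that shape, in Python, with `map` standing in for a GPU dispatch; this is entirely hypothetical and not an existing Armory feature:

```python
def limit_rotation(bone):
    """A 'bone shader': a pure function from one bone's state to its
    constrained state. No shared mutable data is touched, so every
    bone could in principle be processed by its own GPU thread."""
    lo, hi = bone["limits"]
    return {**bone, "angle": max(lo, min(hi, bone["angle"]))}

bones = [
    {"name": "knee.L", "angle": 2.9,  "limits": (0.0, 2.6)},
    {"name": "knee.R", "angle": 1.2,  "limits": (0.0, 2.6)},
    {"name": "spine",  "angle": -0.9, "limits": (-0.5, 0.5)},
]

# "Dispatch" the shader over all bones at once; inputs stay untouched.
constrained = list(map(limit_rotation, bones))
```

A node-based builder would then compose several such pure per-bone functions (limits, IK pulls, procedural offsets) into one kernel, just as shader graphs compose per-vertex operations today.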


It’s exactly what I was saying: action games need instant, parameterized AI, while deep learning is about learning. That said, deep-learning data can be used by an already-made AI.
Well… honestly I’m not interested in deep learning; I need good AI working as expected instead, which is what I create in Unreal 4 with the behaviour tree and environment query features.
I wish you good luck with deep learning in Armory; perhaps some people will need it for some specific usage.

Unreal 4 is very well rounded for AI.

This already exists in most 3D engines as GPU usage for animations.
Constraints calculated on the GPU could be good for ragdolls and IK.
Or physics animations:

There is no problem bringing new features to Armory, that’s cool.
But for me, a solid base is a lot more important :sweat_smile:

If I may say so, and also so that novice readers do not get lost and can better understand the issues: reading all this can create complete confusion.

First of all, a deep neural network model is designed specifically for the problem area and the available data. It is trained, as in ATRAP, for tens of hours to a few days.

In a second step, it is deployed in a production environment where it receives a continuous flow of input data and executes inference in real time (especially when it comes to driving a car…), giving a result directly used by the system.

For the case in question here, a game engine, one training could cover the problem area of the tools used for the design and implementation of the game; another could cover the problem area of game scenarios; and so on. This training could be done by the game developer himself, by a community of Armory users, or as a SaaS.

Furthermore, real time is a cutting-edge subject for Nvidia, Intel, … with GPUs/CPUs more and more specialized for this task, and with software techniques like pruning or compression inside the neural network.

Thus, for example, it would become natural to apply deep learning to manage the armature/bones of a character’s hands. A neural network with a standard architecture would work remarkably well when applied to this problem. Achieving the same thing by hand is currently a nightmare for animators, as the position of a finger can be influenced by the environment (temperature, sounds, speed, wind, …) or by psychology, culture, age, the behaviour of surrounding characters, … and our brain is particularly well trained to detect things going wrong in an animation during an action, such as hand/finger positions influenced by parameters that sometimes belong to the domain of intuition or the irrational.