AI, Python and Armory3D/Blender


Yes, it could simplify a lot of work.

The role of the Neural Network is to replace the need to hand-code specific algorithms for the kinds of behaviors you list. All you need is to let your NN play and learn from its losses and wins.

For example, in the tanks game, each action either earns a reward or not, and the NN learns which actions yield the best reward using only the picture taken by a single camera in the scene.
The input could also come from other sensors; in the case of the robot arm, for example, I actually use joint angles/quaternions.
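To make the try/reward loop concrete, here is a minimal sketch in Python. All names are illustrative, not the author's actual code: the "camera image" is just a flattened pixel vector, and a single linear layer stands in for the NN. The point is only the cycle of observe → act → get reward → update.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PIXELS = 16          # pretend 4x4 camera image, flattened
N_ACTIONS = 4          # e.g. forward, back, turn left, turn right
weights = np.zeros((N_PIXELS, N_ACTIONS))  # toy stand-in for the NN
alpha = 0.1            # learning rate

def choose_action(image, epsilon):
    """Epsilon-greedy: sometimes explore randomly, else pick the best-scored action."""
    if rng.random() < epsilon:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(image @ weights))

def learn(image, action, reward):
    """Nudge the score of the taken action toward the observed reward."""
    error = reward - image @ weights[:, action]
    weights[:, action] += alpha * error * image

# One fake step: in the real game, the engine would render the camera
# image and decide the reward (e.g. 1.0 for a hit on the other tank).
image = rng.random(N_PIXELS)
action = choose_action(image, epsilon=0.5)
learn(image, action, reward=1.0)
```

A real setup would use a deep network and many episodes, but the reward-driven update loop is the same shape.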

What is also interesting is discovering the NN's capacity to come up with new scenarios that no one envisaged before.
Something else I observed during training with the tanks is that the NN learns quicker if you help it win, for example by moving the other tank/target to a spot where it can get a hit. It doesn't matter to the NN whether you moved the target or not. I think this hints at what we will discover with NNs inside games in the future: designing a good training regime could itself become a rewarding part of the game experience. "How to win" could become "how to better train my NN".


Specific behaviour exists to put rules on the game AI, so the player knows how to play and how to take advantage of the AI's flaws.

Some people once made a military simulation that was so realistic the player lost most of the time; the game became frustrating, and therefore a bad game.
It's easy to make a game where the AI always finds the player and never misses; that's a game people don't play, lol.

But I think NNs can be used in a way where the difficulty can be tweaked.
Well, I hope you'll make a good AI demo game we can play, to see whether the AI is good enough.
Something like the Metal Gear Solid game AI could be interesting.


I also share your opinion that a good game isn't made by a super AI that performs too well. I think too that it's important, in those examples of shooting games with "behaviour trees" and "goal oriented action planning", to keep a good feeling of combat.

I propose using NNs for another type of gameplay, more in the spirit of Age of Empires, perhaps something like World of Warships, or games where you try to accomplish complex mission tasks.

My proposal lies elsewhere. Picture a chess or football game where you don't move your pieces with the mouse; instead, each piece on the board/field is equipped with an NN and decides by itself where to move. The interesting part of the game would be obtaining the best training for the NN: you are in the position of the manager/teacher of your team.
It could also be, for example, a game where you engineer increasingly complex ROVs by training their NNs for increasingly complex mission tasks, inspired by existing underwater robot competitions (https://www.marinetech.org/rov-competition-2/) or by escape rooms.


Info about robot arm kinematics progress :slight_smile: : below is a short gif giving an idea of the robot during its training (**see explanation at the bottom).

Random joint angles are sent as actions to the robot arm at the beginning of training; then, as training progresses, values from the NN replace those random values. Thus, during training, we see the robot arm find the right arm position to reach the given target more and more often.
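The "random at first, NN later" schedule above can be sketched as a decaying exploration probability. This is only an assumed implementation (names, decay rate, and link count are made up), not the author's actual training code:

```python
import numpy as np

rng = np.random.default_rng(1)

def pick_joint_angles(nn_prediction, step, decay=0.001):
    """Return random joint angles early in training, the NN's angles later.

    The probability of sending a random action decays exponentially
    with the training step, so exploration fades out smoothly.
    """
    p_random = np.exp(-decay * step)
    if rng.random() < p_random:
        return rng.uniform(-np.pi, np.pi, size=nn_prediction.shape)
    return nn_prediction

nn_angles = np.array([0.1, -0.4, 0.8])             # stand-in for the NN output
early = pick_joint_angles(nn_angles, step=0)       # always random at step 0
late = pick_joint_angles(nn_angles, step=100_000)  # effectively always the NN's
```

Any annealing schedule (linear, exponential, step-wise) gives the same qualitative behavior: the arm flails at first, then converges on the NN's proposals.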


** (In a forward kinematics problem, the location in space of the arm's final end cap (the effector), i.e. the position and orientation of the entire arm, is determined from the joint angles. Given a position and orientation of the effector that reaches a goal, the inverse kinematics problem is to find the joint values that let the arm reach that location. There are various ways to solve the inverse kinematics problem, and in Armory you have the Forward and IK nodes at your disposal, but I invite you to test them and verify that they're not perfect. NNs are capable of representing those non-linear relationships between input and output data. This was already demonstrated with the tanks in Armory/Atrap, and here the NN learns, through a try/reward mechanism, the link between Cartesian space and joint space that the inverse kinematics problem requires.)
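For readers new to the terms, here is forward kinematics for a simple 2-link planar arm, a toy stand-in for the robot above (the link lengths are made up). Given the joint angles, the effector position follows directly from trigonometry; it's the inverse direction, angles from a desired position, that is hard and that the NN is trained to approximate.

```python
import numpy as np

LEN1, LEN2 = 1.0, 0.8   # link lengths (assumed values)

def forward_kinematics(theta1, theta2):
    """Effector (x, y) position from the two joint angles (radians)."""
    x = LEN1 * np.cos(theta1) + LEN2 * np.cos(theta1 + theta2)
    y = LEN1 * np.sin(theta1) + LEN2 * np.sin(theta1 + theta2)
    return x, y

# With both joints at 0 the arm lies fully extended along the x-axis:
x, y = forward_kinematics(0.0, 0.0)   # x = 1.8, y = 0.0
```

Inverting this mapping is already ambiguous for two links (elbow-up vs elbow-down solutions), which is one reason learned approaches are interesting for more complex arms.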


Always glad to see your progress. Keep up the good work @Didier! :smiley: