AI, Python and Armory 3D/Blender

When you finish your nodes, can you release them for us (non-coders)?

@Chris.k Yes, the objective is for it to be usable by non-coders too, mainly graphic artists, designers (games, systems), automation engineers …
If it’s successful, I hope the combination of the NN (Deep Learning) and RL (Reinforcement Learning) = DRLA will give us an innovative, generalizable way, for many domains, to learn the best actions to take during a game, with a robot, a machine, the IoT, … The space of actions/states that can be obtained in a 3D environment like Armory is so huge that I am impatient to finish the logic nodes and see what you’ll imagine doing with them :slight_smile:
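To give non-coders a concrete idea of the mechanism behind it, here is a tiny, generic sketch of the kind of update rule reinforcement learning uses to learn which action is best in a given state (tabular Q-learning with made-up action names; this is only an illustration, not ATRAP code):

import random

# Generic tabular Q-learning sketch -- illustrative only, not the DRLA code.
actions = ["forward", "turn_left", "turn_right", "shoot"]   # hypothetical actions
Q = {}                                                      # state -> {action: estimated value}
alpha, gamma, epsilon = 0.1, 0.95, 0.1                      # learning rate, discount, exploration rate

def choose_action(state):
    # Sometimes explore at random, otherwise pick the best known action.
    if state not in Q or random.random() < epsilon:
        return random.choice(actions)
    return max(Q[state], key=Q[state].get)

def learn(state, action, reward, next_state):
    Q.setdefault(state, {a: 0.0 for a in actions})
    Q.setdefault(next_state, {a: 0.0 for a in actions})
    best_next = max(Q[next_state].values())
    # Move the estimate toward the observed reward plus the discounted future value.
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])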

1 Like

The development of the environment for the DRLA took longer than I expected, due to some random behaviors in Armory that @Lubos is dealing with in his bug stack. Thanks again to him!

Here is a small video to show you what I mean by tuning this game environment in Armory, which will soon be ready to start the DRLA training :slight_smile: (Neural Network moving-action management, game score management, environment capture)
https://1drv.ms/v/s!AkfR54v60DIDkIkIh4xREd0yAcypIw
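For those wondering what this environment tuning covers concretely, the three responsibilities mentioned above roughly correspond to an interface like the one below (hypothetical Python names, just to illustrate the idea, not the actual logic-node implementation):

class TankEnvironment:
    # Generic sketch of a game training environment (hypothetical names).

    def capture_state(self):
        # Environment capture: e.g. read back rendered pixels or object
        # positions to build the current state seen by the Neural Network.
        raise NotImplementedError

    def apply_action(self, action):
        # Moving-action management: turn the chosen action index into
        # movement / rotation / shooting of the tank in the scene.
        raise NotImplementedError

    def score_delta(self):
        # Game score management: the change in score since the last step,
        # used as the reward signal for the agent.
        raise NotImplementedError

    def step(self, action):
        self.apply_action(action)
        return self.capture_state(), self.score_delta()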

Great to see Didier.

Oooh this is super interesting. Great work.

Might be something I play with in the future.

That’s done. This new kind of 3D Armory training environment for Deep Learning now encourages testing it on several other domains. The next will be Enterprise 4.0.

Why is it better?

  • you don’t need any training data, as it is the DRLA (Deep Reinforcement Learning Armory) itself that plays, records and trains the NN (Neural Network) in the 3D environment (see the sketch after this list).
  • once the NN is trained, the file can be exported to a real target.
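As an illustration of the first point, here is a minimal sketch of such a self-play loop, with hypothetical names (in the real DRLA this behavior is spread across the RL* logic nodes listed further below):

# Minimal self-play collection sketch -- hypothetical names, not the DRLA node code.
replay_memory = []        # transitions recorded while the agent plays by itself
MAX_MEMORY = 10000

def play_episode(env, choose_action, max_steps=200):
    state = env.capture_state()
    for _ in range(max_steps):
        action = choose_action(state)                 # NN choice or random exploration
        next_state, reward = env.step(action)
        replay_memory.append((state, action, reward, next_state))
        if len(replay_memory) > MAX_MEMORY:
            replay_memory.pop(0)                      # keep the memory bounded
        state = next_state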

Current state:

The current DRLA is a mix between:

  • the readability offered by the logic nodes and Traits in Armory,
  • the compactness of the code encapsulated in specific DRLA nodes.

The current node trees for the scene’s tank1 object: :slight_smile:

My current list of nodes, mainly used by the DRLA:

# Add custom nodes
add_node(ArrayLoopIndiceWaitNode, category='Logic')
 
add_node(StringToArrayNode, category='Array')
add_node(MaterialNamedNode, category='Variable')
add_node(SetMaterialSureNode, category='Action')

add_node(PrintCRNode, category='Action')

add_node(StringSpecialNode, category='Variable')
add_node(GateTFNode, category='Logic')
add_node(StringSeparationNode, category='Variable')
add_node(StringSplitToArray, category='Variable')

add_node(PlaySetTilesheetNode, category='Animation')

# for Deep Learning

add_node(NNJsonGetNode, category='Logic')

add_node(NNFactoryNode, category='Logic')
add_node(NNHiddenLayerNode, category='Logic')


add_node(NNNetworkActNode, category='Logic')

# for Deep Reinforcement Learning

add_node(RLMetteurEnSceneNode, category='Logic')
add_node(RLActionNode, category='Logic')
add_node(RLEpsilonNode, category='Logic')
add_node(RLGameNode, category='Logic')
add_node(RLEnvironmentNode, category='Logic')
add_node(RLStateNode, category='Logic')
add_node(RLAgentTrainingNode, category='Logic')
add_node(RLAgentActiveNode, category='Logic')
add_node(RLBatchLearnNode, category='Logic')
add_node(RLPredictNode, category='Logic')
# add_node(RLReplayExtractNode, category='Logic')
add_node(RLReplayToMemNode, category='Logic')
add_node(RLReplayLoopNode, category='Logic')

add_node(RLBatchNode, category='Logic')

add_node(RLQsaNode, category='Logic')
add_node(RLStoSANode, category='Logic')

add_node(RLCameraNode, category='Logic')

add_node(RLOnRenderNode, category='Event')
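For completeness, here is the general shape of how such a custom node can be defined and registered in Python, assuming the Armory custom logic node API as documented at the time (ArmLogicTreeNode + add_node); the class, sockets and category below are purely illustrative, not one of the real DRLA nodes:

from bpy.types import Node
from arm.logicnode.arm_nodes import ArmLogicTreeNode, add_node

class RLExampleNode(Node, ArmLogicTreeNode):
    '''Illustrative custom logic node (not part of the real DRLA set).'''
    bl_idname = 'LNRLExampleNode'
    bl_label = 'RL Example'

    def init(self, context):
        # Sockets exposed in the node editor.
        self.inputs.new('ArmNodeSocketAction', 'In')
        self.inputs.new('NodeSocketFloat', 'Reward')
        self.outputs.new('ArmNodeSocketAction', 'Out')

add_node(RLExampleNode, category='Logic')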

Hereafter, an example of an important node tree: the one in charge of rereading the batches / rounds / episodes recorded during training and adjusting the weights inside the Neural Network accordingly :slight_smile:
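To give an idea of what that node tree computes, here is the usual shape of such a replay / batch update written in plain Python with numpy (a generic DQN-style sketch assuming a network object with predict/fit methods, not the actual node implementation):

import random
import numpy as np

GAMMA = 0.95        # discount factor for future rewards

def train_on_batch(network, replay_memory, batch_size=32):
    # Reread recorded transitions and adjust the network weights (generic sketch).
    batch = random.sample(replay_memory, min(batch_size, len(replay_memory)))
    for state, action, reward, next_state in batch:
        # Bellman target: reward now + discounted best value of the next state.
        target = reward + GAMMA * float(np.max(network.predict(next_state)))
        q_values = network.predict(state)
        q_values[action] = target            # only the taken action is corrected
        network.fit(state, q_values)         # one small training step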

Currently, I am testing the DRLA on Tanks, adjusting some Haxe code and trying to find the best parameters to use (accuracy, speed, memory size, …).

I also need to develop new tools that will help visualize how the DRLA improves during training (that is to say, how fast it positions itself and shoots at tanks2 during a new round).

But I can already say that Armory is fantastic, because it allows me to make a very fast DRLA, and the memory footprint is small enough to allow very large layers in the NN.
(Info is welcome :wink: if you know how to optimize Armory / Haxe / Kha for Nvidia GPUs.)

In addition, the architecture in logic nodes makes it possible to quickly modify / test various solutions, while facilitating readability and reusability.

Another important point, and not the least, is its stability: after several hours of tank training in the Firefox browser, there is no crash to report! Thanks @lubos for this top work.
(Version with Blender 2.8, with only a few modifications applied to the getpixels code.)

2 Likes

Another path is also opening for the DRLA: fueling an art movement based on neural art, using techniques such as DeepDream, Style Transfer and feature visualization. By reading this excellent document you will learn more and understand how this could become disruptive in the way we design higher-quality 3D textures. https://distill.pub/2018/differentiable-parameterizations/

1 Like

Looks really promising!
Are you planning to release this as open source (or even provide it as a pull request for the Armory3D project)?

@NothanUmber
I need some more time to test/optimize some code, understand the best tuning of the DRLA parameters, and develop tools that will help everyone understand and follow the DRLA’s progress during training. Maybe a very simple tool as a first step …

What I would hope is that we could start a kind of ecosystem of enthusiasts around Neural Networks in the Armory 3D environment, people who would like to start real application projects. Do you think it’s possible?

1 Like

That’s done. Tuning the initial parameters and looking at the results with indicators gives me a better understanding of how the training of the Neural Network behaves :slight_smile:

I started a YouTube channel here: https://youtu.be/ef-S7M6_yEo where you will find the first video, which is an overview.

I named it ATRAP.

I hope these videos will motivate some of you to participate in the adventure.

3 Likes

I am speechless at how fast a new (!) technological approach can be implemented in the things we use daily. I think I understand just a fraction of the whole thing, but I have already bought a few books.

NICE VIDEO by the way!

1 Like

Hello @dugati,
I think you found the right words. It’s a technology that can be implemented in the things we use every day.

A majority of the users of this forum seem to me to come more from the field of 3D arts than from Engineering and Data Science, and yet I hope that many will realize, with these videos, that through a tool like ATRAP a growing industry demand is appearing for talents capable of creating 3D environments in which robots, drones, IoT devices and the like will train their AI / Neural Networks.

2 Likes

From what I saw in the video, this tech should easily be able to be incorporated into games. I will have to see more and probably study it, but having enemies that learn for themselves is an invaluable part of some games. I plan to follow the videos and see where they go.

Thanks for the comment and interest @Monte_Drebenstedt.

Allow me to clarify some points too. ATRAP is conceived with the idea of training neural networks which will then be installed on target machines like factory robots, thanks to the exceptional possibilities of Armory / Haxe in terms of performance and portability.

In the second video I put on YouTube yesterday, a simple calculation shows how quickly it becomes a Herculean task to populate a database with photos that you need to label with target and reward data, which will then be used to train the neural network, as is the case in the classical Deep Learning approach.
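To illustrate the kind of calculation meant here (with purely made-up numbers, not the figures from the video):

# Purely illustrative numbers -- not the figures used in the video.
images_needed = 100000        # frames a classical supervised approach might require
seconds_per_label = 20        # time to label one image with target / reward data
hours = images_needed * seconds_per_label / 3600
print("about %.0f hours of manual labelling" % hours)   # ~556 hours, i.e. months of work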

Thus, as you would like, an AI / neural network can also be trained in ATRAP and then integrated into a game.

If you look at the video of the training up to batch 22 (others coming soon), it’s actually already close to this. The indicator with the gray curve shows how, little by little, the training queries the neural network more and more for the choice of the next action (versus a random choice).
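That gray curve corresponds to the classic exploration / exploitation trade-off; below is a minimal sketch of how such an indicator can be produced, with a generic epsilon-greedy choice and a decaying epsilon (illustrative values and names only, not the DRLA code):

import random

epsilon = 1.0            # start fully random
EPSILON_MIN = 0.05
EPSILON_DECAY = 0.995    # applied after each batch / round
nn_choices, total_choices = 0, 0

def choose_action(state, network, n_actions):
    global nn_choices, total_choices
    total_choices += 1
    if random.random() < epsilon:
        return random.randrange(n_actions)        # random exploration
    nn_choices += 1                               # the Neural Network decided this one
    return int(network.predict(state).argmax())

def end_of_batch():
    global epsilon
    epsilon = max(EPSILON_MIN, epsilon * EPSILON_DECAY)
    # ratio of NN-chosen actions -- the kind of value the gray curve could plot
    return nn_choices / max(1, total_choices)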

1 Like

@Didier Wow, this stuff looks really cool! I’m really interested in how you could use some of this stuff. My brothers and I want to make our own game studio, and the possibility of training a motion-capture system from a 2D video could be amazing. Plus having learning AI for video-game enemies.

Do you have any example blend + code anywhere, or is it too early and volatile to release for others to experiment with yet?

Hello @zicklag,
There are still small adjustments and tests to perform, including porting to a small target. Right now, the neural network training is done. Perhaps you will be interested in developing an ecosystem around this approach… you can use the link in the video to discuss it.

@didier I am unable to get to YouTube (due to local network configuration and rules) and I am probably going to be too busy to do a whole lot of development around the ecosystem, though I am interested. :slight_smile: I was just wondering if you had the source code for your logic nodes, and maybe the blend for your tank game training example, somewhere public like GitHub, so that I can test using it for game AI.

@zicklag It will be possible to use it soon, I hope (time flies for me as it does for you), through a website I will also build soon :slight_smile: with user documentation.

1 Like

The last step was the export of a Neural Network trained in ATRAP from one machine to another.

It opens the door to using a crowd of PCs running ATRAP, able to exchange their Neural Networks during training.
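As a rough idea of what such an exchange can look like: if the trained weights are serialized to a portable format such as JSON, they can simply be copied between machines (a hypothetical layout, not necessarily the format ATRAP actually uses):

import json
import numpy as np

def export_network(weights, path):
    # weights: one numpy array per layer (hypothetical layout)
    with open(path, "w") as f:
        json.dump([w.tolist() for w in weights], f)

def import_network(path):
    with open(path) as f:
        return [np.array(w) for w in json.load(f)]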

Then, as it’s easier to train the Neural Network on small portions of a 3D environment, the next step is to parallelize/combine/merge several neural networks trained in different 3D environments into only one, thus having the equivalent of a training done on a vast 3D world.
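One simple way to merge such networks, provided they share the same architecture, is to average their weights layer by layer, as in the sketch below (an illustration of the idea only; ATRAP may combine them differently):

import numpy as np

def merge_networks(weight_sets):
    # Average several same-shaped weight sets into a single network (simple sketch).
    merged = []
    for layers in zip(*weight_sets):              # same layer taken from each network
        merged.append(np.mean(np.stack(layers), axis=0))
    return merged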

More generally, it’s a way:

  • to accelerate the creation of a community / ecosystem capable of exchanging their best Neural Networks during a “Distributed Neural Network Training” … like a kind of electrical grid from which you can get the best energy at any time for your AI / Neural Network.

  • to distribute, among several actors of an ecosystem, the training of AI in the ATRAP / Armory 3D environment:
    . some in charge of creating a portion of the 3D environment in Armory 3D,
    . others specializing in training Neural Networks in ATRAP.

1 Like
