Audio-reactive potential?


Hi guys,

I’ve been looking at fooling around with audio-reactive 3D graphics. So far, I’ve been using a combination of ClojureScript, Blend4Web, and Tone.js to do this. While I really like Blend4Web’s API, it slows down with bigger scenes, and Armory looks more performant in this area.

I was looking through armory’s API, and I noticed it has some WebAudio stuff listed, such as here:

My sense is that this FFT stuff could be used for audio visualization. Is this correct?
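For what it’s worth, the general idea in the browser is exactly this: an FFT analyser node hands you an array of frequency bins each frame, and you map some band of those bins onto a visual parameter. A minimal sketch with plain Web Audio (not Armory/Kha specific; `updateVisual` is a placeholder for whatever scene property you drive):

```javascript
// Pure helper: average a range of FFT bins (byte values 0..255)
// and normalize the result to 0..1.
function bandLevel(bins, lo, hi) {
  let sum = 0;
  for (let i = lo; i < hi; i++) sum += bins[i];
  return sum / ((hi - lo) * 255);
}

// Browser-side wiring (sketch only): connect an audio source to an
// AnalyserNode and poll the frequency data every animation frame.
function startVisualizer(audioCtx, sourceNode, updateVisual) {
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 256; // gives 128 frequency bins
  sourceNode.connect(analyser);
  const bins = new Uint8Array(analyser.frequencyBinCount);
  function frame() {
    analyser.getByteFrequencyData(bins);
    // e.g. scale an object by the low-frequency energy
    updateVisual(bandLevel(bins, 0, 16));
    requestAnimationFrame(frame);
  }
  frame();
}
```

The same pattern should apply regardless of engine: the FFT gives you numbers per frame, and the engine only needs a way to set transforms or shader uniforms from script.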

Thanks – I’m really excited about this project.



The sound API in Kha should be capable of handling this. Any chance you have a minimal demo around that we could try to replicate in Armory? I guess some work will be required in the internals and docs to make it more comfortable.


Sure, something like this:

(Make sure your mic is enabled, and talk into it.)
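For anyone who wants to replicate the mic part without the ClojureScript tooling, a rough browser-side sketch looks like this (plain Web Audio again; `onLevel` is a placeholder callback, and the RMS helper is just one way to turn raw samples into a loudness value):

```javascript
// Pure helper: RMS loudness of byte time-domain samples, which the
// Web Audio API centers at 128. Returns roughly 0..1.
function rmsLevel(samples) {
  let sum = 0;
  for (const s of samples) {
    const v = (s - 128) / 128;
    sum += v * v;
  }
  return Math.sqrt(sum / samples.length);
}

// Browser wiring (sketch only): mic -> analyser -> per-frame callback.
async function startMicDemo(onLevel) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);
  const buf = new Uint8Array(analyser.fftSize);
  (function frame() {
    analyser.getByteTimeDomainData(buf);
    onLevel(rmsLevel(buf)); // drive a scene parameter from loudness
    requestAnimationFrame(frame);
  })();
}
```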


So I retract my previous statement about Blend4Web not handling object instances – turns out I was just writing bad ClojureScript :stuck_out_tongue: Here’s another test showing object instances (the repo is here:

All that said, I find this hard to work with and debug, and I’m beginning to think streaming 3D data might solve some of these problems.

Do you think Armory would be able to stream 3D data in some way?
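To make the question concrete: by “streaming” I mean something like pushing vertex data as binary frames over a WebSocket and updating buffers on the client each time a frame arrives. A minimal sketch of that shape (the endpoint and message layout here are made up for illustration):

```javascript
// Pack a flat [x, y, z, x, y, z, ...] position array into a binary
// buffer suitable for sending over a WebSocket.
function packPositions(positions) {
  return new Float32Array(positions).buffer;
}

// Unpack a received binary frame back into a typed array.
function unpackPositions(buffer) {
  return new Float32Array(buffer);
}

// Browser wiring (sketch only): receive binary frames and hand the
// decoded positions to whatever updates the mesh.
function connectStream(url, onPositions) {
  const ws = new WebSocket(url);
  ws.binaryType = "arraybuffer";
  ws.onmessage = (ev) => onPositions(unpackPositions(ev.data));
  return ws;
}
```

The engine-side question is then just whether Armory exposes a way to rewrite a mesh’s vertex buffer from script each frame.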



Here’s a demo video of what I mean – I know you don’t always have time to launch into these weird use cases:

You’ll notice it includes live-coding (hot-reloading). Speaking of which, I noticed this hot-reloading package for Haxe today:

Do you think Armory’s JS output would be able to get ‘live-coded’? I’m not asking for any guarantees, I’m just making sure my idea makes sense.
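In case it helps frame the idea: the core of what I mean by live-coding is tiny. Keep the scene state outside the reloaded code, and when a new version of the script arrives, swap only the per-frame update function. A hypothetical sketch of that registry (all names here are made up; this isn’t an existing Armory or Haxe API):

```javascript
// Minimal hot-swap loop: state lives outside the reloadable code, so
// swapping the update function does not reset the scene.
function createHotLoop(initialUpdate) {
  let update = initialUpdate;
  return {
    // Called when a reloaded script provides a new update function.
    swap(next) { update = next; },
    // Called once per frame with the persistent scene state.
    tick(state) { return update(state); }
  };
}
```

So the question is really whether the JS that Armory emits can be re-evaluated in place and rewired like this, or whether the whole runtime has to restart.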