I’ve been fooling around with audio-reactive 3D graphics. So far, I’ve been using a combination of ClojureScript, Blend4Web, and Tone.js to do this. While I really like Blend4Web’s API, it slows down with bigger scenes, and Armory looks more performant in this area.
I was looking through Armory’s API, and I noticed it has some WebAudio stuff listed, such as here:
The Sound API in Kha should be capable of handling this. Any chance you have a minimal demo around that we could try to replicate in Armory? I guess some work will be required in the internals & docs to make it more comfortable.
So I retract my previous statement about Blend4Web not handling object instances; turns out I was just writing bad ClojureScript. Here’s another test showing object instances.
Maybe I was too restrictive in my previous post. Really, I’m just interested in getting some degree of audio-reactivity. What would be the easiest way? I see that Armory has custom ‘input’ nodes on the Blender side of things. Would it be possible to define one for audio? Or maybe a MIDI/OSC listener?
I should mention I’m not asking anyone else to do it - I’d like to try it myself. I’d just like to make sure I know what I’m getting into.
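For a sense of scale, here is a rough sketch of what such an audio ‘input’ node might poll each frame, assuming a WebAudio `AnalyserNode` as the source. Everything here is illustrative, not Armory API; `makeLevelReader` and the wiring are made-up names.

```javascript
// Pure helper: normalized RMS level (0..1) from 8-bit time-domain samples,
// where 128 is silence (the WebAudio getByteTimeDomainData convention).
function rmsLevel(samples) {
  let sum = 0;
  for (const s of samples) {
    const centered = (s - 128) / 128; // map 0..255 to -1..1
    sum += centered * centered;
  }
  return Math.sqrt(sum / samples.length);
}

// Browser-side wiring (assumes a running AudioContext and a source node).
// The returned function would be polled once per frame to feed a node socket.
function makeLevelReader(audioCtx, sourceNode) {
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 1024;
  sourceNode.connect(analyser);
  const buf = new Uint8Array(analyser.fftSize);
  return () => {
    analyser.getByteTimeDomainData(buf);
    return rmsLevel(buf);
  };
}
```

The node itself would then just expose that per-frame number as an output socket.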
This would be a great add-on for making stuff reactive. I know Blender has a way to read audio levels (Bake Sound to F-Curves in the Graph Editor).
I used it once for some motion graphics. Can’t remember if it allowed for live audio though.
That would be awesome.
After looking at Armory’s ecosystem, I’m beginning to think it’d be best to achieve this with a standalone app - something that can read the Armory assets (.arm?). Same deal as the ArmorPaint app. Presumably I’d be able to incorporate the http://armory3d.org/manual/api/js/html/audio/AnalyserNode.html stuff there?
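In plain browser JavaScript, driving a scene parameter from that `AnalyserNode` would look roughly like the sketch below. The analyser calls mirror the WebAudio API; `scene.setScale` is a hypothetical hook standing in for whatever the standalone app exposes, and the band range is an arbitrary guess at “bass”.

```javascript
// Pure helper: mean of the FFT bins in [lo, hi), normalized to 0..1
// (getByteFrequencyData bins are 0..255).
function bandAverage(bins, lo, hi) {
  let sum = 0;
  for (let i = lo; i < hi; i++) sum += bins[i];
  return sum / ((hi - lo) * 255);
}

// Per-frame browser loop (assumes `analyser` is a configured AnalyserNode):
function drive(analyser, scene) {
  const bins = new Uint8Array(analyser.frequencyBinCount);
  const tick = () => {
    analyser.getByteFrequencyData(bins);
    const bass = bandAverage(bins, 0, 8); // lowest bins ≈ bass energy
    scene.setScale(1 + bass);             // hypothetical scene hook
    requestAnimationFrame(tick);
  };
  tick();
}
```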
I think the challenge is that the underlying Kha audio layer is very simplistic. I think the ultimate solution is a dedicated audio layer, something like Google’s Resonance (https://developers.google.com/resonance-audio/). It’s a bummer nobody has implemented Haxe bindings for it.
Resonance would interop nicely with Armory/Blender, as you’d basically just have to map the scene dimensions to your room definitions. It also supports all of the Armory platforms, except perhaps some of the console targets, and it’s open source.
Took a look at Resonance - pretty neat. It sounds like it’s more of a dedicated audio-generation thing, though.
Resonance would interop nicely with Armory/Blender, as you’d basically just have to map the scene dimensions to your room definitions.
That’s a neat idea too - generating audio with acoustic properties based on the room, materials, etc. I was just thinking of a one-way street, where the beat of an audio source could be detected and then mapped onto one or more properties of an Armory 3D scene. I’m not against what you’re describing, but it wasn’t my particular goal.
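To make the “one-way street” concrete, here is a minimal sketch of energy-based beat detection: flag a beat when the instantaneous energy jumps well above the recent average. The history length and threshold are arbitrary starting points, and the output boolean would be what gets mapped onto scene properties.

```javascript
// Returns a per-frame function: feed it the current energy (e.g. an RMS or
// bass-band level), get back true when a beat is detected.
function makeBeatDetector(historyLen = 43, threshold = 1.5) {
  const history = []; // sliding window of recent energies
  return function onFrame(energy) {
    const avg = history.length
      ? history.reduce((a, b) => a + b, 0) / history.length
      : 0;
    history.push(energy);
    if (history.length > historyLen) history.shift();
    // Beat = energy well above the recent average (needs some history first).
    return history.length > 1 && energy > threshold * avg;
  };
}
```

Each frame you’d call the detector with the current level and, on `true`, pulse an object’s scale, emission, or whatever property you like.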