Audio-reactive potential?

Hi guys,

I’ve been playing around with audio-reactive 3D graphics. So far I’ve been using a combination of ClojureScript, Blend4Web and Tone.js to do this. While I really like Blend4Web’s API, it slows down with bigger scenes, and Armory looks more performant in this area.

I was looking through Armory’s API and noticed it has some WebAudio stuff listed, such as here:

http://armory3d.org/manual/api/js/html/audio/AnalyserNode.html

My sense is that this FFT stuff could be used for audio visualization. Is this correct?
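
For context, this is roughly how I picture it working on an HTML5 target - a minimal Haxe sketch using the js.html.audio externs (mic input, byte frequency data). The class and variable names are mine, and it assumes Haxe 4’s js.lib.Uint8Array (older Haxe versions keep it under js.html):

```haxe
import js.Browser;
import js.html.MediaStream;
import js.html.audio.AudioContext;
import js.lib.Uint8Array;

class MicLevel {
    static function main() {
        Browser.navigator.mediaDevices.getUserMedia({audio: true})
            .then(function(stream:MediaStream):Dynamic {
                var ctx = new AudioContext();
                var analyser = ctx.createAnalyser();
                analyser.fftSize = 256;
                ctx.createMediaStreamSource(stream).connect(analyser);

                var bins = new Uint8Array(analyser.frequencyBinCount);
                function loop(_:Float):Void {
                    analyser.getByteFrequencyData(bins);
                    // Collapse the FFT bins into a rough 0..1 loudness value.
                    var sum = 0;
                    for (i in 0...bins.length) sum += bins[i];
                    trace(sum / (bins.length * 255.0));
                    Browser.window.requestAnimationFrame(loop);
                }
                Browser.window.requestAnimationFrame(loop);
                return null;
            });
    }
}
```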

Thanks – I’m really excited about this project.


Interesting!

The sound API in Kha should be capable of handling this. Any chance you have a minimal demo around that we could try to replicate in Armory? I guess some work will be required in internals & docs to make it more comfortable.

Sure, something like this:

https://clojurescript-experiments.neocities.org/

(make sure your mic is enabled and talk into it).

So I retract my previous statement about Blend4Web not handling object instances; turns out I was just writing bad ClojureScript :stuck_out_tongue: Here’s another test showing object instances:

https://mikebelanger.github.io/mic-reactive/target/ (repo is here: https://github.com/mikebelanger/mic-reactive)

That all said, I find this hard to work with and debug, and I’m beginning to think streaming 3D data might solve some of these problems.

Do you think Armory would be able to stream 3D data in some way?

Thanks

Here’s a demo video of what I mean; I know you don’t always have time to dig into these weird use cases:

You’ll notice it includes live-coding (hot-reloading). Speaking of which, I noticed this hot-reloading package for Haxe today:

Do you think Armory’s JS output could be ‘live-coded’? I’m not asking for any guarantees, just making sure my idea makes sense.

Maybe I was too restrictive in my previous post. Really, I’m just interested in getting some degree of audio-reactivity. What would be the easiest way? I see that Armory has these custom ‘input’ nodes on the Blender side of things. Would it be possible to define one for audio, or maybe a MIDI/OSC listener?
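
Just to make the question concrete, here’s roughly what I imagine the Haxe half of such a node could look like - purely hypothetical, AudioLevelNode doesn’t exist, and the matching Python node definition for the Blender UI would still be needed on top of this:

```haxe
package armory.logicnode;

// Hypothetical "audio level" input node: outputs the current level as a float.
// Something else (mic FFT, file playback, an OSC listener...) would have to
// update `level` each frame.
class AudioLevelNode extends LogicNode {

    public static var level = 0.0;

    public function new(tree:LogicTree) {
        super(tree);
    }

    override function get(from:Int):Dynamic {
        return level;
    }
}
```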

I should mention I’m not asking anyone else to do it - I’d like to try it myself. I’d just like to make sure I know what I’m getting into.


This would be a great add-on for making stuff reactive. I know Blender has a way to read audio levels.
I used it once for some motion graphics. Can’t remember if it allowed for live audio though.
That would be awesome.

Yeah I think it’d be pretty neat.

After looking at Armory’s ecosystem, I’m beginning to believe it’d be best to achieve this with a standalone app - something that can read the Armory assets (.arm?), same deal as the ArmorPaint app. Presumably I’d be able to incorporate the http://armory3d.org/manual/api/js/html/audio/AnalyserNode.html stuff there?

Does my idea make sense?

I think the challenge is that the underlying Kha library’s audio layer is very simplistic. The ultimate solution is probably a dedicated audio layer, something like Google’s Resonance (https://developers.google.com/resonance-audio/). Bummer that nobody has implemented Haxe bindings for it.

Resonance would interop nicely with Armory/Blender, as you’d basically just have to map the scene dimensions to your room definitions. Resonance also supports all of the Armory platforms, except perhaps some of the console targets, and it’s open source.
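
The binding surface wouldn’t have to be huge either. A hand-written extern sketch like this (untested, signatures taken from the Resonance web SDK docs, material names are just examples) would already be enough to feed Blender room dimensions in on an HTML5 target:

```haxe
import js.html.audio.AudioContext;
import js.html.audio.AudioNode;

// Untested, hand-written extern for the Resonance Audio web SDK --
// no official Haxe bindings exist, so treat these signatures as assumptions.
@:native("ResonanceAudio")
extern class ResonanceAudio {
    var output:AudioNode;
    function new(ctx:AudioContext, ?options:Dynamic);
    function setRoomProperties(dimensions:{width:Float, height:Float, depth:Float},
                               materials:Dynamic):Void;
}

class RoomSetup {
    // Map a scene's bounding box (meters, Blender Z-up) onto a Resonance room.
    public static function apply(ctx:AudioContext, sizeX:Float, sizeY:Float, sizeZ:Float):ResonanceAudio {
        var room = new ResonanceAudio(ctx);
        room.output.connect(ctx.destination);
        room.setRoomProperties(
            {width: sizeX, height: sizeZ, depth: sizeY}, // web audio is Y-up
            {left: 'brick-bare', right: 'brick-bare', front: 'brick-bare',
             back: 'brick-bare', down: 'wood-panel', up: 'transparent'}
        );
        return room;
    }
}
```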

Took a look at Resonance - pretty neat. Sounds like it’s more of a dedicated audio-generation thing.

Resonance would interop nicely with Armory/Blender, as you’d basically just have to map the scene dimensions to your room definitions.

That’s a neat idea too - generating audio that has acoustic properties based on the room, materials etc. I was just thinking of a one-way street where the beat of an audio source could be detected and then mapped onto one or more properties of an Armory 3D scene. I’m not against what you’re describing, but that wasn’t my particular goal.
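
To make that concrete, what I’m picturing is something as simple as this trait sketch - attach it to an object, and whatever level the analyser/beat detection produces gets pushed into the object’s scale every frame (getLevel() here is just a placeholder I made up):

```haxe
package arm;

import iron.Trait;

// Sketch of the "one-way street": read an audio level every frame and map it
// onto a scene property (here, the scale of the object this trait sits on).
class AudioScale extends Trait {

    // Placeholder: swap in the mic/FFT or beat-detection code of your choice.
    public static dynamic function getLevel():Float return 0.0;

    public function new() {
        super();
        notifyOnUpdate(update);
    }

    function update() {
        var s = 1.0 + getLevel(); // assuming a 0..1 level
        object.transform.scale.set(s, s, s);
        object.transform.buildMatrix();
    }
}
```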
