Driving Pyro with Audio using CHOPs - Houdini

General / 14 September 2020


Pyro driven by Audio

Learning CHOPs through experimentation

I've been learning Houdini since the start of the year and have experimented with each context except for CHOPs, until now.

I've seen the different uses for it but what stood out to me the most was audio. I thought, what better way to learn how to use it than to try to incorporate it with something I already know pretty well? Pyro! So I had a look online and found... 

Nothing.


That's right, nothing. I don't know if my Googling skills are limited or what, but I thought fuck it, I'll give this a crack. As it turned out, it was easier than I thought, and probably not the best challenge for learning CHOPs since it barely uses them at all. Here's the network:

CHOP network for importing Audio

Most of the complexity there is just filtering the audio to get the desired effect. I mainly wanted to separate the high frequencies from the low and isolate spikes for more punchiness in the simulation. I used the different channels to drive two aspects of the simulation: one was the temperature, driven mainly by the high frequencies (that's the "create_density_newclip1" node; don't @ me, I know the node organisation is shite), and the other was a pump to affect the velocity of the sim, driven by the low-frequency spikes.
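If you want to do the same thing, a CHOP channel can drive a simulation parameter with a plain chop() channel reference. Here's a minimal Python sketch of that wiring; the node paths and the parameter name are placeholders standing in for my scene, not the real ones:

```python
import hou

# Placeholder paths -- substitute the channel and parameter from your own scene.
# The channel comes out of the CHOP network (here, the node I used for temperature).
high_freq_chan = "/obj/audio_chops/create_density_newclip1/chan0"
temperature_parm = hou.node("/obj/pyro_source").parm("temperature_scale")  # assumed node/parm

# An HScript chop() expression makes the parameter follow the audio amplitude frame by frame.
temperature_parm.setExpression(
    'chop("%s")' % high_freq_chan,
    language=hou.exprLanguage.Hscript,
)
```

You could also export the channel straight from the CHOP network (an Export CHOP with its export flag on does this); the expression just makes the link explicit and easy to tweak.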


Network defining the pump behaviour of a pyro simulation

In order to manipulate the velocity, I chose to create a volume to source into the simulation. I set it up so that I could have a rolloff effect, whereby the main force of the audio input sits wherever I want it to be and smooths out / rolls off from there. Effectively, I just made a circle and extruded it for the main area, then transformed that to use as the high-intensity point for the velocity.
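The rolloff itself is nothing fancy, just a smooth falloff with distance from that high-intensity point. Here's a rough Python sketch of the kind of curve the VOP network produces (a plain smoothstep falloff, so treat it as an approximation of my setup rather than a copy of it):

```python
import math

def rolloff(voxel_pos, centre, radius):
    """Return 1.0 at the high-intensity point, easing smoothly down to
    0.0 at 'radius' away -- a standard smoothstep falloff."""
    d = math.dist(voxel_pos, centre)
    t = min(max(d / radius, 0.0), 1.0)
    return 1.0 - (t * t * (3.0 - 2.0 * t))

# A voxel halfway out gets half the full pump strength:
print(rolloff((0.5, 0.0, 0.0), (0.0, 0.0, 0.0), 1.0))   # 0.5
```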

The CHOP network directly modifies the fan force parameter on the Parameter (PARM) node, which is multiplied onto the values I initialised the volume with. This happens in the volume VOP node after the volume is rasterised, for performance reasons: otherwise the volume would be re-rasterised every frame, which kind of sucks when it's not necessary.
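So the per-frame work really is just a multiply. Here's a toy Python sketch of the logic the volume VOP applies to each voxel, where fan_force is whatever the low-frequency CHOP channel evaluates to on the current frame (the names are made up for illustration):

```python
def pump_velocity(vel_init, fan_force):
    """Scale the velocity baked in at rasterise time by the audio-driven
    fan force. Only this multiply cooks every frame; the expensive
    rasterisation upstream is time-independent."""
    return tuple(c * fan_force for c in vel_init)

# e.g. the low-frequency spike channel reads 2.3 on this frame:
print(pump_velocity((0.0, 1.0, 0.0), 2.3))   # (0.0, 2.3, 0.0)
```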

The final step was rendering. I actually thought the viewport preview looked alright, and because I wasn't spending much time on this, I decided to use the OpenGL renderer for the video. There's no motion blur and it definitely looks worse than it would with Mantra, but it's not awful.

Overall, this was a lot easier than I expected, so I guess I didn't exactly achieve my goal, but it was a whole lot of fun to be playing with audio for a change! It would be very interesting to see what else could be done with audio in Houdini. FLIP? Destruction? Maybe automated lip syncing for character animation?