I've been learning Houdini since the start of the year and have experimented with each context except for CHOPs, until now.
I've seen the different uses for it but what stood out to me the most was audio. I thought, what better way to learn how to use it than to try to incorporate it with something I already know pretty well? Pyro! So I had a look online and found...
That's right, nothing. I don't know if my Googling skills are limited or what, but I thought fuck it, I'll give this a crack. As it turned out, it was easier than I thought, and probably not the best challenge for learning CHOPs as it barely used them at all. Here's the network:
CHOP network for importing Audio
Most of the complexity there is just filtering the audio to get the desired effect. I mainly wanted to separate out the high frequency from the low and isolate spikes for more punchiness in the simulation. I used the different channels to drive two aspects of the simulation. One was the temperature which was driven mainly by the high frequency (it's the "create_density_newclip1" node, don't @ me I know the node organisation is shite), and the other was a pump to affect the velocity of the sim. The pump was driven by the low frequency spikes.
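To make the filtering idea concrete, here's a rough plain-Python sketch of what the CHOP network is doing conceptually: split the audio into a low and a high band, then isolate spikes for punchiness. The filter choice and all the names here are mine for illustration, not the actual nodes:

```python
# Conceptual stand-in for the CHOP filtering, not actual CHOP nodes.

def split_bands(samples, alpha=0.1):
    """One-pole low-pass filter; the high band is whatever's left over."""
    low, high = [], []
    state = 0.0
    for s in samples:
        state += alpha * (s - state)   # smoothed, low-frequency component
        low.append(state)
        high.append(s - state)         # residual = high-frequency component
    return low, high

def isolate_spikes(band, threshold):
    """Keep only the part of the signal above a threshold, for punchy hits."""
    return [max(0.0, abs(v) - threshold) for v in band]
```

The high band would then drive temperature, and spikes in the low band would drive the pump.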
Network defining the pump behaviour of a pyro simulation
In order to manipulate the velocity, I chose to create a volume to source into the simulation. I set it up so that I could have a rolloff effect, whereby the main force of the audio input would be wherever I wanted it to be, and it would smooth out / roll off from there. Effectively I just made a circle and extruded it for the main area, then transformed that to use as the high-intensity point for the velocity.
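The rolloff boils down to a simple falloff function: full strength inside the high-intensity region, fading smoothly to zero at an outer radius. In the actual setup this lives in the volume network; the smoothstep choice and parameter names below are my own sketch of the idea:

```python
# Rough sketch of the velocity rolloff, assuming a smoothstep-style falloff.

def rolloff(distance, inner, outer):
    """1.0 inside the inner radius, 0.0 beyond the outer, smooth in between."""
    if distance <= inner:
        return 1.0
    if distance >= outer:
        return 0.0
    t = (distance - inner) / (outer - inner)
    return 1.0 - (3.0 * t * t - 2.0 * t * t * t)  # inverted smoothstep
```

Each voxel's initial velocity value would be scaled by `rolloff()` of its distance from the high-intensity point.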
The CHOP network directly modifies the fan force parameter on the PARM node, which is multiplied onto the values I initialised the volume with. This is done in the volume VOP node after the volume is rasterised, for performance reasons; otherwise the volume would be rasterised every frame, which kind of sucks if it's not necessary.
The final step was rendering. I actually thought the viewport preview looked alright, and because I wasn't spending much time on it, I decided to use the OpenGL renderer for the video. There's no motion blur and it definitely looks worse than if I'd used Mantra, but it's not awful.
Overall, this was a whole lot easier than I expected so I guess I didn't exactly achieve my goal, but it was a whole lot of fun to be playing with audio for a change! It would be very interesting to see what else could be done with audio in Houdini. FLIP? Destruction? Maybe automated lip syncing for character animation?
From a Maya user's perspective, Houdini can be horrifying, and one of the things that trips up a lot of people when they're just starting out is the multiple contexts within Houdini. Not only do people have trouble differentiating between the contexts, they also might not know what a context is. In this quick tip, I aim to explain the difference between the object context and the geometry context (also known as SOPs).
Note: this assumes a basic understanding of Maya.
To showcase the difference between the contexts, I've built identical scenes in Maya and Houdini. There are two animated cubes, both moved 4 units on the X axis over a period of 24 frames. The blue one is moved in the geometry context, and the red one is moved in the object context. How does this work in Maya? I'll explain...
In the images below you'll see the red cube selected in both Maya and Houdini. In Maya, I selected the cube, keyframed it at position 0 on frame 1, and keyframed it at position 4 on frame 24. You should be able to see this represented in the transform node of the object. Using Houdini, this is equivalent to keyframing the translate attribute on the object node in the object context.
Frame 1, 0 on the X-axis
Frame 24, 4 on the X-axis
As for the blue cube, that was animated a bit awkwardly in Maya to be honest. In the images below, instead of selecting the object, I highlighted all the faces and keyframed those instead. The end result is that the geometry animates yet the pivot point stays in place. It's also slightly more demanding on the computer as it's moving each piece of geometry rather than the whole thing at once. Not typically something you'd want to do, right? Well that's exactly what happens when you move an object in the geometry context in Houdini...
Frame 1, 0 on the X-axis
Frame 24, 4 on the X-axis (note pivot hasn't moved)
Check out how this was done in both programs. In Maya, you select all the faces and move them. In Houdini, you go into the geometry context and place a transform node. Both methods have the same effect, and both carry the same cost: reduced performance compared to moving things at the object level.
Frame 16, moving at the geometry level
Moving geometry at the object level in Houdini is equivalent to moving an object using its transform in Maya
Moving geometry at the geometry level in Houdini is equivalent to selecting all the faces and moving them in Maya.
A kind Reddit user has enlightened me to a very important point. What I have talked about here is how the programs work, not necessarily how YOU should work. In Maya, selecting all the faces, moving, and keyframing them is a weird and terrible idea, but Houdini is different. It's actually best practice to do your transforms at the geometry level rather than the object level, as it avoids confusion down the line.
I hope I was able to explain things clearly enough; this sort of stuff is notorious for being loaded with industry jargon, and Houdini doesn't exactly make things easier in that regard. In saying that, just push through and everything will be alright. I started using Houdini 8 months before writing this post, and was a Maya user for 4 years before that. It's really not a lot of time as long as you keep going at it!
G'day how're you goin? Really gotta say you oughtta check out this PDG business in Houdini. I've got a nice setup going here that allows me to sim my whole scene and get a render with barely any effort. There are multiple ROP geo nodes that need to be done in a specific sequence. Without PDG, I'd be going to them individually and making sure they're rendered out and updated. Thank god for automation!
Top left in the image below caches the static collision geometry, while top right caches deforming collision geo.
The next segment handles some of the dependency logic, ending with everything before it being required to complete before anything after can start. "Wait for all" ensures everything above is done before anything below can begin, and "partition by index" combined with the "filter by range" above ensures everything is matched up properly for each frame. I've used "attribute create" combined with "sort" and "mapbyindex" to reindex the partitions as well as to convert them to work items. I needed to reindex so the indices started at zero rather than one thousand. That was a side effect of the "partition by frame" upstream and would cause issues downstream if I left it as is.
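The reindexing step amounts to sorting the items and renumbering them from zero. A plain-Python stand-in for what that node chain does (these aren't actual PDG API calls, just the logic):

```python
# Stand-in for the reindexing done with "attribute create" + "sort":
# work items arrive with indices starting at 1000, downstream expects 0-based.

def reindex(items):
    """Sort work items by their original index, then renumber from zero."""
    ordered = sorted(items, key=lambda item: item["index"])
    for new_index, item in enumerate(ordered):
        item["index"] = new_index
    return ordered
```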
The next step was the simulation. This is a rain sim and, for multiple reasons, I wanted to cache the main droplets, splashes, and running water separately. The simulation needed to run first, but the other steps could be done in parallel. The image below shows my setup for fetching the ROP node and alerting the simulation upon its completion. The "OP Notify" node points to the "File" node and tells it to refresh when a newly completed frame is ready. This ensures it's not using outdated information later in the chain and also updates everything in the viewport immediately.
After all that, it's as simple as attaching to a "ROP Mantra" node, linking that to a compositing node for post processing, and compiling into a video with ffmpeg at the end! (Don't forget the "wait for all").
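For reference, the final ffmpeg step amounts to an invocation over the rendered frame sequence. The paths and encoder settings below are placeholders, not the ones from my actual graph, but it's roughly this shape:

```python
# Sketch of the final video-compile step. Frame pattern, codec, and fps
# are placeholder assumptions, not the settings from my actual setup.

def ffmpeg_command(frame_pattern, output, fps=24):
    """Build the argument list for compiling an image sequence to video."""
    return [
        "ffmpeg",
        "-framerate", str(fps),
        "-i", frame_pattern,      # e.g. "render/frame.%04d.png"
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",    # broad player compatibility
        output,
    ]
```

You'd pass this list to something like `subprocess.run()` once the "wait for all" upstream has finished.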
PDG really is a useful tool to get the hang of for any Houdini artist; it can do much more than I've shown here. For example, you can use it to automate a connection between Houdini, Maya, and Nuke, plus anything else that supports Python. I wrote this up mainly because the documentation for PDG is a bit scarce, so I reckon the more resources the better. I know I didn't go into amazing detail, so if you need me to explain anything, just reach out. Thank you for reading, and catch you next time!
Have you ever sculpted in Maya and wished you had subdivision levels like you'll find in Mudbox or Z-Brush?
Maybe you wanted to have multiple instances of a piece of geometry that can be individually edited? (while retaining the link to the original)
Or just maybe, you were in a situation like me where you wanted a higher detailed and bevelled version of your mesh that can automatically respond to changes made to the original geometry?
Behold! Exactly those things...
Geometry copies take on upstream changes automatically
Working with Houdini for most of the year has changed the way I approach modelling problems; that's how I found this. I was modelling something in Maya for work and found myself jumping into the node editor regularly. I then stumbled upon this trick while trying to find a solution to an issue with the lattice deformer.
In short, you just use the output geometry of the first shape node to drive the input of the second. That's really it, nothing else to it. Here's a short tutorial to illustrate the point, a few ideas on how this could be used, along with a couple of gotchas / things to look out for:
Grab a mesh to serve as the base for the copies, then add any primitive from the shelf. Take a cube to be safe.
Open the node editor, and (while ensuring both meshes are selected) click the button marked with the arrow below.
Connect the mesh output from the first node to the inMesh input of the second. At this point the setup is complete!
Check it out! If you make a change to the first mesh, it'll be reflected in the second. Kind of reminds me of these things called instances 🤔
Well, they're not the same thing I swear it. Here's proof:
Changes flow downstream
See? The second one can be changed independently of the first! You can even alter the geometry of the second one any way you wish. Here I've applied a subdivision. See how it responds dynamically to the changes in the first object?
Subdividing a mesh adds a node into the graph as you might be able to see. This means that it'll respond properly to any change in the first, even deleting or adding geometry. What happens if you do something like sculpting the mesh?
Oh wow! Look at that, it still reacts as expected. From what I can tell, any change to the first one that doesn't involve adding or destroying geometry should work fine. As soon as you do add or destroy geo, it breaks any destructive geometry changes downstream. This is familiar behaviour if you do any modelling in Houdini.
As long as your changes are non-destructive things should go smoothly. Nodes should update automatically and generally just work.
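The behaviour above is really just a one-way dependency graph: the second shape recomputes from the first whenever the first changes, with its own local nodes reapplied on top. A toy pure-Python model of that flow (not Maya API code; the names are mine):

```python
# Toy model of the outMesh -> inMesh connection. The copy re-evaluates from
# its upstream shape on demand, then reapplies its own local modifier
# (think: a subdivide or bevel node). Not Maya API code.

class Shape:
    def __init__(self, points, upstream=None, modifier=None):
        self.points = points
        self.upstream = upstream      # shape feeding our "inMesh"
        self.modifier = modifier      # our own downstream node, if any

    def evaluate(self):
        pts = self.upstream.evaluate() if self.upstream else list(self.points)
        return self.modifier(pts) if self.modifier else pts

base = Shape([1.0, 2.0, 3.0])
copy = Shape([], upstream=base, modifier=lambda pts: [p * 2 for p in pts])

base.points[0] = 10.0  # edit the first mesh; the copy picks it up on evaluate
```

Note the flow is strictly one-directional: nothing the copy does ever writes back to `base`, which is also why the subdivision-levels trick later on only works upstream-to-downstream.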
Bevel node works as you'd expect
This can go multiple levels deep, check this out:
Not only can your onion be deep, but it can also be wide with branches!
This technique isn't without its problems. It can be kind of annoying to mess around with the nodes in Maya; after getting used to a proper node-based program, it's really just not very nice.
You also gotta be pretty careful with how you work with nodes that have dependents, some changes can have unintended adverse effects downstream such as deleting or adding geometry.
Don't just duplicate the object and connect the duplicate, it doesn't necessarily work as you'd expect every time. Just connect a cube (or whatever works).
Using this technique as subdivision levels similar to Z-Brush or Mudbox can work from what I see; however, the connection is only one-directional. Your changes to high-resolution geometry won't change the silhouette of the low-resolution geometry. Also, it's very important not to add or remove geo upstream. All your work will be destroyed. I mean probably; you might be able to undo, but be careful. (You should also have a backup, like usual. Don't blame me if everything burns to ashes.)
Honestly, I haven't researched it too much. For all I know, this could be a terrible idea but it seems to work fine for my purposes so on the toolbelt it goes.
I'd love to hear your thoughts on this. Maybe this feature actually already exists in some other form and I'm not onto anything at all. Well I'd love to know if it does because it'd probably be better! Thanks for reading, I hope this has helped you in some way!
University and beyond, my approach to study and finding work after graduation
Swinburne University of Technology
Honestly, first year of uni I barely studied at all ...
... I don't regret it one bit.
First year was a time of huge personal growth, meeting new people, and really breaking out of my shell (you should know, this was immensely valuable career-wise). I went from being stuck indoors on my computer 24/7 to not touching my PC once in a week (though I really should've for study). I went to events, joined clubs, and volunteered. I signed up for Tae Kwon Do and became friends with an amazing bunch of people.
Taekwondo Grading Ceremony
In the second half of the year, I actually started to study. It's then that I found out that my decision to study computer science alongside game development was not in my best interest. What I found was that every unit felt separate from the others and I had no idea where things were headed. In hindsight this was probably something that would have been rectified by the final year, but I didn't want to wait to find out.
Second year, I dropped computer science. Immediately things felt better. You wouldn't believe how much more coherent the units felt.
On top of that, we finally were able to start making games:
We Will Live - 2017 university group project
We Will Live is a game about evacuating clueless beings from burning buildings. It's a bit rough around the edges, I will admit, but I'm proud of what we ended up with. I was responsible for all in-game art, FX, and lighting, as well as tuning Unity's post-processing stack to suit the game's needs. For a second year uni student, I'd say I did pretty well.
My approach to study has always been self-focussed. At uni, I massively reconfigured my study plan and did units out of order. I applied for multiple pre-requisite waivers just so I could do the units I thought would help me the most. I also took part in cross-institutional study, which was an ordeal, but I ended up learning a lot from the unit I picked up: a unit at the University of Melbourne about the impacts of deafness from a teaching perspective.
Otosclerosis visualisation (exaggerated)
If there's one thing I suggest you do if you're a student, it would be to take charge of your studies. Your uni won't teach you what you need to find a job; you have to do that yourself. Uni provides resources and connections. Other than a possibly decent structure to serve as a backbone to your own studies, uni won't provide you with anything else.
PAX Australia 2018. This was the year I exhibited at PAX. One of the unique opportunities provided by the Swinburne games degree is the chance to showcase at PAX. This was the real deal, we had one year to develop a game with October 26th serving as a hard deadline.
Halfway through the year, this is what we had come up with:
Sol Floreo alpha build (Wreath)
We had our core mechanics in the build. As the player, you control the sun, guiding a small plant to its goal with your beams of light. The game was something: it achieved our aim of being a relaxing puzzler, but we felt changes needed to be made. It was visually incoherent and much more could be done.
Behold! PAX build Sol Floreo in all its glory!
Sol Floreo PAX Trailer
We made a major shift away from the 2.5D aesthetic towards full 3D. Like in my second year project, I was responsible for modelling, animation, lighting, and FX. Additionally, I developed a system that allowed the developers to easily transition the game between day and night, as well as allowing the atmosphere to grow the more the player revived the world. I'm very proud of what I (and the rest of the team) achieved with this project. One major point pushed by the team's leadership was a no-crunch strategy. They did a great job of limiting the stress inherent in a major project such as this, and had us finished a week before the deadline. That gave us an opportunity to spend more time on other subjects and overall made life easier.
The project was a huge success!
Playtesters intuitively understood the game's mechanics and nearly everyone was impressed on some level by the visuals. We were also covered by game magazine Superjump.
After uni, I regrettably was a bit too relaxed about finding proper work. I felt self-conscious about my portfolio, as I knew that what was on there wasn't good enough, and there also just wasn't enough of it. I had barely anything. Over the next year, I worked on my portfolio and picked up a few quick gigs on the side. I developed augmented reality applications for RMIT as well as CG Futures, produced product renders for a brand concept, and kept up self-study, learning new skills I thought would be useful.
Constellation Australia - brand concept
Financial issues finally started kicking me in the groin and I pushed myself to get goin'. I spruced up my resume, built up a brand and a website using ArtStation, and started applying for jobs. To my surprise, an opportunity came my way, but from where I least expected it. While weathering a typhoon in an Airbnb in Japan, I got a message from someone I'd worked with back in my volunteering days. She told me there was an opportunity that might suit me and asked me to come along to a meeting in a couple of days. Being in Japan at the time and suffering through a typhoon, I thought it best to say yes! I said that I would come along so long as I wasn't killed by windy weather.
The meeting time was set for less than two hours after I was due to land back in Melbourne. As you might expect, I flew economy, and needless to say I was truly, utterly fucking tired beyond belief. I sat there in that meeting trying my very best to stay present. Luckily it wasn't boring; it was actually very exciting and engaging. Not only that, but I was invited back for an interview and got the job at Soundfirm, where I work now.
Where I am now
I've been at Soundfirm for nearly a year now and have absorbed an incredible amount of knowledge in that time. They've got me doing RnD for new workflows involving Unity. I'm in a very interesting and unique situation as they're a post-production studio and I'm the only game developer there. It means that I'm left relatively alone and have the freedom to come up with new techniques and workflows. I'm constantly researching ways I can bring my skillsets to the business while also picking up new skills along the way.
While at Soundfirm, I picked up skills in Houdini and I'll say right now it's bloody amazing. I can fully see myself sticking with Houdini for a large part of my career at least. As an artist with a technical way of looking at things, Houdini is my jam. It's the perfect combination of logical and artistic thinking.
First attempt passing data between solvers in Houdini
My future blog posts will definitely be shorter than this one and be more focussed on the interesting things I discover while working. As I progress in my career, so too will the type of content I choose to share. I hope to one day soon provide tutorials and resources to help you out if you need it. Thanks for following along, I hope this has been at least somewhat interesting. Feel free to shoot through any questions you might have and I'll for sure try to provide some kind of useful answer. Hopefully it's useful anyway 😶
UPDATE (12th of August, 2020):
I just wanted to add that I have omitted a lot of personal aspects of my journey. I went through serious financial and emotional trauma, lost a close family member, and got into my first relationship (been together a few years at this point, moved in together and still going strong!).
I don't want to pretend that everything has been perfect and I don't want to hide these aspects of my life. At the same time, a lot of it is very personal and I don't yet feel comfortable sharing that on the internet. Thank you for your understanding, can't wait to see what the future holds!
From lost, high-school aged gamer with little direction, to ambitious technical artist finally breaking into the industry; this is me.
Woolamai House, Cape Woolamai, Victoria
I went to high school in Wonthaggi, a regional country town known by some for its rich mining history. To me, it was depressing, with a culture that very much perpetuated tall poppy syndrome. Any time you stood out from the crowd, you'd be shut down immediately. I don't know what things are like there now but I assume it's the same but with more drugs. If I was to say one positive thing about the area, it would have to be the environment. The coastline is stunning, and that's where I found peace outside of gaming and reading.
In my final year of VCE, I really got cracking with my studies. My major project for media was star trail photography and I really got a lot out of it. One thing I really didn't expect was how close to nature it gets you. I found that I ended up internalising the phases of the moon and had a real firm understanding of the weather conditions and whether everything would line up, allowing me to get the shots I needed.
Rhyll Jetty, Rhyll, Victoria
On top of my star trail project, I also had a major project for a multimedia unit. Using Adobe Flash, my goal was to create a cross-platform game of some kind. The core requirements of the project were rooted in animation, so a fair bit of work and planning went into that. The intro was hand animated in flash and fully story-boarded in my handy notebook.
I wasn't satisfied with just meeting the core requirements though. I wanted to smash them. I had a plan to create a cross-platform game with a few mechanics I personally thought were interesting at the time. Hindsight is 20/20 though, and I can definitely see some shortcomings now, that's for sure!
The teacher for the unit was very helpful and took care of programming for most of the project (as he did for all the other students!). Understandably, he didn't have time to fully accommodate the requirements of the project, so I took up the reins in those instances and hacked through ActionScript on my own.
It ended up being very popular. It went so well that it got selected to be shown at the VCE Seasons of Excellence Exhibition at the Melbourne Museum. This was a great experience that ended up teaching me the value of doing some kind of playtesting, because every kid I saw playing it got incredibly frustrated and had no idea what was going on. I've embedded a gameplay video below, maybe you can see why?
Melon Runner (2016), gameplay video
Creating Melon Runner gave me a taste of something bigger. I knew I enjoyed making that game; maybe I'd enjoy it as a career? This pushed me down the path of becoming a game developer. I signed up for university and was formally accepted into a Bachelor of Games and Interactivity / Bachelor of Computer Science double degree at Swinburne University...
Part 2 coming next week, this was honestly a lot more content than I expected so I'm splitting it. Got work to do 😅 If you actually took the time to read through this, I really appreciate it. It's personal stuff, and I'm talking a lot about me which feels weird and uncomfortable but it feels good getting some of this out there. Hopefully you are finding my content interesting so far!
G'day folks, I'm Kkye, a VFX artist with social anxiety ready to push my boundaries and try new things. As such, I'm starting my first blog! I'm not mentally prepared for a roasting so be kind 😉.
Historically I've been a very closed off person and haven't really ever participated in communities. Two years ago, I made my first step and created social media accounts (mainly Twitter, and it's actually not been too bad! @kkye_hall if you're interested)
My first attempt at social media @kkye_hall
This blog is my way of opening up to the world and showing off what I got. Whereas my main ArtStation page and my website are focussed on "finished" or portfolio pieces, this blog will be more raw and showcase content such as:
Works in progress
Research and development
Things I find that I'm generally excited about (could be new technology or just breakthrough things I've recently learnt)
Anything else that I feel like honestly, hopefully you find it interesting!
I studied game development at Swinburne University of Technology in Melbourne. While there, I became familiar with Maya and Unity and dabbled a bit with Mudbox. After leaving, I really started to become intimate with the two packages. I realised that despite learning so much while studying, there is a mountain of knowledge waiting to be discovered.
In 2019, I got my first industry job after finishing my degree. I became a VFX artist at Soundfirm. While there, I doubled down on my quest to gain experience with Unity. Hopefully I can show off some of the amazing content I worked on while there but at this point, it's under NDA.
It's at Soundfirm where I got my first hands-on taste of Houdini, and I'm telling you it was delicious! Before I first noodled nodes in Houdini, I didn't really know where I fit in the industry. I know I love making art, but I'm technical by nature. This line of thinking also pulled me into the idea of shader development, but I'm yet to really pursue that (I can say though, it is one of those amazing things I've been working on at Soundfirm).
My plan for my upcoming posts is to introduce myself a bit, and show my journey to where I am now. As I am just starting out in the industry, it's very much a journey you can follow along with in real-time. You might be a student just starting your degree, a seasoned professional, or someone at the same stage as me. Regardless of the stage you're at, I hope you can find some level of entertainment from my content and maybe even learn something along the way!