Debrief: Assholes, meat, and shit ideas (also Aussie internet sucks)

General / 17 February 2021

Well that was interesting! I just finished up working on the first feature film of my career and... it was what some might call "interesting" and what others might call "normal" 😅 . I can't get into the details unfortunately but let's just say that there's a reason my last blog post was in November and it's not just because I'm slack!

PSA

With that out of the way, (believe it or not) I'm not writing this just to gain invisible sympathy from the one or two of you who are reading. This post is a rundown of some technical oddities I stumbled upon over the course of the project. I've seen a distinct lack of people talking about this stuff online, and unfortunately it seems to come down to one of three things:

  1. First, NDAs. A lot of what goes on during a project is unfortunately confidential and can't be shared. There just isn't a way around it.
  2. Secondly, people are busy. It's hard to find time to interact with the community when you're already working overtime and crunching (not condoning this, by the way).
  3. Finally, some people don't want to share their secrets because of fears that someone will steal their job. This one's actually simple and there's already a well known name for people like this 👉 

a s s h o l e s ™

Seriously people, go fuck yourself with that smug ass shit. If giving away your "secret" means that your job is at risk of being taken, maybe you're just not good enough to do your job? Get better.

Meat

That project was a learning experience, to say the least. We went into it thinking we would use Embergen for the fire effects, and for a bit it was alright. However, there was one main drawback: the way Embergen (at the time) handled exporting made it very difficult to get things lined up properly in other software. Maybe there was a way around it, but Houdini got a major update and we just thought "fuck it, let's do that, it looks better" (we're small, so we can afford to be a bit more agile).

GPU minimal solver

What I found was that the GPU minimal solver, though designed for lookdev, is actually capable of outputting production-quality results given appropriate hardware and proper constraints. As with simulating on the GPU using the standard OpenCL method, the biggest limiting factor when using the minimal solver is GPU memory. Luckily the cards we were using at the time (an RTX 6000 and a P6000) had decent enough memory for most use cases; however, I still hit limits with large sims.

The issues with the minimal solver didn't end there. There was also a problem where any setting for the start frame other than the default would flat out not work. I worked around this using a Time Shift node, which was a sad solution in all honesty. It worked, and luckily the problem has since been fixed in a later update.

Another problem was with wind. I soon learned that a lot of the pyro forces and nodes don't work with OpenCL. By "a lot" I mean basically all of them, and that includes wind. I worked around that by (don't cringe) changing the direction of gravity. It was good enough for those particular use cases, but it definitely wasn't accurate. The reason is that although changing the direction of gravity does push the fire and smoke in a certain direction, it does so by affecting the buoyancy. Real wind doesn't do that. The correct behaviour is that hot air and smoke rise, and as they cool down the effect of the wind becomes more dominant.

Are you ready for the solution? Two words: "wind" and "tunnel". If you dive into the pyro solver and then dive in one more layer, you should find the smoke object. One of the parameters on this node is called "wind tunnel", and all it does is add a wind force along whatever vector you specify. We did this, and it was easy. There's also another solution that we found but didn't use: create a new velocity field wherever you want it and source it in. It acts the same way, but you get more control (at the cost of a more complicated setup).
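If you'd rather set it programmatically than dig through the solver every time, something along these lines works. This is only a sketch: the node path is made up and the parameter's internal name is an assumption (it's the wind tunnel parameter mentioned above), so check it against your own network.

import hou

# Sketch only: the path below is hypothetical and the parm name is an assumption.
# Hover over the wind tunnel label in the UI to confirm the real internal name.
smoke_obj = hou.node("/obj/fire_sim/dopnet/pyrosolver1/smokeobject1")
if smoke_obj is not None:
    wind = smoke_obj.parmTuple("windtunnel")
    if wind is not None:
        wind.set((2.0, 0.0, 0.0))  # push the sim along +X without touching buoyancy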

PilotPDG

When it comes to this kind of work, my goal is to keep my machine's resource utilisation at 100% at all times. That means that if one GPU is being used and the other is idle, something is wrong (same goes for CPU). This thing is made to be used and damn it I'm going to use it! That's where PilotPDG comes into play.

The way I organised the project was that each shot got its own project folder and file, with $JOB set to a shared directory of assets used across all of the .hip files.
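For what it's worth, here's a minimal sketch of that setup (the paths are placeholders): point $JOB at the shared directory, then reference shared assets through it in every shot file.

import hou

# Placeholder path -- the shared asset directory used by every shot's .hip file.
hou.putenv("JOB", "/mnt/projects/feature/shared")

# Anything referenced as $JOB/... now resolves to the shared location,
# regardless of which shot file is open.
print(hou.expandString("$JOB/assets/hero_fire/fire_v003.vdb"))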

Here's where PilotPDG became very useful:

  1. PilotPDG supports cooking nodes from external .hip files, meaning that with one graph you can hook up dependencies and queue up all of your simulations and renders at once.
  2. PilotPDG is lightweight compared to Houdini, meaning more VRAM can go toward actually cooking the nodes.
  3. You're separating cooking from developing. PilotPDG starts a separate process for each job, whereas Houdini cooks in-process.

Those are the benefits I found; here's how they helped us:

  1. Using environment edit and Python nodes, you can tell specific branches of your node tree to use specific hardware devices (there's a rough sketch of this after the list). There were times when I had multiple simulations that needed simulating and renders that needed rendering. As my machine had two GPUs, I created two branches for simulating: one on the RTX 6000 and the other on the P6000. Then, when it came to rendering, the branches combined to render with Redshift on both cards simultaneously. There was also the option of keeping the branches separate for the render too, but that's a bit more taxing on other parts of the system. For some machines that would be the preferred option, and to be clear, I did go that route sometimes.
  2. Normally with Redshift you need to restart Houdini if you want to swap the render device. In PilotPDG, that's not the case. Because it starts a new process each time you run a job (by default; it's not actually a requirement to work that way), it's effectively the same as a restart as far as Redshift is concerned. For me that meant I could keep an instance of PilotPDG running and use it to render my current working file, swapping cards whenever it became necessary.
  3. Rendering with PilotPDG also meant that if the render crashed for any reason, the worst it could do was take down PilotPDG. Houdini, and by extension my working project, would be left unscathed.
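To give a rough idea of point 1: conceptually, the environment edit just launches each branch's work items with a different environment. This is a sketch under the assumption that HOUDINI_OCL_DEVICETYPE and HOUDINI_OCL_DEVICENUMBER are the variables your Houdini build uses to pick the OpenCL device (check the docs for your version); the Redshift render branches work the same way, just with Redshift's own device-selection setting instead.

import os

# Conceptual sketch: each branch's work items get launched with a different
# environment. The variable names are assumptions -- verify them for your build.
def env_for_gpu(gpu_index):
    env = os.environ.copy()
    env["HOUDINI_OCL_DEVICETYPE"] = "GPU"
    env["HOUDINI_OCL_DEVICENUMBER"] = str(gpu_index)
    return env

rtx6000_env = env_for_gpu(0)  # sim branch A
p6000_env = env_for_gpu(1)    # sim branch B; device order depends on the machine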

PilotPDG wasn't without problems, however. It was a lot more buggy than Houdini, and I found that swapping between multiple cooking graphs or generally just interacting with the UI had a non-zero chance of blowing it up. It also doesn't really feel like its own program; it's really just Houdini with most of its features taken away. In practice that clearly wasn't an issue, but it felt off. Also, opening a new network window would default to the OBJ context despite that context not actually existing in the program.

Cloud Rendering

We soon came to the realisation that our render power wasn't quite enough for what we wanted to do, and that cloud rendering was the best way forward. We would've been right, too, except that in hindsight anything internet-related in Australia rarely goes to plan.

We found a good (and cheap) cloud rendering platform based in Vietnam. They had weirdly good support (always checking in on us using WhatsApp) and most importantly cloud rendering servers set up with 6 x RTX 3090 GPUs!

We thought we'd hit the jackpot, and a quick test render proved that these servers were godly. But here's the issue: even though we could finish rendering all of the shots we wanted in less than 3 hours, uploading and downloading was a killer. We had 200 GB of cache files to upload, and I shit you not, it took multiple days. Not to mention the time it took to download the finished renders! In the end it was still faster than rendering locally (mainly because we were rendering other shots locally at the same time).

It was a taste of what could be, and it left us wishing for a service like that based in Australia. If there were one, we might even be able to work directly off the cloud machines and forget about our local machines altogether!

In saying that, I've recently found out there might actually be similar services in the country after all. I think I've heard Digistor do something similar. If you know of any others, please let me know!

What I would do differently

The way I used PilotPDG was cool and all, but it was high maintenance and error-prone. I was always finding that I'd accidentally rendered the wrong file or frame range, or made some other silly mistake, all because the system was too complicated and not automated enough.

I've started looking into Deadline as an alternative to PilotPDG. Even on a single machine, I think it could be useful as it solves most of the problems I was trying to solve with PilotPDG while being simpler in practice. Another benefit of Deadline is that it's scalable, so if we need more render power we just add a couple of licences and bring more nodes online. This could also negate the need to go to cloud rendering.

What's up next?

Next steps for me are Unreal virtual production and learning Blender! I'm going to pick up Blender as a replacement for Maya in my workflow. Maya is too expensive considering I rarely use it anymore, and it's less versatile than Blender. The main benefit Maya has over Blender in my eyes is its animation tools, and I don't really do that anymore. If I wanted to, I'd check out Houdini, since SideFX have recently started adding some interesting animation tools to the package.

As a bit of a sneak peek into what I'll be posting next, over the past couple of weeks I've been working on a janky virtual production setup using HTC Vive trackers and nDisplay. What I have working currently is two computers networked together. One machine provides tracking data, two outputs, the multi-user server, and VR scouting. The other machine is a render node, purely there to run four more outputs. I'm happy to say it's all working, and I'll provide some details next time I post!


Houdini - git versioning .hip and HDAs

General / 05 November 2020

This week I tried something different. I used git version control to keep track of changes to my .hip files.

Imagine being able to go back to some obscure version of your .hip file from way back before you made that terrible change. With git, you can! It's better than saving multiple versions of your file because it lets you see a graphical history of your changes (as long as you have a GUI installed). Not only that, but every change gets committed with a (hopefully) meaningful description that you write so you don't forget what you did.

What is git?

If you don't know what git is, it's a tool mainly used in software and game development to keep track of changes in a project. It can handle edits from multiple users and provides functionality for rolling back to previous versions if you make a mistake. It doesn't duplicate the whole project every time you "commit" a change; it only stores the things that actually changed, which keeps it fairly lightweight.

This really only works well for files that aren't stored in a binary format, which means that by default .hip files aren't compatible. Good thing you can save them in text format!

How do I use it?

The way I have things set up now means I'm using git alone, and it's still useful! Just today it came in real handy when iterating on a scene with my supervisor. We got up to version 6 of a render when he said, "hey, version 4 is better". I agreed; our changes had ruined it! Luckily it was easy enough to go back in time like nothing had happened. From there, we created a new branch so as not to lose the history of our other work, and everything worked perfectly!

Source: https://www.atlassian.com/git/tutorials/using-branches

The image above is a good visualisation of what I'm talking about. You could even make a new branch just to try something out without risking damage to your work. A very common thing to do in the development world, actually.

Give it a go! You might like it! Just try it out on test files first though, it is possible to incinerate your work if you're careless.

Houdini - Instancing Redshift proxies

General / 24 October 2020

Going along with the theme of abusing my machine's GPU, I've moved on to Redshift. Combined with Embergen (and now Houdini's minimal solver), my GPU is now officially the heavy hitter in my machine. Gotta find some work for the CPU to do! Initially I was worried that 24 GB of graphics memory wouldn't be enough for heavy scenes, but it seems Redshift can smash through it no worries (with the correct scene setup). Here's what I've found so far:

Redshift is fast

Holy fuck was this a surprise. Coming from Mantra and Arnold, I was honestly shocked. A couple of weeks ago, I used Arnold to render out a single frame of a simulation done in Embergen, and it took 3 hours. Not only that, but it was still noisy! That was with Arnold GPU; Arnold CPU took around 30 minutes for the same noise level. When I set the same scene up in Redshift, the entire render was done in less than 20 seconds with NO noise! Redshift and Embergen together mean a full simulation can be designed from scratch, simulated, and rendered in less than an hour.

Redshift's documentation sucks

Ok, it's not THAT bad, but coming from Maya and Arnold it's seriously lacking. I miss having example files and images on every page of documentation. I didn't realise how much I took for granted the ability to just look up a menu item and have documentation pop up instantly. In fact, a lot of the time there wouldn't be any specific documentation at all. One other thing, which they can't really be faulted for: whenever you try to Google for Redshift help, the search results are overrun by Amazon Redshift and astronomical phenomena!

Instancing large amounts of geometry

When I first tried out Redshift, one of the first things I tested was how it handled instanced geometry. There was no guarantee it'd work well but, you guessed it, it did. I found that Redshift is compatible with native Houdini instancing, which was great at first; it made it pretty easy to just get going without much thinking. Using this method, I was able to get my scene up to 20 billion polygons without either Houdini or Redshift caring in any real way. The only issue was a large reduction in interactivity in the Redshift viewport. Luckily, this was easily solved using Redshift proxies.

Redshift proxies

One thing I found when trying to get Redshift proxies working in Houdini was that it's pretty easy to get them working. It wasn't quite as easy to get them working instanced, though. You can't just load them in with a File SOP, and if you use the visualise proxy node it just brings them in as regular geometry, negating any real benefit. The way I found that works is the instancefile attribute. To be fair, this is mentioned in the documentation; they just neglected to mention the best way to actually use the attribute. In hindsight it seems to be the same way you'd use it in Houdini normally, but I'd never done it before, so? Anyway, here's how it works:

  1. Save out the proxy files you want instanced. Make sure you're saving .rs files and that their filenames are numbered in sequence; it'll make them easier to load in later on. I like to pad the numbers to 4 digits, just because why not?
  2. Slap down an attribute create on the points you want to instance to.
  3. Create a string attribute called "instancefile".
  4. Add the path to the files you saved out, including the filename and extension.
  5. Replace the sequence part of the filename with an expression that gives you a random number padded to the same number of digits as the filenames. Assuming you have 3 .rs files to load and you padded them with 4 digits, here is an example expression:
padzero(4, floor(fit01(rand($PT), 1, 4)))

I haven't actually verified the above expression because I'm away from my work machine at the moment (also why there's no screenshots), but it should get you going in the right direction.
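If expression syntax isn't your thing, the same attribute can also be written from a Python SOP. This is a rough sketch under the same assumptions (three proxies, 4-digit padding) with a made-up path, not the exact setup from the project:

# Python SOP: randomly assign one of three Redshift proxy files to each point.
import random

node = hou.pwd()
geo = node.geometry()

attrib = geo.findPointAttrib("instancefile")
if attrib is None:
    attrib = geo.addAttrib(hou.attribType.Point, "instancefile", "")

proxy_dir = hou.expandString("$HIP") + "/proxies"  # placeholder location
num_proxies = 3

for point in geo.points():
    random.seed(point.number())           # deterministic per point
    idx = random.randint(1, num_proxies)  # 1..3, matching the file sequence
    point.setAttribValue(attrib, "%s/myproxy_%04d.rs" % (proxy_dir, idx))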

After that, you just need to verify the Redshift instancing settings are correct at the object level and it should work fine when you render! The only issue here is that you can't visualise it in the viewport. Except you can: by separating out the render flag at the SOP level, you can make one node branch just for viewport visualisation and another for rendering. The way I do it is to put the render flag on the points I want to instance to, and put the viewport flag on an instance-to-points setup where the geometry being loaded in is a viewport-compatible version of the Redshift proxy. Depending on the setup, the two might not match exactly. For example, unless you've used the same attribute for both branches, the instances you see at render time might not be the same ones you see in the viewport.

Conclusion

Sorry for the severe lack of imagery this time around; I had no access to my work machine and my MacBook is currently incompatible with Redshift. Seems like that won't be true for long though, with Metal support coming to Redshift in the near future! I hope this has been at least a bit helpful. It's fairly basic stuff, but it could help someone, I don't know? Catch you next time!


Embergen by JangaFX - First thoughts

General / 12 October 2020

After a few months getting familiar with Houdini, I came across an interesting program called EmberGen. A couple of days later I'm using it in production. Here are my thoughts:

Background

Embergen example renders (https://jangafx.com/software/embergen/)

Here's the rundown if you're feelin a bit too lazy to check out the website. 

EmberGen enables rapid iteration of volumetric fluid simulations by running on the GPU.

There's nothing much else to it really, except it runs in real-time...

Thoughts

Honestly speaking, I was sceptical at first. Who can honestly believe that there's a program out there that can do what Houdini does but instantly?

Here's the thing, it kind of does what it says on the box. I'll get to the cons in a bit, but here are some of the benefits I've noticed after a couple of weeks with the program.

Pros

  1. Iteration. The fact that you can slide sliders around and get instant feedback is amazing. Not only that, but the sliders are a bit more intuitive than Houdini's (in my subjective opinion).
  2. Game dev tools appear great. I haven't touched them, but there seems to be a few decent options for the game dev side of things. You can export sprite sheets with multiple passes including: scattering, emission, depth, and normals. Simulations can also be very easily looped.
  3. Damn, can't think of any more benefits sorry 😐 

Cons

So I've mentioned what I reckon is pretty sweet about EmberGen; time to shit on it real quick:

  1. Is it just me or is it very difficult to set scene scale and match things up in external software? God damn, I thought this would be easy, but nope. EmberGen not only imports at weird scales, it also exports at weird scales. There's no difference between increasing the resolution of the volume and upscaling it; both options just scale the volume up. There's no option to set pivot points either, so when you export to other programs you have to go through a process of manually aligning the volume in both translation and scale.
  2. No sparse solving (yet)
  3. No undo? It's a funny one but yeah, a very useful feature to just not have yet?
  4. Limited support for animation and camera imports.
  5. I could go on but it's beta software so maybe not?

Conclusion

EmberGen is not production ready by a long shot. I say this, yet I am using it in production 🤨? Well, I'll tell you why. For certain situations, it is honestly a heck of a lot faster to use EmberGen than Houdini. From a film point of view, it comes in real handy when you have a small number of relatively low-complexity simulations to pump out. Depending on your GPU power you can ramp this up to higher complexity, and that's where it starts to get really powerful. As an example, I run a Quadro RTX 6000 with 24 GB of VRAM and damn, I certainly max it out, but that's almost enough!

EmberGen definitely has its place in the small setup I've developed over the past week. With the help of Houdini and Redshift, I've been able to get a system going where all I need to do is spend my day making different simulations and let the computer automatically render them all out overnight. EmberGen smashes out sims, Redshift smashes out renders, and Houdini glues everything together.

Where EmberGen falls apart, however, is when you need a LOT of different simulations or you need massive complexity. Houdini is better in those cases because you can set it up to automate thousands of different variations. In the case of high-complexity sims, well, Houdini is more accurate, plus you have access to system memory.

A lot of what I mentioned is addressed on the EmberGen roadmap. Apparently they plan to introduce sparse solving, which has the potential to really alleviate memory-related pain, not to mention introduce performance-related gains. Seems like undo is a planned feature too!

On the Houdini side of things, we have 18.5 coming soon and rumour has it they plan to introduce some EmberGen-like features. What a time to be in this industry! Catch ya's next time!

Machine Learning - First Experience

General / 28 September 2020

So I tried machine learning for the first time. I thought, "Hey! This is a cool new tech, let's apply it to my own work!". Well as you can imagine, it ain't so simple...

TecoGAN example

TecoGAN

I stumbled upon TecoGAN and decided that it'd be damn cool to get it up and running and maybe use it on a few sample shots.

Spoiler:

I failed hard and didn't end up getting anything to work.

So what happened? After a week of just trying to get things up and running, I discovered that Windows isn't really the best platform for machine learning. Even though it's supported by most major libraries, it seems that for one reason or another things are easier on Linux. As an example, TecoGAN requires TensorFlow to work. TensorFlow can be set up to run on Windows no worries; however, on Linux things are easier because there's a Docker container available to get things going straight away with hardly any hassle. Docker is available on Windows too, but the TensorFlow container isn't compatible with the platform.

Here's the real kick in the nuts though: after getting everything set up and sorting out a few issues relating to company proxy settings, I realised that TecoGAN was written for Linux and only Linux (maybe I'm wrong, let me know if I am). The first thing that tipped me off was that every Python reference was to python3. python3 isn't a valid command on Windows; you just use python. The second thing was the dependency on wget in the source code, despite it not being listed in the requirements anywhere. No worries, just download it, right? Yeah, that worked, but more issues kept popping up until it felt like I was rewriting the source code entirely to suit Windows.

Virtual Machine

Since I wasn't about to reconfigure my main work system for dual-boot, I decided to install Linux into a VM. I heard Ubuntu was the most supported distro for data science, so I went with that. No idea why, but it literally would not boot in VirtualBox? Sure, let's try Hyper-V! Wow, buttery smooth! Cool as! Very nice, except there was no access to CUDA for some reason. I abandoned that and went for VMware Workstation. It wouldn't run because of Hyper-V. Yeah, cool man, let's just disable that... AND it still doesn't work because of Hyper-V. Apparently updating Windows was the solution because, well, it was, and VMware worked. Very smoothly, might I add. The next problem was that CUDA still wasn't available. I looked into it and realised that GPU access in a VM is actually a big topic and not easy, especially with a Windows host; it's only really feasible on Linux.

Time to give up. With more time (and Linux) this could've worked out. I could've at least played around with machine learning on the CPU but with an RTX card in the system, it felt like a waste of time.


I hope my experience with machine learning has been as interesting for you as it was for me. It definitely won't be my last time playing around with it, so expect to hear more in the future! See you next time!

Driving Pyro with Audio using CHOPs - Houdini

General / 14 September 2020


Pyro driven by Audio

Learning CHOPs through experimentation

I've been learning Houdini since the start of the year and have experimented with each context except for CHOPs, until now.

I've seen the different uses for it but what stood out to me the most was audio. I thought, what better way to learn how to use it than to try to incorporate it with something I already know pretty well? Pyro! So I had a look online and found... 

Nothing.


That's right, nothing. I don't know if my Googling skills are limited or what, but I thought fuck it, I'll give this a crack. As it turned out, it was easier than I thought and probably not the best challenge for learning CHOPs, as it barely used them at all. Here's the network:

CHOP network for importing Audio

Most of the complexity there is just filtering the audio to get the desired effect. I mainly wanted to separate the high frequencies from the low and isolate spikes for more punchiness in the simulation. I used the different channels to drive two aspects of the simulation. One was the temperature, driven mainly by the high frequencies (it's the "create_density_newclip1" node, don't @ me, I know the node organisation is shite), and the other was a pump affecting the velocity of the sim. The pump was driven by the low-frequency spikes.


Network defining the pump behaviour of a pyro simulation

In order to manipulate the velocity, I chose to create a volume to source into the simulation. I set it up so that I could have a rolloff effect, whereby the main force of the audio input would be wherever I wanted it to be and it would smooth out/roll off from there. Effectively, I just made a circle and extruded it for the main area, then transformed that to use as the high-intensity point for the velocity.

The CHOP network directly modifies the fan force parameter on the PARM node, which is multiplied onto the values I initialised the volume with. This is done in the volume VOP after the volume is rasterised, for performance reasons; otherwise the volume would be rasterised every frame, which kind of sucks if it's not necessary.
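If you want the parameter link without relying on export flags, the usual alternative is a chop() expression on the parameter itself. A minimal sketch with made-up paths (swap in your own parameter and channel):

import hou

# Made-up paths: point the parameter at a CHOP channel instead of exporting.
# chop() samples the channel at the current frame, so the parameter follows
# the audio-driven curve.
fan_force = hou.parm("/obj/audio_pyro/source_vol/volumevop1/fan_force/floatdef")
if fan_force is not None:
    fan_force.setExpression('chop("/obj/audio_pyro/chopnet1/OUT/low_spikes")',
                            language=hou.exprLanguage.Hscript)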

The final step was rendering. I actually thought the viewport preview looked alright, and because I wasn't spending much time on this, I decided to use the OpenGL renderer for the video. There's no motion blur and it definitely looks worse than if I'd used Mantra, but it's not awful.

Overall, this was a whole lot easier than I expected so I guess I didn't exactly achieve my goal, but it was a whole lot of fun to be playing with audio for a change! It would be very interesting to see what else could be done with audio in Houdini. FLIP? Destruction? Maybe automated lip syncing for character animation?


  

Maya to Houdini, Quick Tip - Object context VS Geometry context [BEGINNER]

Tutorial / 31 August 2020

SOPs, DOPs, LOPs?

From a Maya user's perspective, Houdini can be horrifying, and one of the things that trips up a lot of people when they're just starting out is the multiple contexts within Houdini. Not only do people have trouble differentiating between the contexts, they also might not know what a context is. In this quick tip, I aim to explain the difference between the object context and the geometry context (also known as SOPs).

Note: this assumes a basic understanding of Maya.

Setup

To showcase the difference between the contexts, I've built identical scenes in Maya and Houdini. There are two animated cubes, both moved 4 units along the X axis over a period of 24 frames. The blue one is moved in the geometry context, and the red one is moved in the object context. How does this work in Maya? I'll explain...

Object Context

In the images below you'll see the red cube selected in both Maya and Houdini. In Maya, I selected the cube, keyframed it at position 0 on frame 1, and keyframed it at position 4 on frame 24. You should be able to see this represented in the transform node of the object. In Houdini, this is equivalent to keyframing the translate attribute on the object node in the object context.

Frame 1, 0 on the X-axis


Frame 24, 4 on the X-axis

Geometry Context

As for the blue cube, that was animated a bit awkwardly in Maya, to be honest. In the images below, instead of selecting the object, I highlighted all the faces and keyframed those instead. The end result is that the geometry animates yet the pivot point stays in place. It's also slightly more demanding on the computer, as it's moving each piece of geometry rather than the whole thing at once. Not typically something you'd want to do, right? Well, that's exactly what happens when you move an object in the geometry context in Houdini...


Frame 1, 0 on the X-axis


Frame 24, 4 on the X-axis (note pivot hasn't moved)


Check out how this was done in both programs. In Maya, you select all the faces and move them. In Houdini, you go into the geometry context and place a transform node. Both methods have the same effect, and that is reduced performance compared to moving things at the object level.

Frame 16, moving at the geometry level

TL;DR

Moving geometry at the object level in Houdini is equivalent to moving an object using its transform in Maya.

Moving geometry at the geometry level in Houdini is equivalent to selecting all the faces and moving them in Maya.
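If it helps to see the same comparison outside the viewport, here's a quick Python sketch (node names are just examples):

import hou

k_start = hou.Keyframe(0.0, hou.frameToTime(1))   # X = 0 on frame 1
k_end = hou.Keyframe(4.0, hou.frameToTime(24))    # X = 4 on frame 24

# Object context: keyframe the geo node's own transform (like keying Maya's transform node).
red = hou.node("/obj").createNode("geo", "red_cube")
red.createNode("box")
red.parm("tx").setKeyframes([k_start, k_end])

# Geometry context: keyframe a Transform SOP inside the node
# (like selecting every face in Maya and keying those).
blue = hou.node("/obj").createNode("geo", "blue_cube")
box = blue.createNode("box")
xform = blue.createNode("xform")
xform.setFirstInput(box)
xform.parm("tx").setKeyframes([k_start, k_end])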

EDIT:

A kind Reddit user has enlightened me to a very important point. What I've talked about here is how the programs work, not necessarily how YOU should work. While in Maya selecting all the faces, moving them, and keyframing them is a weird and terrible idea, Houdini is different. There, it's actually best practice to do your transforms at the geometry level rather than the object level, as it avoids confusion down the line.


I hope I was able to explain things clearly enough; this sort of stuff is notorious for being loaded with industry jargon, and Houdini doesn't exactly make things easier in that regard. In saying that, just push through and everything will be alright. I started using Houdini 8 months before writing this post and was a Maya user for 4 years before that. It's really not a lot of time, as long as you keep going at it!

Houdini PDG, Automated simulation and Rendering

General / 25 August 2020

One click sim to mp4

G'day, how're you goin'? Really gotta say you oughtta check out this PDG business in Houdini. I've got a nice setup going here that allows me to sim my whole scene and get a render with barely any effort. There are multiple ROP Geometry nodes that need to be cooked in a specific sequence; without PDG, I'd be going to them individually and making sure they're rendered out and updated. Thank god for automation!


Top left in the image below caches the static collision geometry, while top right caches deforming collision geo.

Partitioning logic

The next segment handles some of the dependency logic; the end result is that everything before it has to complete before anything after it can start. "Wait for all" ensures everything above is done before anything below can begin, and "partition by index" combined with the "filter by range" above ensures everything is matched up properly for each frame. I've used "attribute create" combined with "sort" and "map by index" to reindex the partitions as well as convert them to work items. I needed to reindex so the indices started at zero rather than one thousand; that was a side effect of the "partition by frame" upstream and would have caused issues downstream if I'd left it as is.

Simulation

The next step was the simulation. This is a rain sim, and for multiple reasons I wanted to cache the main droplets, splashes, and running water separately. The simulation needed to run first, but the other steps could be done in parallel. The image below shows my setup for fetching the ROP node and signalling when the simulation completes. The "OP Notify" node points to the "File" node and tells it to refresh when a newly completed frame is ready. This ensures it's not using outdated information later on in the chain, and also updates everything in the viewport immediately.


Render

After all that, it's as simple as attaching a "ROP Mantra" node, linking that to a compositing node for post-processing, and compiling everything into a video with ffmpeg at the end! (Don't forget the "wait for all".)

Conclusion

PDG really is a useful tool for any Houdini artist to get the hang of; it can do much more than I've shown here. For example, you can use it to automate a connection between Houdini, Maya, and Nuke, plus anything else that supports Python. I wrote this up mainly because the documentation for PDG is a bit sparse, so I reckon the more resources the better. I know I didn't go into amazing detail, so if you need me to explain anything, just reach out. Thank you for reading, and catch you next time!

Maya Geometry Variants / Subdivision Levels

General / 21 August 2020

Geometry Variants and Subdivision Levels in Maya

Have you ever sculpted in Maya and wished you had subdivision levels like you'll find in Mudbox or ZBrush?

Maybe you wanted to have multiple instances of a piece of geometry that can be individually edited? (while retaining the link to the original)

Or just maybe, you were in a situation like me where you wanted a higher detailed and bevelled version of your mesh that can automatically respond to changes made to the original geometry?

Behold! Exactly those things...

Geometry copies take on upstream changes automatically

Working with Houdini for most of the year has changed the way I approach modelling problems; that's how I found this. I was modelling something in Maya for work and found myself jumping into the node editor regularly. I then stumbled upon this trick while trying to find a solution to an issue with the lattice deformer.

In short, you just use the output geometry of the first shape node to drive the input of the second. That's really it, nothing else to it. Here's a short tutorial to illustrate the point, a few ideas on how this could be used, along with a couple of gotchas / things to look out for:

Tutorial

Step 1

Grab a mesh to serve as the base for the copies, then add any primitive from the shelf. Take a cube to be safe.

Step 2

Open the node editor, and (while ensuring both meshes are selected) click the button marked with the arrow below.



Step 3

Connect the mesh output from the first node to the in mesh input of the second. At this point the setup is complete!
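If you'd rather script it than drag the connection in the node editor, the equivalent (with example shape names) is a one-liner:

from maya import cmds

# Example names -- substitute your own shape nodes. This makes the same
# connection as the node editor step above: the first shape's output mesh
# drives the second shape's input mesh.
cmds.connectAttr("baseMeshShape.outMesh", "copyMeshShape.inMesh", force=True)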



Show off

Check it out! If you make a change to the first mesh, it'll be reflected in the second. Kind of reminds me of these things called instances 🤔

Well, they're not the same thing I swear it. Here's proof:

Changes flow downstream

See? The second one can be changed independently of the first! You can even alter the geometry of the second one any way you wish. Here I've applied a subdivision. See how it responds dynamically to the changes in the first object?

Destructive edits

Subdividing a mesh adds a node into the graph as you might be able to see. This means that it'll respond properly to any change in the first, even deleting or adding geometry. What happens if you do something like sculpting the mesh?

Oh wow! Look at that, it still reacts as expected. From what I can tell, any change on the first one that doesn't involve adding or destroying geometry should work fine. As soon as you do add or destroy geo, it breaks any destructive geometry changes downstream. This is familiar behaviour if you do any modelling in Houdini.

Non-destructive changes

As long as your changes are non-destructive things should go smoothly. Nodes should update automatically and generally just work.

Bevel node works as you'd expect

Advanced

Deeper

This can go multiple levels deep, check this out:

Wider

Not only can your onion be deep, but it can also be wide with branches!

Caveats

This technique isn't without its problems. It can be kind of annoying to mess around with the nodes in Maya; especially after getting used to a proper node-based program, it's really just not very nice.

You also gotta be pretty careful with how you work with nodes that have dependents; some changes, such as deleting or adding geometry, can have unintended adverse effects downstream.

Don't just duplicate the object and connect the duplicate, it doesn't necessarily work as you'd expect every time. Just connect a cube (or whatever works).

Using this technique as subdivision levels similar to ZBrush or Mudbox can work from what I've seen; however, the connection is only one-directional. Your changes to the high-resolution geometry won't change the silhouette of the low-resolution geometry. Also, it's very important not to add or remove geo upstream. All your work will be destroyed. I mean probably; you might be able to undo, but be careful. (You should also have a backup, like usual. Don't blame me if everything burns to ashes.)

Honestly, I haven't researched it too much. For all I know, this could be a terrible idea but it seems to work fine for my purposes so on the toolbelt it goes.

I'd love to hear your thoughts on this. Maybe this feature actually already exists in some other form and I'm not onto anything at all. Well I'd love to know if it does because it'd probably be better! Thanks for reading, I hope this has helped you in some way!

From there to here: Part 2

General / 10 August 2020

Part 2

University and beyond, my approach to study and finding work after graduation

Education

Swinburne University of Technology

First year

Honestly, first year of uni I barely studied at all ...

... I don't regret it one bit.

First year was a time of huge personal growth, meeting new people, and really breaking out of my shell (you should know, this was immensely valuable career-wise). I went from being stuck indoors on my computer 24/7 to not touching my PC once in a week (though I really should've for study). I went to events, joined clubs, and volunteered. I signed up for Tae Kwon Do and became friends with an amazing bunch of people.

Taekwondo Grading Ceremony

In the second half of the year, I actually started to study. It's then that I found out that my decision to study computer science alongside game development was not in my best interest. Every unit felt separate from the others and I had no idea where things were headed. In hindsight this would probably have been rectified by the final year, but I didn't want to wait to find out.

Second Year

Second year, I dropped computer science. Immediately things felt better. You wouldn't believe how much more coherent the units felt. 

On top of that, we finally were able to start making games:

We Will Live - 2017 university group project

We Will Live is a game about evacuating clueless beings from burning buildings. It's a bit rough around the edges, I will admit, but I'm proud of what we ended up with. I was responsible for all in-game art, FX, and lighting, as well as tuning Unity's post-processing stack to suit the game's needs. For a second-year uni student, I'd say I did pretty well.

My approach to study has always been self-focused. At uni, I massively reconfigured my study plan and did units out of order. I applied for multiple prerequisite waivers just so I could do the units I thought would help me the most. I also took part in cross-institutional study, which was an ordeal, but I ended up learning a lot from the unit I picked up: a unit at the University of Melbourne about the impacts of deafness from a teaching perspective.

Otosclerosis visualisation (exaggerated)

If there's one thing that I suggest you do if you're a student, it would be to take charge of your studies. Your uni won't teach you what you need to find a job, you have to do that yourself. Uni provides resources and connections. Other than a possibly decent structure to serve as a backbone to your own studies, uni won't provide you with anything else.

Third Year

PAX Australia 2018. This was the year I exhibited at PAX. One of the unique opportunities provided by the Swinburne games degree is the chance to showcase at PAX. This was the real deal, we had one year to develop a game with October 26th serving as a hard deadline.

Halfway through the year, this is what we had come up with:

Sol Floreo alpha build (Wreath)

We had our core mechanics in the build. As the player, you control the sun, guiding a small plant with your beams of light to its goal. The game was something: it had achieved our aim of being a relaxing puzzler, but we felt changes needed to be made. It was visually incoherent and much more could be done.

Behold! PAX build Sol Floreo in all its glory!

Sol Floreo PAX Trailer

We made a major shift away from the 2.5D aesthetic towards full 3D. Like in my second-year project, I was responsible for modelling, animation, lighting, and FX. Additionally, I developed a system that allowed the developers to easily transition the game between day and night, as well as allowing the atmosphere to grow the more the player revived the world. I'm very proud of what I (and the rest of the team) achieved with this project. One major point pushed by the team's leadership was a no-crunch strategy. They did a great job of limiting the stress inherent in a major project such as this, and ended up wrapping us up a week before the deadline. It gave us an opportunity to spend more time on other subjects and overall made life easier.

The project was a huge success!

Playtesters intuitively understood the game's mechanics and nearly everyone was impressed on some level by the visuals. We were also covered by game magazine Superjump.

Sol Floreo - Front page on Superjump magazine

Sol Floreo at PAX Australia 2018

If you want to see more about Sol Floreo, check out our Twitter page at https://twitter.com/Sol_Floreo_Game.

Beyond studies

After uni, I was regrettably a bit too relaxed about finding proper work. I felt self-conscious about my portfolio as I knew that what was on there wasn't good enough, and there also just wasn't enough of it; I had barely anything. Over the next year, I worked on my portfolio and picked up a few quick gigs on the side. I developed augmented reality applications for RMIT as well as CG Futures, produced product renders for a brand concept, and kept up self-study, learning new skills I thought would be useful.

Constellation Australia - brand concept

Financial issues finally started kicking me in the groin and I pushed myself to get goin'. I spruced up my resume, built a brand and a website using ArtStation, and started applying for jobs. To my surprise, an opportunity came my way, but from where I least expected it.
While weathering a typhoon in an Airbnb in Japan, I got a message from someone I'd worked with back in my volunteering days. She told me there was an opportunity that might suit me and asked me to come along to a meeting in a couple of days. Being in Japan at the time and suffering through a typhoon, I thought it best to say yes! I said that I would come along, so long as I wasn't killed by the windy weather.

The meeting was set for less than two hours after I was due to land back in Melbourne. As you might expect, I flew economy, and needless to say I was truly, utterly fucking tired beyond belief. I sat there in that meeting trying my very best to stay present. Luckily it wasn't boring; it was actually very exciting and engaging. Not only that, but I was invited back for an interview and got the job at Soundfirm, where I work now.

Where I am now

I've been at Soundfirm for nearly a year now and have absorbed an incredible amount of knowledge in that time. They've got me doing R&D for new workflows involving Unity. I'm in a very interesting and unique situation as they're a post-production studio and I'm the only game developer there. It means that I'm left relatively alone and have the freedom to come up with new techniques and workflows. I'm constantly researching ways I can bring my skill set to the business while also picking up new skills along the way.

While at Soundfirm, I picked up skills in Houdini and I'll say right now it's bloody amazing. I can fully see myself sticking with Houdini for a large part of my career at least. As an artist with a technical way of looking at things, Houdini is my jam. It's the perfect combination of logical and artistic thinking. 

First attempt passing data between solvers in Houdini

Future

My future blog posts will definitely be shorter than this one and be more focussed on the interesting things I discover while working. As I progress in my career, so too will the type of content I choose to share. I hope to one day soon provide tutorials and resources to help you out if you need it. Thanks for following along, I hope this has been at least somewhat interesting. Feel free to shoot through any questions you might have and I'll for sure try to provide some kind of useful answer. Hopefully it's useful anyway 😶

UPDATE (12th of August, 2020):

I just wanted to add that I have omitted a lot of personal aspects of my journey. I went through serious financial and emotional trauma, lost a close family member, and got into my first relationship (been together a few years at this point, moved in together and still going strong!).

I don't want to pretend that everything has been perfect and I don't want to hide these aspects of my life. At the same time, a lot of it is very personal and I don't yet feel comfortable sharing that on the internet. Thank you for your understanding, can't wait to see what the future holds!