The Final Stages | Outcome | Self Reflection

The Final Stage

Now that everyone had completed their part in preparing and creating the assets to be inserted into the scene, the final stage was to combine everything in Nuke. The scene ended up being quite heavy in Maya, so it took roughly 36 hours altogether to render out the 1676 frames we had in the end. Once everything was compiled, I was in charge of compositing for the project. Jane provided the animated painting sequence, the parallax painting and the cleaned-up background shot, whilst Giulia was in charge of compiling and rendering out the scene in Maya.

Nuke node graph

I started off by reading in the passes (Beauty, Diffuse, Shadow Matte and Specular), which were then merged together and combined with the cleaned-up version of the original footage. I then added a blur effect, as there were a couple of seconds when the camera wasn't in focus, and motion blur to match the movement. Due to some circumstances, there were missing frames in the fire render. This was solved by rendering out just the fire volume with the stand for those specific frames and merging them in Nuke afterwards. I used a planar track to drive the roto shape of the asset, so the fire stand moved and sat perfectly in the missing frames.
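For illustration, here is a minimal Nuke Python sketch of the kind of graph this describes. The file paths, frame numbers and blur values are hypothetical, and the real script was built by hand in the node graph rather than scripted:

```python
import nuke

# Hypothetical paths; each Read brings in one render pass from Maya.
diffuse  = nuke.nodes.Read(file='renders/diffuse.####.exr')
specular = nuke.nodes.Read(file='renders/specular.####.exr')
shadow   = nuke.nodes.Read(file='renders/shadow_matte.####.exr')
plate    = nuke.nodes.Read(file='plates/cleanup.####.exr')

# Rebuild the asset from its light passes so each can be graded alone,
# then darken the cleaned plate with the shadow matte and lay the asset over it.
rebuilt  = nuke.nodes.Merge2(operation='plus', inputs=[diffuse, specular])
shadowed = nuke.nodes.Merge2(operation='multiply', inputs=[plate, shadow])
comp     = nuke.nodes.Merge2(operation='over', inputs=[shadowed, rebuilt])

# A keyed Blur covers the seconds where the camera drifts out of focus.
blur = nuke.nodes.Blur(inputs=[comp])
blur['size'].setAnimated()
blur['size'].setValueAt(0.0, 1)    # sharp for most of the shot
blur['size'].setValueAt(4.0, 120)  # hypothetical frame where focus drifts
```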

With the sequence of Maya models merged, I then focused on placing the paintings into the shot. A planar track gave the most accurate match for the frames, alongside keyframing the corner pins of the drawing for subtle adjustments. Placing the parallax painting was the final missing piece, and it took me a few tries to figure out the best approach. At first my idea was to planar track the painting to the shot, write out the sequence and then create cards, following the steps from our Nuke week 6 and 7 exercises. But it was quite hard to place the cards: the room in the painting itself is small, and it was tough to judge the correct perspective and imagine how such a picture would behave in a physical space. Instead, I went with a second approach, where I rotoscoped the walls and the floor onto 4 separate cards, arranged those in the most suitable way and then keyframed the corner pins of each part to fit the correct perspective of the shot.
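A small sketch of the corner-pin keyframing, assuming a hypothetical painting sequence and made-up corner positions (in practice the planar track supplied the baseline and these keys were only subtle offsets):

```python
import nuke

painting = nuke.nodes.Read(file='assets/painting.####.png')  # hypothetical path

pin = nuke.nodes.CornerPin2D(inputs=[painting])
for corner in ('to1', 'to2', 'to3', 'to4'):
    pin[corner].setAnimated()

# Made-up positions for one corner at two frames; each corner gets
# its own keys so the perspective can be nudged per frame.
pin['to1'].setValueAt(512.0, 1, 0)   # x at frame 1
pin['to1'].setValueAt(204.0, 1, 1)   # y at frame 1
pin['to1'].setValueAt(518.0, 50, 0)  # x at frame 50
pin['to1'].setValueAt(199.0, 50, 1)  # y at frame 50
```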

As a final touch, Jane added the sound effects to give a more immersive feel, as if the viewer is standing in the gallery. The sound guides the audience to the uniqueness of each piece, and the whole composition feels more finished.

The Outcome

The Final shot
Comparison at frame 0
Comparison at frame 999

Self Reflection

Coming from different backgrounds and having experience in various software played to our advantage, as we successfully split the responsibilities and gave each other tips. One example I remember was Giulia explaining and guiding Jane on how to create and import the animated sequence of Keith Haring figures onto the texture of the statue. Giulia was invested in modelling and compiling everything together in Maya, whilst Jane really enjoyed providing the hand-drawn paintings and animations, alongside the Nuke work on cleaning up the entire footage. As I was enjoying the recently introduced Houdini classes the most, I was really excited to create the fire simulation and adjust it according to the girls' reviews.

To manage the workload and organization of the project, we tested various websites and online services, but in the end the most used were WhatsApp and weekly Teams meetings, in which we would discuss questions, potential ideas and how to execute them, and what and whom to ask about any problems. For resource sharing, it was incredibly useful to have a shared folder on OneDrive, as we would drop everything there. The schedule was kept in check, and when needed we would agree on excluding anything we might have originally wanted to do but couldn't execute due to lack of time.

Judging my own work, I was pleased with how the fire turned out; however, it was a shame that I didn't manage to finish the moving particles around the statue base, as originally planned. Starting the script in Houdini and doing the research, I got to a point of having some particles moving around an object, but more work was needed to achieve a better look. Due to the time restrictions, we decided to leave it and focus on the rest of the project. With the compositing, again, the final result was satisfying, but there were a few mistakes that I only saw after writing the sequence out. One of them was that I should have created a roto for the first and last few frames to fade out the planes created in Maya, which served as shadow mattes and have visible outlines.

Overall, I really enjoyed this project and am very happy with the outcome. Even though we didn't achieve everything we wanted, it was still a great experience to go through with it and to learn about teamwork, communication and time management. From the beginning our group was very well balanced; we agreed upon a lot of ideas and gave each other great support and help. It was quite easy to receive constructive criticism, and we would always agree on any changes required.

Final Result | Self Reflection – week 11

Final Process

After I showed Nick the render from the previous week, he pointed out that the lighting was incorrect. I adjusted the lighting (see below) to have more defined shadows of the machine appearing on the back wall and the left side wall. Due to the lack of time, I only rendered out the new shadow pass, such that the new shadows would get combined with the beauty pass of the previously lit scene. Here is the result.

Final shot

I also rendered out a frame of the new beauty pass to see how it would have looked in the scene with the new lighting:

Different lighting render

When the rendered files were imported into Nuke, I didn't break them down into different passes and combine them back together, as the final output would have looked different to the beauty pass. The only exception was the shadow layer: the beauty and shadow render layers were exported separately, giving more control over the 3D asset.

I did use the ID passes to affect the colour of the wheel and give it a slightly deeper, darker tone; the same was applied to the golden features on top of the machine (a small sketch of this masking idea follows the node graph below). Particular attention had to be paid to a couple of objects that appear in front of the machine. For those, I created a roto shape, which I then planar tracked and combined using the Merge node with the operation 'over'. Inside the lamp, however, there is supposed to be some glass object, maybe a see-through space, so a roto for it had to be cut out from the previously combined objects. All these alterations were then used as an alpha channel for the mask, to put the original footage over the merged result.

Nuke node graph
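As a rough sketch of the ID-pass grading branch of the graph above, assuming the ID matte lives in a layer named 'ID' (the actual layer name and grade values differed):

```python
import nuke

render = nuke.nodes.Read(file='renders/machine.####.exr')  # hypothetical path

# Copy the wheel's ID matte into the alpha so it can drive a mask.
shuffle = nuke.nodes.Shuffle(inputs=[render])
shuffle['in'].setValue('ID')      # assumed layer name for the ID pass
shuffle['out'].setValue('alpha')

# Grade only the wheel: the second input of Grade is its mask.
grade = nuke.nodes.Grade(inputs=[render, shuffle])
grade['multiply'].setValue(0.8)   # made-up values for a deeper, darker tone
grade['gamma'].setValue(0.9)
```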

Self Reflection

Looking back at the entire process, I have learnt a lot about the integration of modelling and compositing in such an extensive project. I learned more about lighting, animation and rendering itself, how to manipulate various passes and how to adjust potential mistakes later in Nuke. There were a few things I was proud of, but not without a couple of mistakes and a few realizations of what could be done better and how.

Firstly, I could have implemented various machine parts without attempting to copy an entire machine from the original source. If I had done more research into where it would be placed, it would have been clearer to me whether having a lot of small details was necessary. After all, the machine model itself has a couple of detailed bits which are barely seen, or not at all. Secondly, I would have wanted to implement more rigging, rather than just animating parts. Having run into problems understanding how to rig specific parts, I ended up using constraints, joints and IK only on the curve-shaped rod attached to the rotating wheel, as sketched below.
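A minimal Maya Python sketch of the spline-IK idea used on the rod, with made-up joint positions and names (the actual rig also used constraints to tie the end to the wheel):

```python
import maya.cmds as cmds

# Hypothetical joint chain running along the rod; each new joint
# parents under the previous one, forming the chain.
positions = [(0, 0, 0), (2, 1, 0), (4, 1.5, 0), (6, 1.2, 0)]
joints = [cmds.joint(position=p, name='rod_jnt_%d' % i)
          for i, p in enumerate(positions)]

# A NURBS curve matching the rod's shape drives the joints.
curve = cmds.curve(point=positions, degree=3, name='rod_curve')

# Spline IK makes the chain follow the curve, so animating the curve's
# CVs (or constraining one end to the wheel) moves the whole rod.
cmds.ikHandle(startJoint=joints[0], endEffector=joints[-1],
              solver='ikSplineSolver', curve=curve,
              createCurve=False, name='rod_ikHandle')
```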

After a couple of weeks, modelling non-organic objects became easier and easier, so I really enjoyed modelling the smaller bits and unusually shaped parts. When unwrapping the UVs, it took me a couple of tries to understand how it would work. As most of the parts weren't seen much or actively animated, I attended to the parts that needed the most attention. Texturing was fairly easy and straightforward, especially with an outside eye pointing out what looked unrealistic and how to make it look more worn down. Using Mudbox to make my own textures for the wheel and screws gave me some room for creativity and unique attention to those parts.

At the final stages of lighting the scene and trying various renders, we found that the HDRI was from the first room and thus created incorrect lighting for the scene inside the second room. Through trial and error, I placed lights around the scene and experimented with the exposure and intensity settings to match the lighting reference of the rocks on the ground. I rendered out variations of the scene and placed them into the Nuke script to see how they would look in the end. The biggest mistake I made in my final render was using the HDRI to cast shadows and not blocking out enough of its light. The light coming from the left side of the room created highlights and specular details on the left side of the machine (e.g. the edge of the wheel), which don't match the shadows that stretch out in that direction. After adjusting the lights following Nick's constructive criticism, I rendered out a single frame to see how the lighting would work for the whole scene, whilst rendering out the entire sequence just for the shadows. Overall, the shadows do look better, but they contrast slightly with the specular details, which don't belong there.

All things considered, I would reorganize my time and shift my focus to breaking the project down into a more detailed plan, so that I could see the overall picture and leave reasonable time for render tests. Despite all the mistakes and issues along the way, I am pleased and satisfied with the result. This project gave me ideas of what I want to do next in the software and what I want to learn more about.

Machine render – week 10

After completing the modelling, texturing, UV unwrapping, animating and creation of the lights in the scene, I took some time adjusting the lights and seeing how variously placed shadow mattes (as planes) would affect the object's appearance in the scene. I would also adjust the intensity, colour and exposure of my three main light sources and render out a single frame to see how it would fit into the final composition. Here are a few of my test renders.

Test 1 – too dark and harsh textures
Test 2 – better light, but textures are too harsh. The light source on the right is too yellow.
Test 3 – textures are better, light as well. Still feels a bit too yellow.
Test 4 – slightly dark, but better tone to the source light.

Whilst this was the final render, I still noticed that I was missing a roto on the rock: for the first few frames the machine stands on top of it, rather than being blocked out by it. During the last few seconds the roto around the lamp goes slightly wrong, so I would need to adjust it, as well as the roto of the inside of the lamp, so that the machine bits could be seen better through it. I am happy with the colouring and lighting of the shot. Looking at the node script, I noticed that the ReGrain node is missing for the shadows, so that would need adjusting as well.

Nuke node graph

Green screen extraction – week 9

Continuing on from last week, we proceeded to adjust the despill, i.e. the light reflected off the green screen that illuminates skin or clothes with a greenish tone. As there were subtle differences in the green screen itself, I evened it out to a single colour using the IBK keyer, along with Merge nodes set to the operations "average", "minus" and "plus" against the original footage. I then created the core, base and hair mattes, the alphas of which were piped into the despilled footage. As the woman's head sits over the blue mountains, I created a separate graded edge alpha with blue undertones, whilst for the body I used a more green-targeted colour correction.
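The actual despill was done against an IBK clean plate with those Merge operations, but the idea behind it can be sketched with the classic green-limit expression (the Read path is hypothetical):

```python
import nuke

fg = nuke.nodes.Read(file='plates/greenscreen.####.exr')  # hypothetical path

# Green-limit despill: wherever green exceeds the average of red and
# blue, clamp it to that average, removing the green cast from skin
# and clothes without shifting neutral colours.
despill = nuke.nodes.Expression(inputs=[fg])
despill['expr1'].setValue('g > (r+b)/2 ? (r+b)/2 : g')  # expr1 is the green channel
```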

After copying in the alpha, I used a rough roto to cut her out of the footage, colour graded her to the background, transformed the image to match the proportions, premultiplied and added grain back to the clip.

Nuke script

Machine model (rough renders) – week 9

This week I focused on animating the machine parts and adding texture to bits of it, so it would not look like a perfect, brand-new machine straight out of the factory. Before exporting the exr files from Maya to Nuke, I played around with the lighting, placing shadow planes into the scene as well as some walls to block out the lights. Here are a few test renders.

Blue: 1 and 2 are the planes that act as shadow mattes. I decided to remove a shadow matte wall from the right side of the room, as it was blocking out too much light. 3 is a blocking wall with a simple Lambert material assigned to it.
Red: 1 is a more yellow area light; 2 is an orange-based, softer area light.

Scene planes and light placement

I then rendered out the exr files from the scene in Maya and imported them into Nuke. First, I realised that I wasn't too happy with the lighting from the SkyDome. Second, I had forgotten to remove the background image plane, so the machine was rendered out with it as well. Hence the background is visible, and due to the aspect ratio and format difference (HD_540 for the render), there is a slight mix-up with the image. The roto of the hole in the wall also requires some work, as in some frames the machine is not as clearly visible.

Nuke node graph

Nuke: all exercises

Week 3: Beauty and marker removal

Week 3: Projection

Week 4: 3D Camera Tracking

Week 5: CGI asset placement

Week 6: CGI asset placement to a point

Week 7: 3D projections – cleanup

Week 8: Green screen extraction (basics)

Week 9: Green screen extraction

Project progress

During the first week of March we focused on finishing the tracking in 3DEqualiser and setting up the scene in Maya. The three of us attended another few sessions with Dom, where we all tried to perfect the track of the shot. After looking at the footage, and following advice from Dom, we decided to cut the shot down by a thousand frames, so it lasts 44 seconds rather than 68. This way the audience's attention would not get lost and the shot would be more interesting.

The tracking information was then exported into Maya, after which we placed cards to represent the wall and the floor, as well as a few cubes to represent the black stands. We also added lights, whose positions were based on the light pattern falling on the walls and the floor. Test objects, such as a cube and a few copies of a torus, were placed around the scene for a render run.

Scene recreation in Maya.
Render of the placed objects.

Earlier, during the project planning stage, Jane found an Egyptian statue. When we met last week, we decided to add sequences of dancing characters (Keith Haring inspired) to the texture of the statue. Proceeding with the task, Jane created a png sequence of them with a transparent background. She also re-did the animated painting of her earliest work with the newer characters.

Updated version of the animated video (by Jane)

As the clip was cut down, we had to adjust the plan for object placement and question whether all the original pieces should be placed in the scene. Since we all equally liked the Egyptian statue, it was decided to keep it, but we did cut down the number of pictures from 3 to 2: the first to be the painted version (by Jane) of Van Gogh's famous painting "Bedroom in Arles", and the second to be the animated painting. The classical statue was kept in the same place, and the display case with the fire was to be placed between the two paintings.

Giulia was the one who collected most of the free assets for the plants, as well as modelling the pyramid-shaped display case. Furthermore, she put herself forward to model the neon signs, which would be positioned around the classical statue by the black stands. She carried out various tests for the quality of the neon lights, which we all discussed during our recent call, coming to a conclusion about the look we all preferred.

For the particle work, I organized a session with Mehdi, who guided me and helped me understand how to use an object as a boundary for the fire simulation. After achieving the fire look I wanted, I also played around with its material and colours to find what we liked the most.

Fire: Node Network

One of the difficulties I came across was how to read the VDB files inside Maya, and later how to export the volume material from Houdini and read it in Maya. Still in the process of finding a solution, my next steps will be trying out the methods I found on various forums, contacting Mehdi and, if all fails, exporting the render from Houdini rather than Maya (a rough sketch of such an export follows below). For that, I would need to export the tracking information again from 3DEqualiser to Houdini, and export the objects and lights that were set up in Maya to Houdini, so that the scene setup would be exactly as in Maya. The display case would also need to be imported into Houdini and rendered there as well, as the light interacts with the glass properties of the case.

Material properties window
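If the Houdini-render fallback is needed, a rough sketch of writing the fire sim out as per-frame VDBs with a ROP Geometry Output might look like this (the node paths, names and frame range are all assumptions):

```python
import hou

sim = hou.node('/obj/fire_sim')          # assumed container for the sim
out = sim.node('convert_to_vdb')         # assumed Convert VDB SOP feeding the export

# A SOP-level ROP Geometry Output writes one .vdb file per frame.
rop = sim.createNode('rop_geometry', 'export_fire_vdb')
rop.setInput(0, out)
rop.parm('sopoutput').set('$HIP/geo/fire.$F4.vdb')
rop.parm('trange').set(1)                # render a frame range
rop.parm('f1').deleteAllKeyframes()      # unlock the default $FSTART/$FEND expressions
rop.parm('f2').deleteAllKeyframes()
rop.parm('f1').set(1001)                 # assumed frame range
rop.parm('f2').set(1676)
rop.render()
```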

Other than figuring out the proper way to render the fire, my next steps involve finding references for the particle movement around the classical statue stand and setting up the node network in Houdini. For the particles, I believe, placing the material will be easier, so they could be rendered inside Maya.

Green screen extraction (basics) – week 8

There are various nodes one can use inside Nuke to extract a green screen. Whilst the Image-Based Keyer (IBK) nodes were less complicated and more straightforward in their application, the Keylight node required more attention and experimentation with the screen colour and screen matte attributes.

On my first attempt at the shot, the constant problem was how bright the red top was and how much contrast it created with the background. After finding a plate which suited my vision for the shot, I focused on using an imported node called ColorPickerID, which allowed me to adjust a specific colour without affecting others as much. It took me some time to figure out where in the script to place my adjustments, but it worked out in the end.

Looking at the footage, it still appears that she doesn't suit the shot well enough, as the lighting when it was filmed differs from the lighting of the background.