Continuing on from last week, we proceeded to adjust the despill, i.e. light reflected off the green screen that illuminates skin or clothes with a greenish tone. As there were subtle differences across the green screen itself, I evened it out to a uniform colour using the IBK keyer, together with Merge nodes set to the “average”, “minus” and “plus” operations applied to the original footage. I then created the core, base and hair mattes, whose alphas were piped into the despilled footage. As the woman’s head sits over the blue mountains, I created a separately graded edge alpha with blue undertones, whilst for the body I used a colour correction targeted more at green.
After copying in the alpha, I used a rough roto to cut her out of the footage, colour graded her to match the background, transformed the image to match the proportions, then premultiplied and regrained the clip.
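A rough sketch of how a chain like this could be wired up from Nuke’s Script Editor. This is only a minimal illustration of the structure, not the exact script: the node class name IBKColourV3 varies between Nuke versions, the paths, fill colour and node names are placeholders, and the matte node is assumed to already exist.

```python
import nuke

# Placeholder plate; path and frame range are assumptions.
plate = nuke.nodes.Read(file='greenscreen.####.exr', first=1, last=100)

# Even out the screen: IBKColour builds a clean screen colour,
# which is averaged back with the original plate.
screen = nuke.nodes.IBKColourV3(inputs=[plate])   # class name varies by Nuke version
even = nuke.nodes.Merge2(operation='average', inputs=[plate, screen])

# Despill by subtracting the screen (B) from the evened plate (A),
# then adding back a fill colour.
despill = nuke.nodes.Merge2(operation='minus', inputs=[screen, even])
fill = nuke.nodes.Constant(color=[0.1, 0.1, 0.12, 0.0])  # hypothetical fill value
despilled = nuke.nodes.Merge2(operation='plus', inputs=[despill, fill])

# Pipe the combined core/base/hair matte into the despilled footage,
# then premultiply.
matte = nuke.toNode('CombinedMattes')             # assumed existing matte node
copy = nuke.nodes.Copy(from0='rgba.alpha', to0='rgba.alpha',
                       inputs=[despilled, matte])
premult = nuke.nodes.Premult(inputs=[copy])
```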
There are various nodes one can use inside Nuke to extract the green screen. Whilst the Image-Based Keyer (IBK) nodes were less complicated and more straightforward to apply, the Keylight node required more attention and experimentation with its screen colour and screen matte attributes.
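For comparison, a small sketch of adjusting an existing Keylight node from the Script Editor. It assumes a node named Keylight1 is already in the script; the sampled colour and values are placeholders, and the knob names follow Keylight’s UI but may differ between plugin versions.

```python
import nuke

key = nuke.toNode('Keylight1')             # assumes a Keylight node already exists

# Set the screen colour (these values are placeholders; in practice
# it is sampled from the green screen in the viewer).
key['screenColour'].setValue([0.1, 0.6, 0.2])

# Tighten the extraction via the screen matte controls.
key['screenGain'].setValue(1.1)
key['screenBalance'].setValue(0.5)
```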
On my first attempt at the shot, the recurring problem was how bright the red top was and how much contrast it created with the background. After finding a plate which suited my vision for the shot, I focused on an imported node called ColorPickerID, which allowed me to adjust a specific colour without affecting the others as much. It took me some time to figure out where in the script to place my adjustments, but it worked out in the end.
Looking at the footage, the foreground still doesn’t sit in the shot well enough, as the lighting it was filmed under differs from the lighting of the background.
This time we learnt how to make projections in 3D space. Given the previous footage, the task was to clean up the markers in the crypt, as well as to use a shot of our own and remove whichever elements we wanted.
Node graph
The principle behind creating projections is quite similar to a 2D clean-up, combined with a camera created via the CameraTracker node. The information from the tracked camera is applied to a clean plate, rotoscoped and painted over at a specific frame, so that this card can be tracked and fixed to the correct position in 3D space.
The Project3D node itself was fairly simple and straightforward, leaving us to repeatedly recreate the process and the underlying structure of the 3D projection workflow.
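A minimal sketch of that projection structure, assuming a tracked camera and a painted clean plate already exist in the script; the node names, reference frame and the Project3D class name (Project3D2 in newer Nukes) are assumptions.

```python
import nuke

cam   = nuke.toNode('Camera1')       # camera baked from the CameraTracker (assumed)
clean = nuke.toNode('CleanPlate1')   # painted clean plate (assumed existing node)

# Freeze both the plate and the camera at the frame the paint was done on,
# so the projection stays pinned in 3D space.
ref = 1042                           # placeholder frame number
held_plate = nuke.nodes.FrameHold(first_frame=ref, inputs=[clean])
held_cam   = nuke.nodes.FrameHold(first_frame=ref, inputs=[cam])

# Project the held plate through the frozen camera onto a card,
# then render the scene back through the moving tracked camera.
proj   = nuke.nodes.Project3D(inputs=[held_plate, held_cam])
card   = nuke.nodes.Card2(inputs=[proj])
scene  = nuke.nodes.Scene(inputs=[card])
render = nuke.nodes.ScanlineRender(inputs=[None, scene, cam])
```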
Specifically for the crypt, I created one card for all the points on the floor of the second room, so for a few frames I had to roto out the objects in the foreground of the markers.
Museum
For the second shot I chose this clip from the museum, for which I had already created a camera track and placed cards. In this scene I decided to remove a switch on the wall and, to push myself, the furthest chair in the second hall. In my opinion, the grading could be improved further and adjusted better to the changing exposure.
Plan for removal
Node graph
Patches node graph close-up
Last week we were given more explanation of how to use the LensDistortion node, and how a compositor would import and use one shared within a company for the same shot (as an ST Map). Moving on to the PointsTo3D node, we quickly grasped how to use the supplied footage and camera information, how to create an axis, and how to connect the transformation information to a CGI object.
As practice I quickly created and applied a CameraTracker node, from which I then generated a Camera node (in production this would usually be supplied to the compositor). I then chose three different points in the scene to attach objects to and, by applying PointsTo3D, obtained the information needed for the transform. The axes were then created and linked to the CGI objects.
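One way that linking step could look in Python, as a loose sketch: it assumes a PointsTo3D node is already set up in the viewer, that its solved position sits on a knob named 'translate' (an assumption), and that the CG object has been read in as ReadGeo1.

```python
import nuke

# Assumes a PointsTo3D node has already been set up in the viewer;
# the knob holding the solved position is assumed here to be 'translate'.
p2d = nuke.toNode('PointsTo3D1')
pos = p2d['translate'].value()

# Create an axis at the solved point and hang the CG object off it
# through a TransformGeo node (input 1 of TransformGeo is the axis).
axis  = nuke.nodes.Axis2(translate=pos)    # Axis3 in newer Nuke versions
geo   = nuke.toNode('ReadGeo1')            # assumed imported CG object
xform = nuke.nodes.TransformGeo(inputs=[geo, axis])
```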
Following on from last week’s work on camera tracking and creating point clouds in 3D space, we moved on to positioning a 3D asset in the scene. Originally I wanted to create a levitating droplet of water placed in the centre of the shot, or even a few copies of it around the place, but after trouble exporting the AOV passes from Maya and an incorrect sequence length, I used the asset provided by Aldo instead.
Firstly, the footage was read in and the settings updated (frame rate, sequence length); I then denoised it and created a camera track from the result. I also applied the LensDistortion node to the provided lens grid card. For the camera tracking I masked out the lake, as reflection-based information should not be used. Then I wrote out the camera information using the WriteGeo node.
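The WriteGeo step is small enough to sketch directly; the camera name, output path and frame range below are placeholders.

```python
import nuke

cam = nuke.toNode('Camera1')   # camera created from the CameraTracker solve (assumed)

# WriteGeo bakes the camera into a geometry file; Alembic (.abc) and
# FBX are both supported. The path is a placeholder.
writer = nuke.nodes.WriteGeo(file='renders/shot01_camera.abc', inputs=[cam])
nuke.execute(writer, 1, 100)   # frame range assumed to match the plate
```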
Then I broke down all the passes of the 3D object and colour graded them to match the scene. Since the HDRI applied to it was captured in a different lighting setting (a forest), and given my trouble exporting objects from Maya, I couldn’t change the HDRI lighting, so I adjusted the colours of the reflection and diffuse passes instead. I copied the alpha back into the object, reapplied distortion and grain to the asset and merged it with the background.
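A minimal sketch of that pass breakdown and rebuild. The layer names ('diffuse', 'reflection'), grade values and node names are all assumptions that depend on how the render was set up, and newer Nukes use Shuffle2 rather than Shuffle.

```python
import nuke

cg = nuke.toNode('Read2')      # the rendered 3D asset with its AOV passes (assumed)

# Pull out individual passes; layer names are assumptions.
diffuse    = nuke.nodes.Shuffle(inputs=[cg], **{'in': 'diffuse'})
reflection = nuke.nodes.Shuffle(inputs=[cg], **{'in': 'reflection'})

# Grade each pass towards the plate's lighting (values are placeholders),
# then rebuild the beauty by summing the passes back together.
diff_g = nuke.nodes.Grade(inputs=[diffuse], multiply=0.85)
refl_g = nuke.nodes.Grade(inputs=[reflection], multiply=1.2)
beauty = nuke.nodes.Merge2(operation='plus', inputs=[diff_g, refl_g])

# Restore the alpha from the original render and premultiply.
copy    = nuke.nodes.Copy(from0='rgba.alpha', to0='rgba.alpha', inputs=[beauty, cg])
premult = nuke.nodes.Premult(inputs=[copy])
```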
Another problem I noticed was the incorrect movement of the asset relative to the scene: since the object I used carried the tracking information of a different shot, the way it sits in this shot is wrong.
This week we learnt about 3D motion tracking. Having worked a little in Nuke’s 3D space, it was important to understand the CameraTracker node, the settings within it, and how the tracked information is incorporated into the scene and combined with the original footage. I practised with two shots: the first a museum room, the second the crypt shot we had been provided with.
The procedure went as follows: the footage was denoised, written out and read back into the script. It was then adjusted to be clearer for the camera tracker, and tracked. I roughly knew which camera it was originally shot on, so that information was filled in accordingly. With the overall solve error initially close to 1, increasing the minimum track length and adjusting the min and max error thresholds brought it down to 0.77, improving the camera tracking information as a result. That took a lot of trial and error. I then created cards and placed them in the 3D space, along with cones.
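The denoise-write-read-track loop at the start of that procedure could be scripted roughly like this; paths, frame ranges and the Denoise class name (which varies between Nuke versions) are assumptions, and CameraTracker requires NukeX.

```python
import nuke

plate = nuke.nodes.Read(file='plates/museum.####.exr', first=1, last=120)  # placeholder path

# Denoise and write the result out, so the tracker works on clean frames.
denoise = nuke.nodes.Denoise2(inputs=[plate])   # class may be 'Denoise' in older versions
write   = nuke.nodes.Write(file='renders/museum_dn.####.exr', inputs=[denoise])
nuke.execute(write, 1, 120)

# Read the denoised plate back and track it (CameraTracker is NukeX-only).
clean   = nuke.nodes.Read(file='renders/museum_dn.####.exr', first=1, last=120)
tracker = nuke.nodes.CameraTracker(inputs=[clean])

# The known camera data (focal length, film back) and the min/max error
# thresholds are then filled in on the node's tabs before solving.
```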
I also rotoed out the sign on the left side of the space, as well as the door and the stone wall of the arch, as the camera moved in and revealed the second room.
Node graph
For the crypt I followed the same procedure, except that I had to break it into two parts. First I denoised and tracked the shot. Then I created and placed the cards and cones in the first part of the space, rotoscoping out the big hole in the wall and adjusting the roto across various frames. After writing out the sequence with the newly introduced assets, I read it back into the project and repeated the process for the room being revealed, except this time I applied an Invert node to the roto of the hole in the wall, so that the planes and cones could be placed inside that area.
Nuke has the ability to operate and composite in 3D space, and this week we learnt more about it. We learnt the importance of nodes such as Camera, Scene and ScanlineRender, and had to complete an exercise projecting a 2D image onto 3D geometry. I chose this image of an abandoned room to project.
Practising aligning the Card node using a checkerboard, it took me a while to understand how to place it correctly in the scene. Before any placement I had to estimate the camera lens that was used; I chose a focal length of 16mm, as it looks like a wide lens with strong distortion towards the edges. The walls appear to have different vertical inclines and are not entirely parallel to each other.
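The basic setup for that exercise could be sketched like this; the file path, card placement and camera class name (Camera3/Camera4 in newer Nukes) are assumptions, with only the 16mm focal length taken from the estimate above.

```python
import nuke

# The still of the abandoned room; path is a placeholder.
still = nuke.nodes.Read(file='abandoned_room.jpg')

# A camera matching the estimated wide lens: 16mm focal length
# on Nuke's default film back.
cam = nuke.nodes.Camera2(focal=16)          # Camera3/Camera4 in newer versions

# Project the still through the camera onto a card placed in the scene,
# then render it through the same camera to check the alignment.
proj   = nuke.nodes.Project3D(inputs=[still, cam])
card   = nuke.nodes.Card2(inputs=[proj], translate=[0, 0, -5])  # placement is a guess
scene  = nuke.nodes.Scene(inputs=[card])
render = nuke.nodes.ScanlineRender(inputs=[None, scene, cam])
```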
As the scene has smaller details, such as sneakers on the floor, a cable hanging from the ceiling and the opened windows, further work could be done using additional cards with rotos of these elements.
An artist may be asked to do some touch-up on an actor’s face, such as removing blemishes or wrinkles, de-ageing, or something else. This week we focused on beauty work, as well as some marker removal.
Beauty
I cleared out some redness from the nose, cheeks and forehead, as well as the areas under the eyes, to make her appear more rested and youthful. But it seems that, due to the movement of the face, the area I cleaned up underneath the right eye has darker tones in the first few seconds. I tried adding keys to the Grade node, but that affected all the areas adjusted with RotoPaint.
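Keyframing a Grade and restricting it with a roto mask, the approach that would isolate the under-eye patch, could look like this in Python; the node name, knob values and frame numbers are placeholders.

```python
import nuke

grade = nuke.toNode('Grade3')   # the grade on the under-eye patch (assumed name)

# Animate the multiply knob so the correction follows the changing
# light instead of applying one static value to every frame.
m = grade['multiply']
m.setAnimated()
m.setValueAt(0.92, 1)           # values and frames are placeholders
m.setValueAt(0.97, 24)
m.setValueAt(1.00, 48)

# Limiting the grade with a roto mask stops the keyframes from
# affecting every painted area at once (input 1 is the mask input).
roto = nuke.nodes.Roto()
grade.setInput(1, roto)
```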
Marker
It is common practice to remove markers from a face, body or object, and this week we practised removing markers from originally shot footage. There were two ways to go about it: the traditional way, where we create a RotoPaint patch, roto it, track it and merge it with the original footage, and a faster way using the VectorDistort node. The advantage of the latter is that it stores the tracking information more accurately, especially for a shot of a face or body.
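The faster route could be sketched roughly as below; the node names and reference frame are placeholders, the 'referenceFrame' knob name is an assumption, and both SmartVector and VectorDistort require NukeX.

```python
import nuke

plate = nuke.toNode('Read1')           # the original shot (assumed name)

# SmartVector analyses the plate once; in practice its output is
# written to disk and read back rather than recomputed every frame.
vectors = nuke.nodes.SmartVector(inputs=[plate])

# The paint fix is done on a single reference frame, then VectorDistort
# carries that patch along the face using the precomputed vectors.
paint = nuke.nodes.RotoPaint(inputs=[plate])
hold  = nuke.nodes.FrameHold(first_frame=1001, inputs=[paint])  # frame is a placeholder
warp  = nuke.nodes.VectorDistort(inputs=[hold, vectors])
warp['referenceFrame'].setValue(1001)   # knob name is an assumption

# Merge the warped patch back over the original plate.
out = nuke.nodes.Merge2(operation='over', inputs=[plate, warp])
```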
In this exercise I practised both ways, as can be seen in the node graphs below. A few challenges I faced were the change in light and some physical differences in the planar information for a few of the dots.
Node graph
Traditional way: close-up
Faster way: close-up
As you can see on the right side, for the second dot I used a Grade node whose settings I keyframed and adjusted to suit the changing lighting conditions. A similar problem occurred with the marker on the right side of the mouth, where, due to a fold in the skin, half of it was lit differently. To tackle this I used a second Grade node with a roto as its mask, which let me grade the lighter and darker portions of the skin separately.