Projections in 3D – week 7

Crypt

This time we proceeded to learn how to make projections in 3D space. Given the footage from previous sessions, the task was to clean up the markers in the crypt, as well as to use a shot of our own and remove whichever elements we wanted.

Node graph

The principle behind creating projections was quite similar to a 2D clean-up, combined with a camera generated by the CameraTracker node. In a projection, the information from the tracked camera is applied to a clean plate, rotoscoped and painted over at a specific frame, such that this card can be projected and fixed to the correct position in 3D space.

The Project3D node itself was fairly simple and straightforward, so the exercise mostly came down to repeatedly recreating the process and the basic structure of the workflow behind 3D projections.
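
The idea can be sketched as simple pinhole-camera maths: the tracked camera "unprojects" a cleaned-up pixel onto a card in 3D, and viewing that card through the camera at any other frame keeps the patch locked in place. A minimal illustration (the function names and values are mine, not the Nuke API):

```python
# Minimal pinhole-projection sketch of what Project3D does: a tracked
# camera "unprojects" a cleaned-up pixel onto a card in 3D space, so the
# patch stays locked when re-projected at any other frame.
# Names and values are illustrative, not the Nuke API.

def project(point3d, focal):
    """Project a 3D point (camera space) onto the image plane."""
    x, y, z = point3d
    return (focal * x / z, focal * y / z)

def unproject_onto_plane(pixel, focal, plane_z):
    """Cast a ray through a pixel and intersect it with a card at depth plane_z."""
    u, v = pixel
    return (u / focal * plane_z, v / focal * plane_z, plane_z)

# A pixel painted on the clean plate...
pixel = (12.0, -8.0)
card_point = unproject_onto_plane(pixel, focal=50.0, plane_z=400.0)
# ...projects back to the same pixel: the patch is stable on the card.
reprojected = project(card_point, focal=50.0)
assert all(abs(a - b) < 1e-9 for a, b in zip(reprojected, pixel))
```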

Specifically for the crypt, I created one card for all the markers on the floor of the second room, so for a few frames I had to roto out the objects that passed in front of the markers.

Museum

For the second shot I chose this clip from the museum, for which I had previously created a camera track and placed cards. In this scene I decided to remove a switch on the wall and, to push myself, the furthest chair in the second hall. In my opinion, the grading could be further improved and adjusted to better match the changing exposure.

Plan for removal
Node graph
Patches node graph close up

Placement of an element onto a specific point in clip – week 6

Last week we were given more explanation of how to use the LensDistortion node and how a compositor would import and use one shared within a company for the same shot (as an ST Map). Moving on to the ‘Points To 3D’ node, we quickly grasped how to use the supplied footage and camera information, how to create an axis, and how to connect the transformation information to a CGI object.

Just as practice, I quickly created and applied a CameraTracker node, from which I then created a Camera node (in production this would normally be supplied to the compositing artist). I then chose three points in the scene where I wanted to attach objects and, using Points To 3D, obtained the transform information I needed. Axes were then created and linked to the CGI objects.
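
Under the hood, Points To 3D triangulates a 2D feature seen from the moving tracked camera. A toy sketch under a simplifying assumption (the camera only translates along x between two frames, so depth follows from disparity; all values are made up):

```python
# Toy triangulation behind Points To 3D: one feature tracked in two
# frames, with the camera translating only along x by a known baseline.
# All values are made up for illustration.

def triangulate_depth(u1, u2, focal, baseline):
    """Depth of a feature from its horizontal disparity across two frames."""
    disparity = u1 - u2
    return focal * baseline / disparity

f, b = 50.0, 10.0      # focal length, camera move between the frames
x, z = 30.0, 200.0     # ground-truth point position (x offset, depth)
u1 = f * x / z         # where frame 1 sees the feature
u2 = f * (x - b) / z   # where frame 2 sees it after the camera moved
assert abs(triangulate_depth(u1, u2, f, b) - z) < 1e-9
```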

Node graph

Getting footage and tracking – weeks 2/3

The main focus of the last two weeks was on getting the footage and organizing sessions with Dom in order to track it in 3DEqualiser. With a variety of shots captured by Christos, who followed the latest camera previs clip, shot on both a Blackmagic Pocket Cinema Camera 4K and a Canon 6D, the three of us agreed on using the very first shot from the latter camera. The crop factor on the BMPCC 4K was quite high (1.9x), meaning it cropped out a lot of information that was needed for tracking.
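
To put the crop factor in numbers, a quick sketch of the full-frame equivalent focal length (the 1.9x figure is from above; the 24 mm focal length is just an example value):

```python
# The BMPCC 4K's 1.9x crop means a lens frames much tighter than on the
# full-frame Canon 6D; e.g. an illustrative 24 mm lens:
crop_factor = 1.9
equivalent_focal = 24 * crop_factor   # full-frame equivalent, in mm
assert round(equivalent_focal, 1) == 45.6
# The cropped view discards peripheral detail (markers, floor edges)
# that the tracker relies on.
```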

We then managed to get a learning session with Dom on Monday last week, in order to make use of 3DEqualiser for better tracking. During the session we practiced on the first 2000 frames, meaning there are about 1000-1500 more to track and export the information from. Another session was planned, where Dom would show us how to export all the material with tracks into Maya and how we would place objects around.

In the meantime, Jane managed to test the parallax effects in Nuke on the footage and sent us her trials, so that we could decide on changes where necessary.

During our team meeting on Friday, we figured out and noted down the main steps of the project workflow and how we would divide the workload between ourselves. In conclusion, this was the rough outline of what had to be done:

  1. The track needs to be finished up and perfected in 3DEqualiser.
  2. The footage has to be cleaned up.
  3. The fire burning inside the display case and the particle movement around the stands underneath the statues need to be completed in Houdini.
  4. The 3D objects need to be placed into the scene in Maya, with planes representing the walls, cubes representing the black stands, lighting and maybe an additional wall behind the black stand. Their AOVs, movement and placement in the scene would then be rendered out (without the image sequence), undistorted, as a JPEG (or similar) sequence, so that the footage could be merged with the cleaned-up plate in Nuke and the AOVs could be adjusted where needed.
  5. The parallax paintings would be placed as cards in Nuke with the tracked information exported from 3DEqualiser.
  6. We would then also apply colour grading to give the shot a more cinematic/artistic look.
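
As a rough illustration of the parallax cards in step 5: a card's screen-space shift under a sideways camera move is inversely proportional to its depth, which is what sells layers at different distances. A minimal sketch with made-up values:

```python
# Screen-space parallax sketch: for a sideways camera move t, a card at
# depth z shifts by roughly focal * t / z, so nearer cards slide more.
# All values are made up for illustration.

def screen_shift(focal, camera_move, depth):
    return focal * camera_move / depth

f, t = 50.0, 2.0
near = screen_shift(f, t, depth=100.0)    # card close to the camera
far = screen_shift(f, t, depth=1000.0)    # card deep inside the painting
assert near > far   # that depth-dependent difference is the parallax
```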

The final outline for all the objects placed and their purpose in the scene was the following: the first and second paintings would have a parallax effect, with Jane drawing the original paintings based on some pictures; the first classical statue would be done in the Vaporwave artistic style, for which Giulia would model neon signs and lights and place them accordingly; the 3rd painting would have some animation inside of it; the Egyptian statue would have an animated texture (both done by Jane) and a few 3D projections placed over it; the display case would be modelled by Giulia, with the fire added by me in Houdini, along with the particle movement for the statue stands. It was also decided that all three of us would attempt the tracking, with the best track chosen and used for the project.

When discussing what needed to be done to the statues and what would be in the paintings, we had the idea of implementing some animation that Jane made into the 3rd painting.

Inspired by the artwork of Keith Haring, we decided to create a thematic side for the 3rd painting and the Egyptian statue. The animated characters would be implemented both in the painting and added as an animated texture on the statue. Watching this video, we were also inspired to add some sounds to the scene, and maybe even a musical accompaniment as in the video.

Smoke, fire and explosion – week 5

This week we got to learn about creating smoke, fire and explosions using the smoke solver and the pyro solver. Breaking down the basics, it turned out that the main principle behind all of these comes down to the use of the smoke solver.

To create the smoke and fire we used a DOP network, while the explosion didn’t require one. However, most of the procedures followed the same steps:

  1. Create a geometrical shape (sphere or circle);
  2. Scatter points inside that shape;
  3. Create a new attribute (the one you want the software to work on, e.g. density);
  4. Use Attribute Noise to make it more chaotic, rather than following a pattern;
  5. Use Color for visualization;
  6. Apply Volume Rasterize Attribute (to step 4), creating a volumetric representation;
  7. Finally, use a Null node to have a clear reference point for the SOP path later in the DOP nodes.
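
The scattering and noise steps (1-4) can be sketched in plain Python; in Houdini this is the job of the Scatter and Attribute Noise SOPs, so this only illustrates the idea, not actual Houdini code:

```python
import math
import random

# Plain-Python sketch of steps 1-4: scatter points inside a sphere and
# give them a noisy density attribute. Only illustrative; Houdini's
# Scatter and Attribute Noise SOPs do the real work.
random.seed(7)

def scatter_in_sphere(count, radius=1.0):
    """Rejection-sample random points inside a sphere."""
    points = []
    while len(points) < count:
        p = tuple(random.uniform(-radius, radius) for _ in range(3))
        if math.dist(p, (0.0, 0.0, 0.0)) <= radius:
            points.append(p)
    return points

points = scatter_in_sphere(500)
# Steps 3-4: a density attribute, perturbed so the volume isn't uniform
# (uniform noise stands in for the structured noise Attribute Noise uses).
density = [1.0 + random.uniform(-0.5, 0.5) for _ in points]

assert all(math.dist(p, (0.0, 0.0, 0.0)) <= 1.0 for p in points)
assert all(0.5 <= d <= 1.5 for d in density)
```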

Then you create the DOP network, inside which you set up a smoke object, a volume source, the smoke solver and some gas micro-solver, like gas turbulence or gas wind (for smoke). The set-up is essentially the same for fire, except you use a pyro solver instead and need to be aware that you are also using a temperature attribute for a better fire simulation.

However, for the explosion we didn’t use an extra DOP network; it was set up directly in the Geo node. One of the most important target fields to add in the sourcing inside the pyro solver was Divergence, as it defined how big the initial explosion was.

Explosion geo network

The second branch of the pyro solver, with a sphere object, was created to show how the smoke would interact with a collision object. However, it appears that the placement of my camera didn’t manage to catch that output.

Machine (progress) – week 6

Following on from last week, I modelled a few extra parts for the machine, mostly trying to finish the bits that will be rigged and animated. I also practiced placing the object into the scene and put in some cards with shadow surfaces. Visualizing the position made me rethink the wooden platform below the machine, so I decided to exclude it from the model altogether.

There is still some modelling work left, which I will be focusing on during this week. The plan is also to check and remap the UV maps where needed, place textures on a few parts, adjust the lighting and set up the correct AOVs for the render. I will also have to re-model two parts, as I didn’t like their original modelling and appearance, and will need to update the details from the original reference source.

Lighting in movies – week 1

Starting off the course in Katana, we were talked through lighting in general, its physical properties and how it appears to our eyes. The difference between key, fill and rim light was explained, and the main aspects of successfully recreating scene lighting were pointed out. As an exercise we had to find three shots from various movies and try to assess the position, direction, intensity and properties of the light.

2001: A Space Odyssey

Looking at how the face is lit and the way the shadows fall, there is a light source directly above the character, as well as the blue-ish tone coming from the screen in the background. The blue tones are reflected in the chair and in the bottom centre of the shot (by his left hand, on the surface). Judging by the back of the head and the tops of the shoulders, there may be a fill light placed behind the character. As the contrast between the key and fill light is low, we may conclude that it is low-key, soft lighting. The shot seems to have cool tones, roughly in the range of 4000K.

Alice in Wonderland

Immediately the light seems quite soft and low key. There is a source coming from above the character, as there are shadows underneath her eyes, on her neck and on her chin. It feels like there is little to no fill light, maybe quite a subtle one. It looks like cloudy daylight, so the temperature of the shot seems to be in the range of 6000K.

Tomb Raider

Shot in sunlight, this frame has warm undertones. The light is harsh, but appears high key, as there isn’t a sharp contrast between the highlights and the fill light. As the face reads fairly well, there is definitely a fill light coming from the front, adding warmer tones to his skin (judging by the fill highlights on his nose, cheeks and neck, all with a yellow/orange warmth). The sun seems to be almost directly above him, meaning it was shot perhaps a little earlier or later than midday.

All about rendering – week 4

This week we learnt more about various renderers, how the image is adjusted by software for the final output and how it is perceived by our eyes on a monitor screen. Proceeding with the built-in Houdini renderer, Mantra, and its corresponding nodes for lighting and materials, we also noted the differences of the Arnold renderer in Houdini and its corresponding nodes and tools.

Playing around with the materials and their properties, these were the test images that I made.

Here is a rendered-out sequence of the task from the second week of studying Houdini. For it I added the Arnold shader, lights and an Arnold render node in the ‘out’ directory to the script.

Render of first 165 frames of the second week exercise.

Machine (progress) – week 5

Continuing from last time, I proceeded to model the next parts of the machine, which helped me understand the movement better and break down which parts are connected to which, and where the rigging points, joints and constraints sit.

Following the comment on the previous post, I cut out a circular hole in one of the rotating parts and modelled the column and supporting beams, alongside adjusting the size of the platform and the machine base to make it thinner and to suit the motion and position of the pushing rods. To practice rigging and animation, I set up the animation for all the rotating parts and then created the rig for one of the rods, which pushes the cog to rotate.

Placing CGI asset into the scene – week 5

Following on from last week, where we learnt about camera tracking and creating point clouds in 3D space, we moved on to the final positioning of a 3D asset in the scene. Originally, I wanted to create a levitating droplet of water placed in the centre of the shot, or even a few copies of it around the place, but having had trouble exporting the AOV passes from Maya, and with an incorrect sequence length, I used the asset provided by Aldo instead.

Firstly, the footage was read in and all the settings were updated (frame rate, sequence length); I then denoised the plate and created a camera track from the result. I also applied the LensDistortion node using the provided lens grid. For the camera tracking I masked out the lake, as reflection-based information should not be used. Then I wrote out the camera information using the WriteGeo node.
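
For reference, lens distortion is often modelled as a simple radial polynomial (a one-term Brown-Conrady style model); this is an illustrative sketch, not Nuke's internal model:

```python
# Illustrative one-term radial (Brown-Conrady style) distortion; not
# Nuke's internal model, just the general idea behind LensDistortion.

def distort(x, y, k1):
    """Apply radial distortion to a normalized image coordinate."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls points toward the image centre;
# undistorting the plate before tracking pushes them back out.
dx, dy = distort(0.5, 0.5, k1=-0.1)
assert dx < 0.5 and dy < 0.5
```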

Then I broke down all the passes of the 3D object and colour graded them to match the scene. Since the HDRI applied to it was captured in a different lighting setting (a forest), and I was having trouble exporting objects from Maya, I couldn’t change the HDRI lighting, so I adjusted the colours of the reflection and diffuse passes instead. I copied the alpha back into the object, applied the distortion and noise back to the asset and merged it with the background.
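
The pass-rebuild step relies on the fact that, for most renderers, the beauty is roughly the sum of its light AOVs, so each pass can be graded independently and summed back together. A single-pixel sketch with made-up values:

```python
# Single-pixel sketch of grading AOVs and rebuilding the beauty:
# beauty ≈ diffuse + specular + reflection for most renderers.
# All pass values and gains here are made up for illustration.

def rebuild_beauty(passes, gains):
    """Apply a per-pass gain, then sum the graded AOVs back together."""
    return sum(value * gains.get(name, 1.0) for name, value in passes.items())

passes = {"diffuse": 0.4, "specular": 0.1, "reflection": 0.2}
# e.g. brighten the diffuse a little to fight the HDRI lighting mismatch
graded = rebuild_beauty(passes, {"diffuse": 1.2})
assert abs(graded - (0.4 * 1.2 + 0.1 + 0.2)) < 1e-9
```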

Another problem I noticed was the incorrect movement of the asset relative to the scene: since the object I used carried the tracking information of a different shot, the way it sits is wrong for this one.

Production set in motion – week 1

We started this week off by scheduling and holding a meeting with Christos to get the idea of the project approved and any further advice that would help us formulate the plan better. It was concluded that the footage would be obtained by the tutor, filmed in one of the UAL halls, meaning it was our objective to create as detailed a previs as possible for the shot.

It was essential for us to make a list of all the assets and props we wanted to use: which would have to be modelled and which we could get as CGI models online. We also had to narrow down what specifically we wanted to appear, happen, or grow out or over, and which statues we would implement in the shot. We were given the idea to search for second-hand picture frames, roughly A1 size. After two days going over eBay and online shops, while Jane checked actual vintage stores and explored the idea of using frames supplied by Giulia, we decided that those would need to be modelled.

For a great shot with good placement of the assets, we needed to understand the best conditions and requirements for tracking and for exporting that information to Maya, so we were advised to schedule a meeting with Dom, who gave us a few sessions on 3DEqualiser last term. Contacting him on Discord, we managed to set the meeting for Friday, before which I had a few goes at rough camera movements in Maya.

At first, we had to decide on the lens to be used for the filming, so I made a comparison video of a 24mm and a 35mm lens. As we are working remotely and don’t have the exact measurements of the room, it would be safer to use the wider lens. Dom pointed out that it was necessary to have the floor in the shot at all times, as it is important for the software to know the ground position for accurate CGI placement.
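
The comparison comes down to horizontal field of view; a quick sketch assuming a full-frame 36 mm sensor width (the actual room was not measured, so the numbers are only indicative):

```python
import math

# Horizontal field of view for a rectilinear lens on a full-frame
# (36 mm wide) sensor; the sensor width is an assumption.

def hfov_degrees(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

assert hfov_degrees(24) > hfov_degrees(35)   # the 24 mm sees more of the room
assert round(hfov_degrees(24)) == 74         # vs about 54 degrees for the 35 mm
assert round(hfov_degrees(35)) == 54
```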

Our 3rd team meeting took place on Wednesday, when we all came back together to note the progress that had been made. I received comments on which lens they preferred and which camera movements they liked and which weren’t as good. Giulia showed the 3D statues that she managed to acquire, talked about the problem of a triangular mesh and gave some ideas on how to resolve it. Jane also found an Egyptian statue which we all liked. We then thought about how to implement the idea of parallax movement inside the pictures into the scene, and noted down any questions we had as well as whom we should be asking. I had an idea of how it would be placed into the scene in Nuke, but wasn’t too sure. We further discussed how a 2D animation would be placed on an object like a statue, and Giulia explained the concept of an animated texture to us.

Upon further exploration of what could be placed in the shot and how it should be constructed, Giulia mentioned an art style called Vaporwave, and we thought about what could potentially be done to implement it in the project, be it creating new lighting sources in Maya or adjusting the colours through the various passes of the statues’ AOVs. Another idea, inspired by one of the inspiration posts from Instagram, was to add particle movement around the bases of the statues (https://www.instagram.com/p/CGgCLfNJw1M/?igshid=1jfq3fz5thdgi), which we all noted down as a question to ask Medhi at the next live Q&A session.

Floor plan with the positioned assets

The meeting with Dom on Friday was very informative and helpful, as it gave us a better insight into what makes a much better tracked shot. The advice he gave targeted marker placement (how many and where they should be positioned), the actual camera movement, and how important it was to make the shot simpler and shorter. He pointed out which parts of the previs could be remade and suggested we meet next week, after we gather the footage. For my 4th attempt at the previs I followed his advice, animating the Camera and Aim rather than just the Camera (to avoid the crazy, exaggerated movements appearing at the 17th and 25th seconds), which brought me to the most recent version. The simple 3D objects were also replaced by cards and locators representing the markers. The shot was also cut down to roughly 40 seconds, instead of a minute and 10 seconds. To fit the time, we decided to scrap the idea of a looped video, focusing on a simpler but more effective shot.

Given all the placements and positions of the assets, I also created a rough plan with all the supposed measurements, checking that everything would fit in the physical space.

Measurements and markers count