Smoke, fire and explosion – week 5

This week we learned how to create smoke, fire and explosions using the smoke solver and the pyro solver. Breaking them down to basics, it became apparent that the main principle behind all of these effects comes down to the use of the smoke solver.

To create the smoke and the fire we used a DOP network, while the explosion didn't require one. Nevertheless, most of the procedures followed the same steps:

  1. Create a geometrical shape (sphere or circle);
  2. Scatter points inside that shape;
  3. Create a new attribute (the one you want the solver to work on, e.g. density);
  4. Use Attribute Noise to make the attribute more chaotic, rather than following a regular pattern;
  5. Use Color for visualization;
  6. Apply Volume Rasterize Attributes (to the result of step 4) to create a volumetric representation;
  7. Finally, add a Null node, so there is a clear reference point when pointing the DOP nodes to the SOP path later on.
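As a minimal Python sketch of the steps above (the node type and parameter names are my assumptions based on a recent Houdini build, so treat it as a rough outline rather than the exact network):

```python
import hou

# Build the SOP network that generates the source volume for the solver.
geo = hou.node('/obj').createNode('geo', 'smoke_source')

sphere = geo.createNode('sphere')                         # 1. base shape
scatter = geo.createNode('scatter')                       # 2. scatter points on/in it
scatter.setFirstInput(sphere)

density = geo.createNode('attribcreate', 'create_density')  # 3. attribute for the solver
density.setFirstInput(scatter)
density.parm('name1').set('density')
density.parm('value1v1').set(1.0)

noise = geo.createNode('attribnoise')                     # 4. break up the uniform value
noise.setFirstInput(density)                              #    (point it at 'density' in its parms)

color = geo.createNode('color')                           # 5. quick visualisation
color.setFirstInput(noise)

rasterize = geo.createNode('volumerasterizeattributes')   # 6. points -> volume
rasterize.setFirstInput(color)                            #    (list 'density' as the attribute)

out = geo.createNode('null', 'OUT_SOURCE')                # 7. clear SOP path for the DOPs
out.setFirstInput(rasterize)

geo.layoutChildren()
```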

Then you would create the DOP network, inside which you would set up a Smoke Object, a Volume Source, a Smoke Solver and some gas microsolver, such as Gas Turbulence or Gas Wind (for smoke). The set-up is essentially the same for fire, except that you would use a Pyro Solver instead and also source a temperature attribute for a better fire simulation.
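A rough continuation of the same sketch for the smoke version of the DOP side (again, the DOP node type names are assumptions, and the wiring of the sourcing and microsolver inputs is easiest to finish by hand):

```python
import hou

# Minimal DOP network for the smoke set-up.
dopnet = hou.node('/obj').createNode('dopnet', 'smoke_sim')

smoke_object = dopnet.createNode('smokeobject')   # container holding the density field
source = dopnet.createNode('volumesource')        # pulls in the SOP volume from OUT_SOURCE
solver = dopnet.createNode('smokesolver')         # advects the density every frame
turbulence = dopnet.createNode('gasturbulence')   # gas microsolver adding extra motion

solver.setInput(0, smoke_object)                  # object stream into the solver
# The volume source and the gas turbulence then plug into the solver's sourcing
# and velocity-update inputs respectively; the input indices depend on the node,
# so I connected them by eye in the network view. For fire, swap the smoke
# solver for a pyro solver and also source a temperature field.

dopnet.layoutChildren()
```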

For the explosion, however, we didn't use a separate DOP network; it was set up entirely inside the Geo node. One of the most important target fields to add to the sourcing inside the Pyro Solver was divergence, as it defines how big the initial explosion is.

Explosion geo network

A second branch of the Pyro Solver, with a sphere object, was created to show how the smoke would interact with a collision object. However, it appears that the placement of my camera failed to capture that part of the output.

Machine (progress) – week 6

Following on from last week, I modelled a few extra parts for the machine, mostly trying to finish the bits that will be rigged and animated. I also practised placing the object into the scene, as well as putting in some cards with shadow surfaces. Visualising the placement made me rethink the wooden platform below the machine, so I decided to exclude it from the model altogether.

There is still some modelling work left to do, which I will be focusing on this week. The plan is also to check and remap the UVs where needed, place textures on a few parts, adjust the lighting and set up the correct AOVs for the render. I will also have to re-model two parts, as I didn't like their original modelling and appearance, and will need to update the details from the original reference.

Lighting in movies – week 1

Starting off the course in Katana, we were talked through lighting in general, its physical properties and how it appears to our eyes. The difference between key, fill and rim light was explained, and the main aspects of successfully recreating the lighting of a scene were pointed out. As an exercise we had to find three shots from various movies and try to assess the position, direction, intensity and properties of the light.

2001: A Space Odyssey

Looking at how the face is lit and the way the shadows fall, there is a light source directly above the character, as well as light coming from the blue-ish screen in the background. The blue tones are reflected off the surface of the chair and in the bottom centre of the shot (by his left hand, on the surface). Judging by the back of the head and the top of the shoulders, there may be a fill light placed behind the character. As the contrast between the key and fill light is low, we may conclude that it is low-key, soft lighting. The shot seems to have cool tones, roughly in the range of 4000K.

Alice in Wonderland

Immediately, the light seems to be quite soft and low key. There is a source coming from above the character, as we have shadows underneath her eyes, on her neck and under her chin. It feels like there is no fill light, or perhaps only a very subtle one. As it looks like overcast daylight, the temperature of the shot seems to be in the range of 6000K.

Tomb Raider

Shot outdoors in sunlight, this shot has warm undertones. The light is harsh, but appears to be high key, as there isn't a sharp contrast between the highlights and the fill light. As the face reads fairly well, there is definitely a fill light coming from in front of him, adding warmer tones to his skin (that judgement is based on the fill highlights on his nose, cheeks and neck, all with a yellow/orange warmth). The sun seems to be almost directly above him, suggesting it was shot around midday, or perhaps slightly earlier or later.

All about rendering – week 4

This week we learnt more about various renderers, how the image is adjusted by the software for the final output and how it is perceived by our eyes on the monitor screen. After learning about Houdini's built-in renderer, Mantra, and its corresponding nodes for lighting and materials, we also noted the differences when using the Arnold renderer in Houdini, along with its own nodes and tools.

Playing around with the materials and their properties, these were the test images that I made.

Here is a rendered-out sequence of the task from the second week of studying Houdini. For it, I added an Arnold shader, lights and an Arnold render node in the 'out' directory to the scene.
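For context, a small sketch of that kind of set-up via Python (the HtoA node type names like 'arnold' and 'arnold_light', as well as the output parameter, are assumptions that depend on the installed Arnold plug-in version):

```python
import hou

# An Arnold light in /obj and an Arnold render node in /out,
# roughly mirroring what was added to the scene by hand.
light = hou.node('/obj').createNode('arnold_light', 'key_light')
light.parmTuple('t').set((0, 5, 5))       # lift the light above the subject

rop = hou.node('/out').createNode('arnold', 'render_week2')
rop.parm('trange').set(1)                 # render a frame range (1-165 set in the UI)
rop.parm('ar_picture').set('$HIP/render/week2.$F4.exr')  # output parm name assumed
```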

Render of the first 165 frames of the second-week exercise.

Machine (progress) – week 5

Continuing from last time, I proceeded to model the next parts of the machine, which helped me to understand the movement better and break down which parts are connected to which, and where the points of rigging, joints, constraints, etc. will be.

Following the comment on the previous post, I cut a circular hole in one of the rotating parts and modelled the column and supporting beams, alongside adjusting the size of the platform and machine base to make it thinner and to suit the motion and position of the pushing rods. To practise rigging and animation, I set up the animation for all the parts that will be rotating and then created the rig for one of the rods, which pushes the cog to rotate.
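As a tiny illustration of the kind of keying involved (the object name and values below are made up, not the actual parts in my scene), a rotating part can be keyed in Maya like this:

```python
import maya.cmds as cmds

# Hypothetical example: one full rotation of a cog over 120 frames.
cog = 'machine_cog_01'   # placeholder name

cmds.setKeyframe(cog, attribute='rotateZ', time=1, value=0)
cmds.setKeyframe(cog, attribute='rotateZ', time=120, value=360)

# Linear tangents keep the rotation speed constant, which loops cleanly.
cmds.keyTangent(cog, attribute='rotateZ',
                inTangentType='linear', outTangentType='linear')
```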

Placing CGI asset into the scene – week 5

Following on from last week, where we learnt about camera tracking and creating point clouds in 3D space, we moved on to the final positioning of a 3D asset in the scene. Originally, I wanted to create a levitating droplet of water placed in the centre of the shot, or even a few copies of it around the place, but having trouble exporting the AOV passes from Maya and getting an incorrect sequence length, I used the asset provided by Aldo instead.

Firstly, the footage was read in and all the settings were updated (frame rate, sequence length); I then denoised it and created a camera track from the result. I also applied the LensDistortion node to the provided lens grid card. For the camera tracking I masked out the lake, as we should not be using reflection-based information. Then I wrote out the camera information using a WriteGeo node.
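A rough Python sketch of that part of the tree (node class names such as CameraTracker, Camera2 and WriteGeo are assumptions that vary a little between Nuke versions, and the file paths are placeholders; in practice I built this in the node graph rather than by scripting):

```python
import nuke

# Read the plate (placeholder path) and feed it to the camera tracker.
plate = nuke.nodes.Read(file='footage/lake_shot.####.exr')

tracker = nuke.nodes.CameraTracker()
tracker.setInput(0, plate)
# The lake is masked out upstream so reflection-based features are ignored,
# and the solve uses the denoised plate.

# After tracking and solving, the exported camera is written out for Maya.
camera = nuke.nodes.Camera2(name='trackedCamera')   # normally created by the tracker's Export
writegeo = nuke.nodes.WriteGeo(file='export/trackedCamera.abc')
writegeo.setInput(0, camera)
```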

Then I broke down all the passes of the 3D object and colour graded them to match the scene. Since the HDRI that had been applied to it was captured in a different lighting environment (a forest), and I was having trouble exporting objects from Maya, I couldn't change the HDRI lighting, so I tried to adjust the colours of the reflection and diffuse passes instead. I copied the alpha back into the object, applied the distortion and noise back onto the asset and merged it with the background.
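A simplified sketch of how the passes were put back together (the layer names, Shuffle set-up and paths here are hypothetical; the provided render had its own AOV naming):

```python
import nuke

render = nuke.nodes.Read(file='asset/droplet_render.####.exr')   # placeholder path
plate = nuke.nodes.Read(file='footage/lake_shot.####.exr')       # placeholder path

# Pull out the passes as separate streams and grade each towards the plate.
diffuse = nuke.nodes.Shuffle(name='get_diffuse', **{'in': 'diffuse'})
diffuse.setInput(0, render)
reflection = nuke.nodes.Shuffle(name='get_reflection', **{'in': 'reflection'})
reflection.setInput(0, render)

diffuse_grade = nuke.nodes.Grade()
diffuse_grade.setInput(0, diffuse)
reflection_grade = nuke.nodes.Grade()
reflection_grade.setInput(0, reflection)

# Rebuild the beauty by adding the graded passes back together.
beauty = nuke.nodes.Merge2(operation='plus')
beauty.setInput(0, diffuse_grade)
beauty.setInput(1, reflection_grade)

# Copy the original alpha back in, then merge the asset over the plate.
copy_alpha = nuke.nodes.Copy(from0='rgba.alpha', to0='rgba.alpha')
copy_alpha.setInput(0, beauty)
copy_alpha.setInput(1, render)

comp = nuke.nodes.Merge2(operation='over')
comp.setInput(0, plate)        # B: background plate
comp.setInput(1, copy_alpha)   # A: graded asset
```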

Another problem I noticed was the incorrect movement of the asset relative to the scene. The object I used carried the tracking information of a different shot, hence the way it sits in this shot is wrong.

Production set in motion – week 1

We started this week off by scheduling and holding a meeting with Christos to get the idea of the project approved and to gather any further advice that would help us formulate the plan better. It was concluded that the footage would be obtained by the tutor, filmed in one of the UAL halls, meaning it was our objective to create as detailed a previs as possible for the shot.

It was essential for us to make a list of all the assets and props we wanted to use, noting which ones would have to be modelled and which we could get as CGI models online. We also had to narrow down what specifically we wanted to appear, happen, grow out or over, and which statues we would be implementing in the shot. We were given the idea of searching for second-hand picture frames, roughly A1 in size. After two days going over eBay and online shops, whilst Jane checked actual vintage stores and explored the idea of using frames supplied by Giulia, we decided that those would need to be modelled.

For a great shot with good placement of the assets, we needed to understand the best conditions and requirements for tracking and for exporting that information to Maya, so we were advised to schedule a meeting with Dom, who gave us a few sessions on 3DEqualizer last term. Contacting him on Discord, we managed to set the meeting for Friday, before which I had a few goes at rough camera movements in Maya.

First, we had to decide on the lens to be used for the filming, so I made a comparison video of a 24mm and a 35mm lens. As we are working remotely and don't have the exact measurements of the room, it would be safer to use the wider lens. Dom pointed out that it was necessary to have the floor in the shot at all times, as it is important for the software to know the ground position for accurate CGI placement.
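As a rough sanity check of that choice (assuming a full-frame sensor about 36mm wide; the actual camera back may differ), the horizontal field of view for each lens works out roughly as follows:

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Approximate horizontal field of view in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

print(round(horizontal_fov(24), 1))   # ~73.7 degrees
print(round(horizontal_fov(35), 1))   # ~54.4 degrees
```

So the 24mm sees noticeably more of the room, which leaves a margin of safety when we don't know its exact size.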

Our third team meeting took place on Wednesday, when we all came back together to note the progress that had been made. I received comments on which lens they preferred, and on which camera movements they liked and which weren't as good. Giulia showed the 3D statues she had managed to acquire, talked about the problem of a triangulated mesh and gave some ideas for how to resolve it. Jane also found an Egyptian statue which we all liked. We then thought about how to implement the idea of parallax movement inside the pictures into the scene and noted down any questions we had, as well as whom we should be asking them. I had an idea of how it could be placed into the scene in Nuke, but wasn't too sure. We further discussed how a 2D animation would be placed onto an object like a statue, and Giulia explained the concept of an animated texture to us.

Upon further exploration of what could be placed in the shot and how it should be constructed, Giulia mentioned an art style called Vaporwave, and we thought about what could potentially be done to implement it in the project, be it creating new light sources in Maya or adjusting the colours through the various passes of the statues' AOVs. Another idea, inspired by one of the inspiration posts from Instagram, was to add particle movement around the base of the statues (https://www.instagram.com/p/CGgCLfNJw1M/?igshid=1jfq3fz5thdgi), which we all noted down as a question to ask Medhi at the next live Q&A session.

Floor plan with the positioned assets

The meeting with Dom on Friday was very informative and helpful, as it gave us a better insight into what makes a well-tracked shot. The advice he gave targeted marker placement (how many there should be and where they should be positioned), the actual camera movement and how important it was to make the shot simpler and shorter. He pointed out which parts of the previs could be re-made and suggested we have a meeting next week, after we gather the footage. I made a fourth attempt at the previs following his advice, this time animating a Camera and Aim rather than just a Camera (to avoid the crazy, exaggerated movements appearing around the 17th and 25th seconds), and arrived at the most recent version. The simple 3D objects were also replaced by cards and locators representing the markers. The shot was also cut down to roughly 40 seconds, instead of a minute and 10 seconds. To fit the time, we decided to scrap the idea of a looped video and focus on a simpler but more effective shot.
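A small sketch of the camera-and-aim idea in Maya (the names, positions and frame numbers are placeholders, not the actual previs scene): keying the position of an aim locator keeps the camera direction smooth, instead of keying the camera rotations directly.

```python
import maya.cmds as cmds

# Create a camera and an aim locator, and constrain the camera to look at it.
cam, cam_shape = cmds.camera(focalLength=24)
cam = cmds.rename(cam, 'previsCam')
aim = cmds.spaceLocator(name='previsCam_aim')[0]
cmds.aimConstraint(aim, cam, aimVector=(0, 0, -1), upVector=(0, 1, 0),
                   worldUpType='scene')

# Key the camera body and the aim point on a few frames (placeholder values).
for frame, cam_pos, aim_pos in [(1, (0, 1.6, 8), (0, 1.5, 0)),
                                (1000, (4, 1.6, 4), (1, 1.5, 0))]:
    for axis, cv, av in zip('XYZ', cam_pos, aim_pos):
        cmds.setKeyframe(cam, attribute='translate' + axis, time=frame, value=cv)
        cmds.setKeyframe(aim, attribute='translate' + axis, time=frame, value=av)
```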

Given all the placements and positions of the assets, I also created a rough plan with all the assumed measurements, checking that everything will fit in the physical space.

Measurements and markers count

Destruction of the house – week 3

This week was quite tough but fun, as we were learning how to destroy the wooden cabin we had built previously. Before creating the procedure to destroy the building, I went back to the Houdini scene and slightly rebuilt the house, creating a much neater network. I also added glass to the house, which wasn't there before.

Practising on just a cube, it was quite informative to learn the basics behind pre-fracturing an object: the software cannot simply be told to break it into arbitrary pieces at simulation time. The breaks and lines where the material will crack must be created by the user before any force is applied.
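A minimal sketch of that pre-fracturing idea on a simple box (node type and parameter names are my assumptions from a recent Houdini build):

```python
import hou

geo = hou.node('/obj').createNode('geo', 'fracture_test')

box = geo.createNode('box')

# Scatter points inside the box (via a fog volume); they become the Voronoi
# cell centres, i.e. the user-defined cracks the material will break along.
iso = geo.createNode('isooffset')
iso.setFirstInput(box)
points = geo.createNode('scatter')
points.setFirstInput(iso)
points.parm('npts').set(50)

# Cut the box along the Voronoi cells, then pack the pieces for the RBD solver.
fracture = geo.createNode('voronoifracture')
fracture.setInput(0, box)
fracture.setInput(1, points)

assemble = geo.createNode('assemble')
assemble.setFirstInput(fracture)
assemble.parm('pack_geo').set(1)   # "Create Packed Geometry" (parameter name assumed)

geo.layoutChildren()
```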

Whilst following the lesson and copying the guided procedure, I came across a problem when breaking my pre-built wooden cabin. For the pieces that are supposed to be active, the glue constraint isn't working, and hence the parts start to fall under gravity before the ball comes into contact with the building; hence the result. There is still some collision happening towards the end of the shot, as the pieces the ball does come into contact with get moved.

After attending the live Q&A session with Medhi, I managed to solve my problem by checking the Constraints from Rules, Constraint Properties and Assemble nodes. The problem came down to some ticked boxes in the latter node. After fixing those and setting the correct options, the wooden cabin destruction worked.

Machine modelling – week 4

This week I focused on starting my machine model and creating all the necessary parts. I broke the original design down into three main sections, each of which I planned to work on separately at different times.

The breakdown

This week I focused on the first section, i.e. the wooden base, the metal base, the wheel and the rod attached to it, with all the corresponding parts positioned on the rod. I managed to finish most of the parts, but faced a few challenges and hence did not complete the entire plan.

Now, on this side of the wheel, in the metallic base underneath the two cogs that will be rotating, there is a hole cut out in a specific shape. I will focus on cutting it out when I animate the movement of the cogs, so it will be clear how deep they need to go. I also have bad wireframing on two parts of the rotating cog, specifically the elongated beam and the rectangular one. These will go on my list of parts to remake.

It came to my attention that on the left side of the main rod, the furthest element has a hole cut into it, such that it looks as if the screw sits inside it. I put that question to Nick during today's class and got some ideas on how to solve this problem without having to remodel the entire part again.

The plan for the following week is to focus on building most of the second section of the machine, especially the mechanisms which have to be animated, and to finish a few missing parts from the first section. Then I will rig and animate some of the movement, so that I can place a rough version of the model into the actual scene, just as we practised today during the Maya lesson.

Planning phase

Last week, Jane, Giulia and I had a Zoom call to discuss and plan the idea for our collaboration project. Being all VFX students, we were inspired by 3D and 2D projections of small animated objects in a pre-filmed 2D shot. As the main reference we took this video from the kidmograph account on Instagram:

@kidmograph, Instagram

The idea was to film or create an art gallery room, where the viewer (i.e. the camera) is taken around and shown various paintings, which have a parallax effect as the angle of perception changes, and 3D sculptures which have organic material growing around them. As the camera moves around the room, it ends by coming back to the point of entrance, where the final frame would loop into the first frame of the video. I also wanted to add some particle movement, such as burning fire or flowing water in an enclosed space, like a museum glass display, quite similar to this post.

https://www.instagram.com/reel/CJxP9rJpOaT/?igshid=1llhuwvwttulp

When meeting for the second time, we all showed the visualisation tasks we had done, as well as compiling a document with all the resources and references we had found. Whilst Giulia came up with a rough previsualisation picture of the look of the room, Jane set up some 3D projections of a Vincent van Gogh painting in Nuke to show the changing perspective. On my end, I created a rough storyboard of the camera movement in the room.

After the second Zoom meeting, we noted all the possibilities for acquiring footage of a museum hall. The various ideas we came up with were:
1) seeing if the Tate Modern was open and whether it was possible to arrange to film inside it;
2) finding a big hall we could get access to, so that we could make it look like an art gallery room;
3) filming inside UAL;
4) using some stock footage;
5) getting a VR model of a museum hall;
6) OR modelling the entire hall ourselves.

We then scheduled a call with our tutor, Christos, to check all the possibilities with him and get any advice we could on the workflow of the project and on how to do certain things, for instance camera tracking: which software to use for it and how to use that tracking information in Maya for the models.