Having finished the model with its additional details, I moved on to adding textures to the parts and correcting UV maps where needed.
[Images: machine model close-up; end section of the machine, back and front views]
Now we got to the exciting part, where we were shown how to combine all the knowledge gained so far into something that generates much cooler effects.
The first exercise was the creation of a disintegration effect, where a physical object breaks into smaller pieces that fly apart while shrinking. I tried it on a torus shape, then applied it to the human object, varying some of the attributes, such as the rate at which the pieces disappear.
For the man example I wanted the pieces to last slightly longer, so I changed the scale multiplier from 0.92 to 0.96 in the Primitive node inside the SOP Solver in the DOP Network.
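Because the multiplier is applied every frame, the pieces shrink exponentially, and a small change in the factor noticeably extends how long they stay visible. A minimal sketch of that behaviour (the 0.05 visibility threshold is an illustrative assumption, not a value from the scene):

```python
# Hypothetical sketch: in the SOP Solver, each piece's scale is
# multiplied by a constant factor per frame, so it decays exponentially.
# The 0.05 "effectively invisible" threshold is an assumed cut-off.

def frames_until_smaller_than(factor, threshold=0.05, start_scale=1.0):
    """Count frames until the per-frame multiplied scale drops below threshold."""
    scale, frames = start_scale, 0
    while scale >= threshold:
        scale *= factor
        frames += 1
    return frames

fast = frames_until_smaller_than(0.92)  # original multiplier
slow = frames_until_smaller_than(0.96)  # multiplier used for the man
```

With these assumptions, raising the factor from 0.92 to 0.96 roughly doubles how many frames a piece survives before vanishing.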
We then moved on to using smoke to direct a path for particles, aiming for a more magical effect, similar to the aura from the second-week exercise.
I wanted the smoke to be less tightly grouped than Mehdi's, so I changed a few settings, making it slightly wider in the simulation and dissolution. I also added some colour for the density inside the Volume Visualization node. I tried various ways of adding colour to the particles, which were converted into points for the Copy to Points node, but haven't managed to figure it out yet.
In the second half of the lesson we moved on to a bigger project: a large building destruction. The final objective of this example is to have a meteor collide with the building, with explosions at the points of entry and exit.
The first part was finding all the problems with the geometry, fixing them, and creating the pieces the building will break into. We noted which parts had defects but would not be involved in the collision, so we didn't necessarily have to fix them (e.g. the air-conditioning unit at the top).
Then we created a proxy version of the meteor, as the original was quite a heavy object with a lot of detail and polygons. We were shown various ways of subdividing the polygons.
We then started creating the RBD geometry, inside which there will be a big graph defining how the pieces break down for the various parts of the building. For the outside and inside walls we set it up the following way:
The plan is to continue creating the destruction of the building's various parts, such as the glass and the inside bits like the floor and inner walls. For those, the example from week 3, when we destroyed the cabin, could be used.
Continuing from last time, I modelled the final parts of the machine that will be rigged and animated, as well as a few smaller additional details, like screws of different appearance and any extra required cogs, beams and circular-shaped bits.
Then, thinking about adding textures and which UVs I would need to fix, I added a couple of metal-based textures to the bigger parts. For the screws (I counted 41 of the basic shape), it is easier to unwrap and correct the UV map of one screw, export it, paint it in Mudbox, export the new texture and apply it to the other screws. But that also means the other screws will need to be copied from the corrected one and placed back in their previous positions.
When adding the planes for walls as shadow casters and adjusting the intensity of the skydome light (to 10), I still noticed the blue-ish reflection on the machine base and highlights around the column, meaning I need to correct the walls and block out more light, as well as place an area light in front of the object, to the right side. Looking at the floor, one can see harsh shadows falling from the rocks in a specific direction, so I need to recreate that lighting for this room, as the HDRI used was actually taken from the first room.
Noting down the rest of the work: there are still some things that need to be modelled or re-modelled, UV maps to fix and textures to place. Given that I have all the parts that will be moving, I need to rig and animate them correctly, which will be my main focus this week. I will also need to rotoscope out the part of the wall in front of the machine, so that when placed in Nuke it blocks parts of the machine and looks like it was there originally.
Looking at the reference model and what I have learned in other classes, I had the idea of additionally modelling a see-through water tank for the first room: as the pump moves, water travels from a pre-modelled tube into the tank, filling it up. The water simulation has to be completed in Houdini and later brought into Maya for rendering, but whether I can do this depends on the workload and the time limit.
Crypt
This time we proceeded to learn how to make projections in 3D space. Given the previous footage, the task was to clean up the markers in the crypt, as well as to use a shot of our own and remove whichever bits we wanted.
The principle behind creating projections is quite similar to a 2D clean-up combined with the camera created by the CameraTracker node. In a projection, the information from the tracked camera is applied to a clean plate, rotoscoped and painted over at a specific frame, so that this card can be tracked and fixed to the correct position in 3D space.
The Project3D node was fairly simple and straightforward, leaving us to simply recreate the process and the principal structure of the 3D projection workflow over and over.
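The idea underneath this workflow is just a pinhole camera: a 3D point is divided by its depth and scaled by the focal length over the film aperture, which is how the tracked camera pins the painted card to a fixed pixel position. A minimal standalone sketch of that projection (the focal, aperture and resolution values are illustrative assumptions, not ones from the crypt shot):

```python
# Hypothetical pinhole-projection sketch of the principle behind Project3D:
# a camera-space 3D point maps to a stable pixel position, so a painted
# patch "sticks" in 3D. Focal/aperture/resolution values are assumptions.

def project_point(point, focal=35.0, haperture=36.0, width=1920, height=1080):
    """Project a camera-space 3D point to pixel coordinates (pinhole model)."""
    x, y, z = point
    if z >= 0:
        return None  # behind the camera: Nuke/Maya cameras look down -Z
    vaperture = haperture * height / width   # vertical aperture from the aspect ratio
    u = (focal / haperture) * (x / -z)       # normalized offset from frame centre
    v = (focal / vaperture) * (y / -z)
    return ((u + 0.5) * width, (v + 0.5) * height)
```

A point on the camera axis lands at the centre of the frame, and points farther from the camera project closer to the centre, which is the parallax the tracked camera reproduces frame by frame.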
Specifically for the crypt, I created one card for all the points on the floor in the second room, so for a few frames I had to roto out the objects that were in front of the markers.
Museum
For the second shot I chose this clip from the museum, for which I had previously created a CameraTracker and placed cards. In this scene I decided to remove a switch on the wall and, pushing myself further, the furthest chair in the second hall. In my opinion, the grading could be improved and adjusted better to the changing exposure.
Last week we were given more explanation of how to use the LensDistortion node and how a compositor would import and use one shared within a company for the same shot (an ST Map). Moving on to the PointsTo3D node, we quickly grasped how to use the supplied footage and camera information, how to create an axis and how to connect the transformation information to a CGI object.
As practice, I quickly created and applied the CameraTracker node, from which I then created a Camera node (in production this would normally be supplied to the compositor). I then chose three different points in the scene where I wanted to attach the objects, and with PointsTo3D obtained the needed transform information. The axes were then created and linked to the CGI objects.
The main focus of the last two weeks was on getting the footage and organizing sessions with Dom in order to track it in 3DEqualiser. From the variety of shots captured by Christos, who followed the latest camera previs clip, on both a Blackmagic Pocket Cinema Camera 4K and a Canon 6D, the three of us agreed on using the very first shot from the latter camera. The crop factor on the BMPCC 4K was quite high, 1.9x, meaning it was cutting out a lot of the information required for tracking.
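The crop factor translates directly into a narrower field of view, which is why the cropped sensor loses trackable information at the frame edges. A rough comparison, assuming a 24 mm lens and a 36 mm full-frame sensor width (the lens value is an illustrative assumption; only the 1.9x crop comes from our cameras):

```python
import math

# Assumed values: 24 mm lens, 36 mm full-frame sensor width.
# Only the 1.9x crop factor is taken from the actual cameras.

def horizontal_fov(focal_mm, sensor_width_mm):
    """Horizontal angle of view of a simple pinhole lens, in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

full_frame = horizontal_fov(24, 36.0)        # e.g. the Canon 6D sensor
cropped = horizontal_fov(24, 36.0 / 1.9)     # 1.9x crop, as on the BMPCC 4K
```

Under these assumptions the same lens sees roughly 74 degrees horizontally on the full-frame body but only about 43 degrees on the 1.9x crop, so a large band of the scene (and its tracking markers) simply never reaches the sensor.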
We then managed to get a learning session with Dom on Monday last week, to utilize 3DEqualiser for a better track. During the session we practised on the first 2000 frames, meaning there are about 1000-1500 more that need to be tracked and have their information exported. Another session was planned, where Dom would show us how to export all the material with tracks into Maya and how we would place objects around.
In the meantime, Jane managed to test the parallax effects in Nuke on the footage and will send us her trials, so that we can decide on changes where necessary.
During our team meeting on Friday, we figured out and noted down the main steps of the project workflow and how we would divide the workload between ourselves. In conclusion, this was the rough outline of what had to be done:
The final outline of all placed objects and their purpose in the scene was the following: the first and second paintings have a parallax effect, with Jane drawing the paintings based on some pictures; the first classical statue is done in the Vaporwave artistic style, for which Giulia will model neon signs and lights and place them accordingly; the third painting has some animation inside it; the Egyptian statue has an animated texture (both done by Jane) and a few 3D projections placed over it; the display case is to be modelled by Giulia, with the fire added by me in Houdini, as well as the particle movement on the statue stands. It was also decided that the tracking would be attempted by all three of us, with the best track chosen and used for the project.
When discussing what needed to be done to the statues and what would be in the paintings, we had the idea of implementing some animation Jane made in the third painting.
Inspired by the artwork of Keith Haring, we decided to create a themed side using the third painting and the Egyptian statue. The animated characters would be implemented both in the painting and added as an animated texture on the statue. Watching this video also inspired us to add some sounds to the scene, and maybe even a musical accompaniment, as in the video.