Lesson 6: Rendering and Compositing
3D rendering relies on creating three-dimensional images from mathematical models that describe the geometry, textures, lighting, and physical properties of virtual objects. This method allows for the representation of scenes and objects in a three-dimensional space, offering dynamic viewpoints and immersive interactions. Using sophisticated algorithms, 3D rendering accurately simulates how light interacts with surfaces and materials, creating photorealistic or stylized renders according to artistic needs.
This part of the workflow deserves careful attention: if it is not well understood, it will directly impact the quality of your renders.
Table of Contents
1. Ray Tracing and Path Tracing
2. Sampling
3. AOVs and Compositing
4. Sources
1. Ray Tracing and Path Tracing
Path tracing and ray tracing are two fundamental techniques in computer graphics, used to simulate the interaction of light with objects in a scene. Knowing how they work, and how to sample and composite an image, underpins everything that follows. Here is a detailed explanation of each technique.
Ray Tracing:
The idea behind ray tracing dates back several centuries and is often attributed to the German artist Albrecht Dürer and his perspective machine. The first computer use of the technique dates back to 1968, but it was the work of Turner Whitted in the 1970s that allowed computer-generated imagery to approach reality through ray tracing. Because it requires significant computational power, it is only recently that this technology has appeared in gaming consoles and PC graphics cards.
What is the purpose of ray tracing?
Ray tracing is a rendering technique that simulates the propagation of light in a scene by tracing rays from the camera's viewpoint to the objects in the scene. RenderMan is a good example of a renderer built on this approach.
Here are the key steps of the ray tracing process (a minimal code sketch follows the list):
- Primary Ray Casting: Rays are cast from the viewpoint of the camera (the viewer's eye) through each pixel of the image. These rays are called "primary rays."
- Intersection with Objects: Each primary ray is tested to determine whether it intersects an object in the scene. If it does, the intersection point and the hit object are identified.
- Lighting Calculation: For each intersection point, the lighting model is applied to determine the color of that point, taking into account light sources, reflections, and refractions.
- Shadow Calculation: The renderer checks whether the intersection point is directly illuminated by a light source. If not, the point lies in the shadow of another object and receives less light.
- Reflection and Refraction: If the object is reflective or transparent, reflected or refracted rays may be traced from the intersection point to compute reflection and refraction effects.
- Recursion: The process can be repeated recursively to simulate effects such as multiple reflections, refractions, and global illumination.
- Image Assembly: The computed colors for each ray are combined to form the final image.
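To make these steps concrete, here is a minimal Python sketch of the primary-ray loop for a single sphere and a single point light. Everything here (the scene, the function names) is invented for illustration; production renderers like RenderMan are vastly more sophisticated, but the skeleton is the same.

```python
# Minimal ray tracing sketch (illustrative only): one sphere, one point light.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t to the nearest hit in front of the ray, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # the ray misses the sphere
    sq = np.sqrt(disc)
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-4:                     # small epsilon avoids self-intersection
            return t
    return None

def shade(hit_point, normal, light_pos, albedo):
    """Simple Lambertian lighting with a shadow-ray test."""
    to_light = normalize(light_pos - hit_point)
    # Shadow ray: if geometry blocks the light, the point is in shadow.
    if intersect_sphere(hit_point, to_light, SPHERE_CENTER, SPHERE_RADIUS):
        return np.zeros(3)
    return albedo * max(np.dot(normal, to_light), 0.0)

SPHERE_CENTER = np.array([0.0, 0.0, -3.0])
SPHERE_RADIUS = 1.0
LIGHT_POS = np.array([2.0, 2.0, 0.0])
WIDTH, HEIGHT = 64, 64

image = np.zeros((HEIGHT, WIDTH, 3))
eye = np.zeros(3)
for y in range(HEIGHT):
    for x in range(WIDTH):
        # Primary ray through the pixel (simple pinhole camera).
        u = (x + 0.5) / WIDTH * 2.0 - 1.0
        v = 1.0 - (y + 0.5) / HEIGHT * 2.0
        direction = normalize(np.array([u, v, -1.0]))
        t = intersect_sphere(eye, direction, SPHERE_CENTER, SPHERE_RADIUS)
        if t is not None:
            hit = eye + t * direction
            normal = normalize(hit - SPHERE_CENTER)
            image[y, x] = shade(hit, normal, LIGHT_POS, np.array([0.8, 0.3, 0.3]))
```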
In addition to ray tracing, other technologies such as denoising and photon mapping are often employed. Here is an example of an image rendered with ray tracing: the algorithm produces a very realistic result, with beautiful refraction and well-executed transparent materials.
Path Tracing
Path tracing, on the other hand, is a more advanced method based on casting random rays from the camera. These rays bounce randomly through the scene, simulating indirect lighting, multiple reflections, and diffusion. Path tracing accounts more precisely for the complex interactions of light that occur in a scene.
Path tracing is an extension of ray tracing that more realistically simulates how light interacts with the scene. Here are the key points of the path tracing process (again, a short code sketch follows the list):
- Primary Ray Casting: As in ray tracing, rays are cast from the camera through each pixel.
- Intersection with Objects: Each primary ray is tested to determine whether it intersects an object in the scene.
- Secondary Ray Sampling: Instead of stopping at the first intersection, path tracing traces an additional ray from the intersection point in a random direction. This simulates light diffusion.
- Lighting Calculation: As in ray tracing, the color of the intersection point is calculated, taking into account light sources and reflections.
- Recursion: The process can be repeated several times, with new secondary rays sampled at each step, to simulate more complex effects such as global illumination.
- Sample Averaging: To reduce noise, the colors obtained from many samples are averaged.
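The core of path tracing can also be sketched in a few lines. The following illustrative Python snippet traces random hemisphere bounces for a single diffuse sphere lit by an emissive sky; the scene and all names are made up for the example, but the random bounce and the sample averaging are exactly the ideas described above.

```python
# Path tracing sketch (illustrative only): one diffuse sphere under a sky light.
import numpy as np

rng = np.random.default_rng(0)
MAX_BOUNCES = 3
SPP = 32                                    # samples (paths) per pixel
SKY = np.array([1.0, 1.0, 1.0])             # emissive environment
CENTER, RADIUS = np.array([0.0, 0.0, -3.0]), 1.0
ALBEDO = np.array([0.8, 0.4, 0.3])

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(o, d):
    oc = o - CENTER
    b = 2.0 * np.dot(d, oc)
    disc = b * b - 4.0 * (np.dot(oc, oc) - RADIUS * RADIUS)
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def sample_hemisphere(n):
    """Uniform random direction on the hemisphere around normal n."""
    while True:
        v = rng.uniform(-1.0, 1.0, 3)
        if 1e-12 < np.dot(v, v) <= 1.0:
            v = normalize(v)
            return v if np.dot(v, n) > 0.0 else -v

def radiance(o, d, depth):
    """One random light path: bounce until the ray escapes or the depth limit."""
    if depth >= MAX_BOUNCES:
        return np.zeros(3)
    t = hit_sphere(o, d)
    if t is None:
        return SKY                          # ray escaped: it "sees" the sky light
    p = o + t * d
    n = normalize(p - CENTER)
    w = sample_hemisphere(n)                # random secondary ray
    # Lambert BRDF = albedo/pi; uniform hemisphere pdf = 1/(2*pi).
    return (ALBEDO / np.pi) * radiance(p, w, depth + 1) * np.dot(w, n) * 2.0 * np.pi

# Averaging many noisy path estimates reduces variance (noise ~ 1/sqrt(SPP)).
pixel = sum(radiance(np.zeros(3), normalize(np.array([0.05, 0.0, -1.0])), 0)
            for _ in range(SPP)) / SPP
print(pixel)
```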
Path tracing is particularly effective for simulating complex lighting effects such as caustics (patterns of light produced when light is focused by reflection or refraction, for instance through transparent surfaces), soft shadows, and global illumination. However, it is more computationally intensive than traditional ray tracing.
In summary, the main difference between ray tracing and path tracing lies in how they model the propagation of light. Ray tracing follows rays deterministically, while path tracing uses random rays to simulate more complex effects of indirect lighting. As a result, path tracing tends to produce more realistic images but also demands more computational power.
2. Sampling
Sampling in computer graphics refers to the process of selecting and evaluating a limited number of samples to represent an image. This is crucial because it is often impossible to exhaustively compute every detail of a scene, given the complexity and amount of data involved. Instead, strategic samples are taken to estimate the missing values.
The purpose of sampling, in simple terms, is to find the settings that produce a clean image, with minimal noise, in as little time as possible. It is one of the most important parameters to adjust before launching a final render in production: the goal is to optimize the process.
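A tiny numeric experiment shows why these settings matter so much: the noise of a Monte Carlo pixel estimate shrinks only as one over the square root of the sample count, so each halving of noise costs four times the rays. (The "pixel" below is a stand-in with a known value; no renderer is involved.)

```python
# Why more samples mean less noise: a Monte Carlo average converges as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
for n in (4, 16, 64, 256, 1024):
    # 1000 independent "renders" of one pixel, each averaging n noisy samples.
    estimates = rng.uniform(0.0, 1.0, size=(1000, n)).mean(axis=1)
    print(f"{n:5d} samples -> std dev of pixel estimate: {estimates.std():.4f}")
# Each 4x increase in samples roughly halves the noise: going from a noisy
# preview to a clean final frame costs many more rays.
```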
For example, this render shows one of the most common issues: insufficient sampling leads to noise, and noise is rarely acceptable in a final image.
RenderMan Sampling
The choice of sampling parameters, such as the number of rays to be cast per pixel, the sampling method, and the level of adaptivity, has a significant impact on the final render. Properly configuring the sampling leads to a sharp and detailed image while minimizing unwanted noise.
RenderMan tutorial for a first approach:
We will look together at the most commonly used values and their results:
(Comparison renders at 15 seconds, 5 minutes, and 30 seconds of render time.)
Here we can clearly see the differences between the sampling settings applied to each image. What you need to know is that a minsamples value of 0 does not actually mean zero: it tells the renderer to use the square root of maxsamples as the minimum. The values 0 and -1 are special values in RenderMan.
Here is a very interesting diagram taken from the RenderMan documentation. It clearly shows the influence of pixel variance, which represents the tolerance for noise in the image: the smaller it is, the less noisy the image will be. However, as you decrease it, you become more demanding, and render times can skyrocket. Adjust it gradually and be careful not to overdo it.
Analysis
You can quickly and clearly view the statistics of your images to help analyze your settings. RenderMan includes a modern, JavaScript-based viewer for displaying statistics from RIS renders. This system separates the viewing of statistics files from the raw XML data and lets users easily customize how they visualize the data.
In conclusion, sampling plays a crucial role in the rendering process, allowing for realistic images despite the complexity of scenes. Adjusting parameters such as max samples, min samples, and pixel variance is crucial for balancing image quality, render time, and noise.
Max samples determines the total number of rays emitted per pixel, directly influencing the quality of the final image. Min samples acts in conjunction by setting the minimum number of rays to cast, often tied to the square root of max samples, to guarantee a baseline level of precision.
Pixel variance is a measure of tolerance for noise in the final image. Reducing it produces a cleaner image but can significantly increase render times, so careful adjustment is essential.
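Here is a hypothetical sketch (not RenderMan's actual code) of how an adaptive sampler might combine these three parameters: start at the minimum, keep adding rays while the estimated noise is above the tolerance, and stop early once the pixel has converged.

```python
# Sketch of adaptive sampling with min samples, max samples, and a
# pixel-variance threshold (hypothetical logic, for intuition only).
import numpy as np

rng = np.random.default_rng(7)

def sample_pixel():
    """Stand-in for tracing one camera ray; returns a noisy radiance value."""
    return rng.normal(loc=0.5, scale=0.2)

def render_pixel(min_samples, max_samples, pixel_variance):
    values = [sample_pixel() for _ in range(min_samples)]
    while len(values) < max_samples:
        # Standard error of the mean estimates how noisy the pixel still is.
        err = np.std(values, ddof=1) / np.sqrt(len(values))
        if err <= pixel_variance:
            break                       # converged: stop early, save render time
        values.append(sample_pixel())
    return np.mean(values), len(values)

# A min samples of 0 in RenderMan means "use sqrt(max samples)":
max_s = 256
min_s = int(np.sqrt(max_s))             # -> 16
value, used = render_pixel(min_s, max_s, pixel_variance=0.015)
print(f"pixel = {value:.3f} after {used} samples")
```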
Finding the right balance between these parameters is an art in itself, requiring a deep understanding of the scene to be rendered and the specific requirements of the project. A well-tuned sampling configuration allows for sharp, detailed, and realistic images while minimizing undesirable noise.
Arnold
In Arnold, sampling is a crucial element of the rendering process that determines the quality and accuracy of the generated image. It refers to the number of rays emitted per pixel to calculate illumination and reflections in a 3D scene.
The most fundamental sampling parameter in Arnold is Camera (AA) samples. It sets the number of primary rays cast from the camera into the scene. A higher value produces a sharper image, but it also entails longer rendering times.
Next, there is Diffuse samples, which controls the quality of indirect diffuse lighting (light bounced between surfaces). A higher number of samples gives a cleaner, more accurate result, but it also increases render time.
There are also other sampling parameters specific to certain features, such as Specular samples for specular reflections, Transmission samples for transparent materials, and Volume samples for volumetric effects.
Finding the right balance of sampling is crucial. Too few samples can lead to noise in the image, while too many samples can significantly increase rendering times.
Arnold also offers adaptive sampling features, which automatically adjust the number of samples based on the scene's complexity. This helps optimize rendering times while maintaining high image quality.
In summary, sampling in Arnold is a key parameter that directly influences the quality and rendering time of an image. Striking the right balance between rendering quality and efficiency is essential for achieving optimal results.
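As a rough mental model, a sample value in Arnold is squared to get the actual ray count, and secondary samples multiply per camera ray. The small illustrative function below does that arithmetic; treat the exact accounting as an approximation and refer to the Arnold documentation for the real details.

```python
# Rough ray-count arithmetic for Arnold-style settings: each sample value is
# squared (e.g. AA=4 -> 16 camera rays/pixel), and secondary samples multiply
# per camera ray. Figures are illustrative approximations.
def rays_per_pixel(aa, diffuse=0, specular=0, transmission=0):
    camera = aa * aa
    secondary = camera * (diffuse**2 + specular**2 + transmission**2)
    return camera + secondary

print(rays_per_pixel(aa=3, diffuse=2, specular=2))   # 9 + 9*(4+4) = 81
print(rays_per_pixel(aa=6, diffuse=2, specular=2))   # 36 + 36*(4+4) = 324
# Doubling AA quadruples every ray count, which is why AA dominates render time.
```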
This video demonstrates how to adjust your settings properly to avoid noise:
3. AOVs and Compositing
AOVs, or Arbitrary Output Variables, are a crucial component of the rendering process in graphic production environments. They allow for the generation of separate images containing specific information, such as luminance, depth, reflections, shadows, and more, instead of producing a single complete image.
The significance of AOVs lies in their role in the compositing pipeline. Once AOVs are generated during rendering, they can be used individually or in combination in compositing software like Nuke, After Effects, or Fusion. This provides extremely precise control over each element of the final image.
RenderMan Workflow:
For instance, by using AOVs, an artist can independently adjust lighting, color, shadows, and other properties while retaining the flexibility to modify these elements after the initial render.
Furthermore, AOVs can be used for advanced operations such as relighting, matting, or creating complex visual effects.
In summary, AOVs are custom outputs that provide specific information from the render, enabling precise control and flexible adjustments in the compositing process, which is crucial for creating high-quality final images in professional graphic productions.
Arnold Workflow
The advantage of outputting the main passes is that in compositing, we will be able to change anything we want, such as color, texture, depth of field, and many other elements. We will have total control over the image.
AOV example:
- Diffuse
- Indirect Diffuse
- Specular
- Indirect Specular
RenderMan and Arnold are very similar in the way they handle AOVs.
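Conceptually, recombining these lighting AOVs into the beauty is nothing more than a per-pixel addition. The placeholder snippet below uses random arrays in place of real AOVs; in practice you would read them from the multichannel EXR (for example with the OpenEXR Python bindings, or directly in Nuke).

```python
# Rebuilding the beauty from lighting AOVs is, at its core, a per-pixel sum.
import numpy as np

h, w = 4, 4   # tiny placeholder resolution
diffuse           = np.random.rand(h, w, 3)
indirect_diffuse  = np.random.rand(h, w, 3)
specular          = np.random.rand(h, w, 3)
indirect_specular = np.random.rand(h, w, 3)

# The additive lighting AOVs sum back to the beauty render:
beauty = diffuse + indirect_diffuse + specular + indirect_specular
```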
It's important to deactivate the "as RGBA" parameter in RenderMan so that there are no problems in compositing and the passes can be extracted with a Shuffle node.
Compositing
Compositing in 3D is a crucial step in creating high-quality images and videos. It involves integrating various elements generated in 3D (such as models, special effects, virtual backgrounds) into a scene to produce a realistic and coherent final render. This often entails adding lights, shadows, visual effects, and color corrections.
Compositing allows for the seamless blending of these elements, creating the illusion that they coexist in the same environment. It plays an essential role in the film, animation, video game, and advertising industries by ensuring visual consistency and optimizing the overall quality of the final production.
You'll need to export the image with its AOVs as OpenEXR in float to preserve all the values.
Then drag it into the Nuke window.
Here's a short tutorial explaining how to reconstruct your beauty pass in Nuke using Shuffle nodes:
This set-up clearly shows the process. After recomposing our beauty pass, we can start having fun and changing whatever parameters we want. RenderMan also provides a feature that creates a map separating all the meshes in the scene, letting us tweak each mesh one by one: Cryptomatte. We activate it in the render settings, under Features. Note that it is not embedded in the main EXR; it is written separately and must be fetched from the project's output location.
Good tip: you can find the path of the Cryptomatte and batch render images!
Here is an example of a Cryptomatte and its connection.
Each mesh has a different color, and by holding Ctrl we can select the mesh we wish to modify. It's a very powerful tool!
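For pipeline work, the same selection can be scripted. The snippet below is hypothetical: it assumes Nuke's bundled Cryptomatte gizmo, and the object path shown is an invented example that depends entirely on your own scene hierarchy.

```python
# Hypothetical: create a Cryptomatte node and pre-fill its matte selection.
import nuke

crypto = nuke.createNode("Cryptomatte")
# The matteList knob holds the selected object names (scene-specific example):
crypto["matteList"].setValue("/root/world/geo/myMesh")
```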
Here's a way to get a good look at the effect of Cryptomatte and its effectiveness:
Here's the set-up we'll use to rebuild the beauty pass in a simple and readable way:
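In script form, the same shuffle-and-plus rebuild looks roughly like the following. This is hypothetical Nuke Python: the file path is a placeholder, and the layer names depend on how your AOVs were named at render time.

```python
# Hypothetical Nuke script sketching the shuffle-and-plus beauty rebuild.
import nuke

read = nuke.nodes.Read(file="/path/to/render.exr")  # multichannel EXR with AOVs

layers = ["diffuse", "indirectdiffuse", "specular", "indirectspecular"]
shuffles = []
for layer in layers:
    sh = nuke.nodes.Shuffle()          # one Shuffle per AOV layer
    sh["in"].setValue(layer)           # pull that layer into RGBA
    sh.setInput(0, read)
    shuffles.append(sh)

# Chain Merge nodes in "plus" mode: beauty = sum of the lighting AOVs.
merged = shuffles[0]
for sh in shuffles[1:]:
    merged = nuke.nodes.Merge2(operation="plus", inputs=[merged, sh])
```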
Once the set-up is complete, we can start compositing to create a superb image. Here are some little cards I made during my apprenticeship, showing how to set up depth of field, create an alpha quickly, and more:
Shuffle workflow
Glow workflow
Motion Blur workflow
I hope this lesson has served you well!
4. Sources
https://en.wikipedia.org/wiki/Rendering_%28computer_graphics%29#/media/File:Glasses_800_edit.png
https://blogs.nvidia.com/blog/2022/03/23/what-is-path-tracing/
https://en.wikipedia.org/wiki/Path_tracing#/media/File:Path_tracing_001.png