What are the types of rendering and visualization techniques?

The term rendering refers to the automatic process of generating digital images from three-dimensional models by means of specialised software. These images simulate the photorealistic environments, materials, lights and objects of a project or 3D model.

More precisely, a rendering is a computer-generated image produced from a three-dimensional model built on project data. The geometric model is covered with images (textures) and colours that imitate real materials, and it can be illuminated with light sources reproducing natural or artificial ones.

If the rendering parameters are set to accurately match real-world conditions, the textures and perspectives of the final render can be considered photorealistic.

Types of rendering

There are two main types of rendering. The difference between them lies in the speed at which computation and finalisation take place.

Real-time rendering

Real-time rendering is mainly used in gaming and interactive graphics, where images must be calculated from 3D information at a very fast pace. Dedicated graphics hardware has greatly improved the performance of real-time rendering, ensuring rapid image processing.

Offline rendering

Offline rendering is mainly used in situations where processing speed is less critical, such as visual effects work, where photorealism needs to be of the highest possible standard. Unlike real-time rendering, there is no unpredictability: each frame can take as long as it needs to compute.

Rendering: visualization techniques

Z-Buffer

It is one of the simplest algorithms for determining visible surfaces. It uses two data structures: the z-buffer, a memory area that stores, for each pixel, the z coordinate of the point closest to the observer, and the frame-buffer, which contains the colour information for the pixels referenced by the z-buffer. Assuming that the z axis points from the screen towards the observer’s eyes, the largest z value seen so far is stored for each pixel, and at each step the value in the z-buffer is updated only if the point being examined has a z coordinate larger (i.e. closer) than the one currently stored. The technique processes one polygon at a time: while a polygon is being scanned, no information about the other polygons is available.
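The per-pixel depth test described above can be sketched as follows. This is a minimal, hypothetical example (not a full rasterizer); it keeps the text’s convention that the z axis points toward the observer, so a larger z means a closer point.

```python
# Minimal z-buffer sketch (hypothetical example, not a full rasterizer).
# Convention from the text: the z axis points from the screen toward the
# observer, so a LARGER z value means the point is closer to the eye.

WIDTH, HEIGHT = 4, 3
BACKGROUND = (0, 0, 0)

# z-buffer: closest z seen so far per pixel; frame-buffer: its colour.
z_buffer = [[float("-inf")] * WIDTH for _ in range(HEIGHT)]
frame_buffer = [[BACKGROUND] * WIDTH for _ in range(HEIGHT)]

def plot(x, y, z, colour):
    """Update the pixel only if this point is closer than the stored one."""
    if z > z_buffer[y][x]:
        z_buffer[y][x] = z
        frame_buffer[y][x] = colour

# Three fragments land on the same pixel: the nearest one wins,
# regardless of drawing order.
plot(1, 1, -5.0, (255, 0, 0))   # far red fragment
plot(1, 1, -2.0, (0, 0, 255))   # nearer blue fragment
plot(1, 1, -9.0, (0, 255, 0))   # even farther green fragment, rejected

print(frame_buffer[1][1])  # (0, 0, 255): the nearest fragment
```

Note how the fragments can arrive in any order, one polygon at a time, exactly as the algorithm requires: the depth test alone resolves visibility.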

Scan line

It is one of the oldest methods, and it combines the algorithm for determining visible surfaces with the determination of cast shadows. Image-precision algorithms that work on scan lines determine the spans (intervals) of visible pixels for each scan line. It differs from the z-buffer in that it works on one scan line at a time rather than on the whole image.
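The span-based idea can be illustrated with a simplified sketch. This hypothetical example resolves visibility along a single scanline from precomputed spans; a real scan-line algorithm would also maintain edge tables and exploit coherence between adjacent lines, which is omitted here.

```python
# Minimal scan-line visibility sketch (hypothetical, heavily simplified).
# Each polygon contributes a span (x_start, x_end, depth, colour) on this
# scanline; the visible colour at each x is that of the nearest span.

def render_scanline(spans, width):
    """Resolve visibility along one scanline; None = background."""
    line = [None] * width
    depth = [float("inf")] * width   # smaller depth = closer, in this sketch
    for x0, x1, z, colour in spans:
        for x in range(max(0, x0), min(width, x1 + 1)):
            if z < depth[x]:
                depth[x] = z
                line[x] = colour
    return line

# Two overlapping spans on one scanline: "B" is closer where they overlap.
spans = [(0, 5, 2.0, "A"), (3, 8, 1.0, "B")]
print(render_scanline(spans, 10))
```

Only one scanline’s worth of depth storage is needed at a time, which is the memory advantage the method historically offered over a full-screen z-buffer.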

Ray casting

It is an image-precision technique for detecting visible surfaces. The process is defined by a projection centre and a screen placed in an arbitrary position and modelled as a regular grid, whose cells correspond to the pixels at the desired resolution. Imaginary light rays are traced from the projection centre towards the objects in the scene, one for each cell of the window.

The basic idea of ray casting is to cast rays from the eye, one per pixel, and find the closest object blocking each ray’s path (think of the image as a grid in which each square corresponds to a pixel). An important advantage of ray casting over the older scan-line algorithm is its ability to easily handle solid or non-flat surfaces, such as cones and spheres: if a mathematical surface can be intersected by a ray, ray casting can draw it. Complex objects can be created with solid-modelling techniques and then rendered easily.
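The claim that any mathematically intersectable surface can be drawn is easy to see for a sphere, where the ray–surface intersection reduces to a quadratic equation. The following is a minimal sketch with a hypothetical one-sphere scene: one ray hits, one misses.

```python
import math

# Minimal ray-casting sketch (hypothetical scene): rays start at the eye,
# and the nearest intersection along each ray determines what is visible.

def hit_sphere(origin, direction, centre, radius):
    """Solve the quadratic |origin + t*direction - centre|^2 = radius^2.
    Return the distance t to the nearest intersection in front of the
    origin, or None if the ray misses the sphere."""
    oc = [o - c for o, c in zip(origin, centre)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None          # no real roots: the ray misses
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

eye = (0.0, 0.0, 0.0)
sphere = ((0.0, 0.0, -3.0), 1.0)  # centre, radius

# A ray straight ahead hits the sphere; one aimed sideways misses it.
t_hit = hit_sphere(eye, (0.0, 0.0, -1.0), *sphere)
t_miss = hit_sphere(eye, (1.0, 0.0, 0.0), *sphere)
print(t_hit, t_miss)  # 2.0 None
```

A full ray caster would loop this test over one ray per pixel and over every object, keeping the smallest positive t, but the intersection routine above is the heart of the method.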

Ray tracing

Ray tracing is a rendering technique that can produce remarkably realistic lighting effects. It generates lifelike shadows and reflections, along with much-improved translucency and scattering, by taking into account light phenomena such as reflection and refraction. Essentially, it is an algorithm that traces the path of light and simulates the way light interacts with the virtual objects it hits in the computer-generated world. Light rays can reach the observer both directly and through interactions with other surfaces. This is the idea behind the ray-tracing method: geometric rays are traced from the eye of the observer to sample the light (radiance) travelling toward the observer along the ray direction.

Ray tracing owes its popularity to its realistic simulation of light compared to other rendering models (such as scan-line rendering or ray casting). Effects like reflection and shadow, which are difficult to simulate with other methods, are a natural result of the algorithm. Since a relatively simple implementation leads to impressive results, ray tracing often represents the entry point to the study of graphics programming.
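The recursive structure that makes reflections fall out naturally can be shown without any geometry at all. In this hypothetical sketch the scene is reduced to a lookup table of surfaces, each with a local grey level, a reflectivity, and the surface its mirror reflection would hit next; real ray tracers compute that next hit geometrically, as in ray casting.

```python
# Minimal recursive ray-tracing sketch (hypothetical scene, no geometry).
# Each surface combines its own colour with whatever the reflected ray
# sees, up to a recursion limit -- the core of Whitted-style ray tracing.

# Toy "scene": surface name -> (grey level, reflectivity, surface that the
# reflected ray hits next; "sky" means the ray escapes the scene).
SCENE = {
    "floor":  (0.2, 0.5, "mirror"),
    "mirror": (0.1, 0.8, "sky"),
}
SKY = 1.0  # radiance of rays that leave the scene

def trace(surface, depth=2):
    """Radiance toward the eye from `surface`, following reflections."""
    if surface == "sky":
        return SKY
    colour, reflectivity, next_hit = SCENE[surface]
    if depth == 0:
        return colour  # recursion limit reached: local colour only
    # Local shading plus the reflected contribution (the recursive step).
    return colour + reflectivity * trace(next_hit, depth - 1)

radiance = trace("floor")
print(radiance)
```

Shadows work the same way: an extra ray is traced from the hit point toward each light source, and the light’s contribution is dropped if that ray is blocked.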

Radiosity

Radiosity is an object-precision method that brings further improvements to the photorealistic quality of the image, since it takes into account the physical phenomenon of inter-reflection between objects. It simulates the diffuse propagation of light starting from the light sources. In the real world, when a surface reflects light, that surface not only appears in our image but also illuminates neighbouring surfaces. The re-radiated light carries information about the object that reflected it, in particular its colour: shadows appear “less black”, and the colour of a nearby well-lit object is picked up, a phenomenon often referred to as “colour bleeding”. As a first step, the radiosity algorithm identifies the surfaces and decomposes them into smaller patches, then distributes the direct light energy; in a second phase it computes the energy diffused, transmitted and reflected, under the hypothesis that all surfaces reflect light in the same (diffuse) way. It then identifies the surfaces that reflect the most energy and redistributes it.
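The redistribution phase described above is usually solved iteratively: each patch’s radiosity is its emission plus the reflected fraction of what it receives from every other patch. The following is a minimal sketch with a hypothetical two-patch scene and made-up form factors; real systems compute form factors from the scene geometry.

```python
# Minimal radiosity iteration sketch (hypothetical two-patch scene).
# B_i = E_i + rho_i * sum_j F_ij * B_j : each patch's radiosity is its own
# emission plus the reflected fraction of what it receives from the others.

# Patch 0 is a light source; patch 1 only reflects (diffusely).
emission    = [1.0, 0.0]
reflectance = [0.0, 0.5]
# form_factor[i][j]: fraction of light leaving patch j that reaches patch i
# (assumed values; real form factors come from the scene geometry).
form_factor = [[0.0, 0.5],
               [0.5, 0.0]]

radiosity = emission[:]
for _ in range(50):  # Jacobi-style iteration until the values settle
    radiosity = [
        emission[i] + reflectance[i] * sum(
            form_factor[i][j] * radiosity[j] for j in range(2))
        for i in range(2)
    ]

print(radiosity)  # patch 1 glows with light bounced off patch 0
```

Because the solution depends only on the scene and not on the viewpoint, it can be computed once and then reused for any camera position, which is a key practical property of radiosity.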