Invitation

If you would like to contribute to this blog by posting your own experiences here, please send me a message!

Friday, May 25, 2012

Design Dilemma: Hybrid Rendering?

Until now the workload has fallen entirely on the OptiX code, which runs on the GPU; the host code on the CPU is used only for coordination and organization, as OptiX intends.

In the last few days I have been reading papers (and still am) on hybrid rendering, using CUDA/OpenCL or other languages.

One of these papers, "Combinatorial Bidirectional Path-Tracing for Efficient Hybrid CPU/GPU Rendering" by Anthony Pajot, Loïc Barthe, Mathias Paulin, and Pierre Poulin, describes a way to exploit both processors efficiently. It proposes that path creation be done on the CPU and path combination on the GPU (pure CUDA code). It proved to be 12 times faster than the CPU alone.

I was thinking of doing the opposite.

That is:

Create the paths on the GPU using OptiX and save everything I need in a shared CPU/GPU buffer, which I already do.
Then execute the all-path combinations on the CPU, to offload some work from the GPU.

Unscientific as it is, commenting the combinatorial step out of the OptiX code speeds up path creation by a factor of 3-3.5.


My thought was: if the CPU is better at executing code in "for" loops, would the acceleration from using a second processor be greater than the overhead of a busy memory bus?

And would such a system scale? If one CPU can keep up with one card, what about 2-4 cards?

What do you think? Should I try it, or is it a lost cause?

Sunday, May 20, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 14 (Schlick specular gloss)

A parallel Android project prevented me from posting earlier, but I believe I have sorted out the specular gloss effect, using the Schlick approximation of the Fresnel equations.

I have used the simplest Fresnel approximation, as provided by Nvidia in the fresnel_schlick() function.

On the way I found some interesting material to read. All of the copyrights belong to the respective owners.

On the left is the Schlick approximation of the gloss material; on the right is the default Phong gloss.

MY PROGRESS FROM NOW ON UNTIL NEXT WEEK WILL BE SLOW TO NONE !

Friday, May 11, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 13 (Materials: specular material, better refractions and reflections)



In this post, I present the specular glossy material according to the Phong model. The next step is the Schlick "metalness" attribute.

I have also corrected some details in the reflective material, and fixed everything that was not as it should be in the refractive material.

Here are some examples:

The box is specular, the sphere is refractive with an index of refraction of 1.33

The box is specular, the sphere is refractive with an index of refraction of 1.6


Same materials as before, but with three lights; the left wall is also perfectly reflective

Darker scene with reflective (back), specular (right) and refractive (ball) materials







This is a render with a low exponent in the Phong specular gloss material

Bidirectional Path Tracing using Nvidia Optix, Part 12 (Depth of field)

The current installment of my project is the depth-of-field effect: simulating the focusing behavior of any non-perfect lens, such as a camera lens or the human eye.

http://en.wikipedia.org/wiki/Depth_of_field

Using this effect, the final image has one in-focus region and two out-of-focus regions, one before and one after the focal plane.

In path tracing this effect is very simple to obtain, since it is inherent in the technique; Nvidia provides these roughly ten lines in the "Cook" example. What I have done is make them fit into my code. Here they are:


in the BDPT function:

    ...
    // pixel sampling
    float2 pixel_sample = make_float2(bufferLaunch_index) + make_float2(jitter.x, jitter.y);
    float2 d = pixel_sample / make_float2(screen) * 2.f - 1.f;

    // calculate the ray-viewplane intersection point
    float3 ray_origin    = eye;
    float3 ray_direction = d.x*U + d.y*V + W;
    float3 ray_target    = ray_origin + focal_scale * ray_direction;

    // lens sampling: jitter the ray origin over the lens disk
    float2 sample = optix::square_to_disk(make_float2(jitter.z, jitter.w));
    ray_origin    = ray_origin + aperture_radius * (sample.x * normalize(U) + sample.y * normalize(V));
    ray_direction = normalize(ray_target - ray_origin);

    /////////////////////////////////////////////////////////////////////////
    // 1st step: eye path
    bool hitLight = true;
    int eyeDepth = pathtrace_camera(ray_origin, ray_direction, hitLight, seed);
    ...
The light path comes next, followed by the combination of all the paths...


With a smaller aperture

With a larger aperture

Tuesday, May 8, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 11 (Light sampling)

What should happen if several lights appear in the scene?

I mean: how do we select the appropriate light from which to start the light path and draw a sample?

To solve this problem I used this simple two-case method:

First case: only lights covering a non-zero area (no point lights)


If the lights in the scene are not point lights (which have no volume and cover no area), then I select the appropriate light with this procedure:

a. I compute the area of the surface that each light covers.
b. I compute the total area of the lights.
c. Each light has a probability pi of being chosen equal to its share of the total light area: pi = lightArea / totalLightArea.

Second case: the scene contains point lights (zero volume and area)


If a point light resides in the scene, then under the above procedure it has zero probability of being selected for sample creation. This is logical, given the fictional nature of point lights.

Using very small lights instead (e.g. sphere lights with the smallest possible diameter) is not a solution, because the image converges extremely slowly, with high variance.

So I select them according to the brightness of each light. The probability pi of selecting each light is

pi = lightEmission / totalLightsEmission

where lightEmission is the emitting power (emission color) of each light and totalLightsEmission is the sum of the emissions of all the lights.

First case: three lights, 2 spheres and 1 area light


Second case: there is a purplish point light at the black dot (added to the image for illustrative reasons)

NEXT POSTS: Schlick specular gloss, depth of field, better refractions and reflections

Saturday, May 5, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 10 (Importance sampling)

Up until the last post, I had been using very naive importance sampling. I generated the several path combinations for each pixel, and the pixel color was simply the average of the light gathered along those paths. The problem: a very probable path, e.g. a 3-hop path from the eye that hits the light, carried the same weight in the total gathering as a very long path that was far less probable.

I have changed the path weights so that the more probable a path is, the larger its contribution to the final gathering.

So I used a weighted average such as this:


In this figure, w(x) is the weight of path x and p(x) is the probability of that path actually occurring.
So I got these results with the exact same materials and scene:




Without Importance Sampling (1/N)


With weighted importance sampling


NEXT POST: I have also implemented the specular gloss material, as described by the Phong model, and I am implementing the Schlick metalness coefficient right now.

Bidirectional Path Tracing using Nvidia Optix, Part 9 (Textures)

I have finally implemented the use of textures in my scene. The textures are .ppm files, the same format the Nvidia examples use.

Note that the material on the .obj is not a full material; in this example I read and use only the texture file associated with the obj. Other shapes, such as spheres, can also be textured or given more complex materials, as long as a texture coordinate is passed as an attribute to the renderer from each shape's .cu file.