Invitation

If you would like to contribute to this blog and post your own experiences in the same place, please send me a message!

Friday, May 25, 2012

Design Dilemma: Hybrid Rendering?

Until now the workload lies entirely in the OptiX code, which runs on the GPU; the host code on the CPU is used only for coordination and organization, as OptiX intends.

In the last few days I have been reading (and am still reading) some papers on hybrid rendering, using either CUDA/OpenCL or other languages.

One of these papers, "Combinatorial Bidirectional Path-Tracing for Efficient Hybrid CPU/GPU Rendering" by Anthony Pajot, Loïc Barthe, Mathias Paulin, and Pierre Poulin, describes a way to exploit both processors efficiently. It proposes creating the paths on the CPU and combining them on the GPU (pure CUDA code), and it proved to be 12 times faster than the CPU alone.

I was thinking of doing the opposite.

That is:

Create the paths on the GPU using OptiX and save everything I need in a common CPU/GPU buffer, which I already do.
Then execute the all-path combinations on the CPU, to offload some work from the GPU.
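
To fix the idea, here is a rough sketch of what the CPU side could look like. This is NOT working code from my project; the buffer names and the PathVertex struct are just placeholders for whatever I actually store per hop.
____________________________________________________________________________
// Rough sketch only: after the OptiX launch that builds the paths, map the
// (RT_BUFFER_INPUT_OUTPUT) path buffers on the host and run the all-by-all
// combination there. "BDpathEye"/"BDpathLight" and PathVertex are placeholders.
optix::Buffer eyeBuf   = _context["BDpathEye"]->getBuffer();
optix::Buffer lightBuf = _context["BDpathLight"]->getBuffer();

PathVertex* eyePaths   = static_cast<PathVertex*>( eyeBuf->map() );
PathVertex* lightPaths = static_cast<PathVertex*>( lightBuf->map() );

for( unsigned int pixel = 0; pixel < width * height; ++pixel )
{
  // ... combine every eye vertex with every light vertex of this pixel,
  //     exactly as the GPU combination step does now, and accumulate the color ...
}

eyeBuf->unmap();
lightBuf->unmap();
____________________________________________________________________________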

Unscientific as it is, commenting the combinatorial step out of the OptiX code speeds up the path creation 3-3.5 times.


My thought was: IF the CPU is better at executing code in "for" loops, would the acceleration from using a second processor be greater than the overhead of a busy memory bus?

AND is such a system extensible enough? (If one CPU can keep up with one card, what about 2-3-4 cards?)

What do you think? Should I try it, or is it a lost cause?

Sunday, May 20, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 14 (Schlick specular gloss)

A parallel Android project prevented me from posting earlier, but I believe I have sorted out the specular gloss effect as described by Schlick's approximation of the Fresnel equations.

I used the simplest Fresnel approximation, as provided by Nvidia in the fresnel_schlick() function.
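
For reference, the approximation itself is just a couple of lines. This is my own minimal sketch of the formula (not the SDK helper verbatim), where F0 is the reflectance at normal incidence:
____________________________________________________________________________
// Schlick's approximation of the Fresnel reflectance.
// F0 = reflectance at normal incidence, cos_theta = dot(N, V).
__device__ __inline__ float3 schlick_fresnel( float cos_theta, const float3& F0 )
{
  float f = powf( 1.0f - fabsf( cos_theta ), 5.0f );
  return F0 + ( make_float3( 1.0f, 1.0f, 1.0f ) - F0 ) * f;
}
____________________________________________________________________________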

Along the way I found some interesting stuff to read; all of the copyrights belong to the respective owners.

On the left is the Schlick approximation of the gloss material, on the right is the default Phong gloss.

MY PROGRESS FROM NOW UNTIL NEXT WEEK WILL BE SLOW TO NONE!

Friday, May 11, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 13 (MATERIALS: Specular material || better refractions and reflections)



In this post I present the specular glossy material according to the Phong model. The next step is the Schlick "metalness" attribute.
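
For anyone curious, sampling a Phong glossy bounce boils down to a handful of lines. This is a generic sketch written for this post (not my exact code): it picks a direction around the mirror direction R with a pdf proportional to cos^n of the angle to R, using two uniform random numbers u1, u2.
____________________________________________________________________________
// Sketch: sample a direction around the perfect mirror direction R with a pdf
// proportional to cos^n(alpha), n being the Phong exponent.
__device__ __inline__ float3 sample_phong_lobe( const float3& R, float u1, float u2, float exponent )
{
  const float cos_alpha = powf( u1, 1.0f / ( exponent + 1.0f ) );
  const float sin_alpha = sqrtf( fmaxf( 0.0f, 1.0f - cos_alpha * cos_alpha ) );
  const float phi       = 2.0f * M_PIf * u2;

  // build an orthonormal basis (u, v, w) around R
  const float3 w = normalize( R );
  const float3 a = ( fabsf( w.x ) > 0.9f ) ? make_float3( 0.0f, 1.0f, 0.0f )
                                           : make_float3( 1.0f, 0.0f, 0.0f );
  const float3 u = normalize( cross( a, w ) );
  const float3 v = cross( w, u );

  return normalize( cosf( phi ) * sin_alpha * u +
                    sinf( phi ) * sin_alpha * v +
                    cos_alpha * w );
}
____________________________________________________________________________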

I have also corrected some details in the reflection material, and everything that was not as it should be in the refraction material.

Here are some examples:

The box is specular, the sphere is refractive with an index of refraction of 1.33

The box is specular, the sphere is refractive with an index of refraction of 1.6


Same materials as before but with three lights; the left wall is also perfectly reflective

Darker scene with reflective (back), specular (right) and refractive (ball) materials







This is a render with a low exponent in the Phong specular gloss material

Bidirectional Path Tracing using Nvidia Optix, Part 12 (Depth of field)

The current installment in my project is the depth-of-field effect: simulating the focusing behaviour of any non-perfect lens, such as a camera lens or the human eye.

http://en.wikipedia.org/wiki/Depth_of_field

With this effect, the final image has a focused region and two out-of-focus regions, before and after it.

In path tracing this effect is very simple because it is inherent to the method; in fact, Nvidia provides these ~10 lines in the "Cook" example. All I have done is make them fit into my code. Here they are:


in the BDPT function:
....
...
...
  // pixel sampling
  float2 pixel_sample = make_float2(bufferLaunch_index) + make_float2(jitter.x, jitter.y);
  float2 d = pixel_sample / make_float2(screen) * 2.f - 1.f;

  // Calculate ray-viewplane intersection point
  float3 ray_origin    = eye;
  float3 ray_direction = d.x*U + d.y*V + W;
  float3 ray_target    = ray_origin + focal_scale * ray_direction;

  // lens sampling
  float2 sample = optix::square_to_disk(make_float2(jitter.z, jitter.w));
  ray_origin    = ray_origin + aperture_radius * ( sample.x * normalize(U) + sample.y * normalize(V) );
  ray_direction = normalize(ray_target - ray_origin);

  /////////////////////////////////////////////////////////////////////////////////
  // 1st Step : Eye-Path
  bool hitLight = true;
  int  eyeDepth = pathtrace_camera(ray_origin, ray_direction, hitLight, seed);
...
...
...
The light path comes next, followed by the combination of all the paths...


With lower aperture

With higher aperture

Tuesday, May 8, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 11 (Light sampling)

What should happen if several lights appear in the scene?

I mean: how do we select the light from which to sample the starting point of the light path?

To solve this problem I used this simple two-case method:

First case: only lights covering a non-zero area (no point lights)


If the lights in the scene are not point lights (which have no volume and cover no area), then I select the appropriate light according to this procedure:

a. I compute the area of the surface that each light covers.
b. I compute the total area of all the lights.
c. Each light has a probability pi of being chosen according to its share of the total light area: pi = lightArea / totalLightArea

Second case: the scene contains point lights (zero volume and area)


If a point light resides in the scene, the above procedure gives it zero probability of being selected for sample creation. This is logical, given the fictional nature of point lights.

Using very small lights instead (e.g. sphere lights with the smallest possible diameter) is not a solution, because the image converges extremely slowly, with high variance.

So I select them according to the brilliance of each light. The probability of selecting each light is
pi = lightEmission / totalLightsEmission

where "lightEmission" is the emitting power (emission color) of each light and "totalLightsEmission" is the sum of the emission of all the lights.
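
In code, both cases reduce to the same weighted pick. Here is a minimal sketch (the array and variable names are placeholders, not my actual classes), where "weight" is the light area in the first case and the emitted power in the second:
____________________________________________________________________________
// Weighted light selection: p_i = weight_i / total_weight.
// r is a uniform random number in [0,1).
__device__ __inline__ int pick_light( const float* weights, int num_lights,
                                      float total_weight, float r )
{
  float cdf = 0.0f;
  for( int i = 0; i < num_lights; ++i )
  {
    cdf += weights[i] / total_weight;
    if( r <= cdf )
      return i;
  }
  return num_lights - 1;   // guard against floating-point round-off
}
____________________________________________________________________________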

First case: three lights, 2 spheres and 1 area light


Second case: there is a purple-ish point light at the black dot (added to the image for illustrative purposes)

NEXT POSTS: Schlick specular gloss, depth of field, better refractions and reflections

Saturday, May 5, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 10 (Importance sampling)

Up until the last post I had been using a very naive form of importance sampling: I computed the several path combinations for each pixel, and the pixel color was simply the average of the light gathered from the various paths. The problem is that a very probable path, e.g. a 3-hop path from the eye hitting the light, had the same weight in the total gathering as a very long path that was far less probable.

I have changed the path weights so that the more probable a path is, the more it contributes to the final gathering.

So I used a weighted average such as this:


In this figure, w(x) is the weight of path x and p(x) is the probability of this path actually happening.
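In case the figure does not display, the weighting amounts roughly to this (my notation):

w(x_i) = \frac{p(x_i)}{\sum_{j=1}^{N} p(x_j)}, \qquad C = \sum_{i=1}^{N} w(x_i)\, f(x_i)

where f(x_i) is the light carried by path x_i and C is the gathered pixel value; the naive version simply used w(x_i) = 1/N.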
So I got these results with the exact same materials and scene:




Without Importance Sampling (1/N)


With weighted importance sampling


NEXT POST: I have also implemented the specular gloss material, as described by the Phong model, and I am implementing the Schlick metalness coefficient right now.

Bidirectional Path Tracing using Nvidia Optix, Part 9 (Textures)

I have finally implemented the use of textures in my scene. The textures are .ppm files, the same format the Nvidia examples use.

Note that the material on the .obj is not a full material; in this example I read and use only the texture file associated with the obj. Other shapes, such as spheres, or anything else, can be textured or given more complex materials, as long as a texture coordinate is passed as an attribute to the renderer from the .cu file of each shape.
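
Roughly, the device side looks like this; the sampler and attribute names below follow the SDK samples and may differ from my actual code:
____________________________________________________________________________
// A .ppm texture loaded on the host ends up in a 2D texture sampler...
rtTextureSampler<float4, 2> Kd_map;
// ...and the shape's .cu file reports the UV coordinate as an attribute.
rtDeclareVariable( float3, texcoord, attribute texcoord, );

// Inside the closest-hit program the diffuse color is then just a lookup:
// float3 Kd = make_float3( tex2D( Kd_map, texcoord.x, texcoord.y ) );
____________________________________________________________________________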


Saturday, April 28, 2012

No sRGB support in HD Graphics...



Through a problem I encountered, I learned that in the configuration below HD Graphics cannot display the sRGB color mode. So, in order to get the same output on every card, it is advisable to disable sRGB mode in your code:

    GLUTDisplay::setUseSRGB(false);

in your main function.


(My PC has the monitor connected to an Intel HD Graphics iGPU and uses two GeForces as dedicated CUDA devices.)

Solution-Sponsor : Kyle Hayward




Friday, April 27, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 8 (Tile-based rendering)

As I have stated before, a problem with implementing BDPT using the OptiX API was that it needed to keep two buffers in memory, each sized WIDTH x HEIGHT x MAX_RT_DEPTH, in order to store the paths from the eye and the light respectively.

I had some technical difficulties at first, but with the help and encouragement of Kyle Hayward I have overcome them.

To solve the problem I cut the output buffer into several tiles (a user-defined number), which are rendered one after another instead of rendering the output buffer as a whole.
____________________________________________________________________________

void PathTracerScene::trace( const RayGenCameraData& camera_data )
{
  _context["eye"]->setFloat( camera_data.eye );
  _context["U"]->setFloat( camera_data.U );
  _context["V"]->setFloat( camera_data.V );
  _context["W"]->setFloat( camera_data.W );

  Buffer buffer = _context["output_buffer"]->getBuffer();
  RTsize buffer_width, buffer_height;
  buffer->getSize( buffer_width, buffer_height );

  if( _camera_changed ) {
    _camera_changed = false;
    _frame = 1;
  }

  _context["frame_number"]->setUint( _frame++ );


  for( int i = 0; i < NoOfTiles.x; i++ )
  {
    for( int j = 0; j < NoOfTiles.y; j++ )
    {
      _context["NoOfTiles"]->setInt( i, j );
      _context->launch( 0,
                        static_cast<unsigned int>( launch_index_tileSize.x ),
                        static_cast<unsigned int>( launch_index_tileSize.y ) );
    }
  }

}
_________________________________________________________________________

This is a code snippet from the trace() C++ function that issues one OptiX launch per tile.
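
On the device side each launch only covers one tile, so the pixel written to the output buffer has to be offset by the tile currently being rendered. It comes down to something like this (the variable names are guessed from the snippet above, not necessarily my exact code):
____________________________________________________________________________
// Map the per-tile launch index to the global output-buffer pixel.
// "NoOfTiles" carries the (i,j) index of the tile being rendered (see the host loop);
// launch_index and launch_dim (rtLaunchIndex / rtLaunchDim) only cover the tile itself.
uint2 bufferLaunch_index = make_uint2( NoOfTiles.x * launch_dim.x + launch_index.x,
                                       NoOfTiles.y * launch_dim.y + launch_index.y );
output_buffer[bufferLaunch_index] = make_float4( pixel_color, 1.0f );
____________________________________________________________________________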


The full results with multiple OptiX launches on a single GTX 560 Ti GPU rendering the Cornell scene are here.

And to be honest, they are better than I expected.

One example is this:

1024 x 1024, with a max depth of 7 for each sub-path (light and eye)


Tiles || fps  || VRAM (MB)
1     || 2.45 || 824
2     || 2.45 || 460
4     || 2.43 || 278
8     || 2.4  || 187


Pay attention to the great decrease in memory consumption.

One remark: the GPU usage percentage drops (at least on my system) when I use multiple GPUs. Using more GPUs (in my case one GTX 560 Ti and one GTX 460) did not increase performance linearly.

This is a table for the Cornell scene at 1024 x 1024 with (5+5) depth on two GPUs.

Tiles || GTX 560 Ti usage % || GTX 460 usage %
1     ||         96         ||        87
4     ||         93         ||        73
8     ||         83         ||        62

I am not sure whether this is due to OptiX overhead or because the GPUs are different; I expected the decrease in usage to be smaller.



Thursday, April 26, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 7 (Scene Loader)

My next installment in the thesis project is a scene loader.

I used the free, open-source pugixml API, which is very easy to use, magnitudes easier than anything else I tried. I now describe my scene with an XML document; my code is simpler and changes can be made quickly.

A simple example of an XML file with scene parameters and materials only (no geometry, to keep things simple) is this.
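
To give an idea of how little code this needs, here is a stripped-down sketch of reading such a file with pugixml; the tag and attribute names are made up for the example, the real scene file uses its own:
____________________________________________________________________________
#include <string>
#include "pugixml.hpp"

// Sketch: read scene parameters and materials from an XML file with pugixml.
bool loadScene( const char* filename )
{
  pugi::xml_document doc;
  if( !doc.load_file( filename ) )
    return false;

  pugi::xml_node scene = doc.child( "scene" );
  int width  = scene.child( "camera" ).attribute( "width" ).as_int();
  int height = scene.child( "camera" ).attribute( "height" ).as_int();

  for( pugi::xml_node mat = scene.child( "material" ); mat; mat = mat.next_sibling( "material" ) )
  {
    std::string name = mat.attribute( "name" ).value();
    float       kd   = mat.attribute( "Kd" ).as_float();
    // ... create the corresponding OptiX material here ...
  }
  return true;
}
____________________________________________________________________________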

I remind you that I haven't yet found a way to do the tile processing, the problem I described in an earlier post. So any help is still welcome.

Sunday, April 22, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 6 (Correct Refractions)

My latest accomplishment on the way to creating this renderer is the correction of the refractive materials.

In the following pictures the material of the ball is totally refractive, so it should not be mistaken for real-world glass, for example. In the last picture the refractive material is exaggerated to highlight the appearance of colored caustics.







At the moment I am creating an XML scene loader for easy loading of scene parameters and geometry.


Thursday, April 12, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 5 (Better reflections, organization, bugfixes)

After a bit of a lazy fortnight, I have made some slow but steady progress; it is not much, but nevertheless...

The rendering results have not changed much, but I have made some improvements.

For reflections, I have corrected the mistake that occurred when an eye path contained a point with a reflective material. In that case, if the next hit point after the reflection was a point on the light, the light would not be seen (it appeared black), because the probability of hitting the specific random point of the light path was practically zero. So in that situation I consider only one path (the eye path plus that point on the light) and not the all-by-all combinations, effectively falling back to plain path tracing.
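
In pseudocode the special case looks roughly like this inside the combination loop (the field names are invented for the illustration, they are not my actual struct members):
____________________________________________________________________________
// A connection through a specular eye vertex has practically zero probability,
// so skip the explicit eye-light connection for that vertex and let the path
// contribute only if the specular bounce itself reached the light
// (i.e. plain path tracing for that case).
if( BDpathEye[eyeIndex].isSpecular )
{
  if( BDpathEye[eyeIndex].hitLight )
    tmpColor += contrib * BDpathEye[eyeIndex].emission;
  continue;   // do not attempt the all-by-all connection here
}
____________________________________________________________________________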

I have also made some optimizations regarding memory consumption, especially in the way materials are used in the CUDA code. I have also added loading of .obj objects, but I still have some problems with the materials on the rendered objects, so proper objs will be shown in the next post.

I have also added point lights into the mix.

I have tried using two cards for the rendering, a GTX 560 Ti and a GTX 460, totaling 722 CUDA cores, with very nice results. The scaling is not perfect but is good enough for my code: 100% GPU usage for the main card (GTX 560 Ti) and about 85%-92% for the secondary one (GTX 460).


560 on the left, 460 on the right
Next steps are:

a. Create a proper scene loader.
b. Materials on objs, usage of more complicated material properties, materials containing textures, etc.
c. Importance sampling for many lights.
d. Correct refractions, because the present code contains a load of small errors.



Point Light example




Correct (or so it seems) reflections 

See you next week!
  
  


Friday, March 30, 2012

GTX 680 is out!

In case you missed it, Nvidia's GTX 680, with incredible specs, is out... with the GTX 690, with double the specs, on its way!

1536 CUDA cores, 2 GB VRAM and 1 GHz core frequency!

http://www.guru3d.com/article/geforce-gtx-680-review/2

And I thought my rig was nice...

Bidirectional Path Tracing using Nvidia Optix, Part 4 (Multiple lights and some bug fixes)

Hello again,

This time I have only a little to show: a couple of bug fixes, several performance improvements, and multiple lights.

With help from the Nvidia developer forum, I corrected a bug in my random number generator: I was passing the generator "seed" from hop to hop along the path by value, whereas passing it by reference is the right way to keep the seed variable updated.
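
In code terms the fix is just the function signature; here is a trimmed-down illustration (not the full sampling routine):
____________________________________________________________________________
// Buggy: the seed is copied, so every hop re-uses the same random state.
//   __device__ float2 next_sample( unsigned int seed )  { return make_float2( rnd(seed), rnd(seed) ); }

// Fixed: the seed is passed by reference, so each call to rnd() advances it for the next hop.
__device__ float2 next_sample( unsigned int& seed )
{
  return make_float2( rnd( seed ), rnd( seed ) );
}
____________________________________________________________________________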

On top of that, I have made several performance enhancements, so the project now runs the simple Cornell scene at about twice the speed (and some "change"): 33 fps (512x512 with 4 hops per path) instead of 13 fps at the same configuration. The rendering is here.

I have also added spherical lights and organized the Light class so that adding new types will be quick and easy.


Here is the rendering with the three lights at 1024X1024 resolution

The next step will be to correct the errors in the BRDFs of the reflective and refractive materials.

Monday, March 12, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 3 (Simple Refractions)

Hello !

Today I managed to add the first simple refractions to my project. The refractions are far from finished, but I think they are good enough to show. As you can see, I have also implemented spherical lights.

The material is purely refractive, without anything else, and it is also simple, without any tricks yet.

_____________________________________________________________________
SAMPLE CODE:


 // refraction coefficient
 current_prd.countEmitted = true;

 float ior = 0.0f;
 if( !current_prd.inside )
   ior = 1.33f;
 else
   ior = 1.f / 1.33f;

 if( refract( current_prd.direction, ray.direction, ffnormal, ior ) )
 {
   if( current_prd.inside )
   {
     current_prd.cos1 = dot( -ffnormal, -ray.direction );
     current_prd.cos2 = dot(  ffnormal, current_prd.direction );
   }
   else
   {
     current_prd.cos1 = dot(  ffnormal, -ray.direction );
     current_prd.cos2 = dot( -ffnormal, current_prd.direction );
   }
 }
 else // total internal reflection
 {
   current_prd.direction = reflect( ray.direction, -ffnormal );

   current_prd.cos1 = dot( -ffnormal, -ray.direction );
   current_prd.cos2 = dot( -ffnormal, current_prd.direction );
 }

 current_prd.attenuation = diffuse_color * Kt;

 if( current_prd.inside )
 {
   // Beer's law attenuation, applied per color channel
   current_prd.attenuation = current_prd.attenuation *
       make_float3( powf( diffuse_color.x, t_hit ),
                    powf( diffuse_color.y, t_hit ),
                    powf( diffuse_color.z, t_hit ) );
 }

 current_prd.inside = !current_prd.inside;

 current_prd.hitDistance = length( ray.origin - hitpoint );
 current_prd.ffnormal    = ffnormal;

 _________________________________________

Please leave your comments to help me make it better and to find any mistakes I've made!

Render 2

Nvidia Optix 2.5 is out!

Two weeks ago, Nvidia updated OptiX. The new version is way faster than 2.1 and has several bug fixes and new features... Update your system as soon as possible!

It will speed up your applications by anywhere from 40% to 3x on the same hardware, depending on the code.

The two features that are still missing and would be useful in a future release are:

* Compatibility with Nvidia's Nsight debugging utility
* And, mainly, the ability to declare buffers within the OptiX device code.

That would be very handy in my application!

Sunday, February 19, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 2 (Diffuse + reflections)

After minimal changes, I present my first reflections (in this example the diffuse coefficient is Kd = 0.1 and the reflectance coefficient is Kr = 0.9; these probabilities are used in the Russian roulette).



In these first two posts, the intensity of the light is attenuated at each "hop" of the path, as a product of:
 BDpathEye[eyeIndex].contribution*BDpathEye[eyeIndex].cosin*2.0*M_PIf;
(2π is used because at this point only diffuse surfaces are considered)

Here "contribution" is the BRDF at the ray's hit point.
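
Written out, and assuming the bounce directions are picked uniformly over the hemisphere (pdf = 1/2π, which is where the 2π factor comes from), the per-hop throughput update is:

T_{k+1} = T_k \cdot \frac{f_r(x_k)\,\cos\theta_k}{p(\omega_k)}, \qquad p(\omega_k) = \frac{1}{2\pi}

with f_r the diffuse BRDF (the "contribution" above) and cos θ_k the "cosin" factor.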



In the next post(s) I will try to improve this factor by adding the attenuation from the traversed distance (r^2) into the mix; I will also add refraction and a specular coefficient.

Wednesday, February 15, 2012

Bidirectional Path Tracing using Nvidia Optix, Part 1 (Diffuse only)

Greetings again after such a long time,

In this post I will show you my first results from my implementation of Bidirectional Path Tracing using Nvidia Optix, which I started 3 weeks ago as part of my CS Master's thesis at A.U.E.B. (Athens, Greece). For a quick overview of the theory, see here. Warning: the post is in Greek :-)

I will try to take a hybrid approach to the problem (using OptiX on the GPU together with some computations on the CPU to balance the computational load).

In this first post I will show you the first really good rendering, after correcting some mistakes I encountered during these 20 days. At this point the implementation includes only the basics: diffuse-only surfaces, with no visual effects or importance sampling. Reflection and refraction are under construction right now, so you will see them in a later post.

The code is pretty straightforward, but I will not post it all right now because it is not finished. I have strictly implemented the theory described in Veach's classic thesis and similar papers.



I have gathered the best material (theory and related stuff) here ->



Example snippet from the main OptiX function:

______________________________________________________________
.....
  unsigned int seed = tea<16>( screen.x*launch_index.y + launch_index.x, frame_number );

  do {
    // jittered sub-pixel sample
    unsigned int x = samples_per_pixel % sqrt_num_samples;
    unsigned int y = samples_per_pixel / sqrt_num_samples;
    float2 jitter  = make_float2( x - rnd(seed), y - rnd(seed) );
    float2 d       = pixel + jitter * jitter_scale;
    float3 ray_origin    = eye;
    float3 ray_direction = normalize( d.x*U + d.y*V + W );

    // 1st Step : Eye-Path
    unsigned int eyeDepth = 0u;
    float3 tmpres = pathtrace_camera( ray_origin, ray_direction, seed, eyeDepth );

    // 2nd Step : Light-Path
    unsigned int lightDepth = 0u;
    tmpres = pathtrace_light( seed, lightDepth );

    // 3rd Step : Combination of every light sub-path with every eye sub-path
    float3 tmpColor = make_float3( 0.0f, 0.0f, 0.0f );

    for( int light = 0; light < lightDepth; light++ )
    {
      for( int eye = 0; eye < eyeDepth; eye++ )
      {
        float3 contrib = make_float3( 1.0f, 1.0f, 1.0f );

        // throughput accumulated along the light sub-path
        for( int i = 0; i < light; i++ )
        {
          uint3 lightIndex = make_uint3( launch_index, i );
          contrib *= BDpathLight[lightIndex].contribution * BDpathLight[lightIndex].cosin * 2.0f*M_PIf;
        }

        // connection term between the two sub-path endpoints
        contrib *= ( BDconnection( light, eye ) * 2.0f*M_PIf * 2.0f*M_PIf );

        // throughput accumulated along the eye sub-path
        for( int j = 0; j < eye; j++ )
        {
          uint3 eyeIndex = make_uint3( launch_index, j );
          contrib *= BDpathEye[eyeIndex].contribution * BDpathEye[eyeIndex].cosin * 2.0f*M_PIf;
        }
        tmpColor += contrib;
      }
    }

    tmpColor /= (float)( lightDepth * eyeDepth );

    color += tmpColor;
  } while( --samples_per_pixel );

  // final pixel color: average over all samples of the pixel
  float3 pixel_color = color / (float)( sqrt_num_samples*sqrt_num_samples );
_______________________________________________________________________________