How to set the Precision of Depth Buffer in GLUT


  • 1. Linux, OpenGL, GLX_GRAY_SCALE
    Hi, does anyone know how to create a visual in Linux which uses GLX_GRAY_SCALE? The X server is configured to run in 8-bit grayscale mode. All "regular" OpenGL-based apps that I tested crashed under these conditions. I was trying to adapt the visual in one of my apps accordingly, but so far have failed .... Hartwig
  • 2. vertex shader + lighting
    Hello. Here is my first lighting vertex shader:

        struct vertex {
            float4 position : POSITION;
            float4 normal   : NORMAL;
            float4 color0   : COLOR0;
        };

        struct fragment {
            float4 position : POSITION;
            float4 color0   : COLOR0;
        };

        fragment main( vertex IN,
                       uniform float4x4 modelViewProjection,
                       uniform float4x4 modelViewInverse,
                       uniform float4 eyePosition,
                       uniform float4 lightVector )
        {
            fragment OUT;
            OUT.position = mul( modelViewProjection, IN.position );
            float4 normal = normalize( mul( modelViewInverse, IN.normal ).xyzz );
            float4 light = normalize( lightVector );
            float4 eye = eyePosition;
            float4 half = normalize( light + eye );
            float diffuse = dot( normal, light );
            float specular = dot( normal, half );
            specular = pow( specular, 32 );
            float4 ambientColor = float4( 0.2, 0.2, 0.2, 0.2 );
            float4 specularMaterial = float4( 1.0, 1.0, 1.0, 1.0 );
            OUT.color0 = diffuse * IN.color0 + specular * specularMaterial + ambientColor;
            return OUT;
        }

    Simple. I'd like to draw an object shaded with multiple lights, so my question: is there a way to pass more lights to the shader, or do I have to draw the object again for each light? Thanks -L
  • 3. Accumulation buffer under windows ??
    Can anybody send me a demo of using the accumulation buffer under Windows? I have tried to set the accum bits in PIXELFORMATDESCRIPTOR, and there seems to be no problem, but when I LOAD and RETURN the accum buffer it is always empty. Any hints? Thanks, Gerry
  • 4. glMultMatrix question - where is my fault?
    Hi, I want to use glMultMatrix the following way: (1) <<compute matrix>> glMultMatrix(matrix) draw "point p". Before, I did it the following way and everything was OK: (2) <<compute matrix>> draw "matrix * point p". Now it seems that glMultMatrix has no effect and the points are drawn in the wrong location. The example is very simple, and I want to avoid the performance hit of multiplying every time, because the points are computed somewhere else. I am not sure whether (1) and (2) are theoretically the same, and if they aren't, why not. Thanks in advance, and please excuse my English, Thomas
  • 5. vertexArray
    "Hello World\n", i would like to use vertex arrays. The problem is, that i don't have my vertices stored in an array of floats. I have the following architecture in my program: class Point3D { public: float x, y, z; private: int dx, dy, dz; int incx, incy, incz; int accux, accuy, accuz; int steps, currentStep; [Methods cut] class Vertex3D : public Point3D { public: Point3D normal; [Methods cut] In my PolygonObject i use the vertices like this: Vertex3D *vertex; vertex=new Vertex3D[vertexCount]; In the rendering method i tried the following code: glEnableClientState(GL_VERTEX_ARRAY); glVertexPointer(3, GL_FLOAT, (&(vertex[1].x)-&(vertex[0].z)-1), &vertex[0].x); but this does not work. i also tried it by manually enter numbers for stride in a range from 5 - 60, but the result was not correct. When i create a new array: float array[1000*3]; for (int i=0;i<vertexCount;i++) { array[i*3+0]=vertex[i].x; array[i*3+1]=vertex[i].y; array[i*3+2]=vertex[i].z; } glVertexPointer(3, GL_FLOAT, 0, array); everything is fine. is it possible to use vertex arrays with my architecture? Tnx, Florian

How to set the Precision of Depth Buffer in GLUT

Postby tim » Thu, 03 Jun 2004 00:02:13 GMT

Hi, everyone,

How do I set the precision of the depth buffer in GLUT?

Under MFC (Win32), by comparison, we always have a data structure to
control the precision of the depth buffer:
static PIXELFORMATDESCRIPTOR pfd =      // gives Windows info on what we want
   {
      sizeof(PIXELFORMATDESCRIPTOR),       // size
      1,                                   // version
      PFD_DRAW_TO_WINDOW |                 // must support drawing to a window
      PFD_SUPPORT_OPENGL |                 // must support OpenGL
      PFD_DOUBLEBUFFER,                    // must support double buffering
      PFD_TYPE_RGBA,                       // request red/green/blue/alpha color mode
      16,                                  // color depth
      0, 0, 0, 0, 0, 0,                    // color bits
      0,                                   // no alpha buffer
      0,                                   // no shift bit
      0,                                   // no accumulation buffer
      0, 0, 0, 0,                          // accumulation bits
      16,                                  // 16-bit z-buffer (depth)
      0,                                   // no stencil buffer
      0,                                   // no auxiliary buffer
      PFD_MAIN_PLANE,                      // main drawing layer
      0,                                   // reserved
      0, 0, 0                              // layer masks
   };

thank you very much!

Tim



Re: How to set the Precision of Depth Buffer in GLUT

Postby Andy V » Thu, 03 Jun 2004 09:13:48 GMT

If you need to do this, don't use GLUT. GLUT provides only limited
support, intended to simplify small demonstration programs.
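That said, GLUT 3.7 and freeglut do expose glutInitDisplayString, which lets you request a minimum depth precision instead of filling in a PIXELFORMATDESCRIPTOR yourself. A minimal sketch (requires a GLUT installation and a display; the "depth>=24" criterion is the part that replaces the pfd's depth-bits field):

```c
#include <stdio.h>
#include <GL/glut.h>   /* freeglut or GLUT 3.7+ */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);

    /* glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH) only says
     * "give me *a* depth buffer"; glutInitDisplayString lets you state a
     * minimum precision.  "depth>=24" asks for at least 24 depth bits and
     * lets the driver pick a matching pixel format. */
    glutInitDisplayString("rgba double depth>=24");

    glutCreateWindow("depth precision demo");

    /* Query what was actually granted: */
    printf("depth bits: %d\n", glutGet(GLUT_WINDOW_DEPTH_SIZE));

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```

Whether 24 (or 32) depth bits are actually available still depends on the driver's pixel formats; the string only expresses the request.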


Similar Threads:

1.D3D10: Rendering to depth buffer only and accessing 6 depth buffers as a cube texture

Hello,

I'm relatively new to D3D10, and I've got two questions:

1. In D3D9, you needed to have a color render target bound in order to 
render anything, even if you only wanted to render into the depth 
buffer. Is this still the case in D3D10?

2. I want to implement point light shadow maps using cube textures. 
Since it is now possible to render all six faces in one pass using the 
geometry shader stage, this seems much more promising than before.

Since it is now also possible to use the contents of the depth buffer as 
a texture (the SoftParticles sample from the SDK does it), I'd like to 
generate the shadow map by rendering only into 6 depth buffers, rather 
than encoding depth in a color buffer as had to be done in D3D9.

So in the end, I'd have 6 depth buffers representing the scene depth 
from the light source's point of view in all 6 directions.

In the lighting shader, of course, I want to check if a pixel is 
shadowed or not, so I have to access the depth buffers. But as they form 
a cube, I'd like to access the 6 depth buffers as one cube texture.

Is that possible? It would save a lot of texture memory not having to 
store the depth in an extra texture when it is already stored in the 
depth buffer.
Is there anything special to be aware of when doing that?

Thanks!

David

PS: If there is a special newsgroup for D3D10, please tell me! I looked 
for one, but I didn't find one.

2.How to set depth buffer size ?

Hello from Danilo.
I have a 24 bit depth buffer.
How can I set it to 32 bits using GLUT (and without using
PIXELFORMATDESCRIPTOR)?

Thanks to all !

Danilo

3.Write Depth Map to Depth buffer

Hi, I'm writing a program that inserts a 3D object into a 2D scene.
To do that I need to write a previously calculated depth map to the
depth buffer. I have it loaded into a texture, but right now I don't
know how to copy it to the depth buffer. Any help?
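One classic fixed-function route for this is glDrawPixels with format GL_DEPTH_COMPONENT, which raster-writes values straight into the depth buffer. The sketch below assumes the depth map is available as a float array in host memory (if it only exists as a texture, it would have to be read back first, e.g. with glGetTexImage); `writeDepthMap` is a hypothetical helper name, and a GL context must be current:

```c
#include <GL/gl.h>

/* Write a width x height array of depth values in [0,1] into the
 * depth buffer, leaving the color buffer untouched. */
void writeDepthMap(const GLfloat *depthMap, int width, int height)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); /* depth only */
    glDepthMask(GL_TRUE);                                /* allow depth writes */

    /* Note: simply disabling GL_DEPTH_TEST would also disable depth
     * writes; to force every value through, keep the test enabled but
     * make it always pass. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_ALWAYS);

    /* glWindowPos2i (OpenGL 1.4) sets the raster position directly in
     * window coordinates, bypassing the transform pipeline. */
    glWindowPos2i(0, 0);
    glDrawPixels(width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depthMap);

    /* Restore typical state. */
    glDepthFunc(GL_LESS);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
}
```

After this, the 3D object can be rendered normally with depth testing on, and it will be occluded wherever the 2D scene's depth map is closer.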

4.Writing 0 or 1 to depth buffer only when passing depth test

Hi,
   Is there any way to write specific values into the depth buffer
while not changing the depth testing?  If I use glDepthRange to limit
writes to a certain value, it affects the depth test for passing
fragments.  Same with changing the projection matrix to affect z
values.
   I have an application where I am working on a scene with potentially
millions of polygons on one large connected mesh and I want to move and
update the drawing of only a portion of them at a time (drawing
directly into the front buffer, on top of what was previously there) so
I can have a high frame rate.
   One way to do this would be to draw all the polygons I want to
update (before moving them anywhere) into the stencil buffer, then move
them, then draw again on top of the scene in the front buffer and only
draw where the stencil test passes.  However, it seems that many
mainstream graphics cards do not have stencil buffer support (for
example, the GeForce FX 5500 on the machine I'm working on right now -
is there some trick to enable it?).
   What I'm trying to do then is, with the scene already drawn once, to
draw the selected polygons into the depth buffer with a depth value of
0 before I move them (though they'll have to pass the depth test
initially to be drawn into the depth buffer), then I move the polygons
a bit, then draw them again into the front buffer with the depth test
enabled so that they'll only be drawn over the area where they
previously were but will correctly draw only the front-most polygons.
Ideas?  Anything would be greatly appreciated.

Best,
Tom

5.write depth buffer from depth map in pixel shader

Hi everyone,

I'm trying to write from a rectangular depth texture to the depth
buffer in a pixel shader. To do so, I draw in OpenGL a quad the size
of the screen with the corresponding texture coordinates. Then in a
shader I perform a texture lookup into my depth map at the position of
the pixel to get its depth, and I return that depth so it will be
written into the depth buffer (I verified my OpenGL states, and the
depth mask is true). When I try this, it does not seem to work. My
problem is that I don't understand what I am supposed to put in my
OUT.depth, because texRECT returns a float4 when I perform a texture
lookup (what is the format of the result of texRECT with a depth map?
Is it (D,D,D,1)?). I use a 24-bit depth map, which is correct because
I use it elsewhere in my program.

When I try to display the depth map in the frame buffer to debug, I
only get a few pixels at 0 and the rest at 1. What am I doing wrong?

Here is my code :

Code:

struct fragment_out
{
   float4 color : COLOR;
   float  depth : DEPTH;
};

struct fragment_in
{
   float4 color    : COLOR;
   float4 position : WPOS;
};

fragment_out main(fragment_in IN,
                  uniform samplerRECT samplerZ,
                  uniform samplerRECT samplerColor)
{
   fragment_out OUT;

   float4 res      = texRECT(samplerZ, IN.position.xy);
   float4 rescolor = texRECT(samplerColor, IN.position.xy); // just the color
                                                            // of the pixel (not
                                                            // used for the depth write)

   OUT.depth = res.r; // What should I put here? (is res = (D,D,D,1)?)

   //OUT.color = rescolor; // for the correct color

   OUT.color[0] = res.r;  // to debug ...
   OUT.color[1] = 0.0;
   OUT.color[2] = 0.0;
   OUT.color[3] = 1.0;
   return OUT;
}

6. Depth texture render to the depth buffer

7. DirectX9 D3D: Creating a Depth Buffer/Z-Buffer - Error C2065

8. Depth Buffer/Z-Buffer problem.....


