Large Vertex Buffers and Vertex Streams?

Win32 Programming


  • 1. multi texturing with environment mapping
    Hi, is there anyone who can help me with this problem? I am trying to do a multi-textured, environment-mapped object in DX8. I have been playing around with the SphereMap demo that comes with the DX8 SDK. The effect I am trying to get is, for example, the teapot having a wooden texture on it, with a light environment map applied on top to give a very glossy wooden object. Is there an easy way to add multi-texturing to the SphereMap demo to do this, or does anyone know of a better method to accomplish it? Thanks
  • 2. having some problems with depth stencil surfaces
    Basically, I have a main render surface created by CreateDevice (the size of which can change). This surface is created with EnableAutoDepthStencil turned on, thus creating a depth stencil that is the same size as the main render surface. Later on, the program calls CreateRenderTarget to create a secondary surface (using a fixed size), then calls GetDepthStencilSurface followed by SetRenderTarget. This works great if the main surface is as large as or larger than the surface created by CreateRenderTarget. But if it's smaller, it fails because the depth stencil surface is too small. So I need to do one of three things: 1. create a new depth stencil surface to use with the secondary surface, in a way that doesn't disturb the primary surface and its depth stencil surface; 2. manually create a depth stencil surface as large as the larger of my two surfaces via CreateDepthStencilSurface, and associate it with the primary surface instead of using EnableAutoDepthStencil, in a way that works for both surfaces; or 3. some other solution (e.g. resizing the back buffer in CreateDevice and then drawing only part of it in my viewport somehow). Anyone got a solution that will work with as few code changes as possible? I can't change the fixed size of the secondary surface, or change things so that the main surface never becomes smaller than the secondary surface (because of how the app works).
  • 3. D3DXQuaternionSlerp And Quaternions
    Hi all, I have two questions regarding quaternions and spherical interpolation. I am trying to interpolate between two rotation values: suppose frame 1 has rotation angles (0.0f, 0.0f, 0.0f) and frame N has angles (160.0f, 0.0f, 0.0f). What I do is create two quaternions using the D3DXQuaternionRotationYawPitchRoll() function and use D3DXQuaternionSlerp() to interpolate between them. There are two issues. 1) When I take the interpolated quaternion and convert it to a transformation matrix, the matrix also picks up a scaling value on the Z axis, so up to frame N/2 the object has decreasing scale in Z, and then it returns to scale 1 at frame N. What could be the problem? Right now I normalize the matrix values and remove the scaling, so it works fine, but it doesn't make sense. 2) If I try to interpolate between 0 and 360-plus values, or any values over 180, the object interpolates along the shortest path, although I want it to rotate through the whole angle. Suppose I give 720: it should show two loops around the X axis, but it doesn't move at all, same as with 360. How do I do it? I am pretty puzzled right now. Thanks in advance. One can mail me at XXXX@XXXXX.COM also. Vishu
  • 4. depth stencil surface question
    I have an app that calls IDirect3DDevice8::GetDepthStencilSurface. However, there are no calls to IDirect3DDevice8::CreateDepthStencilSurface anywhere in the program. How, then, would the depth stencil surface be created/initialized? Specifically, how is the size of the depth stencil surface being set if it's not being set by a call to CreateDepthStencilSurface?
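Regarding the slerp questions in thread 3 above: the behaviour described is inherent to spherical interpolation, and a plain C++ sketch makes it easy to see. The code below is an illustrative stand-in for D3DXQuaternionSlerp (not the D3DX source): it flips the sign of one endpoint to take the shorter arc, which is why anything over 180 degrees never rotates "the long way". A 720-degree rotation quaternion is literally the identity (half-angle 360, sin = 0, cos = 1), so there is nothing to interpolate; multi-turn animation needs intermediate keyframes under 180 degrees apart. The Z-scaling issue is commonly caused by a slightly non-unit quaternion going into the matrix conversion, which normalizing the quaternion (rather than the matrix) fixes.

```cpp
#include <cassert>
#include <cmath>

// Minimal quaternion for illustration; D3DXQUATERNION plays this role in DX8.
struct Quat { float x, y, z, w; };

Quat axisAngleX(float radians) {            // rotation about the X axis
    return { std::sin(radians * 0.5f), 0.0f, 0.0f, std::cos(radians * 0.5f) };
}

Quat slerp(const Quat& a, Quat b, float t) {
    float dot = a.x*b.x + a.y*b.y + a.z*b.z + a.w*b.w;
    if (dot < 0.0f) {                       // take the shorter arc, like D3DX does
        dot = -dot; b.x = -b.x; b.y = -b.y; b.z = -b.z; b.w = -b.w;
    }
    if (dot > 0.9995f) {                    // nearly parallel: fall back to lerp
        return { a.x + t*(b.x-a.x), a.y + t*(b.y-a.y),
                 a.z + t*(b.z-a.z), a.w + t*(b.w-a.w) };
    }
    float theta = std::acos(dot);
    float s0 = std::sin((1.0f - t) * theta) / std::sin(theta);
    float s1 = std::sin(t * theta) / std::sin(theta);
    return { s0*a.x + s1*b.x, s0*a.y + s1*b.y, s0*a.z + s1*b.z, s0*a.w + s1*b.w };
}

// Renormalizing before building a matrix avoids the spurious scale term.
Quat normalize(Quat q) {
    float len = std::sqrt(q.x*q.x + q.y*q.y + q.z*q.z + q.w*q.w);
    return { q.x/len, q.y/len, q.z/len, q.w/len };
}
```

Halfway between identity and a 160-degree X rotation, this yields an 80-degree X rotation; halfway between identity and "720 degrees", it yields the identity again, matching the symptom reported in the post.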

Large Vertex Buffers and Vertex Streams?

Postby Jasper » Wed, 04 Feb 2004 07:27:18 GMT

Hello

First time on here, so I hope my question isn't too stupid.

I am doing a 3D engine and I plan to build numerous large (could be 2 MB or so
each) vertex and index buffers, one per type of vertex stream data: so one
for XYZ, one for normals, one for texture coords, etc. The question is: would
I be forced to limit these buffer sizes if I use the 16-bit index type in the
index buffers, even though no single mesh or call to DrawIndexedPrimitive will
use over 64,000 vertices? Or does my strategy make use of the 32-bit type a must?

Also, on the stream data: not all meshes will need all the info, i.e. some may
only use a single texture. So does the OffsetInBytes argument in
SetStreamSource allow me to change where DX gets the stream info, like
StartIndex does in DrawIndexedPrimitive? Or will I have to pad out each
stream buffer? Hope I've explained that clearly enough.

Many thanks

Jasper






Re: Large Vertex Buffers and Vertex Streams?

Postby Phil Taylor [ATI] » Wed, 04 Feb 2004 08:00:33 GMT

Vertex buffer size has no relation to draw size. Large VBs are okay as long
as you don't exceed the 64K vertex limit per draw with 16-bit index data, as
you have noted.








Re: Large Vertex Buffers and Vertex Streams?

Postby Eyal Teler » Wed, 04 Feb 2004 08:48:29 GMT

Note that separating vertex components can slow rendering, since it 
makes it harder for the chip to read and cache the vertices (although 
the difference may not be great). This separation can be useful in 
some cases (such as tweening), but I don't see an advantage in it in 
the general case.

And yes, the stream offset should do what you want.

	Eyal
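The arithmetic behind that stream offset is just "first vertex times stride". A minimal sketch, assuming meshes are packed back-to-back in one shared stream (the struct and function names are illustrative, not D3D API): the byte offset handed to a SetStreamSource-style call makes the mesh's first vertex appear at index 0 for the draw.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical bookkeeping for meshes packed into one shared stream.
struct StreamedMesh {
    uint32_t firstVertex;   // position within the shared stream, in vertices
    uint32_t vertexCount;   // how many vertices this mesh owns
};

// Byte offset for a SetStreamSource-style OffsetInBytes argument, so the
// mesh's first vertex lines up with index 0 in the draw call.
uint32_t streamOffsetBytes(const StreamedMesh& m, uint32_t stride) {
    return m.firstVertex * stride;
}
```

Note that in D3D9 (where SetStreamSource gained OffsetInBytes) the driver advertises support for non-zero offsets via the D3DDEVCAPS2_STREAMOFFSET caps bit, which is worth checking before relying on this.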




Similar Threads:

1.Changing vertex color after vertex buffer and index buffer definition

Hi,

Say you have this custom vertex structure:

struct CUSTOMVERTEX
{
	float x, y, z;
	DWORD color;
};

Is it possible, once you've defined values for a vertex 
buffer using that structure, to subsequently change the 
color of a vertex that you defined? If so, how?

Thanks :)
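The usual answer (an assumption here, since no reply is quoted in the thread) is yes: Lock the vertex buffer again, cast the returned pointer to the CUSTOMVERTEX layout, patch the color field, and Unlock. The sketch below simulates the locked memory with a plain array so the pointer arithmetic is visible; in real DX8 code the pointer would come from IDirect3DVertexBuffer8::Lock.

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the post's layout (D3DFVF_XYZ | D3DFVF_DIFFUSE).
// uint32_t stands in for the Win32 DWORD.
struct CUSTOMVERTEX {
    float    x, y, z;
    uint32_t color;
};

// Stand-in for the memory an IDirect3DVertexBuffer8::Lock call returns.
CUSTOMVERTEX g_vbStorage[3] = {
    { 0.0f, 0.0f, 0.0f, 0xFFFFFFFFu },
    { 1.0f, 0.0f, 0.0f, 0xFFFFFFFFu },
    { 0.0f, 1.0f, 0.0f, 0xFFFFFFFFu },
};

// Given the pointer Lock() handed back, patch just one vertex's color;
// the positions and the other vertices are left untouched.
void setVertexColor(CUSTOMVERTEX* locked, int index, uint32_t argb) {
    locked[index].color = argb;
}
```

For a buffer created without D3DUSAGE_WRITEONLY this read-modify-write pattern is safe; with WRITEONLY you should rewrite whole vertices rather than read them back.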

2.Streaming large amounts of data to vertex / fragment programs

Hi,

This is my first post here; I use OpenGL for programming as a hobby. I'm 
French, so please excuse my bad English :)

Here is my problem: in the context of a hardware-accelerated raytracer, 
I get horrible performance when using vertex programs + fragment 
programs + VBOs.

My aim is to balance the calculations needed between the GPU and the CPU.

I get good performance using the OpenGL fixed path with this method:
- get intersection points on the CPU
- set a non-projective projection and render the diffuse pass; 
lighting is done on the GPU
- set a perspective projection and modelview matching the raytraced scene, 
and render the specular pass with "imposters" (polygonal objects roughly 
representing the raytraced scene)

I implemented another version using Cg :
- get intersection points on the CPU
- render scene once per light

The second approach looks promising (especially for big scenes where the 
specular pass becomes costly), but I get worse performance than with the 
fixed path.
Here are my results for about 57k intersections (fewer than 228k vertices 
sent to the GPU; vertices weigh 40 bytes (color, normal, and position) for 
the fixed-path version, and 48 bytes for Cg (color, normal, 2D position, 
world-space position)). Each line gives the FPS with 1 and 4 lights:

- fixed path, immediate mode:
   FPS = (13.4721, 12.8229)
- fixed path, rendering deferred through classic vertex arrays:
   FPS = (12.6694, 11.4943)
- fixed path, rendering deferred through one big VBO:
   FPS = (11.4602, 10.8844)
- Cg, rendering deferred through one big VBO:
   FPS = (8.65896, 3.77475)

The rendered data is all GL_QUADS of varying screen size (the raytracing 
is done with an adaptive quadtree subdivision scheme).

Of course the scene looks better (no artifacts on specular highlights), 
and Cg opens up lots of interesting effects, so I can live with the ~40% 
perf drop, but why does it get so bad when the number of lights increases?


++

3.Update a stream in vertex buffer

I have a vertex buffer created with the D3DUSAGE_WRITEONLY and
D3DPOOL_MANAGED flags. The FVF is as below:

#define HD3D_FVF_PT_NML	    (D3DFVF_XYZ|D3DFVF_NORMAL)

I have a requirement wherein I need to change the normal data in my
vertex buffer. This has to be done on every update cycle. I was
thinking that I could just Lock the vertex buffer again and overwrite
the existing normal data with the new data. But what I found was that
when I Lock a second time, DirectX doesn't give me back all my vertex
buffer data; it just gives me some memory location to write to. I am
using 0 as the flag for the Lock function, since I didn't find any
other flags relevant.

To summarize: how can I overwrite a specific stream of data in a vertex
buffer and leave the others untouched?
How efficient is this method compared to creating a whole new vertex buffer?
Is there any other, better way?

Thanks,
Rajesh
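One commonly suggested workaround (an assumption here, not a reply quoted in this thread) is to never read from a D3DUSAGE_WRITEONLY buffer at all: keep the frequently-changing data in a system-memory shadow copy, update that each cycle, and memcpy it into the pointer Lock() returns. With an interleaved XYZ|NORMAL layout the shadow copy must cover the whole vertex; splitting normals into their own stream lets the shadow copy cover only the normals. A sketch of the split-stream variant, with the shadow copy simulated in plain C++:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// The per-vertex data that changes every update cycle, kept in its
// own stream so the positions never need to be touched.
struct Normal { float nx, ny, nz; };

// System-memory shadow copy: the authoritative data. The write-only
// locked buffer is only ever memcpy'd *into* from here, never read.
std::vector<Normal> g_shadowNormals(4, Normal{0.0f, 1.0f, 0.0f});

// Called with the pointer a Lock() on the normals stream would return;
// dst is treated as write-only, so we blast the whole shadow copy across.
void uploadNormals(void* dst) {
    std::memcpy(dst, g_shadowNormals.data(),
                g_shadowNormals.size() * sizeof(Normal));
}
```

Compared with creating a fresh vertex buffer each frame, relocking and rewriting one small stream avoids allocation churn, and D3DPOOL_MANAGED handles re-uploading the dirtied range to the card.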

4.Multiple streams, single vertex buffer

Can I put multiple streams in a single vertex buffer? I would like to do vertex 
tweening, but instead of having a vertex buffer for each stream, can I have a 
different part of one vertex buffer serve as each stream?

5.Vertex Arrays, Vertex Buffer Objects

Hi list.

I implemented rendering using display lists, vertex arrays, and vertex
buffer objects. And guess which was the fastest? ... display lists!

This would seem to mean that I implemented the array stuff badly.
For the VBO part, I'm quite sure that today's drivers are not correctly
implemented: systematic crashes on some ATI hardware, even when the
extension is detected, and poor performance on nVidia. At best, it runs as
fast as plain vertex arrays.
For the VA part ... I guess all static geometry could be rendered
using glLockArraysEXT()? I think this is called a compiled vertex array?
Is this an improvement compared to plain vertex arrays?
Will I have to go to VAR to achieve correct performance?

Thanks for any enlightenment.

SeskaPeel.

6. vertex buffer, index buffer

7. Vertex shader and multi streams

8. Vertex blending to vertex shader


