## Vertex buffer problems

Win32 Programming

### Next

• 1. Auto-resizing Camera Projection
Hi, I have a simple question (I think!). I want to change the projection of my camera based on the overall size of my model. The model can extend, and when it does I want the camera to calculate the new bounding sphere so I can work out the radius, which is used to resize. The problem I have is that the radius of the rootFrame.FrameHierarchy (see below) does not change even though the model changes; i.e. if the arms extend outwards this should increase the bounding sphere, but it doesn't.

```
// Calculate the center and radius of a bounding sphere
objectRadius = Frame.CalculateBoundingSphere(rootFrame.FrameHierarchy, out objectCenter);

// Setup the camera's projection parameters
camera.SetProjectionParameters((float)Math.PI / 4, aspectRatio, objectRadius / 64.0f, objectRadius * 200.0f);
camera.SetViewParameters(new Vector3(0, 0, -2.2f * objectRadius), Vector3.Empty);
camera.SetWindow(Framework.DefaultSizeWidth, Framework.DefaultSizeHeight);

// Set device transforms
DeviceKeyEvent.Transform.Projection = camera.ProjectionMatrix;
DeviceKeyEvent.Transform.View = camera.ViewMatrix;
```

How do I go about calling CalculateBoundingSphere() with the newly updated model hierarchy? Thanks, Paul
• 2. Initialize a geometry transformation
I know that the identity matrix can initialize a geometry transformation, and that with the view and projection matrices we can set up the right "viewport" for the game, but how do you decide, mathematically, where the geometries are placed in the scene? Say I have a floor object in MultiAnimation: how do you work out the values of the view, the projection, and the default "geometry" location of the floor, so that they all fit together well in one scene?
• 3. Displaying different colored units in tile-based game
Been working on a tile-based game and I'm at the point where I need to start displaying units on different teams in different colors. My tiles are all bitmaps and I know I could create a different bitmap for each team color I want, but I'd like to find a way to have a single set of bitmaps and munge the colors while displaying them. Any ideas? So far I'm just using ID3DXSprite (DX9) to display the appropriate bitmaps on the screen. Thanks, -Jeff
• 4. DX app without a device window.
Hi, I am using the GPU for some GPGPU stuff and don't want to create a window, as I will not be calling Present(). I plan to write a little library (say MyLib) that gets called by another app (Client) which has its own MsgProc() and window. Firstly, am I right that I have to create some sort of window (even if it's made very small or is minimized) before calling CreateDevice()? Secondly, will there be issues if DX resources are initialized from MyLib while the Client app's window is in focus and the one associated with MyLib is minimized/out of focus? Thirdly, if the Client is also a DX app with its own resources, will there be problems (performance/stability/etc.) with MyLib creating its own DX resources? Thanks. Dude.
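On question 1 above: CalculateBoundingSphere only sees the vertex data as it exists at call time, so it must be called again after the hierarchy's frame transforms have actually been applied to the geometry. A minimal sketch of recomputing a bounding sphere from the model's current world-space points (the names here are invented for illustration; this is the naive centroid-based sphere, not the managed D3DX routine itself):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Naive bounding sphere: center = centroid of the points, radius =
// distance to the farthest point. Assumes a non-empty point set.
// Recompute this each time the hierarchy's world transforms change.
static void boundingSphere(const std::vector<Vec3>& pts, Vec3& center, float& radius)
{
    center = {0.0f, 0.0f, 0.0f};
    for (const Vec3& p : pts) { center.x += p.x; center.y += p.y; center.z += p.z; }
    const float n = static_cast<float>(pts.size());
    center.x /= n; center.y /= n; center.z /= n;

    radius = 0.0f;
    for (const Vec3& p : pts) {
        const float dx = p.x - center.x, dy = p.y - center.y, dz = p.z - center.z;
        radius = std::max(radius, std::sqrt(dx*dx + dy*dy + dz*dz));
    }
}
```

Feeding the recomputed radius into SetProjectionParameters/SetViewParameters each frame (or whenever the arms extend) then keeps the camera fitted to the model.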

### Re: Vertex buffer problems

```[Please do not mail me a copy of your followup]

Please post managed API questions to
microsoft.public.win32.programmer.directx.managed.

Please pick one newsgroup, post there and wait for a response.

--
"The Direct3D Graphics Pipeline"-- code samples, sample chapter, FAQ:
< http://www.**--****.com/ ~legalize/book/>
Pilgrimage: Utah's annual demoparty
< http://www.**--****.com/ >
```

```Hi,

Say you have this custom vertex structure:

struct CUSTOMVERTEX
{
float x, y, z;
DWORD color;
};

Is it possible once you've defined values for a vertex
buffer using that structure to subsequently change the
color for a vertex that you defined? If so, how?

Thanks :)
```
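Yes: in D3D9 you Lock() the vertex buffer, overwrite the field in place, and Unlock(). The store itself is just a plain write to the locked memory. A minimal sketch of that write (the `argb` helper mirrors the D3DCOLOR_ARGB macro; `setVertexColor` is an invented name, and with a real IDirect3DVertexBuffer9 it would sit between Lock and Unlock):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

struct CUSTOMVERTEX {
    float x, y, z;
    uint32_t color;  // D3DCOLOR layout: 0xAARRGGBB
};

// Pack four 8-bit channels, like the D3DCOLOR_ARGB macro.
static uint32_t argb(uint32_t a, uint32_t r, uint32_t g, uint32_t b)
{
    return (a << 24) | (r << 16) | (g << 8) | b;
}

// With a real vertex buffer, `verts` would be the pointer returned by
// IDirect3DVertexBuffer9::Lock(); call Unlock() after writing.
static void setVertexColor(CUSTOMVERTEX* verts, std::size_t index, uint32_t color)
{
    verts[index].color = color;
}
```

Note that locking a D3DPOOL_DEFAULT buffer every frame can stall the GPU; for frequently changing colors, D3DUSAGE_DYNAMIC with D3DLOCK_DISCARD is the usual pattern.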

There is setup time internally both for a VB and for a particular
FVF/component layout. It's less if you stay with the same format/stride, but
there is still some cost in switching just the VB.

Minimize it if you can without heroic rewriting, but don't worry about it if
it would be a huge rewrite, unless you measure that it's a significant factor
in your particular performance signature.

Tools are required for measurement. A profiler such as VTune, plus a dynamic
analysis tool like PIX that shows your API usage per frame, is good; you can
zero in on most issues that way.

"Hanna-Barbera" < XXXX@XXXXX.COM > wrote in message
news: XXXX@XXXXX.COM ...
> Hi,
>
> Does switching VBs and IBs a lot cost too much?
> Is it better to store multiple things to be rendered in one VB/IB?
> Is it considered one of the costliest things that must be dealt with?
> What about the size of the buffers?
> I know 16-bit indices are recommended, which is what I use.
>
> I know my question is abstract, but I would like to know in relative
> terms, compared to switching textures, switching shaders, and so on.
>
> Right now, my engine may be switching texture, VB, and IB per object.
> Shaders are not switched that often because the objects share the same shader.
>
> Thanks
>
>

```
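A common way to apply the advice above is to sort draw calls by state, so that objects sharing a shader, texture, or VB end up adjacent and the switches collapse. A hedged sketch of that idea (`DrawItem` and its fields are invented stand-ins for real handles, not any D3D type):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Hypothetical per-object draw record; the ids stand in for real
// shader/texture/vertex-buffer handles.
struct DrawItem {
    int shader, texture, vb;
    int mesh;  // payload identifying what to draw
};

// Sort by the costliest state first (shader), then texture, then VB,
// so consecutive items share as much state as possible.
static void sortByState(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(), [](const DrawItem& a, const DrawItem& b) {
        if (a.shader != b.shader) return a.shader < b.shader;
        if (a.texture != b.texture) return a.texture < b.texture;
        return a.vb < b.vb;
    });
}

// Count how many times state would actually change while drawing in order.
static int stateChanges(const std::vector<DrawItem>& items)
{
    int changes = 0;
    for (std::size_t i = 1; i < items.size(); ++i)
        if (items[i].shader != items[i-1].shader ||
            items[i].texture != items[i-1].texture ||
            items[i].vb != items[i-1].vb)
            ++changes;
    return changes;
}
```

As the reply says, measure first: only reach for this if a profiler shows the switches are a real cost.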

Ok... so I tried to take my first step toward creating a 3D object with vertex buffers, but I have a problem cleaning up. For some reason, after I call CreateVertexBuffer and then try to exit the application, it keeps saying "reference count for D3D object is non-null". I am releasing the vertex buffer before I exit the application, and I'm using D3DPOOL_MANAGED for the vertex buffer.

Am I missing an important step here? The tutorial doesn't really have any special cleanup either, but it works fine for the tutorials and samples. If anyone can give me a few pointers or things to watch out for, I would appreciate it. Or if anyone can tell me what I'm doing wrong, that would be even better. ^^; Thanks.
```
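The usual cause of that debug-runtime message is an interface pointer that was never Released: every Create*/Get* call that hands back a D3D interface gives you an owned reference, and each one must be balanced by exactly one Release(). A toy illustration of the balance rule, using invented stand-ins for COM reference counting (`RefCounted` and `ScopedRef` are not real D3D types; in practice a smart pointer such as CComPtr plays the `ScopedRef` role):

```cpp
#include <cassert>

// Minimal stand-in for a COM object's reference counting: creation
// (e.g. CreateVertexBuffer) implies one reference, and every AddRef()
// must be balanced by exactly one Release().
struct RefCounted {
    int refs = 1;
    int AddRef()  { return ++refs; }
    int Release() { return --refs; }  // real COM destroys the object at zero
};

// RAII wrapper in the spirit of CComPtr: releases on scope exit, which
// is the usual cure for "reference count is non-null" at shutdown.
template <typename T>
struct ScopedRef {
    T* p;
    explicit ScopedRef(T* ptr) : p(ptr) {}
    ~ScopedRef() { if (p) p->Release(); }
};
```

Typical leaks in this situation: a surface or interface obtained via a Get* call (GetBackBuffer, GetRenderTarget, etc.) that was never Released, or the device still holding the buffer via SetStreamSource at shutdown.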

I have a vertex buffer of about 4000 vertices and an index buffer of
8000 indices. The problem: when I render all these vertices I get very
poor performance, 1 or 2 frames per second, but the CPU usage is only 4%.
I don't know why.

The vertex buffer and index buffer were created with D3DPOOL_MANAGED. I
tried D3DPOOL_SYSTEMMEM, but there was no improvement.

Thanks.

```
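Low frame rate combined with low CPU usage usually means the CPU is stalled waiting on the GPU, e.g. blocked inside Present or inside a Lock on a buffer the GPU is still reading, rather than doing work itself. A first diagnostic step is simply timing frames to see where the seconds go; a minimal sketch of a frame timer (the struct name is invented):

```cpp
#include <cassert>
#include <chrono>

// Crude frame timer: call tick() once per frame; if the interval is
// huge while task-manager CPU usage stays low, the app is waiting on
// the GPU (or on vsync/Lock stalls), not burning CPU cycles.
struct FrameTimer {
    std::chrono::steady_clock::time_point last = std::chrono::steady_clock::now();

    double tick()  // returns seconds elapsed since the previous tick
    {
        auto now = std::chrono::steady_clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;
        return dt;
    }
};
```

If individual draw calls turn out to be the slow part, a frame-capture tool such as PIX will show which call stalls; 4000 vertices should render in well under a millisecond on any hardware of this era.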

Hello,

I've just implemented ARB_vertex_buffer_object in one of my programs.
Everything works perfectly, but ... it all runs twice SLOWER than a normal loop.

I create everything like this:

GLuint vbo_v, vbo_nv, ...;

glGenBuffersARB(1, &vbo_v);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_v);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, 3*nbrv*sizeof(float), v, GL_STATIC_DRAW_ARB);

glGenBuffersARB(1, &vbo_nv);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo_nv);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, 3*nbrv*sizeof(float), nv, GL_STATIC_DRAW_ARB);
...

where v and nv are pointers to two arrays (vertices and vertex
normals). I do the same with vertex colors, tex coords, etc.

For the indices:

glGenBuffersARB(1, &vbo_f);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, vbo_f);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 3*nbrf*sizeof(int), f, GL_STATIC_DRAW_ARB);

When I bind all that:

glEnableClientState(GL_COLOR_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB,vbo_vc);
glColorPointer(4,GL_UNSIGNED_BYTE,0,NULL);

glEnableClientState(GL_NORMAL_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB,vbo_nv);
glNormalPointer(GL_FLOAT,0,NULL);

glEnableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB,vbo_v);
glVertexPointer(3,GL_FLOAT,0,NULL);

// DRAWING
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB,vbo_f);
glDrawElements(GL_TRIANGLES,3*nbrf,GL_UNSIGNED_INT,NULL);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);

I get about 60 fps with a normal loop and 30 fps when using VBOs.
I'm running on an AMD Athlon 2GHz, 512MB, GeForce4 MX.

Everything works perfectly but twice as slow as a normal loop with glNormal,
glVertex, glTexCoord, etc.
Why? Did I do something wrong? Aren't vertex buffer objects supposed to
increase the frame rate, not make your programs run slower? Help!
```
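One likely culprit on a GeForce4 MX is the GL_UNSIGNED_INT index type: hardware of that generation handles 16-bit indices far better than 32-bit ones, and 32-bit element arrays can push the driver onto a slow path. If every index fits in 16 bits, narrowing the index array and drawing with GL_UNSIGNED_SHORT is worth trying; a sketch of the conversion (`narrowIndices` is an invented helper, and interleaving the vertex attributes into a single VBO is another common win):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// If every index fits in 16 bits, copy the 32-bit index array into a
// 16-bit one so glDrawElements can use GL_UNSIGNED_SHORT instead of
// GL_UNSIGNED_INT. Returns false if any index is too large to narrow.
static bool narrowIndices(const std::vector<uint32_t>& in, std::vector<uint16_t>& out)
{
    for (uint32_t i : in)
        if (i > 0xFFFF)
            return false;          // cannot narrow, keep the 32-bit array
    out.assign(in.begin(), in.end());
    return true;
}
```

The upload side would then pass `out.size()*sizeof(uint16_t)` to glBufferDataARB and GL_UNSIGNED_SHORT to glDrawElements; with only a few thousand vertices per mesh, 16-bit indices are almost always sufficient.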