Changing vertex color after vertex buffer and index buffer definition


Changing vertex color after vertex buffer and index buffer definition

Postby Andy » Thu, 04 Sep 2003 10:15:01 GMT

Hi,

Say you have this custom vertex structure:

struct CUSTOMVERTEX
{
	float x, y, z;
	DWORD color;
};

Is it possible, once you've defined values for a vertex 
buffer using that structure, to subsequently change the 
color of a vertex that you defined? If so, how?

Thanks :)

Re: Changing vertex color after vertex buffer and index buffer definition

Postby Eyal Teler » Thu, 04 Sep 2003 18:26:26 GMT

> struct CUSTOMVERTEX

The way you probably filled it in the first place -- by locking the 
buffer and writing the new value into it. (And then unlocking, of 
course. :)

	Eyal
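
A quick sketch of that lock-write-unlock flow, assuming Direct3D 9
(Direct3D 8 is nearly identical, except that Lock takes a BYTE**); the
buffer pointer, vertex index, and helper name below are illustrative
rather than anything from the thread:

#include <d3d9.h>

// CUSTOMVERTEX as defined in the question above.
HRESULT RecolorVertex( IDirect3DVertexBuffer9* pVB, UINT vertexIndex, DWORD newColor )
{
    CUSTOMVERTEX* pVertex = NULL;

    // Lock only the vertex we want to touch; passing 0, 0 would lock the whole buffer.
    HRESULT hr = pVB->Lock( vertexIndex * sizeof(CUSTOMVERTEX),
                            sizeof(CUSTOMVERTEX),
                            (void**)&pVertex, 0 );
    if( FAILED( hr ) )
        return hr;

    pVertex->color = newColor;   // overwrite just the colour field

    // If the buffer was created with D3DUSAGE_WRITEONLY, only write through
    // this pointer; never read it back.
    return pVB->Unlock();        // unlock before drawing from the buffer again
}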


Similar Threads:

1.Vertex and Index Buffer - color updating after definition

Andy wrote:
> I realized that my earlier question was not complete...
> 
> Given a defined vertex structure, a defined set of 
> vertices in a vertex buffer, and a defined index buffer 
> that has referenced those vertices, can you then change 
> vertex color as you use items being referenced by the 
> index buffer? If so, how?
> 
> (Simply put I'm trying to reuse vertex data that needs to 
> vary in color at different index buffer values... is there 
> a smarter way to do this?)
> 
> Thanks :)

If you mean you're trying to avoid duplicating vertices that have the 
same position but a different colour, just duplicate them. There's no 
good way to reuse them.

	Eyal
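
To make the suggestion concrete, a minimal sketch of that duplication,
reusing the CUSTOMVERTEX layout from the original question (the positions,
colours, and index choices are made up):

// Two copies of the shared corner: identical position, different colour.
CUSTOMVERTEX verts[] =
{
    { 0.0f, 0.0f, 0.0f, 0xFFFF0000 },   // copy 0: red
    { 0.0f, 0.0f, 0.0f, 0xFF00FF00 },   // copy 1: green, same position
    { 1.0f, 0.0f, 0.0f, 0xFFFF0000 },
    { 0.0f, 1.0f, 0.0f, 0xFF00FF00 },
};

// Each triangle indexes whichever copy carries the colour it needs.
WORD indices[] =
{
    0, 2, 3,    // references the red copy of the shared corner
    1, 3, 2,    // references the green copy of the same corner
};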

2.vertex buffer, index buffer

There is setup time internally both for a VB and for a particular 
FVF/component layout. It's less if you stay with the same format/stride, 
but there is still some cost to switching just the VB.

Minimize it if you can without heroic rewriting, but don't worry about it 
if it's a huge rewrite, unless you measure that it's a significant factor 
in your particular performance signature.

Tools are required for measurement. VTune (or another static performance 
analysis tool) together with a dynamic analysis tool like PIX, which shows 
your API usage per frame, works well; you can zero in on most issues that 
way.

"Hanna-Barbera" < XXXX@XXXXX.COM > wrote in message 
news: XXXX@XXXXX.COM ...
> Hi,
>
> Does switching VBs and IBs a lot cost too much?
> Is it better to store multiple things to be rendered in one VB/IB?
> Is it considered one of the costliest things, something that must be dealt with?
> What about the size of the buffers?
> I know 16-bit indices are recommended, which is what I use.
>
> I know my question is abstract, but I would like to know in relative 
> terms, compared to switching textures, switching shaders, and so on.
>
> Right now, my engine may be switching texture, VB and IB per object.
> Shaders are not switched so often because the objects share the same shader.
>
> Thanks
>
> 
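
As a rough illustration of the "store multiple things in one VB/IB" idea
discussed above, a Direct3D 9 sketch; the MeshRange bookkeeping and every
name in it are invented for the example, not taken from either post:

#include <d3d9.h>

struct MeshRange
{
    UINT baseVertex;    // first vertex of this mesh inside the shared VB
    UINT vertexCount;   // number of vertices the mesh uses
    UINT startIndex;    // first index of this mesh inside the shared IB
    UINT triCount;      // number of triangles to draw
};

// Assumes the vertex declaration/FVF has already been set elsewhere.
void DrawPackedMeshes( IDirect3DDevice9* dev,
                       IDirect3DVertexBuffer9* vb,
                       IDirect3DIndexBuffer9* ib,
                       const MeshRange* meshes, UINT meshCount, UINT stride )
{
    // Bind the shared buffers once instead of switching them per object.
    dev->SetStreamSource( 0, vb, 0, stride );
    dev->SetIndices( ib );

    for( UINT i = 0; i < meshCount; ++i )
    {
        const MeshRange& m = meshes[i];
        dev->DrawIndexedPrimitive( D3DPT_TRIANGLELIST,
                                   m.baseVertex,    // added to every fetched index
                                   0,               // minimum index used, relative to baseVertex
                                   m.vertexCount,
                                   m.startIndex,
                                   m.triCount );
    }
}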


3.Missing Function Definitions for Vertex Buffer Objects (VBOs) on Linux

Hey there, OpenGL Hackers!

I am currently adding support for Vertex Buffer Objects in my Linux
(Ubuntu Feisty) application.

I have included <GL/glext.h>, and this file really does contain the
functions and defines related to VBOs.

But when I compile my program, gcc finds only the defines, not the
function declarations (implicit declaration of gl...).

Linking fortunately works, so my ATI Radeon 300 open source driver
apparently has them.

I of course still want the declarations to be included so I know that I
am passing parameters of the correct types.

How do I fix this?

"glxinfo|grep version" says:
server glx version string: 1.2
client glx version string: 1.4
GLX version: 1.2
OpenGL version string: 1.3 Mesa 6.5.2

and
glxinfo |grep vertex_buffer says:
GL_ARB_vertex_buffer_object, GL_ARB_vertex_program,
GL_ARB_window_pos,

I have read that VBOs were introduced in OpenGL 1.5, but my driver does
not seem to fully support 1.5. I guess this may be the reason for my
problem.

Thanks in advance,
Nordl
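
For what it's worth, on Mesa-style Linux headers the prototypes in
glext.h are typically guarded by GL_GLEXT_PROTOTYPES, so a sketch of the
two usual workarounds (the p-prefixed pointer names are only
illustrative):

// Option 1: ask glext.h to expose the prototypes as well as the #defines.
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

// Option 2: fetch the entry points at run time through GLX; this works
// even when the headers keep the prototypes hidden.
#include <GL/glx.h>

PFNGLGENBUFFERSARBPROC pglGenBuffersARB = NULL;
PFNGLBINDBUFFERARBPROC pglBindBufferARB = NULL;
PFNGLBUFFERDATAARBPROC pglBufferDataARB = NULL;

void LoadVboEntryPoints(void)
{
    pglGenBuffersARB = (PFNGLGENBUFFERSARBPROC)
        glXGetProcAddressARB( (const GLubyte*)"glGenBuffersARB" );
    pglBindBufferARB = (PFNGLBINDBUFFERARBPROC)
        glXGetProcAddressARB( (const GLubyte*)"glBindBufferARB" );
    pglBufferDataARB = (PFNGLBUFFERDATAARBPROC)
        glXGetProcAddressARB( (const GLubyte*)"glBufferDataARB" );
}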

4.memory-loss when locking & unlocking vertex/index buffers

Hi,

I am finding that memory is being consumed at the rate of 1 to 6
megabytes per lock/unlock on my vertex and index buffer. The memory
does not get released until my application exits, which means that the
longer my app runs the more memory is consumed, so at a low 20fps
that's an awful lot of memory being consumed per second. Surely this
should not happen?

I am running VB.NET and Managed DirectX.

Cheers, code is posted below,

Richy

' I lock my buffers:

_vbdata = _vb.Lock(0, 0)
_ibdata = _ib.Lock(0, 0)

_lmvbdata = _lmvb.Lock(0, 0)
_lmibdata = _lmib.Lock(0, 0)

' and then I unlock it all:

_vb.Unlock()
_ib.Unlock()
_lmib.Unlock()
_lmvb.Unlock()

' and then I set the variables to nothing

_vbdata = Nothing
_ibdata = Nothing

_lmvbdata = Nothing
_lmibdata = Nothing

5.Filling vertex and index buffers

I am trying to do indexed rendering, and have a question. The book I am
going through (Zen of Direct 3D Game Programming) uses the following method
to fill a vertex buffer:  (I have changed some variable names to make their
use more obvious in such a small snippet)

LPDIRECT3DVERTEXBUFFER8 pVB = 0;
hResult = g_pDevice->CreateVertexBuffer( sizeof(CustomVertex) * iNumVerts,
                                         D3DUSAGE_WRITEONLY,
                                         CUSTOMVERTEXTYPE,
                                         D3DPOOL_DEFAULT,
                                         &pVB );

BYTE* pVertexData = 0;

hResult = pVB->Lock( 0, 0, &pVertexData, 0 );

///////////////////////////////////////////////////
// Here is the bit that gives me trouble
///////////////////////////////////////////////////

CopyMemory( pVertexData, (void *)&m_Vertices, sizeof(m_Vertices) );
pVB->Unlock();

I have 2 questions about the use of CopyMemory(). First, what protection do
I have against a memory overrun? I think I am having this problem, and if I
understand the CopyMemory() function correctly, there is nothing that says
the block of memory pointed to by pVertexData is going to safely hold what
I'm trying to copy to it. Is this how it is usually done? It just seems
wrong to me.

Also, and this is really a C++ question, but it is relevant to this task: in
this example m_Vertices is an array of Vertices. So doesn't
sizeof(m_Vertices) only give me the size of the first element of the array?
I must not understand arrays as well as I thought I did, because I thought
you couldn't use "sizeof" to find the size of an entire array.

Thanks!
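
For what it's worth, both points come down to plain C/C++ behaviour
rather than anything Direct3D-specific; a small sketch with made-up names
and sizes:

#include <windows.h>   // DWORD, CopyMemory
#include <cstddef>     // size_t

struct CustomVertex { float x, y, z; DWORD color; };

CustomVertex m_Vertices[24];   // a real array, not a pointer

// CopyMemory (RtlCopyMemory) does no bounds checking, so the only real
// protection is sizing the buffer and the copy from the same expression:
//   CreateVertexBuffer( sizeof(m_Vertices), ... );
//   CopyMemory( pVertexData, m_Vertices, sizeof(m_Vertices) );

// On a real array, sizeof reports the whole array, not the first element;
// it only shrinks once the array has decayed to a pointer.
size_t wholeArray   = sizeof(m_Vertices);                          // 24 * sizeof(CustomVertex)
size_t elementCount = sizeof(m_Vertices) / sizeof(m_Vertices[0]);  // 24

CustomVertex* p    = m_Vertices;   // decays to a pointer here
size_t pointerOnly = sizeof(p);    // size of the pointer, not of the 24 vertices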




6. index and vertex buffers

7. Indexed Vertex Buffer with normal

8. optimizing vertex/index buffer use


