Bit packing issues!
Archived a year ago
GLALIEツ
Member
Hello, I need help understanding and solving a problem: my CPU is correctly packing vertex position data into a 32-bit integer (I have verified this), but my GPU is unable to properly extract the bits and is therefore unable to draw any triangles.
# CPU side
```cpp
// This formats the vertex array object so that the GPU understands
// how to traverse the vertex buffer.
void ChunkMesh::Format() {
    glEnableVertexArrayAttrib(VertexArray(), 0);
    glVertexArrayVertexBuffer(VertexArray(), 0, VertexBuffer(), 0, sizeof(ChunkVertex));
    glVertexArrayAttribFormat(VertexArray(), 0, 1, GL_UNSIGNED_INT, GL_FALSE, 0);
    glVertexArrayAttribBinding(VertexArray(), 0, 0);
}

// textureIndex, corner, and face are accepted but not packed yet.
ChunkVertex::ChunkVertex(const std::uint8_t x, const std::uint8_t y, const std::uint8_t z,
                         const std::uint8_t textureIndex, const std::uint8_t corner,
                         const std::uint8_t face) {
    packed = 0; // clear the two upper bytes that are never written below
    // Byte-wise writes; this layout assumes a little-endian host.
    auto* ptr = reinterpret_cast<std::uint8_t*>(&packed);
    ptr[0] = x | (z << 4); // x in bits 0-3, z in bits 4-7
    ptr[1] = y;            // y in bits 8-15
}
```
# Vertex Shader
```glsl
#version 430 core

layout(location = 0) in uint packedData;

// Yes, this is set.
uniform mat4 projection;
uniform mat4 view;
// Yes, this is also set.
uniform vec2 chunkPosition;

// Do not worry about this.
const float brightness[] = float[6](
    // Up, Down, Left, Right, Front, Back
    1.0, 0.2, 0.5, 0.2, 0.75, 0.3
);

void main() {
    // The problem:
    uint x = packedData & 0xFu;
    uint z = (packedData & 0xF0u) >> 4;
    uint y = (packedData & 0xFF00u) >> 8;
    vec3 position = vec3(float(x), float(y), float(z));
    gl_Position = projection * view * vec4(position + vec3(chunkPosition.x, 0.0, chunkPosition.y), 1.0);
}
```
The fragment shader simply outputs red.
