https://www.youtube.com/watch?v=hCqBe9TS4z4
This is a not-for-production rendering engine that I work on in my free time to improve my programming skills. Here's how it works.
Basic tiled forward shading is implemented. Lights are assigned on the CPU by projecting each light's AABB onto the screen grid. A buffer (GL_TEXTURE_BUFFER; the WebGL build uses a regular GL_TEXTURE_2D) stores the screen grid cells: every cell contains an offset into the light index buffer, plus the counts of point lights/spot lights/projectors/decals encoded into the G and B channels. A second texture buffer contains the indices of the corresponding lights/decals/projectors. The light/projector data itself is stored in a UBO array.
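A minimal sketch of the per-cell encoding described above. The struct and field names are my assumptions, not the engine's actual format; the idea is just that each cell holds an offset into the light-index buffer plus four small counts packed into one integer:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical per-cell layout: an offset into the light-index buffer (the
// R channel in the engine) plus counts of points/spots/projectors/decals
// packed 8 bits each (the engine packs these into the G and B channels).
struct LightGridCell {
    uint32_t offset;
    uint32_t packedCounts;
};

inline uint32_t packCounts(uint8_t points, uint8_t spots,
                           uint8_t projectors, uint8_t decals) {
    return uint32_t(points) | (uint32_t(spots) << 8) |
           (uint32_t(projectors) << 16) | (uint32_t(decals) << 24);
}

inline uint8_t pointCount(uint32_t packed)     { return packed & 0xFF; }
inline uint8_t spotCount(uint32_t packed)      { return (packed >> 8) & 0xFF; }
inline uint8_t projectorCount(uint32_t packed) { return (packed >> 16) & 0xFF; }
inline uint8_t decalCount(uint32_t packed)     { return (packed >> 24) & 0xFF; }
```

The fragment shader would read the cell, then loop over `count` entries of the light-index buffer starting at `offset`.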
Regular shadow mapping is done by rendering depth maps into an atlas (all lights get the same tile size in the atlas). Point light shadows are rendered the same way as spot lights, in a single direction (no cubemaps, for simplicity).
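Because every light gets an equal-sized square tile, a light's slot index maps to its atlas UV rectangle with simple arithmetic. A sketch under that assumption (function and struct names are illustrative):

```cpp
#include <cassert>

struct AtlasRect { float u, v, scale; }; // uniform square tiles

// Hypothetical helper: map a light's slot index to a UV offset/scale in the
// shadow atlas. atlasSize and tileSize are in pixels; tiles fill row by row.
inline AtlasRect shadowAtlasRect(int slot, int atlasSize, int tileSize) {
    int tilesPerRow = atlasSize / tileSize;
    int x = slot % tilesPerRow;
    int y = slot / tilesPerRow;
    float scale = float(tileSize) / float(atlasSize);
    return { x * scale, y * scale, scale };
}
```

The shadow lookup in the shader then becomes `atlasUV = rect.uv + shadowUV * rect.scale`.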
The engine has a single atlas that contains all the decals (I use TexturePacker to generate it).
Decals and projectors are implemented the same way as lights. The only difference is that a decal substitutes the diffuse color (normals could also be handled easily) and stores a projection matrix and atlas offset in its UBO structure.
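A sketch of what such a UBO entry might look like. The field names and layout are assumptions for illustration, not the engine's actual struct; the point is that a decal entry is just a projection matrix plus its sprite's rectangle in the atlas:

```cpp
#include <cassert>

// Hypothetical decal/projector entry in the UBO array. All fields are
// 4-byte floats, so the struct is tightly packed with no padding.
struct DecalData {
    float projection[16]; // world -> decal projection matrix, column major
    float uvOffset[2];    // top-left of the decal sprite in the atlas
    float uvScale[2];     // sprite size relative to the atlas
};
static_assert(sizeof(DecalData) == 80, "expected tightly packed struct");
```

In the shader, a fragment's world position is transformed by `projection`; if it lands inside the unit volume, the diffuse color is fetched from the atlas at `uvOffset + projectedUV * uvScale`.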
Visible objects add themselves to the appropriate render queue (Opaque, Transparent, etc.).
At the start of the frame, all object and light data are uploaded into the corresponding UBO/texture buffers.
A depth-only pass goes first. Then depth maps for the visible light sources are rendered into the atlas.
Then the scene is drawn into the offscreen framebuffer.
Finally, a full-screen quad is drawn (post effects are still to be done).
The engine has high-level wrappers for OpenGL buffers that also help to work with size-limited uniform buffers (usually 64 KB). MultiVertexBufferObject automatically switches to the next VBO for writing data if its current buffer doesn't have enough space.
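A minimal sketch of the MultiVertexBufferObject bookkeeping. The actual GL buffer creation and upload calls are omitted (buffers are represented by plain indices here), and the interface is my assumption, not the engine's real API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: append-only writer that moves to a fresh buffer whenever the
// current one can't fit the next write.
class MultiVertexBufferObject {
public:
    explicit MultiVertexBufferObject(size_t bufferSize) : bufferSize_(bufferSize) {}

    struct Location { int bufferIndex; size_t offset; };

    Location append(size_t bytes) {
        if (buffers_.empty() || offset_ + bytes > bufferSize_) {
            // In the real engine this would allocate a new GL VBO.
            buffers_.push_back((int)buffers_.size());
            offset_ = 0;
        }
        Location loc{ (int)buffers_.size() - 1, offset_ };
        offset_ += bytes;
        return loc;
    }

private:
    size_t bufferSize_;
    size_t offset_ = 0;
    std::vector<int> buffers_;
};
```

The caller records the returned `Location` so it can bind the right VBO (and offset) at draw time.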
Most shaders are assembled from a root template file by conditionally including other templates based on the requested shader capabilities. Materials use a ShaderCapsSet to obtain a shader from the generator. A third-party template engine is used to parse the shader templates.
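The caps-driven assembly can be sketched like this. The cap names, include files, and string-concatenation approach are all assumptions (the engine uses a real template engine instead):

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical capability flags; the real ShaderCapsSet is richer.
enum class ShaderCap { Lighting, NormalMap, Skinning };

// Assemble a shader source by conditionally pulling in template fragments
// based on the requested capabilities.
std::string assembleShader(const std::set<ShaderCap>& caps) {
    std::string src = "#version 330 core\n";
    if (caps.count(ShaderCap::Skinning))  src += "#include \"skinning.glsl\"\n";
    if (caps.count(ShaderCap::Lighting))  src += "#include \"lighting.glsl\"\n";
    if (caps.count(ShaderCap::NormalMap)) src += "#include \"normalmap.glsl\"\n";
    src += "void main() { /* ... */ }\n";
    return src;
}
```

The generator would also cache compiled programs keyed by the caps set, so two materials requesting the same capabilities share one shader.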
Materials need to be improved. Right now there are a lot of material types in the engine, and every new ShaderCapsSet requires a separate class. A material specifies its ShaderCapsSet for the shader generator in its constructor and adds uniform bindings (e.g. textures). A better solution would be a single fat Material class that covers the majority of use cases.
Uniforms are only used for textures; everything else is stored in UBOs / texture buffers.
Matrices (with blending and interpolation) are calculated on the CPU, then uploaded into a UBO array of matrices. When rendering a skinned mesh object, the appropriate buffer range is bound with glBindBufferRange.
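One detail worth noting with glBindBufferRange: the offset must be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, which has to be queried at runtime (256 is a common value and is assumed below). A minimal sketch of the padding math when packing several objects' matrices into one UBO:

```cpp
#include <cassert>
#include <cstddef>

// Round an offset up to the UBO offset alignment so it is legal to pass to
// glBindBufferRange. The default of 256 is an assumed query result; the
// real value comes from glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, ...).
inline size_t alignOffset(size_t offset, size_t alignment = 256) {
    return (offset + alignment - 1) / alignment * alignment;
}
```

Each object's block of matrices then starts at `alignOffset(previousEnd)`, and some space between objects is wasted as padding.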
Geometry/skinning data and animations can be kept in separate files. There is an export script that converts Collada to a custom binary format.
The binary format structure:
- [uint32] size of the JSON header in bytes
- [string] JSON header that describes the included components (e.g. geometry, scene hierarchy, animations, skinning)
- binary data as described in the JSON header
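Reading that layout back is straightforward. A sketch that parses the format from an in-memory buffer, assuming a little-endian uint32 and skipping error handling (the struct and function names are mine):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Parsed view of the custom binary format: the JSON header as a string,
// plus a pointer/size for the trailing binary payload.
struct ParsedFile {
    std::string json;
    const uint8_t* binary;
    size_t binarySize;
};

ParsedFile parse(const std::vector<uint8_t>& file) {
    uint32_t headerSize = 0;
    std::memcpy(&headerSize, file.data(), sizeof(headerSize)); // little-endian assumed
    ParsedFile out;
    out.json.assign(reinterpret_cast<const char*>(file.data() + 4), headerSize);
    out.binary = file.data() + 4 + headerSize;
    out.binarySize = file.size() - 4 - headerSize;
    return out;
}
```

The JSON header would then be fed to a JSON parser, and each described component (geometry, animations, skinning) sliced out of the binary payload by the offsets it lists.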
The export script can skip geometry and keep Maya reference node IDs during export, which makes it possible to use Maya as a basic level editor. It's also possible to export a skinned mesh as mesh + joints only, or as joint animations only.
The engine doesn't contain platform-specific code. The currently supported platform is macOS, with partial support for WebGL2 via Emscripten (no shadow support in the Emscripten build). Windows support would be easy to add; it only needs a CMake setup.
TODO:
- scene spatial partitioning
- improve light tiles assignment (with compute shaders or at least split into multiple CPU threads)
- make sure world transforms are calculated only once (probably mark children dirty on change)
- fix mesh bounds scaling; calculate bounds after update() calls
- use 4x3 matrices when possible
- Single material class
- sort render queues
- OpenGL state cache
- Bloom