On the other hand, if the correct color processing is applied to the clear color, the result is much more satisfying:
The problem with the current shader paradigm is that shader code has entirely different semantics than CPU code and is kept in a separate code base. Executing shader functions on the CPU side is at best difficult, and is typically entirely unsupported. The traditional workaround to this issue is to rewrite the necessary functions in the language being used on the CPU. This results in code duplication, however, and complicates maintaining and evolving the rendering system.
PyStream avoids this problem by using the same language on the CPU and the GPU, and unifying the codebases. With a few reasonable restrictions, most functions can be executed on either the CPU or the GPU.
The fragment shader that generated the images in this post is as follows:
def shadeFragment(self, context, pos, normal, texCoord):
    surface = self.material.surface(pos, normal.normalize())

    # Texture
    surface.diffuseColor *= self.sampler.texture(texCoord).xyz

    # Accumulate lighting
    self.ambient.accumulate(surface, self.worldToCamera)
    self.light.accumulate(surface, self.worldToCamera)

    mainColor = surface.litColor()
    mainColor = self.fog.apply(mainColor, surface.p)
    mainColor = self.processOutputColor(mainColor)
    mainColor = vec4(mainColor, 1.0)

    context.colors = (mainColor,)

def processOutputColor(self, color):
    return rgb2srgb(tonemap(color))
The clear color is generated on the CPU by calling the shader's processOutputColor method directly:
clearColor = shader.processOutputColor(shader.fog.color)
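To make this concrete, here is a minimal sketch of what processOutputColor's helpers might look like as plain Python, runnable directly on the CPU. The post does not show PyStream's actual tonemap or rgb2srgb implementations, so this assumes a Reinhard-style tonemap and the standard linear-to-sRGB transfer function, and uses a plain tuple in place of PyStream's vector type:

```python
def tonemap(color):
    # Hypothetical Reinhard tonemap: maps [0, inf) luminance into [0, 1).
    # PyStream's actual operator may differ.
    return tuple(c / (c + 1.0) for c in color)

def rgb2srgb(color):
    # Standard linear-to-sRGB transfer function (IEC 61966-2-1).
    def channel(c):
        if c <= 0.0031308:
            return 12.92 * c
        return 1.055 * c ** (1.0 / 2.4) - 0.055
    return tuple(channel(c) for c in color)

# Same code path as the shader's processOutputColor, executed on the CPU
# to derive the clear color from a hypothetical fog color.
fog_color = (0.5, 0.6, 0.7)
clear_color = rgb2srgb(tonemap(fog_color))
print(clear_color)
```

Because the same functions compile to GPU code inside shadeFragment and run as ordinary Python here, the clear color and the fogged fragments go through identical color processing, which is exactly why the cleared background matches the rendered scene.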