v3d: handle writes to gl_Layer from geometry shaders
When geometry shaders write a value to gl_Layer that doesn't correspond to an existing layer in the target framebuffer, the rendering behavior is undefined according to the spec. However, there are CTS tests that trigger this scenario on purpose, probably to ensure that nothing terrible happens.

For V3D, this situation is problematic because the binner uses the layer index to select the offset to write into the tile state data, and we only allocate tile state for MAX2(num_layers, 1), so we want to make sure we don't produce values that would lead to out-of-bounds writes. The simulator has an assert to catch this; although we haven't observed issues on actual hardware, it is probably best to play it safe.

Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
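The fix caps the layer index written by the shader against the size of the tile state allocation. Below is a minimal, standalone sketch of that clamp logic; the function and variable names (clamp_gl_layer, fb_layers) are illustrative rather than the driver's actual identifiers, and in the real driver the cap is emitted by the compiler against the new QUNIFORM_FB_LAYERS uniform introduced in the diff that follows.

#include <stdint.h>
#include <stdio.h>

#define MAX2(a, b) ((a) > (b) ? (a) : (b))
#define MIN2(a, b) ((a) < (b) ? (a) : (b))

/* Tile state is allocated for MAX2(num_layers, 1) layers, so the last
 * safe index is MAX2(num_layers, 1) - 1. Clamping gl_Layer to that
 * range keeps the binner's tile state writes in bounds. */
static uint32_t
clamp_gl_layer(uint32_t gl_layer, uint32_t fb_layers)
{
        return MIN2(gl_layer, MAX2(fb_layers, 1) - 1);
}

int
main(void)
{
        /* Layer 7 into a 4-layer framebuffer gets capped to layer 3. */
        printf("%u\n", clamp_gl_layer(7, 4));
        /* A non-layered framebuffer (0 layers) still has one layer of
         * tile state, so everything maps to layer 0. */
        printf("%u\n", clamp_gl_layer(5, 0));
        return 0;
}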
@@ -279,6 +279,14 @@ enum quniform_contents {
          * L2T cache will effectively be the shared memory area.
          */
         QUNIFORM_SHARED_OFFSET,
+
+        /**
+         * Returns the number of layers in the framebuffer.
+         *
+         * This is used to cap gl_Layer in geometry shaders to avoid
+         * out-of-bounds accesses into the tile state during binning.
+         */
+        QUNIFORM_FB_LAYERS,
 };
 
 static inline uint32_t v3d_unit_data_create(uint32_t unit, uint32_t value)
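For completeness: this hunk only declares the enum entry. At draw time, the driver's uniform upload code still has to resolve QUNIFORM_FB_LAYERS to the bound framebuffer's actual layer count. That side is not shown in the commit excerpt, so the sketch below is a hypothetical illustration of the pattern, with stand-in types and names (quniform_contents_sketch, job_sketch) rather than the real v3d structures.

#include <stdint.h>

/* Minimal stand-ins so the sketch compiles on its own; the real enum
 * and job state live in the v3d compiler and driver headers. */
enum quniform_contents_sketch {
        QUNIFORM_SHARED_OFFSET_SKETCH,
        QUNIFORM_FB_LAYERS_SKETCH,
};

struct job_sketch {
        uint32_t num_layers; /* layers in the bound framebuffer */
};

/* Resolve one uniform-stream entry to the 32-bit value written into
 * the shader's uniform stream at draw time. */
static uint32_t
resolve_quniform(enum quniform_contents_sketch contents,
                 const struct job_sketch *job)
{
        switch (contents) {
        case QUNIFORM_FB_LAYERS_SKETCH:
                /* The geometry shader clamps gl_Layer against this. */
                return job->num_layers;
        default:
                return 0; /* other cases omitted from the sketch */
        }
}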