intel/compiler: Lower SSBO and shared loads/stores in NIR
We have a bunch of code to do this in the back-end compiler, but it's fairly specific to typed surface messages and the way we emit them. This breaks it out into NIR, where it's easier to do things a bit more generally. It also means we can easily share the code between the vec4 and FS back-ends if we wish.

Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
@@ -714,6 +714,8 @@ brw_preprocess_nir(const struct brw_compiler *compiler, nir_shader *nir)
       brw_nir_no_indirect_mask(compiler, nir->info.stage);
    OPT(nir_lower_indirect_derefs, indirect_mask);
 
+   OPT(brw_nir_lower_mem_access_bit_sizes);
+
    /* Get rid of split copies */
    nir = brw_nir_optimize(nir, compiler, is_scalar, false);