Just checking for 2 jumps is not enough to be sure we can do a
complex loop unroll; we also need to make sure we have found
2 loop terminators.
Without this we were attempting to unroll a loop where the second
jump was nested inside multiple ifs, which loop analysis is unable
to detect as a terminator. We ended up splicing out the first
terminator but failed to actually unroll the loop, which resulted
in the creation of a possible infinite loop.
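The following is a minimal sketch of the intended condition; the
struct and field names are stand-ins for the real loop-analysis data,
not the actual GLSL IR types:

  #include <stdbool.h>

  /* Stand-in for the data gathered by loop analysis. */
  struct loop_info_sketch {
     unsigned num_jumps;        /* jumps found while scanning the body */
     unsigned num_terminators;  /* exits loop analysis could prove     */
  };

  static bool
  can_complex_unroll_sketch(const struct loop_info_sketch *info)
  {
     /* Counting jumps alone is not enough: a jump buried inside nested
      * ifs may not be recognised as a terminator, and splicing out the
      * first terminator without unrolling can leave an infinite loop. */
     return info->num_jumps == 2 && info->num_terminators == 2;
  }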
Fixes: 646621c66d "glsl: make loop unrolling more like the nir unrolling path"
Tested-by: Gert Wollny <gw.fossdev@gmail.com>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105670
Not used in GL, but 8 and 16 component vectors exist in OpenCL.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Refactor things so there isn't so much typing involved to add new
things.
Also drop a pointless conditional: out-of-bounds rows or columns
already return error_type in all paths, so we might as well drop the
check rather than make it more convoluted in the next patch by adding
the vec8/vec16 case.
Signed-off-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
I don't think it actually fixes anything, but it's nice not to have
valgrind warnings.
The warning manifests itself when running the piglit test
glsl-fs-raytrace-bug27060:
==2058== Uninitialised byte(s) found during client check request
==2058== at 0xC5BB040: blob_write_bytes (blob.c:152)
==2058== by 0xC595359: write_variable (nir_serialize.c:144)
==2058== by 0xC59560C: write_var_list (nir_serialize.c:192)
==2058== by 0xC5982E4: nir_serialize (nir_serialize.c:1124)
==2058== by 0xC0B729D: brw_program_serialize_nir (brw_program.c:835)
==2058== by 0xC0AB2D6: brw_link_shader (brw_link.cpp:358)
==2058== by 0xC32FE3F: _mesa_glsl_link_shader (ir_to_mesa.cpp:3169)
==2058== by 0xC36C7ED: create_new_program(gl_context*, state_key*) (ff_fragment_shader.cpp:1127)
==2058== by 0xC36C8A6: _mesa_get_fixed_func_fragment_program (ff_fragment_shader.cpp:1157)
==2058== by 0xC1B50AF: update_program (state.c:134)
==2058== by 0xC1B56DF: _mesa_update_state_locked (state.c:352)
==2058== by 0xC1B579A: _mesa_update_state (state.c:386)
==2058== Address 0xf1eab8a is 58 bytes inside a block of size 96 alloc'd
==2058== at 0x4C2CB8F: malloc (vg_replace_malloc.c:299)
==2058== by 0xC0FD306: ralloc_size (ralloc.c:121)
==2058== by 0xC0FD5B1: ralloc_array_size (ralloc.c:208)
==2058== by 0xC452B3B: (anonymous namespace)::nir_visitor::visit(ir_variable*) (glsl_to_nir.cpp:448)
==2058== by 0xC45CE8B: ir_variable::accept(ir_visitor*) (ir.h:428)
==2058== by 0xC46D0B5: visit_exec_list(exec_list*, ir_visitor*) (ir.cpp:1898)
==2058== by 0xC451D2F: glsl_to_nir (glsl_to_nir.cpp:162)
==2058== by 0xC0B5223: brw_create_nir (brw_program.c:79)
==2058== by 0xC0AAB67: brw_link_shader (brw_link.cpp:257)
==2058== by 0xC32FE3F: _mesa_glsl_link_shader (ir_to_mesa.cpp:3169)
==2058== by 0xC36C7ED: create_new_program(gl_context*, state_key*) (ff_fragment_shader.cpp:1127)
==2058== by 0xC36C8A6: _mesa_get_fixed_func_fragment_program (ff_fragment_shader.cpp:1157)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
Generated with
git grep -l nir_intrinsic_image | xargs \
sed -i 's/nir_intrinsic_image/nir_intrinsic_image_var/g'
and some manual fixing in nir_intrinsics.h
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The implementation is inspired by
lower_instructions_visitor::dfrexp_sig_to_arith.
This has been tested against the arb_gpu_shader_fp64/fs-frexp-dvec4
test using the ARB_gl_spirv branch.
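A rough sketch of the arithmetic the lowering performs, written here
as plain C on a 32-bit float for brevity (the pass itself, like
dfrexp_sig_to_arith, works on the 64-bit representation); denormals
and non-finite inputs are ignored:

  #include <stdint.h>
  #include <string.h>

  static float
  frexp_sketch(float x, int *exp_out)
  {
     uint32_t bits;
     memcpy(&bits, &x, sizeof(bits));

     const uint32_t exp_mask = 0x7f800000u;
     int is_not_zero = (bits & 0x7fffffffu) != 0;

     /* Unbias so the significand lands in [0.5, 1); zero stays 0. */
     *exp_out = is_not_zero ? (int)((bits & exp_mask) >> 23) - 126 : 0;

     /* Keep the sign and mantissa, replace the exponent field with
      * the one for 0.5. */
     uint32_t sig_bits = (bits & ~exp_mask) |
                         (is_not_zero ? 0x3f000000u : 0u);
     float sig;
     memcpy(&sig, &sig_bits, sizeof(sig));
     return sig;
  }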
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Shader-db runtime change, average of five runs:
Before: 125.77 seconds (+/- 0.09%)
After:  124.48 seconds (+/- 0.07%)
Tested-by: Dieter Nützel <Dieter at nuetzel-hh.de>
Reviewed-by: Eric Anholt <eric at anholt.net>
Make a simple worklist by basically just wrapping u_vector.
This is intended to be used in nir_opt_dce to reduce the number of
calls to ralloc, as we are currently spamming ralloc quite badly. It
should also give better cache locality and much lower memory usage.
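Below is a rough, self-contained model of the idea; it uses a plain
realloc'd array rather than the real u_vector interface, and the
names are made up for illustration:

  #include <stdlib.h>

  struct worklist_sketch {
     void **items;
     unsigned head, count, cap;
  };

  /* Push one item; the backing store is a single growable buffer, so
   * there is no per-item allocation. */
  static void
  worklist_push_sketch(struct worklist_sketch *w, void *item)
  {
     if (w->head + w->count == w->cap) {
        w->cap = w->cap ? w->cap * 2 : 64;
        w->items = realloc(w->items, w->cap * sizeof(void *));
     }
     w->items[w->head + w->count++] = item;
  }

  /* Pop from the front; returns NULL when the worklist is empty. */
  static void *
  worklist_pop_sketch(struct worklist_sketch *w)
  {
     if (!w->count)
        return NULL;
     w->count--;
     return w->items[w->head++];
  }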
Tested-by: Dieter Nützel <Dieter at nuetzel-hh.de>
Reviewed-by: Eric Anholt <eric at anholt.net>
Generalize the code for removing dead loops so it also removes dead
if nodes. The conditions are the same in both cases: the node (and
its children) must not have side effects AND the nodes after it must
not use the values it produces.
The only difference is in evaluating side effects: loops consider
only return jumps as a side effect, since those can stop execution of
the nodes that follow; 'if' nodes outside loops should consider all
kinds of jumps (return, break, continue), since all of them can cause
execution of the nodes that follow to be skipped.
After this patch, empty ifs (those whose then and else blocks are
both empty) will be removed by nir_opt_dead_cf.
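A hypothetical sketch of the shared test, using stub fields rather
than the real nir_cf_node data:

  #include <stdbool.h>

  struct cf_node_sketch {
     bool has_stores_or_other_side_effects;
     bool contains_return;
     bool contains_break_or_continue;
     bool defs_used_after;   /* any value used after the node */
  };

  static bool
  cf_node_is_dead_sketch(const struct cf_node_sketch *n, bool is_loop)
  {
     if (n->defs_used_after || n->has_stores_or_other_side_effects)
        return false;
     if (n->contains_return)
        return false;
     /* A break/continue inside a loop only affects that loop, but a
      * break/continue inside an if can skip the nodes after the if. */
     if (!is_loop && n->contains_break_or_continue)
        return false;
     return true;
  }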
It caused no change to shader-db, in part because the removal of empty
ifs is currently covered by nir_opt_peephole_select.
v2: Improve the identification of cases where break/continue can cause
side-effects. (Jason)
v3: Move code comment changes to a different patch. (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
i965 and gallium handle the atomic buffer index differently. It was
just by luck that the single piglit test for this was passing.
For gallium we use the atomic binding so that we match the handling
in st_bind_atomics().
On radeonsi this fixes the CTS test:
KHR-GL43.shader_storage_buffer_object.advanced-write-fragment
It also fixes tressfx hair rendering in Tomb Raider.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
This will only ever be used by gallium drivers, so it probably
doesn't belong in the nir toolkit. Also, we want to pass it some
non-NIR things in the following patch.
To avoid regressions we wrap the lowering calls that have been moved
to st_glsl_to_nir with a quick hack so that they are only called for
radeonsi; we will replace the hack with a check for uniform packing
in a following patch.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Currently everything is padded to 4 components. Making the list
more flexible will allow us to do uniform packing.
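The following is an illustrative sketch (a made-up helper, not the
Mesa API) of how a packed layout differs from one where every
parameter is padded and aligned to a vec4 slot:

  #include <stdbool.h>

  /* Returns the component offset of the next parameter given the
   * previous parameter's offset and size in components. */
  static unsigned
  next_param_offset_sketch(unsigned prev_offset, unsigned prev_size,
                           bool pad_and_align)
  {
     unsigned offset = prev_offset + prev_size;
     if (pad_and_align)
        offset = (offset + 3) & ~3u;   /* round up to the next vec4 */
     return offset;
  }

  /* Three consecutive floats: padded -> offsets 0, 4, 8 (12 slots),
   * packed -> offsets 0, 1, 2 (3 slots). */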
V2 (suggestions from Nicolai):
- always pass true for padd_and_align in existing calls to
  _mesa_add_parameter()
- fix bindless param value offsets
- remove leftover WIP logic from the pad and align code
- zero out param value padding
- whitespace fix
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
When the shader cache is used, this sha1 can be generated. In fact,
the shader cache uses this sha1 to look up the serialized GL shader
program.
If a GL shader program is restored with ProgramBinary, the shaders
are not available and therefore the correct sha1 cannot be generated.
If this sha1 is restored instead, then we can use the shader cache to
restore the binary programs to the program that was loaded with
ProgramBinary.
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Tapani Pälli <tapani.palli@intel.com>
This pass moves UBO load operations to just before their first use,
loosely based on nir_opt_move_comparisons.
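A rough, self-contained model of the transform follows; the types and
fields are stand-ins for the real NIR instruction list, and the
sketch ignores the dominance/control-flow checks a real pass needs:

  #include <stdbool.h>
  #include <stddef.h>

  struct instr_sketch {
     struct instr_sketch *prev, *next;
     bool is_load_ubo;
     struct instr_sketch *first_use;  /* first instruction reading the result */
  };

  static void
  unlink_sketch(struct instr_sketch *i)
  {
     if (i->prev) i->prev->next = i->next;
     if (i->next) i->next->prev = i->prev;
  }

  static void
  insert_before_sketch(struct instr_sketch *i, struct instr_sketch *pos)
  {
     i->prev = pos->prev;
     i->next = pos;
     if (pos->prev) pos->prev->next = i;
     pos->prev = i;
  }

  /* Move each UBO load so it sits immediately before its first use,
   * shrinking its live range. */
  static bool
  move_load_ubo_sketch(struct instr_sketch *head)
  {
     bool progress = false;
     for (struct instr_sketch *i = head; i != NULL; ) {
        struct instr_sketch *next = i->next;
        if (i->is_load_ubo && i->first_use && i->first_use != i->next) {
           unlink_sketch(i);
           insert_before_sketch(i, i->first_use);
           progress = true;
        }
        i = next;
     }
     return progress;
  }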
Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
So now, during spirv_to_nir, it uses the capability instead of the
extension. Note that what we are really doing here is treating
SPV_AMD_gcn_shader like other supported extensions; SPV_AMD_gcn_shader
is not the first SPV extension supported. For example, the
draw_parameters capability implies that the
SPV_KHR_shader_draw_parameters extension is supported.
This could be seen as counter-intuitive, and it would arguably be
easier to define which extensions are supported and base our checks
on that, but we need to take into account that some capabilities are
optional in core, and others come from new extensions.
This commit also makes the implementation of ARB_spirv_extensions
easier.
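A hypothetical sketch of the capability-based check; the enum and
struct are stand-ins, not the real SpvCapability values or the vtn
capability struct:

  #include <stdbool.h>

  enum cap_sketch { CAP_DRAW_PARAMETERS_SKETCH, CAP_GCN_SHADER_SKETCH };

  struct supported_caps_sketch {
     bool draw_parameters;  /* implies SPV_KHR_shader_draw_parameters */
     bool gcn_shader;       /* implies SPV_AMD_gcn_shader */
  };

  static bool
  capability_supported_sketch(const struct supported_caps_sketch *caps,
                              enum cap_sketch cap)
  {
     switch (cap) {
     case CAP_DRAW_PARAMETERS_SKETCH:
        return caps->draw_parameters;
     case CAP_GCN_SHADER_SKETCH:
        return caps->gcn_shader;
     default:
        return false;
     }
  }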
v2: AMD_gcn_shader capability renamed to gcn_shader (Daniel Schürmann)
Reviewed-by: Daniel Schürmann <daniel.schuermann@campus.tu-berlin.de>
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
We no longer need the source and destination data types, just their
bit sizes.
v2:
- Use glsl_get_bit_size() instead (Jason).
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
There are some SPIR-V opcodes (like UConvert and SConvert) that have
expectations about the output that don't depend on the operands' data
type. Generalize the solution for all of them.
Signed-off-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Walking the whole hash table and re-inserting entries by hashing them
first is just a really bad idea. We can simply memcpy the whole
thing.
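An illustrative, self-contained sketch of the cloning approach (stub
struct, not the real Mesa hash table API):

  #include <stdlib.h>
  #include <string.h>

  struct table_sketch {
     struct entry_sketch { const void *key; void *data; } *entries;
     unsigned size;   /* number of slots in the entry array */
  };

  static struct table_sketch *
  table_clone_sketch(const struct table_sketch *src)
  {
     struct table_sketch *dst = malloc(sizeof(*dst));
     *dst = *src;
     dst->entries = malloc(src->size * sizeof(*src->entries));
     /* Same hashes, same slots: a straight copy preserves the layout,
      * so no per-entry re-hash and re-insert is needed. */
     memcpy(dst->entries, src->entries, src->size * sizeof(*src->entries));
     return dst;
  }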
V2: Remove leftover creation of acp in two places
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
OpenCL kernels also have int8/uint8.
v2: remove changes in nir_search as Jason posted a patch for that
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Karol Herbst <kherbst@redhat.com>
The code that handles multiplication of a matrix by a scalar tries to
pick either imul or fmul depending on whether the matrix is float or
integer. However, it was doing this by checking whether the base type
is float, which made it choose the int path for doubles (and
presumably float16s).
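A hypothetical sketch of the corrected check (stub enum, not the real
glsl_base_type values):

  #include <stdbool.h>

  enum base_type_sketch {
     SKETCH_FLOAT, SKETCH_DOUBLE, SKETCH_FLOAT16, SKETCH_INT, SKETCH_UINT
  };

  static bool
  use_fmul_for_scalar_mat_mul_sketch(enum base_type_sketch t)
  {
     /* The old check was t == SKETCH_FLOAT, which sent doubles (and
      * float16s) down the integer-multiply path. */
     return t == SKETCH_FLOAT || t == SKETCH_DOUBLE || t == SKETCH_FLOAT16;
  }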
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
This is based heavily on 97f10934ed, "ac/nir: Add vote_ieq/vote_feq
lowering pass." from Bas Nieuwenhuizen. This version is a bit more
general since it's in common code. It also properly handles NaN due to
not flipping the comparison for floats.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
The SPIR-V extension wants us to be able to do an AllEqual on any vector
or scalar type. This has two implications:
1) We need to be able to handle vectors, so we switch the vote_eq
intrinsics to be vectorized intrinsics.
2) We need to handle floats, which behave differently from integers
with respect to +-0, NaN, etc., so we need two variants (a rough
model is sketched below).
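The sketch below is an illustrative scalar model, not the GPU
intrinsics: AllEqual on a vector is the AND of per-component
"every invocation matches the first invocation" tests, and the float
variant must use float equality, under which NaN never compares
equal:

  #include <stdbool.h>

  /* lanes holds one value per invocation for a single component. */
  static bool
  all_equal_feq_sketch(const float *lanes, unsigned num_lanes)
  {
     for (unsigned i = 1; i < num_lanes; i++) {
        /* feq semantics: NaN compares unequal to everything,
         * including itself. */
        if (!(lanes[i] == lanes[0]))
           return false;
     }
     return true;
  }

  /* Vectorized vote: AND the per-component results.  Values are
   * stored component-major here purely for the sketch. */
  static bool
  all_equal_vec_sketch(const float *lanes, unsigned num_lanes,
                       unsigned num_components)
  {
     bool result = true;
     for (unsigned c = 0; c < num_components; c++)
        result = result && all_equal_feq_sketch(lanes + c * num_lanes,
                                                num_lanes);
     return result;
  }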
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>