Some symbols gathered in the symbol table during parsing are needed
later by the compile and link stages, so they are carried along through
the process. Currently, only functions and non-temporary variables are
copied between symbol tables. However, the built-in gl_PerVertex
interface blocks are also needed during the linking stage (the last
step) to match re-declared blocks across shader stages.
This patch adds a new utility function that factors out the current
code that copies functions and variables between two symbol tables and,
in addition, copies explicitly declared gl_PerVertex blocks too.
The function will be used in a subsequent patch.
v2 (Neil Roberts):
Allow the src symbol table to be NULL and explicitly copy the
gl_PerVertex symbols in case they are not referenced in the exec_list.
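A minimal sketch of what the helper can look like (function name and
exact structure are illustrative, not the final code):
   static void
   copy_symbols(struct exec_list *shader_ir, glsl_symbol_table *src,
                glsl_symbol_table *dest)
   {
      foreach_in_list(ir_instruction, ir, shader_ir) {
         switch (ir->ir_type) {
         case ir_type_function:
            dest->add_function((ir_function *) ir);
            break;
         case ir_type_variable: {
            ir_variable *const var = (ir_variable *) ir;
            if (var->data.mode != ir_var_temporary)
               dest->add_variable(var);
            break;
         }
         default:
            break;
         }
      }
      if (src != NULL) {
         /* gl_PerVertex blocks may not be referenced from the
          * exec_list, so copy them explicitly from the source table.
          */
         const glsl_type *per_vertex =
            src->get_interface("gl_PerVertex", ir_var_shader_out);
         if (per_vertex != NULL)
            dest->add_interface("gl_PerVertex", per_vertex,
                                ir_var_shader_out);
      }
   }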
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Eduardo Lima Mitev <elima@igalia.com>
Signed-off-by: Neil Roberts <nroberts@igalia.com>
NIR does not have these instructions. TGSI and Mesa IR both implement
them using < and >=, respectively. Removing them deletes a bunch of
code and means I don't have to add code to the SPIR-V generator for
them.
v2: Rebase on 2+ years of change... and fix a major bug added in the
rebase.
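The rewrite is a mechanical operand swap (a > b becomes b < a, and
a <= b becomes b >= a). A hedged sketch of one direction, with a
hypothetical helper name:
   ir_expression *
   lower_greater(void *mem_ctx, ir_rvalue *a, ir_rvalue *b)
   {
      /* a > b  ==>  b < a */
      return new(mem_ctx) ir_expression(ir_binop_less, b, a);
   }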
text data bss dec hex filename
8255291 268856 294072 8818219 868e2b 32-bit i965_dri.so before
8254235 268856 294072 8817163 868a0b 32-bit i965_dri.so after
7815339 345592 420592 8581523 82f193 64-bit i965_dri.so before
7813995 345560 420592 8580147 82ec33 64-bit i965_dri.so after
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Without the lexer changes, tests/glslparsertest/glsl2/tex_rect-02.frag
fails. Before this change, the parser would determine that
sampler2DRect is not a valid type because the call to
state->symbols->get_type() in ast_type_specifier::glsl_type() would
return NULL. Since ast_type_specifier::glsl_type() is now going to
return the glsl_type pointer that it received from the lexer, it doesn't
have an opportunity to generate an error.
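A hedged sketch of the shape of the lexer change (the
type_is_available() check and the token name are illustrative):
   /* Only hand a type token to the parser when the type actually
    * exists in the current version/extension mix; otherwise fall
    * back to the identifier path so the usual "undeclared
    * identifier" error is generated.
    */
   const glsl_type *t = state->symbols->get_type(yytext);
   if (t != NULL && type_is_available(state, t)) {
      yylval->type = t;
      return BASIC_TYPE_TOK;
   }
   return classify_identifier(state, yytext);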
text data bss dec hex filename
8255243 268856 294072 8818171 868dfb 32-bit i965_dri.so before
8255291 268856 294072 8818219 868e2b 32-bit i965_dri.so after
7815195 345592 420592 8581379 82f103 64-bit i965_dri.so before
7815339 345592 420592 8581523 82f193 64-bit i965_dri.so after
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
This allows us to use a single token for every built-in type except void.
text data bss dec hex filename
8275163 269336 294072 8838571 86ddab 32-bit i965_dri.so before
8255243 268856 294072 8818171 868dfb 32-bit i965_dri.so after
7836963 346552 420592 8604107 8349cb 64-bit i965_dri.so before
7815195 345592 420592 8581379 82f103 64-bit i965_dri.so after
Yes, the 64-bit binary shrinks by 21k.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Passing YYSTYPE into classify_identifier enables a later patch.
text data bss dec hex filename
8310339 269336 294072 8873747 876713 32-bit i965_dri.so before
8275163 269336 294072 8838571 86ddab 32-bit i965_dri.so after
7845579 346552 420592 8612723 836b73 64-bit i965_dri.so before
7836963 346552 420592 8604107 8349cb 64-bit i965_dri.so after
Yes, the 64-bit binary shrinks by 8k.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
There are two callers of the constructor, and they are right next to
each other. Move the "#anon_struct" name handling to the parser so that
the conditional can be removed.
I've also deleted part of the comment (about the memory leak) because I
don't think it's quite accurate or relevant.
text data bss dec hex filename
8310399 269336 294072 8873807 87674f 32-bit i965_dri.so before
8310339 269336 294072 8873747 876713 32-bit i965_dri.so after
7845611 346552 420592 8612755 836b93 64-bit i965_dri.so before
7845579 346552 420592 8612723 836b73 64-bit i965_dri.so after
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
From the OpenGL 4.6 spec, section 4.4.1 Input Layout Qualifiers, Page 68,
(Location aliasing):
"Further, when location aliasing, the aliases sharing the location
must have the same underlying numerical type (floating-point or
integer)."
The current implementation is too strict, since it checks that the
base types are an exact match instead.
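A hedged sketch of the relaxed check (helper name hypothetical):
   static bool
   numerical_types_match(const glsl_type *a, const glsl_type *b)
   {
      /* Aliases only have to share a numerical class, not a base
       * type: float/double on one side, everything integer-based on
       * the other.
       */
      const bool a_float = a->base_type == GLSL_TYPE_FLOAT ||
                           a->base_type == GLSL_TYPE_DOUBLE;
      const bool b_float = b->base_type == GLSL_TYPE_FLOAT ||
                           b->base_type == GLSL_TYPE_DOUBLE;
      return a_float == b_float;
   }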
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
v2:
- we only need to validate inputs to the first stage and outputs
from the last stage; everything else has already been validated
during cross_validate_outputs_to_inputs (Timothy).
- Use MAX_VARYING instead of MAX_VARYINGS_INCL_PATCH (Ilia)
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
For non-SSO programs, we only need to validate outputs, since
the cross validation of outputs to inputs will ensure that we
produce linker errors for invalid inputs too.
However, for the SSO path there is no output to input validation,
so we need to validate inputs explicitly. Generalize the function
so it can handle this as well.
Also, notice that vertex shader inputs and fragment shader outputs
are already validated in assign_attribute_or_color_locations()
for both SSO and non-SSO paths, so we should not try to validate
that here again (in fact, the function would require explicit
paths to handle these two cases properly).
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Currently, we only validate explicit locations for non-SSO programs.
This creates a helper that we can call from both SSO and non-SSO paths
directly, so we can reuse all the logic behind this.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
From ARB_enhanced_layouts:
"[...]when location aliasing, the aliases sharing the location
must have the same underlying numerical type (floating-point or
integer) and the same auxiliary storage and
interpolation qualification.[...]"
Add code to the linker to validate that aliased locations do
have the same aux storage.
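A hedged sketch of the new check (the data.* fields exist on
ir_variable; the surrounding code is illustrative):
   if (var->data.centroid != prev->data.centroid ||
       var->data.sample != prev->data.sample ||
       var->data.patch != prev->data.patch) {
      linker_error(prog, "location aliasing requires matching "
                   "auxiliary storage qualifiers\n");
   }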
Fixes:
KHR-GL45.enhanced_layouts.varying_location_aliasing_with_mixed_auxiliary_storage
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
From ARB_enhanced_layouts:
"[...]when location aliasing, the aliases sharing the location
must have the same underlying numerical type (floating-point or
integer) and the same auxiliary storage and
interpolation qualification.[...]"
Add code to the linker to validate that aliased locations do
have the same interpolation.
Fixes:
KHR-GL45.enhanced_layouts.varying_location_aliasing_with_mixed_interpolation
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
The existing code was checking the whole interface variable rather
than its members, which is not what we want: we want to check
aliasing for each member in the interface variable.
Surprisingly, there are piglit tests that verify this and were
passing due to a bug in the existing code: when computing
the last component used by an interface variable we would use
the 'vector' path and multiply by vector_elements, which is 0 for
interface variables. This made the loop that checks for aliasing
a no-op, so the interface variable was never added to the list of
outputs, and we would then fail to link when we did not see a matching
output for the same input in the next stage. Since the tests expect a
linker error to happen, they would pass, but not for the right
reason.
Unfortunately, the current implementation uses ir_variable instances
to keep track of explicit locations. Since we don't have
ir_variable instances for individual interface members, we need
to have a custom struct with the data we need. This struct has
the ir_variable (which for interface members is the whole
interface variable), plus the data that we need to validate for
each aliased location; for now this is only the base type, which for
interface members we take from the appropriate field inside
the interface variable.
Later patches will expand this custom struct so we can also check
other requirements for location aliasing, specifically that
we have matching interpolation and auxiliary storage, which, once
again, we will take from the appropriate fields of the
interface variable.
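A hedged sketch of such a record (the real field list may differ):
   struct explicit_location_info {
      ir_variable *var;       /* for block members: the whole
                               * interface variable */
      unsigned base_type;     /* numerical type of this member */
      unsigned interpolation; /* added by the later patches */
      bool centroid;
      bool sample;
      bool patch;
   };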
v2:
- Use MAX_VARYING instead of MAX_VARYINGS_INCL_PATCH (Ilia)
Fixes:
KHR-GL45.enhanced_layouts.varying_block_automatic_member_locations
Fixes (these were passing before but for incorrect reasons):
tests/spec/arb_enhanced_layouts/linker/block-member-locations/named-block-member-location-overlap.shader_test
tests/spec/arb_enhanced_layouts/linker/block-member-locations/named-block-member-mixed-order-overlap.shader_test
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
Move the checks for explicit locations to a separate function. We
will use this in a follow-up patch to validate locations for interface
variables, where we need to validate each interface member rather than
the interface variable itself.
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
We were assuming that if an input has an invalid explicit location it
would fail to link because it would not find the corresponding output.
However, since we look for the matching output by indexing the
explicit_locations array with the input location, we still need to
ensure that we don't index out of bounds.
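A hedged sketch of the added guard (names illustrative):
   /* The input's explicit location is about to be used as an index
    * into explicit_locations, so reject out-of-range locations
    * instead of reading past the end of the array.
    */
   const unsigned idx = var->data.location - VARYING_SLOT_VAR0;
   if (idx >= MAX_VARYING) {
      linker_error(prog, "invalid explicit location %u on shader "
                   "input\n", idx);
      return;
   }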
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Ilia Mirkin <imirkin@alum.mit.edu>
There are two issues with the current implementation. First, it relies
on the layout(local_size_*) declaration happening in the same shader as
the main function; second, it doesn't work for variable group sizes.
In both cases, the simplest fix is to move the setup of these derived
values to a later time, similar to how the gl_VertexID workarounds are
done. There already exist system values defined for both of the derived
values, so we use them unconditionally, and lower them after linking is
performed.
While we're at it, we move to using gl_LocalGroupSizeARB instead of
gl_WorkGroupSize for variable group sizes.
Also, the dead code elimination avoidance can be removed, since there
can be situations where gl_LocalGroupSizeARB is needed but has not been
inserted into the shader with the main function. As a result, the
lowering code has to insert its own copies of the system values if
needed.
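For reference, the derived values in question are defined by GLSL as:
   gl_GlobalInvocationID =
      gl_WorkGroupID * gl_WorkGroupSize + gl_LocalInvocationID;
   gl_LocalInvocationIndex =
      gl_LocalInvocationID.z * gl_WorkGroupSize.x * gl_WorkGroupSize.y +
      gl_LocalInvocationID.y * gl_WorkGroupSize.x +
      gl_LocalInvocationID.x;
with gl_LocalGroupSizeARB taking the place of gl_WorkGroupSize for
variable group sizes, so the lowering pass can build both from plain
system values after linking.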
Reported-by: Stephane Chevigny <stephane.chevigny@polymtl.ca>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103393
Cc: mesa-stable@lists.freedesktop.org
Signed-off-by: Ilia Mirkin <imirkin@alum.mit.edu>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
We only need to add a check to validate output locations here. For
inputs with invalid locations we will fail to link when we can't
find a matching output in the same (invalid) location.
v2: compute location slots properly depending on shader stage and
variable type / direction
Fixes:
KHR-GL45.enhanced_layouts.varying_location_limit
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
We won't split varyings marked as always active because there
is no point in doing so. This means we need to mark both
sides of the interface as always active; otherwise we will have
a mismatch and start removing things we shouldn't.
Reviewed-by: Eric Anholt <eric@anholt.net>
Since blob.h moved up to src/compiler, the test should include it
from there instead of from src/compiler/glsl.
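In other words, roughly (the exact include paths depend on the
include directories the test is built with):
   -#include "glsl/blob.h"
   +#include "compiler/blob.h"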
Fixes: 0e3bd56c6e ("compiler: Move blob up a level")
Signed-off-by: Dylan Baker <dylanx.c.baker@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Despite the name, it could only be used if you immediately wrote to the
pointer. Nobody was using it outside of one test, so clearly this
behavior wasn't that useful. Instead, make it return an offset into the
data buffer so that the result isn't invalidated if you later write to
the blob. In conjunction with blob_overwrite_bytes(), this will be
useful for leaving a placeholder and then filling it in later, which
we'll need to do for handling phi nodes when serializing NIR.
v2 (Jason Ekstrand):
- Detect overflow in the offset + to_write computation
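A hedged sketch of the intended placeholder pattern (assuming the
blob.h API as of this point in the series; n_entries is a stand-in):
   struct blob b;
   blob_init(&b);
   /* Reserve room for a count we don't know yet. Keep the offset,
    * not a pointer, because later writes may reallocate b.data.
    */
   intptr_t fixup = blob_reserve_bytes(&b, sizeof(uint32_t));
   /* ... write the actual entries, counting them in n_entries ... */
   uint32_t count = n_entries;
   blob_overwrite_bytes(&b, fixup, &count, sizeof(count));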
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
There's no reason why that tiny bit of memory needs to be on the heap.
We always put blob_reader on the stack, so why not do the same with the
writable blob?
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
We're going to want to use the blob for Vulkan pipeline caching so it
makes sense to have it in libcompiler not libglsl.
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Otherwise we could have a failure followed by a smaller write that
succeeds and get a corrupted blob. If we ever OOM, we should stop.
v2 (Jason Ekstrand):
- Initialize the new boolean member in create_blob
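A hedged sketch of the sticky-error behaviour (the out_of_memory
flag is the new member; the allocation details are illustrative, the
real code used ralloc at this point):
   static bool
   grow_to_fit(struct blob *blob, size_t additional)
   {
      /* Once any allocation fails, refuse all later writes so a
       * smaller write cannot succeed afterwards and corrupt the blob.
       */
      if (blob->out_of_memory)
         return false;
      if (blob->size + additional <= blob->allocated)
         return true;
      size_t to_allocate = MAX2(blob->allocated * 2,
                                blob->size + additional);
      uint8_t *new_data = realloc(blob->data, to_allocate);
      if (new_data == NULL) {
         blob->out_of_memory = true;
         return false;
      }
      blob->data = new_data;
      blob->allocated = to_allocate;
      return true;
   }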
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
Cc: mesa-stable@lists.freedesktop.org
Unlike uniforms, the limit on shared memory size is not called out
explicitly in the list of things that cause linker errors, but presumably
that's just an oversight in the spec.
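A hedged sketch of where the check lands (the constant comes from
the GL context limits; the exact message may differ):
   if (shared_size > ctx->Const.MaxComputeSharedMemorySize) {
      linker_error(prog, "Too much shared memory used (%u/%u)\n",
                   shared_size,
                   ctx->Const.MaxComputeSharedMemorySize);
   }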
Fixes dEQP-GLES31.functional.debug.negative_coverage.{callbacks,get_error,log}.compute.exceed_shared_memory_size_limit
Reviewed-by: Timothy Arceri <tarceri@itsqueeze.com>
c7affbf687 enabled GLSLOptimizeConservatively on some
drivers. The idea was to speed up compile times by running
the GLSL IR passes only once each time do_common_optimization()
is called. However, loop unrolling can create a big mess, and
with large loops it can actually cause compile times to increase
significantly due to a bunch of redundant if statements being
propagated to other IRs.
Here we make sure to clean things up before moving on.
There was no measurable difference in shader-db compile times,
but it makes compile times of some piglit tests go from a couple
of seconds to basically instant.
The shader-db results also seemed positive:
Totals:
SGPRS: 2829456 -> 2828376 (-0.04 %)
VGPRS: 1720793 -> 1721457 (0.04 %)
Spilled SGPRs: 7707 -> 7707 (0.00 %)
Spilled VGPRs: 33 -> 33 (0.00 %)
Private memory VGPRs: 3140 -> 2060 (-34.39 %)
Scratch size: 3308 -> 2180 (-34.10 %) dwords per thread
Code Size: 79441464 -> 79214616 (-0.29 %) bytes
LDS: 436 -> 436 (0.00 %) blocks
Max Waves: 558670 -> 558571 (-0.02 %)
Wait states: 0 -> 0 (0.00 %)
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
The old code assumed that loop terminators would always be at
the start of the loop, resulting in otherwise unrollable
loops not being unrolled at all. For example, the current
code would unroll:
   int j = 0;
   do {
      if (j > 5)
         break;
      ... do stuff ...
      j++;
   } while (j < 4);
But it would fail to unroll the following, as no iteration limit was
calculated because it failed to find the terminator:
   int j = 0;
   do {
      ... do stuff ...
      j++;
   } while (j < 4);
We would also fail to unroll the following, as we ended up
calculating the iteration limit as 6 rather than 4. The unroll
code then assumed we had 3 terminators rather than 2, as it
was unable to determine that "if (j > 5)" was redundant.
   int j = 0;
   do {
      if (j > 5)
         break;
      ... do stuff ...
      if (bool(i))
         break;
      j++;
   } while (j < 4);
This patch changes this pass to be more like the NIR unrolling pass.
With this change we handle loop terminators correctly and also
handle cases where the terminators have instructions in their
branches other than a break.
V2:
- fixed regression where loops with a break in else were never
unrolled in v1.
- fixed confusing/wrong naming of bools in complex unrolling.
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
do-while loops can increment the starting value before the
condition is checked, e.g.:
   do {
      ndx++;
   } while (ndx < 3);
This commit changes the code to detect this and reduce the
iteration count by 1 in that case.
V2: fix terminator spelling
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Elie Tournier <elie.tournier@collabora.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
These instructions will be executed on every iteration of the loop,
so we cannot drop them.
V2:
- move removal of unreachable terminators from the terminator list
to the same place they are removed from the IR, as suggested by
Nicolai.
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de>
This gets pretty much the entire classic tree building, as well as
i965, including the various glapis. There are some workarounds for bugs
that are fixed in meson 0.43.0, which is due out on October 8th.
I have tested this with piglit using glx.
v2: - fix typo "vaule" -> "value"
- use gtest dep instead of linking to libgtest (rebase error)
- copy the megadriver, then create hard links from that, then delete
the megadriver. This matches the behavior of the autotools build.
(Eric A)
- Use host_machine instead of target_machine (Eric A)
- Put a comment in the right place (Eric A)
- Don't have two variables for the same information (Eric A)
- Put pre_args at top of file in this patch (Eric A)
- Fix glx generators in this patch instead of next (Eric A)
- Remove -DMESON hack (Eric A)
- add sha1_h to mesa in this patch (Eric A)
- Put generators in loops when possible to reduce code in
mapi/glapi/gen (Eric A)
v3: - put HAVE_X11_PLATFORM in this patch
Signed-off-by: Dylan Baker <dylanx.c.baker@intel.com>
Reviewed-by: Eric Anholt <eric@anholt.net>
It's inside an if-statement that already checks that the variables are
not NULL.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
The optimization as done in opt_copy_propagation would have to be
removed in the next patch. If we just eliminate that optimization
altogether, shader-db results, even on platforms that use NIR, are hurt
quite substantially. I have not investigated why NIR isn't picking up
the slack here.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Cc: Jason Ekstrand <jason@jlekstrand.net>
Instead of generating a sequence like:
   run_default = true;
   if (i == 3) // some label that appears after default
      run_default = false;
   if (i == 4) // some label that appears after default
      run_default = false;
   ...
   if (run_default) {
      ...
   }
generate something like:
   run_default = !((i == 3) || (i == 4) || ...);
   if (run_default) {
      ...
   }
This eliminates one use of conditional assignment, and it enables the
elimination of another.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Previously the instruction stream was walked looking for comparisons
with case-label values. This should generate nearly identical code.
For at least fs-default-notlast-fallthrough.shader_test, the code is
identical.
This change will make later changes possible.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
The values being compared are scalars, so these are the same
(ir_binop_all_equal collapses a whole-value comparison to a single
bool, while ir_binop_equal is component-wise; for scalar operands both
yield a single bool). While I'm here, simplify the run_default
condition to just deref the flag (instead of comparing a scalar bool
with true).
There is a bit of extra change in this patch. When constructing an
ir_binop_equal ir_expression, there is an assertion that the types are
the same. There is no such assertion for ir_binop_all_equal, so
passing glsl_type::uint_type with glsl_type::int_type was previously
fine. A bunch of the code motion is to deal with that.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
This happens to work now because ir_binop_all_equal is used. This
causes vector-typed init-expressions to produce scalar Boolean values
after comparison.
The next commit changes ir_binop_all_equal to ir_binop_equal.
Vector-typed init-expressions will then produce vector Boolean values,
and, in debug builds, the ir_assignment constructor will fail an
assertion.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>