After 41549a18e6 ("i965: Enable OpenGL 4.6 for Gen8+"), i965
implements GL_ARB_gl_spirv, GL_ARB_spirv_extensions and OpenGL 4.6.
After 15e439071d ("iris: Enable ARB_gl_spirv and ARB_spirv_extensions"),
iris implements GL_ARB_gl_spirv, GL_ARB_spirv_extensions and OpenGL
4.6.
v2:
- Make explicit that the support is for i965 and iris.
v3:
- Also add GL_ARB_spirv_extensions to the release notes (Alejandro).
Signed-off-by: Andres Gomez <agomez@igalia.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Found when building for Android in C99 mode. Include bitscan.h to ensure ffs is
available.
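As a rough sketch of the kind of use this covers (the helper name is made
up, not the actual egl code):

  /* Sketch only: util/bitscan.h provides ffs() even on C99 builds
   * (e.g. Android) where the libc headers do not declare it. */
  #include "util/bitscan.h"

  static unsigned
  shift_from_mask(unsigned mask)
  {
     /* ffs() returns the 1-based index of the least significant set
      * bit, or 0 when no bit is set. */
     return mask ? ffs(mask) - 1 : 0;
  }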
Fixes: 7b4ed2b5 ("egl: Convert configs to use shifts and sizes instead of masks")
Signed-off-by: Kevin Strasser <kevin.strasser@intel.com>
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
The order of comparison has changed, so we need to invert the logic of
"insert_left" when using rb_tree_insert_at().
Fixes: dae33052db ("util/rb_tree: Reverse the order of comparison functions")
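A standalone sketch of the inversion, with plain ints rather than the real
rb_tree_insert_at() call sites:

  /* Sketch only.  With the old comparator, cmp(a, b) < 0 meant "a sorts
   * before b"; the reversed comparator returns the opposite sign, so the
   * insert_left decision has to flip too. */
  #include <stdbool.h>

  static int
  cmp_reversed(int a, int b)
  {
     return b - a;   /* reversed order of comparison */
  }

  static bool
  should_insert_left(int node, int parent)
  {
     /* Previously: cmp(node, parent) < 0.  With the reversed comparator,
      * inverting the test keeps smaller keys going to the left. */
     return cmp_reversed(node, parent) > 0;
  }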
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
To enable EXT_demote_to_helper_invocation:
This extension adds a "demote" keyword that is similar to "discard" but
only suppresses subsequent writes and outputs to the framebuffer, and
does not terminate the execution of the invocation. For the remainder
of the execution, the invocation is "demoted" to act like a helper
invocation.
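A standalone sketch of the behavioural difference (struct and function
names are illustrative, not spec or Mesa code):

  #include <stdbool.h>

  /* Illustrative only: how a software-style fragment loop would treat
   * the two keywords.  A demoted invocation keeps executing (so
   * derivatives remain defined) but its writes are dropped; a discarded
   * one stops executing entirely. */
  struct fragment {
     bool demoted;      /* "demote": suppress writes, keep running    */
     bool terminated;   /* "discard": invocation stops executing      */
  };

  static void
  write_output(struct fragment *f, float *framebuffer, float value)
  {
     if (!f->demoted && !f->terminated)
        *framebuffer = value;
  }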
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
From EXT_demote_to_helper_invocation, implemented with the existing
nir_intrinsic_is_helper_invocation.
Such a builtin is necessary when using `demote` because we can't
redefine the value of gl_HelperInvocation (since it is an input
variable).
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
When the EXT_demote_to_helper_invocation extension is enabled,
`demote` is treated as a keyword, and produces an ir_demote.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
To represent the new `demote` keyword when using the
EXT_demote_to_helper_invocation extension. Most of the changes are to
include it in the visitors.
Demote is not considered control flow, so also include an empty
visit member function in ir_control_flow_visitor.
Only NIR actually supports `demote`, so assert in the translations to
TGSI and Mesa's gl_program -- demote is not expected to appear for
those.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
We can't just check the BO base address; we need to check the full
address, including any offset we may have applied. When updating the
address, we need to include the offset again.
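Roughly, as a sketch with made-up struct and field names (not the actual
iris types):

  #include <stdint.h>

  /* Sketch only: compare and update the full GPU address (BO base plus
   * the applied offset), not just the BO base address. */
  struct fake_bo  { uint64_t address; };
  struct fake_res { struct fake_bo *bo; uint64_t offset; };

  static void
  rebind_if_stale(uint64_t *bound_address, struct fake_res *res,
                  struct fake_bo *new_bo)
  {
     if (*bound_address == res->bo->address + res->offset) {
        /* Re-apply the offset when writing the new address. */
        *bound_address = new_bo->address + res->offset;
     }
  }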
Fixes: 5ad0c88dbe ("iris: Replace buffer backing storage and rebind to update addresses.")
A while back, Michael Larabel noticed that Paraview's Wavelet Volume
case runs significantly slower on iris than i965. It turns out this
is because we enable CCS_E for 32-bit floating point formats, while
i965 disables it, with an oblique comment saying that we benchmarked
it (on what exactly?) and determined that it was a loss.
Paraview uses both R32_FLOAT and R32G32B32A32_FLOAT, and I observed
large framerate drops when enabling CCS_E for either format. However,
several other benchmarks (Aztec Ruins, many Synmark cases) use 16-bit
floating point formats, with no apparent ill effects.
So, disable compression for 32-bit float formats for now, but leave it
enabled for 16-bit float formats as they seem to be working fine.
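The intent, as a rough sketch using gallium's util_format helpers for
illustration (the real iris code keys off isl formats):

  #include <stdbool.h>
  #include "util/format/u_format.h"   /* header location varies by branch */

  /* Sketch only: allow CCS_E for 16-bit float formats, but not for
   * 32-bit float formats such as R32_FLOAT and R32G32B32A32_FLOAT. */
  static bool
  format_wants_ccs_e(enum pipe_format format)
  {
     const struct util_format_description *desc =
        util_format_description(format);

     if (util_format_is_float(format) && desc->channel[0].size == 32)
        return false;

     return true;
  }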
Improves performance in Paraview's Wavelet Volume test by 62% on a
Skylake GT4e.
Fixes: 3cfc6a207b ("iris: Fill out res->aux.possible_usages")
Tests done with llvm-config indicate that only two libraries are in
irreader but not in engine: LLVMAsmParser and LLVMIRReader. Both of them
are part of coroutines, so I replaced irreader with coroutines and added
the libraries unique to coroutines.
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Now that we have constant adjustment logic abstracted, we can do this
safely. Along with the csel inversion patch, this allows many more
common csel ops to inline their condition in the bundle.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If we can reuse constant slots from other instructions, we would like to
do so to include more instructions per bundle.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
If an instruction could be scheduled to vmul to satisfy the writeout
conditions, let's do that and save an instruction+cycle per fragment
shader.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We still emit in-order but we switch to using the bundles created from
the new scheduler, which will allow greater flexibility and room for
out-of-order optimization.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
We require chosen instructions to be "close", to avoid ballooning
register pressure. This is a kludge that will go away once we have
proper liveness tracking in the scheduler, but for now it prevents a lot
of needless spilling.
v2: Lower threshold to 6 (from 8). Schedule is hurt, but a few shaders
that spilled excessively are fixed.
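The shape of the check is roughly the following; the names are
hypothetical, not the actual scheduler code:

  #include <stdbool.h>

  /* Sketch only: skip candidates too far from the instruction we are
   * bundling against, since pulling in distant instructions balloons
   * register pressure. */
  #define MAX_SCHEDULING_DISTANCE 6   /* v2: lowered from 8 */

  static bool
  candidate_is_close(unsigned candidate_index, unsigned anchor_index)
  {
     unsigned distance = candidate_index > anchor_index ?
        candidate_index - anchor_index : anchor_index - candidate_index;

     return distance <= MAX_SCHEDULING_DISTANCE;
  }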
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Derp
We can bundle two load/store instructions together. This also eliminates
the need for explicit load/store pairing in a prepass.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Conditions for branches don't have an explicit swizzle in the emitted
binary, but they do implicitly get swizzled by whatever instruction
wrote r31, so we need to handle that.
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Conditional instructions (csel and conditional branches) require their
condition to be written to a special condition pipeline register (r31.w
for scalar, r31.xyzw for vector). However, pipeline registers are live
only for the duration of a single bundle. As such, the logic to schedule
conditionals correctly is surprisingly complex. Essentially, we see if we
could stuff the conditional within the same bundle as the csel/branch
without breaking anything; if we can, we do that. If we can't, we add a
dummy move to make room.
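In outline, with hypothetical types rather than the real midgard
structures:

  #include <stdbool.h>

  /* Sketch only: the condition has to land in the r31 pipeline register,
   * which is live only within a single bundle. */
  struct sketch_instr {
     unsigned dest;              /* register written                     */
     bool     can_move_to_r31;   /* safe to retarget inside this bundle? */
  };

  #define REG_R31 31

  /* Returns true if the condition writer was folded into the csel/branch
   * bundle; on false, the caller emits a dummy move into r31 instead. */
  static bool
  fold_condition(struct sketch_instr *writer, bool bundle_has_room)
  {
     if (bundle_has_room && writer->can_move_to_r31) {
        writer->dest = REG_R31;   /* stuff it into the same bundle */
        return true;
     }

     return false;                /* emit: mov r31, <condition> */
  }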
Signed-off-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>