The SPV_KHR_ray_tracing extension adds 6 new storage classes which is a
bit on the ridiculous side. In order to avoid adding that many variable
modes to NIR, we make a few simplifying assumptions:
1. CallableData and RayPayload data actually lives on the stack
somewhere, presumably in the caller's stack. We assume that these
are no different from global variables and use nir_var_shader_temp
for them. We still need a separate storage class for the incoming
variants, but only so we can tell which one is the incoming one and
lower it to something useful.
2. There's no difference between incoming CallableData and incoming
RayPayload data. We can use a single storage class for both.
3. ShaderRecordBuffer data is just a global memory access. This lets
us avoid NIR variables entirely: we fetch the pointer via the
shader_record_ptr system value and access the data through a 64-bit
global memory pointer.
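For illustration, the mapping described above boils down to roughly the
following (the enums are local stand-ins, not the actual Spv*/nir_var_*
names; the real code lives in spirv_to_nir):

    enum storage_class {
       SC_CALLABLE_DATA, SC_INCOMING_CALLABLE_DATA,
       SC_RAY_PAYLOAD, SC_INCOMING_RAY_PAYLOAD,
       SC_SHADER_RECORD_BUFFER,
    };

    enum var_mode {
       MODE_SHADER_TEMP,    /* caller's stack, same as a global temporary */
       MODE_INCOMING,       /* one mode for both incoming variants */
       MODE_GLOBAL_MEMORY,  /* no NIR variable; 64-bit global pointer */
    };

    static enum var_mode
    mode_for_storage_class(enum storage_class sc)
    {
       switch (sc) {
       case SC_CALLABLE_DATA:
       case SC_RAY_PAYLOAD:
          /* Assumption 1: lives on the caller's stack, treat as temp. */
          return MODE_SHADER_TEMP;
       case SC_INCOMING_CALLABLE_DATA:
       case SC_INCOMING_RAY_PAYLOAD:
          /* Assumption 2: a single mode covers both incoming variants. */
          return MODE_INCOMING;
       case SC_SHADER_RECORD_BUFFER:
          /* Assumption 3: fetched via shader_record_ptr and accessed as
           * a 64-bit global memory pointer. */
          return MODE_GLOBAL_MEMORY;
       }
       return MODE_SHADER_TEMP;
    }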
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6479>
Missing in this commit are NIR intrinsics for the ObjectToWorld and
WorldToObject built-ins. Those are matrices and so they take a bit more
work and justify a separate commit. For now, we add the enums and leave
the SYSTEM_VALUE <-> nir_intrinsic conversion commented out.
Reviewed-by: Bas Nieuwenhuizen <bas@basnieuwenhuizen.nl>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6479>
Most of this is fairly straightforward; we just set all the modes on any
derefs which are generic. The one tricky bit is OpGenericCastToPtrExplicit.
Instead of adding NIR intrinsics to do the cast, we add NIR intrinsics
to do a storage class check and then bcsel based on that.
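To illustrate, here's a standalone model of what that lowering amounts to
(the types and names are stand-ins for the NIR pointer value and
intrinsics, not the real API):

    #include <stdint.h>

    /* Stand-in for a pointer value carrying its storage class ("mode"). */
    struct ptr { int mode; uint64_t addr; };

    /* OpGenericCastToPtrExplicit: check the storage class, then select
     * between the cast pointer and a null pointer, rather than emitting
     * a dedicated cast intrinsic. */
    static struct ptr
    generic_cast_to_ptr_explicit(struct ptr generic, int wanted_mode)
    {
       /* The "storage class check" intrinsic. */
       int mode_matches = (generic.mode == wanted_mode);

       /* The bcsel: keep the pointer if the check passed, else null. */
       struct ptr null_ptr = { wanted_mode, 0 };
       return mode_matches ? generic : null_ptr;
    }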
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6332>
For UBO accesses to match the performance of classic GL default uniform
block uniforms, we need to be able to push them through the same path. On
freedreno, we haven't been uploading UBOs as push constants when they're
used for indirect array access, because we don't know what range of the
UBO is needed for an access.
I believe we won't be able to calculate the range in general for SPIR-V,
given the casts that can happen, so we define a [0, ~0] range to mean "we
don't know anything". We use that at the moment for all UBO loads except for
nir_lower_uniforms_to_ubo, where we now avoid losing the range information
that default uniform block loads come with.
In a departure from other NIR intrinsics with a "base", I didn't make the
base be something you have to add to the src[1] offset. This keeps us
from needing to modify all drivers (particularly since the base+offset
thing can mean needing to do addition in the backend), makes backend
tracking of ranges easy, and makes the range calculations in
load_store_vectorizer reasonable. However, this could definitely cause
some confusion for people used to the normal NIR base.
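For illustration, here's roughly how a backend could use the new metadata
to decide whether a load fits a push-constant window (the struct is a
stand-in for the intrinsic's indices, not the NIR API):

    #include <stdbool.h>
    #include <stdint.h>

    struct ubo_load_range {
       uint32_t base;    /* byte offset the access is known to start at */
       uint32_t range;   /* size in bytes; base == 0 && range == ~0 means
                          * "we don't know anything" */
    };

    static bool
    can_push_ubo_load(const struct ubo_load_range *r, uint32_t push_bytes)
    {
       /* [0, ~0] is the "unknown" range used for most UBO loads. */
       if (r->base == 0 && r->range == ~0u)
          return false;

       /* Note the offset in src[1] is still absolute; the base here is
        * pure metadata and is never added to the offset. */
       return r->base + r->range <= push_bytes;
    }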
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>
Reviewed-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6359>
This commit propagates the alignment information, provided either through
the Alignment decoration on pointers or via the alignment memory operands
on OpLoad, OpStore, and OpCopyMemory, into the NIR deref chain. It does so
by wrapping the deref in a cast. NIR should be able to clean up most
unnecessary casts, leaving us with only the useful alignment information.
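Schematically, the idea looks like this standalone model (the struct is a
stand-in, not the real nir_deref_instr layout):

    #include <stdint.h>

    /* Minimal stand-in for a deref chain node. */
    struct deref {
       const struct deref *parent;
       uint32_t align_mul;     /* 0 means "no alignment information" */
       uint32_t align_offset;
    };

    /* Wrap the chain in a cast that only carries alignment metadata.  A
     * cast with align_mul == 0 adds no information and can be cleaned up
     * later. */
    static struct deref
    wrap_with_alignment(const struct deref *chain,
                        uint32_t align_mul, uint32_t align_offset)
    {
       struct deref cast = { chain, align_mul, align_offset };
       return cast;
    }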
Reviewed-by: Jesse Natalie <jenatali@microsoft.com>
Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6472>
I missed this if statement, so atomic counters weren't getting bindings
and, when you have more than one of them, they were all getting combined
into one.
Fixes: 3584cb09bc15 ("spirv: Give atomic counters their own variable mode")
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6060>
Previously, objects of type OpTypeImage or OpTypeSampler were treated as
vtn_pointers and objects of type OpTypeSampledImage were a special-use
vtn_sampled_image struct. This commit changes that so that all of those
objects are stored in vtn_ssa_values. Images, samplers, and sampled
images are each stored as a scalar or vector nir_ssa_def whose
components are NIR deref values. We now use vtn_type_get_nir_type to
re-resolve those as-needed into GLSL sampler types for NIR.
This simplification has a number of benefits:
1. We can get rid of the rest of our special cases for handling images
and samplers in function arguments. Now that they're treated as
structs at the glsl_type level, the generic paths can handle images
and samplers.
2. We can now construct composite values containing images and samplers
internally. It's unclear from the SPIR-V spec whether or not this
is allowed and it's not a pattern that GLSLang currently generates
thanks to GLSL rules. However, if we do start seeing SPIR-V that
contains such composites, we should now be able to handle it.
3. SPIR-V OpNull and OpUndef instructions can now create samplers,
images, and sampled images. The generated NIR likely won't be fully
valid but, given a NIR pass to do something sensible, it should be a
thing we can compile.
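To illustrate the representation (with stand-in types, not the actual
vtn/NIR ones): an image or sampler is a one-component value whose
component is a deref, and a sampled image is just the two-component
vector of both:

    /* Stand-in for a NIR deref "handle" component. */
    struct handle { const void *deref; };

    /* Stand-in for a vtn_ssa_value holding image/sampler handles. */
    struct handle_value {
       unsigned num_components;
       struct handle comps[2];
    };

    /* A lone image (or sampler) is a one-component value. */
    static struct handle_value
    make_image_value(struct handle image)
    {
       struct handle_value v = { 1, { image } };
       return v;
    }

    /* OpSampledImage is just the two-component vector: image + sampler. */
    static struct handle_value
    make_sampled_image_value(struct handle image, struct handle sampler)
    {
       struct handle_value v = { 2, { image, sampler } };
       return v;
    }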
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5278>
There are a few cases, atomic counters being one example, where the type
used by vtn_ssa_value is not the same as the type we want NIR to use in
derefs and variables. To solve this, we add a helper which converts
between the types for us. In the next commit, we'll be adding another
major user of this: images and samplers.
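A minimal sketch of the idea, using atomic counters as the motivating
case (the enums below are illustrative, not the helper's actual
signature):

    enum nir_type_kind { KIND_UINT, KIND_ATOMIC_UINT };
    enum type_usage    { USAGE_SSA_VALUE, USAGE_DEREF_OR_VARIABLE };

    /* The same SPIR-V type can need two different NIR-facing types
     * depending on where it's used: vtn_ssa_value keeps a plain uint for
     * an atomic counter, while derefs and variables want atomic_uint. */
    static enum nir_type_kind
    type_for_usage(int is_atomic_counter, enum type_usage usage)
    {
       if (is_atomic_counter && usage == USAGE_DEREF_OR_VARIABLE)
          return KIND_ATOMIC_UINT;
       return KIND_UINT;
    }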
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5278>
Previously, we created our vtn_ssa_value in _vtn_variable_load_store
manually as we did the recursive load/store. Instead, we now create the
SSA value before calling into the recursive function. This is a tiny
bit less efficient but it removes a case of hand-rolling vtn_ssa_value
creation. For symmetry, we make _vtn_block_load_store assume the value
is already created. Finally, we remove a trivial hand-rolled case in
vtn_composite_extract.
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5278>
The original implementation of SPV_EXT_descriptor_indexing was extremely
paranoid about the NonUniform qualifier, trying to fetch it from every
possible location and propagate it through access chains etc. However,
the Vulkan spec is quite nice to us on this and has very strict rules
for where the NonUniform decoration has to be placed. For image and
texture operations, we can search for the decoration on the spot when we
process the image or texture op. For pointers, we continue putting it
on the pointer but we don't bother trying to do anything silly like
propagating it through casts.
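Schematically, the new rule for image and texture operands looks like
this (the struct is a stand-in for a vtn value, not the real type):

    #include <stdbool.h>

    struct operand {
       bool decorated_nonuniform;   /* NonUniform sits directly on this id */
    };

    /* For image/texture ops, just look at the operand's own decoration at
     * the point where the op is processed; no propagation through access
     * chains or casts. */
    static bool
    access_is_nonuniform(const struct operand *image_or_sampler)
    {
       return image_or_sampler->decorated_nonuniform;
    }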
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5278>
The GLSL to NIR compiler supports the LowerTessLevel flag to convert
gl_TessLevelInner/Outer from their GLSL declarations as arrays of
floats to vec4/vec2s to better match how they are represented in
hardware.
This commit adds similar support to the SPIR-V to NIR compiler so
turnip can use the same IR3/NIR tess lowering passes as freedreno.
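For illustration, the lowering amounts to turning the float arrays into
single vector variables and array indices into component selects (a
standalone model, not the actual pass):

    typedef struct { float x, y, z, w; } vec4;
    typedef struct { float x, y; } vec2;

    /* After lowering, the two float arrays are single vector variables. */
    struct tess_levels {
       vec4 outer;   /* was float gl_TessLevelOuter[4] */
       vec2 inner;   /* was float gl_TessLevelInner[2] */
    };

    /* gl_TessLevelOuter[i] turns into a component select on the vec4. */
    static float
    read_outer_level(const struct tess_levels *t, unsigned i)
    {
       switch (i) {
       case 0:  return t->outer.x;
       case 1:  return t->outer.y;
       case 2:  return t->outer.z;
       default: return t->outer.w;
       }
    }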
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/5059>