Commit Graph

884 Commits

Author SHA1 Message Date
Christian Gmeiner
4e110eca42 nir: nir_shader_compiler_options: drop native_integers
Drivers which do not support native integers should use a lowering
pass to go from integers to floats.

Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-05-07 07:35:52 +02:00
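
A minimal standalone sketch of what such an integer-to-float lowering amounts to (not the actual NIR pass; the entry point in NIR is along the lines of nir_lower_int_to_float): integer ALU operations are rewritten as float operations, which stays exact as long as values fit in the float's 24-bit integer range.

    /* Illustrative only: emulate integer ops on floats, the way a lowering
     * pass would rewrite iadd/ineg/ilt/idiv for hardware without native
     * integers.  Exact only while values stay within the float's 24-bit
     * integer range. */
    #include <math.h>
    #include <stdio.h>

    static float iadd_as_float(float a, float b) { return a + b; }
    static float ineg_as_float(float a)          { return -a; }
    static float ilt_as_float(float a, float b)  { return a < b ? 1.0f : 0.0f; }
    static float idiv_as_float(float a, float b) { return truncf(a / b); }

    int main(void)
    {
       printf("%g %g %g %g\n", iadd_as_float(7, 35), ineg_as_float(5),
              ilt_as_float(3, 4), idiv_as_float(-7, 2));
       return 0;
    }
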
Caio Marcelo de Oliveira Filho
aa675cef5e intel/fs: Assert when brw_fs_nir sees a nir_deref_instr
Since 09f1de97a7 "anv,i965: Lower away image derefs in the driver"
the backend compiler is not expected to handle any derefs, so let's
assert on it.

This helps identify problems when a deref is not lowered and
"leaks" into the backend compiler.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-05-02 23:25:30 -07:00
Eric Engestrom
7ca8ba199f delete autotools .gitignore files
One special case: `src/util/xmlpool/.gitignore` is not entirely deleted,
as `xmlpool.pot` still gets generated (e.g. by `ninja xmlpool-pot`).

Signed-off-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
2019-04-29 21:17:19 +00:00
Kenneth Graunke
9dcf90d7ba intel/fs: Don't emit empty ELSE blocks.
While we can clean this up later, it's trivial to not generate the
stupid code in the first place, which saves some optimization work.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-28 22:36:09 -07:00
Caio Marcelo de Oliveira Filho
055f6281d4 intel/fs: Don't handle texop_tex for shaders without implicit LOD
These will be lowered by nir_lower_tex() with the
lower_tex_when_implicit_lod_not_supported option, so they don't need
the extra handling here.

Reviewed-by: Rob Clark <robdclark@gmail.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-04-25 12:13:06 -07:00
Topi Pohjolainen
ff642fb0e6 intel/compiler/fs/icl: Use dummy masked urb write for tess eval
One cannot write the URB arbitrarily, and therefore the message
has to be carefully constructed. The clever tricks originate
from Kenneth and Jason; I'm just writing the patch.

Fixes GPU hangs on ICL with Vulkan CTS.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Signed-off-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
2019-04-25 22:00:43 +03:00
Juan A. Suarez Romero
b06ae53606 Revert "intel/compiler: split is_partial_write() into two variants"
This reverts commit 40b3abb4d1.

It is not clear that this commit was entirely correct, and unfortunately
it was pushed in error.

CC: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-25 09:19:10 +02:00
Dave Airlie
3323cf08f0 intel/compiler: fix uninit non-static variable. (v2)
Pointed out by coverity.

v2: init nir_locals also.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
2019-04-25 06:06:57 +10:00
Ian Romanick
21223acf7d intel/fs: Fix D to W conversion in opt_combine_constants
Found by GCC warning:

src/intel/compiler/brw_fs_combine_constants.cpp: In function ‘bool needs_negate(const fs_reg*, const imm*)’:
src/intel/compiler/brw_fs_combine_constants.cpp:306:34: warning: comparison of unsigned expression < 0 is always false [-Wtype-limits]
       return ((reg->d & 0xffffu) < 0) != (imm->w < 0);
               ~~~~~~~~~~~~~~~~~~~^~~

The result of the bit-and is a 32-bit value with the top bits all zero.
This will never be < 0.  Instead of masking off the bits, just cast to
int16_t and let the compiler handle the actual conversion.

Fixes: e64be391dd ("intel/compiler: generalize the combine constants pass")
Cc: Iago Toral Quiroga <itoral@igalia.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-23 19:48:33 -07:00
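
A self-contained illustration of the warning and the fix (reg_d stands in for reg->d, a 32-bit value whose low word holds the W immediate):

    /* Illustrative only: the masked comparison is done in unsigned
     * arithmetic and can never be negative; casting the low 16 bits to
     * int16_t recovers the sign of the W value. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
       int32_t reg_d = -32768;   /* low 16 bits are 0x8000, a negative W */

       /* (reg_d & 0xffffu) is promoted to unsigned, so "< 0" is always false. */
       printf("masked check: %d\n", (reg_d & 0xffffu) < 0);   /* prints 0 */

       /* The cast keeps only the low word and reinterprets it as signed. */
       printf("cast check:   %d\n", (int16_t)reg_d < 0);       /* prints 1 */
       return 0;
    }
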
Ian Romanick
26391cceaa intel/compiler: Lower ffma on Gen4 and Gen5
flrp32 is also a 3-source instruction, but there is another pending
series that handles that for Gen4 and Gen5.

v2: Rebase on "intel/compiler: Don't have separate, per-Gen
nir_options"

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-23 17:50:28 -07:00
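
The lowering itself is the textbook split of a fused multiply-add into a separate multiply and add, performed in NIR when the backend sets the corresponding lower_ffma-style option; a minimal sketch of the arithmetic:

    /* ffma(a, b, c) lowered to fadd(fmul(a, b), c).  The two forms can
     * differ in the last bit, since the fused form rounds only once. */
    static float ffma_lowered(float a, float b, float c)
    {
       return (a * b) + c;
    }
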
Ian Romanick
fd1fa9afc7 intel/compiler: Don't have separate, per-Gen nir_options
Instead, just have separate scalar vs. vector nir_options and do
per-Gen "fix ups".

Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-23 17:50:16 -07:00
Matt Turner
4ec258ac3c intel/compiler: Improve fix_3src_operand()
Allow ATTR and IMM sources unconditionally (ATTR are just GRFs, IMM will
be handled by opt_combine_constants()). Both are already allowed by
opt_copy_propagation().

Also allow FIXED_GRF if the regioning is 8,8,1. Could also allow other
stride=1 regions (e.g., 4,4,1) and scalar regions but I don't think
those occur. This is sufficient to allow a pass added in a future commit
(fs_visitor::lower_linterp) to avoid emitting extra MOV instructions.

I removed the 'src.stride > 1' case because it seems wrong: 3-src
instructions on Gen6-9 are align16-only and can only do stride=1 or
stride=0. A run through Jenkins with an assert(src.stride <= 1) never
triggers, so it seems that it was dead code.

Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
2019-04-22 16:54:31 -07:00
Matt Turner
8aae7a3998 intel/compiler: Add unit tests for sat prop for different exec sizes
The two new unit tests verify that propagating a saturate between
instructions of different exec sizes does not happen.

Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
2019-04-22 16:54:21 -07:00
Matt Turner
54d4d34b96 intel/compiler: Use SIMD16 instructions in fs saturate prop unit test
Will allow us to test that propagation between instructions of different
exec sizes does not happen (in the next commit).

The stray-looking change in intervening_dest_write is to adjust the size
of the texture result to keep the test functioning identically when the
instructions' exec sizes are doubled. Without the change, the texture
does not overwrite the destination fully as the unit test intends.

Reviewed-by: Rafael Antognolli <rafael.antognolli@intel.com>
2019-04-22 16:54:17 -07:00
Rafael Antognolli
70e03e220c intel/fs: Remove fs_generator::generate_linterp from gen11+.
We now have a lowering pass that will do this at the fs_visitor level,
so we can remove this code from gen11+.

v2: Reduce size of the "i" array from 4 to 2 (Matt).

Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-22 16:54:00 -07:00
Rafael Antognolli
9ea90aae1e intel/fs: Add a lowering pass for linear interpolation.
On gen11, instead of using a PLN instruction, we convert
FS_OPCODE_LINTERP to 2 or 4 multiply adds. That is done in the
fs_generator code.

This patch adds a lowering pass that does the same thing at the
fs_visitor level. It also drops the usage of NF types, since we don't need the
extra precision and it lets us skip the accumulator. With all that, some
optimizations will still be run on the generated code, and we should get
better scheduling.

v2: Update comment about saturation and conditional mod (Matt)

Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-22 16:54:00 -07:00
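
Illustrative only: what the PLN-style linear interpolation computes and the two-multiply-add form a lowering pass like this can emit instead (a0/dadx/dady are the plane-equation coefficients for one attribute; x and y are the per-pixel offsets).

    static float linterp_pln(float a0, float dadx, float dady,
                             float x, float y)
    {
       /* What a single PLN instruction computes. */
       return a0 + dadx * x + dady * y;
    }

    static float linterp_lowered(float a0, float dadx, float dady,
                                 float x, float y)
    {
       float tmp = dady * y + a0;   /* MAD #1 */
       return dadx * x + tmp;       /* MAD #2 */
    }
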
Rafael Antognolli
c0504569ea intel/fs: Move the scalar-region conversion to the generator.
Move the scalar-region conversion from the IR to the generator, so it
doesn't affect the Gen11 path. We need the non-scalar regioning
for a later lowering pass that we are adding.

v2: Better commit message (Matt)

Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-22 16:54:00 -07:00
Rafael Antognolli
0778748eba intel/fs: Only propagate saturation if exec_size is the same.
Otherwise it could propagate the saturation from a SIMD16 instruction
into a SIMD8 instruction. With that, only part of the destination
register, which is the source of the move with saturation, would have
been updated.

Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-22 16:53:55 -07:00
Ian Romanick
a6ccc4c0c8 intel/fs: Add support for float16 to the fsign optimizations
Commit ad98fbc217 ("intel/fs: Refactor code generation for nir_op_fsign
to its own function") criss-crossed with c2b8fb9a81 ("anv/device:
expose VK_KHR_shader_float16_int8 in gen8+"), and I was not paying
enough attention when I rebased.  This adds back the float16 changes and
enables the optimization.

v2: Incorporate more changes from 19cd2f5deb and a8d8b1a139 that I
missed in the previous version.

Fixes: ad98fbc217 ("intel/fs: Refactor code generation for nir_op_fsign to its own function")
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=110474
Reviewed-by: Matt Turner <mattst88@gmail.com> [v1]
2019-04-20 20:49:34 -07:00
Jason Ekstrand
c0d9926df7 anv: Use bindless handles for images
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 19:56:42 +00:00
Jason Ekstrand
83af92e593 intel/fs: Add support for bindless image load/store/atomic
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 19:56:42 +00:00
Jason Ekstrand
843286d324 intel/fs: Add support for bindless texture ops
We add two new texture sources for bindless surface and sampler handles.
Bindless surface handles are expected to be pre-shifted so that the
20-bit surface state table index is in the top 20 bits of the 32-bit
handle.  This lets us avoid any extra shifts in the shader.  Bindless
sampler handles are 32-byte aligned byte offsets from general state base
address.  We use 32-byte aligned instead of 16-byte aligned to avoid
having to use more indirect messages than needed.  It means we can't
tightly pack samplers but that's probably not a big deal.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 19:56:42 +00:00
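
A sketch of the handle layout described above: the 20-bit surface state table index pre-shifted into the top 20 bits of a 32-bit handle, and the sampler handle as a 32-byte-aligned offset from general state base address (helper names are made up for the illustration).

    #include <assert.h>
    #include <stdint.h>

    /* Illustrative only: handle packing as described in the commit message. */
    static uint32_t pack_bindless_surface_handle(uint32_t surface_index)
    {
       assert(surface_index < (1u << 20));
       return surface_index << 12;            /* index lives in bits [31:12] */
    }

    static uint32_t pack_bindless_sampler_handle(uint64_t offset_from_base)
    {
       assert(offset_from_base % 32 == 0);    /* must be 32-byte aligned */
       return (uint32_t)offset_from_base;
    }
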
Jason Ekstrand
2edf29b933 intel,nir: Lower TXD with a bindless sampler
When we have a bindless sampler, we need an instruction header.  Even in
SIMD8, this pushes the instruction over the sampler message size maximum
of 11 registers.  Instead, we have to lower TXD to TXL.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 19:56:42 +00:00
Jason Ekstrand
bd56ce8ce5 anv: Implement VK_KHR_shader_atomic_int64
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 19:56:42 +00:00
Jason Ekstrand
b1a633d9fb intel/nir: Re-run int64 lowering in postprocess_nir
We're about to start doing 64-bit pointer calculations in ANV.  They
will get applied after brw_preprocess_nir which is where we currently do
64-bit integer arithmetic lowering.  Because we're adding 64-bit integer
arithmetic after the initial lowering has happened, we need to lower
again.

Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-19 19:56:42 +00:00
Jason Ekstrand
cd4ffb376f intel/fs: Account for live range lengths in spill costs
The current register allocator has a concept of "spill benefit" which is
based on the number of nodes with which a given node interferes.  The
idea is that you want to spill stuff with high interference because
those are the most likely registers to help when spilling.  However,
this fails to take into account the length of the live range so the
allocator frequently picks "cheap" (not many uses) registers which are
actually very short lived and so spilling them doesn't help with the
pressure situation.

This commit takes into account the length of the live range to make
long-lived registers more likely to get spilled than short-lived ones.
This encourages the spill chooser to choose slightly larger registers
which will affect a larger area of the program and hopefully we have to
spill fewer of them to get the same reduction in overall register
pressure.

Shader-db results on Kaby Lake:

    total spills in shared programs: 23664 -> 12050 (-49.08%)
    spills in affected programs: 19243 -> 7629 (-60.35%)
    helped: 296
    HURT: 8

    total fills in shared programs: 32028 -> 25139 (-21.51%)
    fills in affected programs: 20378 -> 13489 (-33.81%)
    helped: 295
    HURT: 16

Of course, most of that is in Deus Ex...

Shader-db results on Kaby Lake (without Deus Ex):

    total spills in shared programs: 6479 -> 5834 (-9.96%)
    spills in affected programs: 3231 -> 2586 (-19.96%)
    helped: 40
    HURT: 4

    total fills in shared programs: 17165 -> 17099 (-0.38%)
    fills in affected programs: 6951 -> 6885 (-0.95%)
    helped: 40
    HURT: 7

Even without Deus Ex, the spill help is pretty respectable.  The worst
hurt shaders were one compute shader in Aztec Ruins and one fragment
shader in KSP, each hurt by around 13% fill and 9% spill.

VkPipeline-db results on Kaby Lake:

    total spills in shared programs: 9149 -> 8069 (-11.80%)
    spills in affected programs: 5197 -> 4117 (-20.78%)
    helped: 27
    HURT: 16

    total fills in shared programs: 26390 -> 25477 (-3.46%)
    fills in affected programs: 12662 -> 11749 (-7.21%)
    helped: 24
    HURT: 22

The Vulkan results were decidedly more mixed but we don't have nearly as
many apps in that database yet.

Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-18 23:04:45 +00:00
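
A rough, made-up sketch of the heuristic change: the benefit of spilling a register is weighed not only by how little it is used but also by how long it stays live, so long-lived, lightly-used registers become the preferred spill candidates. Field names and the exact weighting below are illustrative, not the real allocator code.

    /* Illustrative only: a spill-choice weight in the spirit of the commit
     * above.  Higher weight = better candidate to spill. */
    struct reg_info {
       float use_count;    /* loop-depth-weighted number of uses (spill cost) */
       int   live_start;   /* first IP at which the register is live */
       int   live_end;     /* last IP at which the register is live */
    };

    static float spill_weight(const struct reg_info *r)
    {
       float cost    = r->use_count;                         /* fills/spills added */
       float benefit = (float)(r->live_end - r->live_start); /* pressure relieved */
       return benefit / (cost + 1.0f);                       /* avoid divide by zero */
    }
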
Ian Romanick
1711bf6cf2 intel/fs: Generate better code for fsign multiplied by a value
v2: Rebase on v2 changes in previous two commits.

v3: Rebase on 85c35885b3 ("nir: Rework nir_src_as_alu_instr to not take
a pointer").

shader-db results:

Skylake and Broadwell had similar results. (Skylake shown)
total instructions in shared programs: 15297100 -> 15282141 (-0.10%)
instructions in affected programs: 956685 -> 941726 (-1.56%)
helped: 4527
HURT: 0
helped stats (abs) min: 1 max: 221 x̄: 3.30 x̃: 2
helped stats (rel) min: 0.07% max: 10.53% x̄: 1.85% x̃: 1.37%
95% mean confidence interval for instructions value: -3.48 -3.12
95% mean confidence interval for instructions %-change: -1.88% -1.81%
Instructions are helped.

total cycles in shared programs: 372809551 -> 372597886 (-0.06%)
cycles in affected programs: 13645512 -> 13433847 (-1.55%)
helped: 4362
HURT: 125
helped stats (abs) min: 1 max: 2088 x̄: 50.73 x̃: 28
helped stats (rel) min: 0.01% max: 28.20% x̄: 2.77% x̃: 2.39%
HURT stats (abs)   min: 1 max: 1836 x̄: 76.90 x̃: 28
HURT stats (rel)   min: <.01% max: 34.36% x̄: 3.03% x̃: 1.42%
95% mean confidence interval for cycles value: -50.98 -43.37
95% mean confidence interval for cycles %-change: -2.67% -2.55%
Cycles are helped.

total spills in shared programs: 23465 -> 23463 (<.01%)
spills in affected programs: 42 -> 40 (-4.76%)
helped: 1
HURT: 0

total fills in shared programs: 31766 -> 31763 (<.01%)
fills in affected programs: 69 -> 66 (-4.35%)
helped: 1
HURT: 0

Haswell
total instructions in shared programs: 13839992 -> 13828311 (-0.08%)
instructions in affected programs: 712503 -> 700822 (-1.64%)
helped: 3477
HURT: 0
helped stats (abs) min: 1 max: 221 x̄: 3.36 x̃: 2
helped stats (rel) min: 0.07% max: 10.64% x̄: 1.96% x̃: 1.52%
95% mean confidence interval for instructions value: -3.58 -3.14
95% mean confidence interval for instructions %-change: -2.01% -1.92%
Instructions are helped.

total cycles in shared programs: 387026330 -> 386872483 (-0.04%)
cycles in affected programs: 11329966 -> 11176119 (-1.36%)
helped: 3307
HURT: 139
helped stats (abs) min: 2 max: 1776 x̄: 49.58 x̃: 18
helped stats (rel) min: 0.01% max: 20.38% x̄: 2.27% x̃: 1.79%
HURT stats (abs)   min: 1 max: 2314 x̄: 72.68 x̃: 20
HURT stats (rel)   min: <.01% max: 33.99% x̄: 2.28% x̃: 0.96%
95% mean confidence interval for cycles value: -49.31 -39.98
95% mean confidence interval for cycles %-change: -2.15% -2.01%
Cycles are helped.

LOST:   1
GAINED: 0

Ivy Bridge
total instructions in shared programs: 12045602 -> 12038463 (-0.06%)
instructions in affected programs: 623837 -> 616698 (-1.14%)
helped: 2498
HURT: 0
helped stats (abs) min: 1 max: 39 x̄: 2.86 x̃: 2
helped stats (rel) min: 0.05% max: 10.00% x̄: 1.30% x̃: 1.05%
95% mean confidence interval for instructions value: -2.96 -2.75
95% mean confidence interval for instructions %-change: -1.34% -1.26%
Instructions are helped.

total cycles in shared programs: 181025675 -> 180891323 (-0.07%)
cycles in affected programs: 11329329 -> 11194977 (-1.19%)
helped: 2439
HURT: 47
helped stats (abs) min: 1 max: 1565 x̄: 57.06 x̃: 26
helped stats (rel) min: 0.02% max: 24.56% x̄: 2.02% x̃: 1.64%
HURT stats (abs)   min: 1 max: 1269 x̄: 102.51 x̃: 43
HURT stats (rel)   min: 0.11% max: 52.94% x̄: 4.15% x̃: 1.34%
95% mean confidence interval for cycles value: -59.91 -48.17
95% mean confidence interval for cycles %-change: -1.99% -1.82%
Cycles are helped.

Sandy Bridge, Iron Lake, and GM45 had similar results. (Sandy Bridge shown)
total instructions in shared programs: 10896368 -> 10896339 (<.01%)
instructions in affected programs: 3767 -> 3738 (-0.77%)
helped: 17
HURT: 0
helped stats (abs) min: 1 max: 4 x̄: 1.71 x̃: 1
helped stats (rel) min: 0.13% max: 9.52% x̄: 3.58% x̃: 2.73%
95% mean confidence interval for instructions value: -2.27 -1.14
95% mean confidence interval for instructions %-change: -5.14% -2.03%
Instructions are helped.

total cycles in shared programs: 155091109 -> 155091021 (<.01%)
cycles in affected programs: 47241 -> 47153 (-0.19%)
helped: 15
HURT: 8
helped stats (abs) min: 2 max: 81 x̄: 15.73 x̃: 4
helped stats (rel) min: 0.03% max: 10.59% x̄: 1.55% x̃: 0.71%
HURT stats (abs)   min: 14 max: 32 x̄: 18.50 x̃: 17
HURT stats (rel)   min: 0.32% max: 2.79% x̄: 2.43% x̃: 2.71%
95% mean confidence interval for cycles value: -14.59 6.93
95% mean confidence interval for cycles %-change: -1.41% 1.08%
Inconclusive result (value mean confidence interval includes 0).

Reviewed-by: Matt Turner <mattst88@gmail.com> [v2]
2019-04-18 12:38:05 -07:00
Ian Romanick
06d2c11641 intel/fs: Add a scale factor to emit_fsign
Normally fsign generates -1, 0, or +1.  The new scale factor, S, causes
fsign to generate -S, 0, or +S.

v2: Rebase on v2 changes in previous commit.

v3: Rebase on 85c35885b3 ("nir: Rework nir_src_as_alu_instr to not take
a pointer").

Reviewed-by: Matt Turner <mattst88@gmail.com> [v2]
2019-04-18 12:37:48 -07:00
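
Illustrative only: on this hardware fsign is typically implemented with bit operations on the sign bit rather than compares and selects, and the scale factor simply replaces the ±1.0 constant with ±S. A standalone sketch of the resulting arithmetic (the real codegen handles the zero case with a predicate rather than a branch):

    #include <stdint.h>
    #include <string.h>

    /* fsign(x) * s computed by flipping s's sign bit when x is negative. */
    static float fsign_scaled(float x, float s)
    {
       if (x != x || x == 0.0f)
          return 0.0f * x;                    /* 0, -0, or NaN pass-through */

       uint32_t xbits, sbits;
       memcpy(&xbits, &x, sizeof(xbits));
       memcpy(&sbits, &s, sizeof(sbits));

       sbits ^= xbits & 0x80000000u;          /* flip s's sign if x < 0 */

       float out;
       memcpy(&out, &sbits, sizeof(out));
       return out;
    }
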
Ian Romanick
ad98fbc217 intel/fs: Refactor code generation for nir_op_fsign to its own function
v2: Call emit_fsign from inside the existing switch statement.
Suggested by Matt.

Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-18 12:37:48 -07:00
Ian Romanick
90430d0488 intel/fs: Eliminate dead code first
This simplifies the later patch "i965/fs: Generate better code for fsign
multiplied by a value".

shader-db results:

Broadwell and Skylake had similar results. (Skylake shown)
total cycles in shared programs: 372808735 -> 372809551 (<.01%)
cycles in affected programs: 1519520 -> 1520336 (0.05%)
helped: 243
HURT: 277
helped stats (abs) min: 1 max: 226 x̄: 34.05 x̃: 5
helped stats (rel) min: 0.01% max: 13.88% x̄: 1.46% x̃: 0.27%
HURT stats (abs)   min: 1 max: 1810 x̄: 32.82 x̃: 5
HURT stats (rel)   min: 0.01% max: 16.03% x̄: 1.56% x̃: 0.29%
95% mean confidence interval for cycles value: -7.18 10.32
95% mean confidence interval for cycles %-change: -0.17% 0.46%
Inconclusive result (value mean confidence interval includes 0).

Sandy Bridge, Haswell and Ivy Bridge had similar results. (Sandy Bridge shown)
total cycles in shared programs: 155091458 -> 155091109 (<.01%)
cycles in affected programs: 370797 -> 370448 (-0.09%)
helped: 24
HURT: 36
helped stats (abs) min: 1 max: 331 x̄: 103.17 x̃: 41
helped stats (rel) min: 0.02% max: 7.70% x̄: 2.07% x̃: 0.56%
HURT stats (abs)   min: 1 max: 291 x̄: 59.08 x̃: 10
HURT stats (rel)   min: 0.02% max: 5.29% x̄: 1.02% x̃: 0.15%
95% mean confidence interval for cycles value: -37.92 26.28
95% mean confidence interval for cycles %-change: -0.88% 0.45%
Inconclusive result (value mean confidence interval includes 0).

Iron Lake and GM45 had similar results. (GM45 shown)
total cycles in shared programs: 129133970 -> 129133978 (<.01%)
cycles in affected programs: 111966 -> 111974 (<.01%)
helped: 3
HURT: 1
helped stats (abs) min: 2 max: 4 x̄: 2.67 x̃: 2
helped stats (rel) min: <.01% max: <.01% x̄: <.01% x̃: <.01%
HURT stats (abs)   min: 16 max: 16 x̄: 16.00 x̃: 16
HURT stats (rel)   min: 0.07% max: 0.07% x̄: 0.07% x̃: 0.07%
95% mean confidence interval for cycles value: -12.93 16.93
95% mean confidence interval for cycles %-change: -0.05% 0.08%
Inconclusive result (value mean confidence interval includes 0).

Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-18 12:37:48 -07:00
Jason Ekstrand
c6463f8ac2 nir: Add a nir_src_as_intrinsic() helper
Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-18 17:12:44 +00:00
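
Roughly what such a helper looks like, following the existing nir_src_as_* pattern; this is a sketch based on the NIR data structures of this era, not a verbatim copy of the patch.

    #include "nir.h"

    /* Sketch: return the intrinsic that defines an SSA source, or NULL if
     * the source is not SSA or is defined by another kind of instruction. */
    static inline nir_intrinsic_instr *
    nir_src_as_intrinsic_sketch(nir_src src)
    {
       if (!src.is_ssa ||
           src.ssa->parent_instr->type != nir_instr_type_intrinsic)
          return NULL;

       return nir_instr_as_intrinsic(src.ssa->parent_instr);
    }
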
Jason Ekstrand
85c35885b3 nir: Rework nir_src_as_alu_instr to not take a pointer
Other nir_src_as_* functions just take a nir_src.  It's not that much
more memory copying and the constness preserving really isn't worth the
cognitive dissonance.

Reviewed-by: Eric Anholt <eric@anholt.net>
Reviewed-by: Caio Marcelo de Oliveira Filho <caio.oliveira@intel.com>
2019-04-18 17:12:44 +00:00
Iago Toral Quiroga
8ed6d74c92 intel/compiler: validate region restrictions for mixed float mode
v2:
 - Adapted unit tests to make them consistent with the changes done
   to the validation of half-float conversions.

v3 (Curro):
- Check all the accumulators
- Constify declarations
- Do not check src1 type in single-source instructions.
- Check all instructions that read the accumulator (either implicitly or explicitly)
- Check restrictions in src1 too.
- Merge conditional block
- Add invalid test case.

v4 (Curro):
- Assert on 3-src instructions, as they are not validated.
- Get rid of types_are_mixed_float(), as we know the instruction is mixed
  float at that point.
- Remove conditions from not verified case.
- Fix brackets on conditional.

Reviewed-by: Francisco Jerez <currojerez@riseup.net>
2019-04-18 13:22:46 +02:00
Iago Toral Quiroga
58d6417e59 intel/compiler: validate conversions between 64-bit and 8-bit types
v2:
 - Add some tests with UB type too (Jason)

v3:
 - Consider implicit conversions from 2-src instructions too (Curro).

v4:
 - Do not check src1 type in single-source instructions (Curro).

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v2)
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
7376d57a9c intel/compiler: validate region restrictions for half-float conversions
v2:
 - Consider implicit conversions in 2-src instructions too (Curro)
 - For restrictions that involve destination stride requirements
   only validate them for Align1, since Align16 always requires
   packed data.
 - Skip general rule for the dst/execution type size ratio for
   mixed float instructions on CHV and SKL+, since these have their own
   set of rules that will be validated separately.

v3 (Curro):
 - Do not check src1 type in single-source instructions.
 - Check restriction on src1.
 - Remove invalid test.

Reviewed-by: Francisco Jerez <currojerez@riseup.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
6ff52f0628 intel/compiler: also set F execution type for mixed float mode in BDW
The section 'Execution Data Types' of 3D Media GPGPU volume, which
describes execution types, is exactly the same in BDW and SKL+.

Also, this section states that there is a single execution type, so it
makes sense that this is the wider of the two floating point types
involved in mixed float mode, which is what we do for SKL+ and CHV.

v2:
 - Make sure we also account for the destination type in mixed mode (Curro).

Acked-by: Francisco Jerez <currojerez@riseup.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
100debc3c9 intel/compiler: implement SIMD16 restrictions for mixed-float instructions
v2: f32to16/f16to32 can use a :W destination (Curro)
v3: check destination is packed (Curro).

Reviewed-by: Francisco Jerez <currojerez@riseup.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
6d87c651c9 intel/compiler: skip MAD algebraic optimization for half-float or mixed mode
It is very likely that this optimization is never useful and we'll probably
just end up removing it, so let's not bother adding more cases to it for
now.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
64b93292ac intel/compiler: remove inexact algebraic optimizations from the backend
NIR already has these and correctly considers exact/inexact qualification,
whereas the backend doesn't and can apply the optimizations where it
shouldn't. This happened to be the case in a handful of Tomb Raider shaders,
where NIR would skip the optimizations because of a precise qualification
but the backend would then (incorrectly) apply them anyway.

Besides this, considering that we are not emitting much math in the backend
these days it is unlikely that these optimizations are useful in general. A
shader-db run confirms that MAD and LRP optimizations, for example, were only
being triggered in cases where NIR would skip them due to precise
requirements, so in the near future we might want to remove more of these,
but for now we just remove the ones that are not completely correct.

Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
ddd1706ab3 intel/compiler: fix cmod propagation for non 32-bit types
v2:
 - Do not propagate if the bit-size changes

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
66002eeebe intel/compiler: add a brw_reg_type_is_integer helper
v2:
 - Fixed typo: meant BRW_REGISTER_TYPE_UB instead of BRW_REGISTER_TYPE_UV

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
2019-04-18 11:05:18 +02:00
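
The helper is essentially a switch over the integer register types; a sketch of its likely shape (the enum values are the existing BRW_REGISTER_TYPE_* names, but the body is reconstructed, not copied from the patch).

    #include <stdbool.h>
    #include "brw_reg_type.h"

    /* Sketch: true for any integer brw_reg_type, from 8-bit up to 64-bit. */
    static bool brw_reg_type_is_integer_sketch(enum brw_reg_type type)
    {
       switch (type) {
       case BRW_REGISTER_TYPE_Q:  case BRW_REGISTER_TYPE_UQ:
       case BRW_REGISTER_TYPE_D:  case BRW_REGISTER_TYPE_UD:
       case BRW_REGISTER_TYPE_W:  case BRW_REGISTER_TYPE_UW:
       case BRW_REGISTER_TYPE_B:  case BRW_REGISTER_TYPE_UB:
          return true;
       default:
          return false;
       }
    }
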
Iago Toral Quiroga
44e1affaec intel/compiler: implement is_zero, is_one, is_negative_one for 8-bit/16-bit
There are no 8-bit immediates, so assert in that case.
16-bit immediates are replicated in each word of a 32-bit immediate, so
we only need to check the lower 16 bits.

v2:
 - Fix is_zero with half-float to consider -0 as well (Jason).
 - Fix is_negative_one for word type.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
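
Because 16-bit immediates are stored replicated in both halves of the 32-bit immediate field, checking the low word is enough; a standalone sketch of the kind of checks involved (the -0.0 half-float case is the one fixed in v2):

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: tests on a 16-bit immediate held in the low word of
     * the 32-bit immediate field (the high word is just a copy). */
    static bool hf_imm_is_zero(uint32_t imm_ud)
    {
       /* Half-float: +0.0 is 0x0000 and -0.0 is 0x8000; ignore the sign bit. */
       return (imm_ud & 0x7fffu) == 0;
    }

    static bool w_imm_is_negative_one(uint32_t imm_ud)
    {
       /* Signed word: -1 is 0xffff in the low 16 bits. */
       return (imm_ud & 0xffffu) == 0xffffu;
    }
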
Iago Toral Quiroga
e64be391dd intel/compiler: generalize the combine constants pass
At the very least we need it to handle HF too, since we are doing
constant propagation for MAD and LRP, which relies on this pass
to promote the immediates to a GRF in the end. Ideally, though,
we want it to support even more types so we can take advantage
of it to improve register pressure in some scenarios.

v2 (Jason):
 - Support 64-bit types too.
 - Check if we need to set the half-float flag if the immediate already
   existed.
 - Multiply the size of the immediate by the width of the copy

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
fb990bd76e intel/eu: force stride of 2 on NULL register for Byte instructions
The hardware only allows a stride of 1 on a Byte destination for raw
byte MOV instructions. This is required even when the destination
is the NULL register.

Rather than making sure that we emit a proper NULL:B destination
every time we need one, just fix it at emission time.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
ce68a061de intel/compiler: ask for an integer type if requesting an 8-bit type
v2:
  - Assign BRW_REGISTER_TYPE_B directly for 8-bit (Jason)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
092b147774 intel/compiler: rework conversion opcodes
Now that we have the regioning lowering pass we can just put all of these
opcodes together in a single block and we can just assert on the few cases
of conversion instructions that are not supported in hardware and that should
be lowered in brw_nir_lower_conversions.

The only cases we still handle separately are the conversions from float
to half-float, since the rounding variants would need to fall through and we
are already doing this for boolean opcodes (since they need to negate).
There is also a large comment about these opcodes that we probably want to
keep, so it is just easier to keep them separate.

Suggested-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
472244b374 intel/compiler: activate 16-bit bit-size lowerings also for 8-bit
In particular, we need the same lowerings we use for 16-bit
integers.

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
2019-04-18 11:05:18 +02:00
Iago Toral Quiroga
40b3abb4d1 intel/compiler: split is_partial_write() into two variants
This function is used in two different scenarios that for 32-bit
instructions are the same, but for 16-bit instructions are not.

One scenario is that in which we are working at a SIMD8 register
level and we need to know if a register is fully defined or written.
This is useful, for example, in the context of liveness analysis or
register allocation, where we work with units of registers.

The other scenario is that in which we want to know if an instruction
is writing a full scalar component or just some subset of it. This is
useful, for example, in the context of some optimization passes
like copy propagation.

For 32-bit instructions (or larger), a SIMD8 dispatch will always write
at least a full SIMD8 register (32B) if the write is not partial. The
function is_partial_write() checks this to determine if we have a partial
write. However, when we deal with 16-bit instructions, that logic disables
some optimizations that should be safe. For example, a SIMD8 16-bit MOV will
only update half of a SIMD register, but it is still a complete write of the
variable for a SIMD8 dispatch, so we should not prevent copy propagation in
this scenario because we don't write all 32 bytes in the SIMD register
or because the write starts at offset 16B (where we pack components Y or
W of 16-bit vectors).

This is a problem for SIMD8 executions (VS, TCS, TES, GS) of 16-bit
instructions, which lose a number of optimizations because of this, the most
important of which is copy propagation.

This patch splits is_partial_write() into is_partial_reg_write(), which
represents the current is_partial_write(), useful for things like
liveness analysis, and is_partial_var_write(), which considers
the dispatch size to check if we are writing a full variable (rather
than a full register) to decide if the write is partial or not, which
is what we really want in many optimization passes.

Then the patch goes on and rewrites all uses of is_partial_write() to use
one or the other version. Specifically, we use is_partial_var_write()
in the following places: copy propagation, cmod propagation, common
subexpression elimination, saturate propagation and sel peephole.

Notice that the semantics of is_partial_var_write() exactly match the
current implementation of is_partial_write() for anything that is
32-bit or larger, so no changes are expected for 32-bit instructions.

Testing against ~5000 CTS tests involving 16-bit instructions produced
the following changes in instruction counts:

            Patched  |     Master    |    %    |
================================================
SIMD8  |    621,900  |    706,721    | -12.00% |
================================================
SIMD16 |     93,252  |     93,252    |   0.00% |
================================================

As expected, the change only affects SIMD8 dispatches.

Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com>
2019-04-18 11:05:18 +02:00
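
A simplified, self-contained sketch of the two variants described above (stand-in struct and field names, not the real fs_inst): the register-granular check asks whether whole 32B registers are written, while the variable-granular check asks whether a whole SIMD-width variable is written.

    #include <stdbool.h>
    #include <stdint.h>

    #define REG_SIZE 32u   /* one physical register, in bytes */

    /* Stand-in for the parts of a destination write that matter here. */
    struct write_info {
       bool     predicated;      /* only some channels enabled */
       uint32_t dst_offset;      /* byte offset of the write within the VGRF */
       uint32_t bytes_written;   /* total bytes written */
       uint32_t exec_size;       /* channels in the dispatch (8, 16, ...) */
       uint32_t type_size;       /* bytes per channel (2 for HF/W, 4 for F/D) */
    };

    /* Register-granular: is anything less than whole registers written? */
    static bool is_partial_reg_write(const struct write_info *w)
    {
       return w->predicated ||
              w->dst_offset % REG_SIZE != 0 ||
              w->bytes_written % REG_SIZE != 0;
    }

    /* Variable-granular: is anything less than a whole SIMD-width value
     * written?  A SIMD8 16-bit MOV writes 16B, which is a partial register
     * write but a complete variable write. */
    static bool is_partial_var_write(const struct write_info *w)
    {
       uint32_t var_size = w->exec_size * w->type_size;
       return w->predicated ||
              w->dst_offset % var_size != 0 ||
              w->bytes_written % var_size != 0;
    }
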
Iago Toral Quiroga
0986199b31 intel/compiler: workaround for SIMD8 half-float MAD in gen8
Empirical testing shows that gen8 has a bug where MAD instructions with
a half-float source starting at a non-zero offset fail to execute
properly.

This scenario usually happened in SIMD8 executions, where we used to
pack vector components Y and W in the second half of SIMD registers
(therefore, with a 16B offset). It looks like we are not currently doing
this any more but this would handle the situation properly if we ever
happen to produce code like this again.

v2 (Jason):
 - Move this workaround to the lower_regioning pass as an additional case
   to has_invalid_src_region()
 - Do not apply the workaround if the stride of the source operand is 0;
   testing suggests the problem doesn't exist in that case.

v3 (Jason):
 - We want offset % REG_SIZE > 0, not just offset > 0
 - Use a helper to compute the offset

Reviewed-by: Topi Pohjolainen <topi.pohjolainen@intel.com> (v1)
2019-04-18 11:05:18 +02:00
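
The shape of the extra has_invalid_src_region()-style check, using simplified stand-in fields rather than the real fs_inst/fs_reg types: flag a gen8 MAD whose half-float source starts at a non-zero offset within its register (and is not a stride-0 scalar), so the regioning lowering pass copies it through a properly aligned temporary.

    #include <stdbool.h>

    #define REG_SIZE 32u   /* bytes per physical register */

    /* Stand-in for the relevant bits of a MAD source operand. */
    struct src_info {
       unsigned type_size;   /* bytes per element; 2 for half-float */
       unsigned offset;      /* byte offset of the source within its VGRF */
       unsigned stride;      /* element stride; 0 means a scalar source */
    };

    /* Illustrative only: does this gen8 MAD source need to be lowered? */
    static bool gen8_hf_mad_src_needs_lowering(int gen, bool is_mad,
                                               const struct src_info *src)
    {
       return gen == 8 && is_mad &&
              src->type_size == 2 &&
              src->stride != 0 &&
              src->offset % REG_SIZE != 0;
    }
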
Iago Toral Quiroga
aaae24179f intel/compiler: fix ddy for half-float in Broadwell
Broadwell has restrictions on Align16 half-float that make the
Align16 implementation of this invalid on that platform.
Use the gen11 path for this instead, which uses Align1 mode.

The restriction is not present in cherryview, gen9 or gen10, where
the Align16 implementation seems to work just fine.

v2:
 - Rework the comment in the code, move the PRM citation from the
   commit message to the comment in the code (Matt)
 - Cherryview isn't affected, only Broadwell (Matt)

Reviewed-by: Jason Ekstrand <jason@jlekstrand.net> (v1)
Reviewed-by: Matt Turner <mattst88@gmail.com>
2019-04-18 11:05:18 +02:00