third_party_mesa3d/src/compiler/nir/nir_search.c

/*
* Copyright © 2014 Intel Corporation
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice (including the next
* paragraph) shall be included in all copies or substantial portions of the
* Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include "nir_search.h"
#include <inttypes.h>
#include "util/half_float.h"
#include "nir_builder.h"
#include "nir_worklist.h"
/* This should be the same as nir_search_max_comm_ops in nir_algebraic.py. */
#define NIR_SEARCH_MAX_COMM_OPS 8
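/* Illustrative sketch, not part of the original source: comm_op_direction in
* struct match_state below carries one bit per commutative sub-expression of
* a search pattern, so a caller that wants to try every source ordering can
* (under that assumption) simply enumerate the bit patterns:
*
*    for (unsigned comb = 0; comb < (1u << num_comm_ops); comb++) {
*       state.comm_op_direction = comb;
*       if (match_expression(...))
*          break;
*    }
*
* which is why patterns are capped at NIR_SEARCH_MAX_COMM_OPS commutative
* operations.
*/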
struct match_state {
bool inexact_match;
bool has_exact_alu;
uint8_t comm_op_direction;
unsigned variables_seen;
/* Used for running the automaton on newly-constructed instructions. */
struct util_dynarray *states;
const struct per_op_table *pass_op_table;
const nir_algebraic_table *table;
nir_alu_src variables[NIR_SEARCH_MAX_VARIABLES];
struct hash_table *range_ht;
};
static bool
match_expression(const nir_algebraic_table *table, const nir_search_expression *expr, nir_alu_instr *instr,
unsigned num_components, const uint8_t *swizzle,
struct match_state *state);
static bool
nir_algebraic_automaton(nir_instr *instr, struct util_dynarray *states,
const struct per_op_table *pass_op_table);
static const uint8_t identity_swizzle[NIR_MAX_VEC_COMPONENTS] = {
0, 1, 2, 3,
4, 5, 6, 7,
8, 9, 10, 11,
12, 13, 14, 15,
};
/**
* Check if a source produces a value of the given type.
*
* Used for satisfying 'a@type' constraints.
*/
static bool
src_is_type(nir_src src, nir_alu_type type)
{
assert(type != nir_type_invalid);
if (src.ssa->parent_instr->type == nir_instr_type_alu) {
nir_alu_instr *src_alu = nir_instr_as_alu(src.ssa->parent_instr);
nir_alu_type output_type = nir_op_infos[src_alu->op].output_type;
if (type == nir_type_bool) {
switch (src_alu->op) {
case nir_op_iand:
case nir_op_ior:
case nir_op_ixor:
return src_is_type(src_alu->src[0].src, nir_type_bool) &&
src_is_type(src_alu->src[1].src, nir_type_bool);
case nir_op_inot:
return src_is_type(src_alu->src[0].src, nir_type_bool);
default:
break;
}
}
return nir_alu_type_get_base_type(output_type) == type;
} else if (src.ssa->parent_instr->type == nir_instr_type_intrinsic) {
nir_intrinsic_instr *intr = nir_instr_as_intrinsic(src.ssa->parent_instr);
if (type == nir_type_bool) {
return intr->intrinsic == nir_intrinsic_load_front_face ||
intr->intrinsic == nir_intrinsic_load_helper_invocation;
}
}
/* don't know */
return false;
}
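/* Example (illustrative): under the rules above, an 'a@bool' constraint is
* satisfied by the result of a comparison such as nir_op_flt (whose output
* base type is nir_type_bool), by iand/ior/ixor of two boolean sources, by
* inot of one, or by load_front_face / load_helper_invocation; anything the
* function cannot prove is reported as "don't know" (false).
*/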
static bool
nir_op_matches_search_op(nir_op nop, uint16_t sop)
{
if (sop <= nir_last_opcode)
return nop == sop;
#define MATCH_FCONV_CASE(op) \
case nir_search_op_##op: \
return nop == nir_op_##op##16 || \
nop == nir_op_##op##32 || \
nop == nir_op_##op##64;
#define MATCH_ICONV_CASE(op) \
case nir_search_op_##op: \
return nop == nir_op_##op##8 || \
nop == nir_op_##op##16 || \
nop == nir_op_##op##32 || \
nop == nir_op_##op##64;
switch (sop) {
MATCH_FCONV_CASE(i2f)
MATCH_FCONV_CASE(u2f)
MATCH_FCONV_CASE(f2f)
MATCH_ICONV_CASE(f2u)
MATCH_ICONV_CASE(f2i)
MATCH_ICONV_CASE(u2u)
MATCH_ICONV_CASE(i2i)
MATCH_FCONV_CASE(b2f)
MATCH_ICONV_CASE(b2i)
default:
unreachable("Invalid nir_search_op");
}
#undef MATCH_FCONV_CASE
#undef MATCH_ICONV_CASE
}
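/* Example (illustrative): a sizeless search opcode such as nir_search_op_b2f
* matches any of nir_op_b2f16, nir_op_b2f32 and nir_op_b2f64, while a plain
* NIR opcode (sop <= nir_last_opcode) only matches itself.
*/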
uint16_t
nir_search_op_for_nir_op(nir_op nop)
{
#define MATCH_FCONV_CASE(op) \
case nir_op_##op##16: \
case nir_op_##op##32: \
case nir_op_##op##64: \
return nir_search_op_##op;
#define MATCH_ICONV_CASE(op) \
case nir_op_##op##8: \
case nir_op_##op##16: \
case nir_op_##op##32: \
case nir_op_##op##64: \
return nir_search_op_##op;
switch (nop) {
MATCH_FCONV_CASE(i2f)
MATCH_FCONV_CASE(u2f)
MATCH_FCONV_CASE(f2f)
MATCH_ICONV_CASE(f2u)
MATCH_ICONV_CASE(f2i)
MATCH_ICONV_CASE(u2u)
MATCH_ICONV_CASE(i2i)
MATCH_FCONV_CASE(b2f)
MATCH_ICONV_CASE(b2i)
default:
return nop;
}
#undef MATCH_FCONV_CASE
#undef MATCH_ICONV_CASE
}
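/* Example (illustrative): the reverse mapping above collapses nir_op_f2f16,
* nir_op_f2f32 and nir_op_f2f64 to nir_search_op_f2f; opcodes without a
* sizeless search variant pass through unchanged.
*/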
static nir_op
nir_op_for_search_op(uint16_t sop, unsigned bit_size)
{
if (sop <= nir_last_opcode)
return sop;
#define RET_FCONV_CASE(op) \
case nir_search_op_##op: \
switch (bit_size) { \
case 16: \
return nir_op_##op##16; \
case 32: \
return nir_op_##op##32; \
case 64: \
return nir_op_##op##64; \
default: \
unreachable("Invalid bit size"); \
}
#define RET_ICONV_CASE(op) \
case nir_search_op_##op: \
switch (bit_size) { \
case 8: \
return nir_op_##op##8; \
case 16: \
return nir_op_##op##16; \
case 32: \
return nir_op_##op##32; \
case 64: \
return nir_op_##op##64; \
default: \
unreachable("Invalid bit size"); \
}
switch (sop) {
RET_FCONV_CASE(i2f)
RET_FCONV_CASE(u2f)
RET_FCONV_CASE(f2f)
RET_ICONV_CASE(f2u)
RET_ICONV_CASE(f2i)
RET_ICONV_CASE(u2u)
RET_ICONV_CASE(i2i)
RET_FCONV_CASE(b2f)
RET_ICONV_CASE(b2i)
default:
unreachable("Invalid nir_search_op");
}
#undef RET_FCONV_CASE
#undef RET_ICONV_CASE
}
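/* Example (illustrative): resolving a sizeless opcode back to NIR needs a
* concrete bit size, e.g. nir_op_for_search_op(nir_search_op_f2i, 32) yields
* nir_op_f2i32, and an unsupported size hits the unreachable() above.
*/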
static bool
match_value(const nir_algebraic_table *table,
const nir_search_value *value, nir_alu_instr *instr, unsigned src,
unsigned num_components, const uint8_t *swizzle,
struct match_state *state)
{
uint8_t new_swizzle[NIR_MAX_VEC_COMPONENTS];
/* If the source is an explicitly sized source, then we need to reset
* both the number of components and the swizzle.
*/
if (nir_op_infos[instr->op].input_sizes[src] != 0) {
num_components = nir_op_infos[instr->op].input_sizes[src];
swizzle = identity_swizzle;
}
for (unsigned i = 0; i < num_components; ++i)
new_swizzle[i] = instr->src[src].swizzle[swizzle[i]];
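/* Worked example (illustrative): if the caller is matching component 1 of
* this instruction (swizzle = {1}) and the source reads its operand as .wzyx
* (instr->src[src].swizzle = {3, 2, 1, 0}), then
* new_swizzle[0] = instr->src[src].swizzle[1] = 2, i.e. the pattern actually
* refers to component 2 of the source SSA value.
*/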
/* If the value has a specific bit size and it doesn't match, bail */
if (value->bit_size > 0 &&
nir_src_bit_size(instr->src[src].src) != value->bit_size)
return false;
switch (value->type) {
case nir_search_value_expression:
if (instr->src[src].src.ssa->parent_instr->type != nir_instr_type_alu)
return false;
return match_expression(table, nir_search_value_as_expression(value),
nir_instr_as_alu(instr->src[src].src.ssa->parent_instr),
num_components, new_swizzle, state);
case nir_search_value_variable: {
nir_search_variable *var = nir_search_value_as_variable(value);
assert(var->variable < NIR_SEARCH_MAX_VARIABLES);
if (state->variables_seen & (1 << var->variable)) {
if (state->variables[var->variable].src.ssa != instr->src[src].src.ssa)
return false;
for (unsigned i = 0; i < num_components; ++i) {
if (state->variables[var->variable].swizzle[i] != new_swizzle[i])
return false;
}
return true;
} else {
if (var->is_constant &&
instr->src[src].src.ssa->parent_instr->type != nir_instr_type_load_const)
return false;
if (var->cond_index != -1 && !table->variable_cond[var->cond_index](state->range_ht, instr,
src, num_components, new_swizzle))
return false;
if (var->type != nir_type_invalid &&
!src_is_type(instr->src[src].src, var->type))
return false;
state->variables_seen |= (1 << var->variable);
state->variables[var->variable].src = instr->src[src].src;
for (unsigned i = 0; i < NIR_MAX_VEC_COMPONENTS; ++i) {
if (i < num_components)
state->variables[var->variable].swizzle[i] = new_swizzle[i];
else
state->variables[var->variable].swizzle[i] = 0;
}
return true;
}
}
case nir_search_value_constant: {
nir_search_constant *const_val = nir_search_value_as_constant(value);
if (!nir_src_is_const(instr->src[src].src))
return false;
switch (const_val->type) {
case nir_type_float: {
nir_load_const_instr *const load =
nir_instr_as_load_const(instr->src[src].src.ssa->parent_instr);
/* There are 8-bit and 1-bit integer types, but there are no 8-bit or
* 1-bit float types. This prevents potential assertion failures in
* nir_src_comp_as_float.
*/
if (load->def.bit_size < 16)
return false;
for (unsigned i = 0; i < num_components; ++i) {
double val = nir_src_comp_as_float(instr->src[src].src,
new_swizzle[i]);
if (val != const_val->data.d)
return false;
}
return true;
}
case nir_type_int:
case nir_type_uint:
case nir_type_bool: {
unsigned bit_size = nir_src_bit_size(instr->src[src].src);
uint64_t mask = u_uintN_max(bit_size);
for (unsigned i = 0; i < num_components; ++i) {
uint64_t val = nir_src_comp_as_uint(instr->src[src].src,
new_swizzle[i]);
if ((val & mask) != (const_val->data.u & mask))
return false;
}
return true;
}
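/* Worked example (illustrative) for the case above: pattern constants are
* stored as 64-bit values, so matching the constant -1
* (const_val->data.u == 0xffffffffffffffff) against an 8-bit load_const of
* 0xff masks both sides with u_uintN_max(8) == 0xff and compares
* 0xff == 0xff, which matches as intended.
*/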
default:
unreachable("Invalid alu source type");
}
}
default:
unreachable("Invalid search value type");
}
}
static bool
match_expression(const nir_algebraic_table *table, const nir_search_expression *expr, nir_alu_instr *instr,
unsigned num_components, const uint8_t *swizzle,
struct match_state *state)
{
if (expr->cond_index != -1 && !table->expression_cond[expr->cond_index](instr))
return false;
if (!nir_op_matches_search_op(instr->op, expr->opcode))
return false;
if (expr->value.bit_size > 0 &&
instr->def.bit_size != expr->value.bit_size)
return false;
state->inexact_match = expr->inexact || state->inexact_match;
state->has_exact_alu = (instr->exact && !expr->ignore_exact) || state->has_exact_alu;
if (state->inexact_match && state->has_exact_alu)
return false;
assert(nir_op_infos[instr->op].num_inputs > 0);
/* If we have an explicitly sized destination, we can only handle the
* identity swizzle. While dot(vec3(a, b, c).zxy) is a valid
* expression, we don't have the information right now to propagate that
* swizzle through. We can only properly propagate swizzles if the
* instruction is vectorized.
*/
if (nir_op_infos[instr->op].output_size != 0) {
for (unsigned i = 0; i < num_components; i++) {
if (swizzle[i] != i)
return false;
}
}
/* If this is a commutative expression and it's one of the first few, look
* up its direction for the current search operation. We'll use that value
* to possibly flip the sources for the match.
*/
unsigned comm_op_flip =
(expr->comm_expr_idx >= 0 &&
expr->comm_expr_idx < NIR_SEARCH_MAX_COMM_OPS)
? ((state->comm_op_direction >> expr->comm_expr_idx) & 1)
: 0;
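/* Example (illustrative): if this expression was assigned comm_expr_idx = 2
* and the current trial uses comm_op_direction = 0b00000100, then
* comm_op_flip = 1 and the first two sources below are matched in swapped
* order (i ^ 1).
*/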
bool matched = true;
for (unsigned i = 0; i < nir_op_infos[instr->op].num_inputs; i++) {
/* 2src_commutative instructions that have 3 sources are only commutative
* in the first two sources. Source 2 is always source 2.
*/
if (!match_value(table, &state->table->values[expr->srcs[i]].value, instr,
i < 2 ? i ^ comm_op_flip : i,
num_components, swizzle, state)) {
matched = false;
break;
}
}
return matched;
}
static unsigned
replace_bitsize(const nir_search_value *value, unsigned search_bitsize,
struct match_state *state)
{
if (value->bit_size > 0)
return value->bit_size;
if (value->bit_size < 0)
return nir_src_bit_size(state->variables[-value->bit_size - 1].src);
return search_bitsize;
}
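/* Example (illustrative) of the bit_size encoding consumed above: a positive
* value is an explicit bit size, 0 means "inherit the matched instruction's
* bit size" (search_bitsize), and a negative value -n means "same bit size as
* variable n-1", e.g. bit_size == -1 reads
* nir_src_bit_size(state->variables[0].src).
*/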
static nir_alu_src
construct_value(nir_builder *build,
const nir_search_value *value,
unsigned num_components, unsigned search_bitsize,
struct match_state *state,
nir_instr *instr)
{
switch (value->type) {
case nir_search_value_expression: {
const nir_search_expression *expr = nir_search_value_as_expression(value);
unsigned dst_bit_size = replace_bitsize(value, search_bitsize, state);
nir_op op = nir_op_for_search_op(expr->opcode, dst_bit_size);
if (nir_op_infos[op].output_size != 0)
num_components = nir_op_infos[op].output_size;
nir_alu_instr *alu = nir_alu_instr_create(build->shader, op);
nir_def_init(&alu->instr, &alu->def, num_components,
dst_bit_size);
/* We have no way of knowing what values in a given search expression
* map to a particular replacement value. Therefore, if the
* expression we are replacing has any exact values, the entire
* replacement should be exact.
*/
alu->exact = state->has_exact_alu || expr->exact;
for (unsigned i = 0; i < nir_op_infos[op].num_inputs; i++) {
/* If the source is an explicitly sized source, then we need to reset
* the number of components to match.
*/
if (nir_op_infos[alu->op].input_sizes[i] != 0)
num_components = nir_op_infos[alu->op].input_sizes[i];
alu->src[i] = construct_value(build, &state->table->values[expr->srcs[i]].value,
num_components, search_bitsize,
state, instr);
}
nir_builder_instr_insert(build, &alu->instr);
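/* Descriptive note (assumed from how states/pass_op_table are used here):
* the automaton state array is indexed by SSA def index, so the newly built
* instruction gets a fresh slot (initial state 0) and is run through the
* automaton immediately, letting later matches in the same pass see the
* replacement.
*/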
assert(alu->def.index ==
util_dynarray_num_elements(state->states, uint16_t));
util_dynarray_append(state->states, uint16_t, 0);
nir_algebraic_automaton(&alu->instr, state->states, state->pass_op_table);
nir_alu_src val;
val.src = nir_src_for_ssa(&alu->def);
memcpy(val.swizzle, identity_swizzle, sizeof val.swizzle);
return val;
}
case nir_search_value_variable: {
const nir_search_variable *var = nir_search_value_as_variable(value);
assert(state->variables_seen & (1 << var->variable));
nir_alu_src val = { NIR_SRC_INIT };
nir_alu_src_copy(&val, &state->variables[var->variable]);
assert(!var->is_constant);
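   /* Compose the swizzle requested by the replacement expression with the
    * swizzle that was recorded when this variable was matched.
    */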
for (unsigned i = 0; i < NIR_MAX_VEC_COMPONENTS; i++)
val.swizzle[i] = state->variables[var->variable].swizzle[var->swizzle[i]];
return val;
}
case nir_search_value_constant: {
const nir_search_constant *c = nir_search_value_as_constant(value);
unsigned bit_size = replace_bitsize(value, search_bitsize, state);
nir_def *cval;
switch (c->type) {
case nir_type_float:
cval = nir_imm_floatN_t(build, c->data.d, bit_size);
break;
case nir_type_int:
case nir_type_uint:
cval = nir_imm_intN_t(build, c->data.i, bit_size);
break;
case nir_type_bool:
cval = nir_imm_boolN_t(build, c->data.u, bit_size);
break;
default:
unreachable("Invalid alu source type");
}
assert(cval->index ==
util_dynarray_num_elements(state->states, uint16_t));
util_dynarray_append(state->states, uint16_t, 0);
nir_algebraic_automaton(cval->parent_instr, state->states,
state->pass_op_table);
nir_alu_src val;
val.src = nir_src_for_ssa(cval);
memset(val.swizzle, 0, sizeof val.swizzle);
return val;
}
default:
unreachable("Invalid search value type");
}
}
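/* Debug helper: print a nir_search value (constant, variable, or expression)
 * to stderr, roughly mirroring the pattern syntax used by nir_algebraic.py.
 */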
UNUSED static void
dump_value(const nir_algebraic_table *table, const nir_search_value *val)
{
switch (val->type) {
case nir_search_value_constant: {
const nir_search_constant *sconst = nir_search_value_as_constant(val);
switch (sconst->type) {
case nir_type_float:
fprintf(stderr, "%f", sconst->data.d);
break;
case nir_type_int:
fprintf(stderr, "%" PRId64, sconst->data.i);
break;
case nir_type_uint:
fprintf(stderr, "0x%" PRIx64, sconst->data.u);
break;
case nir_type_bool:
fprintf(stderr, "%s", sconst->data.u != 0 ? "True" : "False");
break;
default:
unreachable("bad const type");
}
break;
}
case nir_search_value_variable: {
const nir_search_variable *var = nir_search_value_as_variable(val);
if (var->is_constant)
fprintf(stderr, "#");
fprintf(stderr, "%c", var->variable + 'a');
break;
}
case nir_search_value_expression: {
const nir_search_expression *expr = nir_search_value_as_expression(val);
fprintf(stderr, "(");
if (expr->inexact)
fprintf(stderr, "~");
switch (expr->opcode) {
#define CASE(n) \
case nir_search_op_##n: \
fprintf(stderr, #n); \
break;
CASE(b2f)
CASE(b2i)
CASE(i2i)
CASE(f2i)
CASE(i2f)
#undef CASE
default:
fprintf(stderr, "%s", nir_op_infos[expr->opcode].name);
}
unsigned num_srcs = 1;
if (expr->opcode <= nir_last_opcode)
num_srcs = nir_op_infos[expr->opcode].num_inputs;
for (unsigned i = 0; i < num_srcs; i++) {
fprintf(stderr, " ");
dump_value(table, &table->values[expr->srcs[i]].value);
}
fprintf(stderr, ")");
break;
}
}
if (val->bit_size > 0)
fprintf(stderr, "@%d", val->bit_size);
}
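/* Re-run the automaton on every instruction that uses instr's def and queue
 * the ones whose state changed, since only those can match new patterns.
 */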
static void
add_uses_to_worklist(nir_instr *instr,
nir_instr_worklist *worklist,
struct util_dynarray *states,
const struct per_op_table *pass_op_table)
{
nir_def *def = nir_instr_def(instr);
nir_foreach_use_safe(use_src, def) {
if (nir_algebraic_automaton(nir_src_parent_instr(use_src), states, pass_op_table))
nir_instr_worklist_push_tail(worklist, nir_src_parent_instr(use_src));
}
}
static void
nir_algebraic_update_automaton(nir_instr *new_instr,
nir_instr_worklist *algebraic_worklist,
struct util_dynarray *states,
const struct per_op_table *pass_op_table)
{
nir_instr_worklist *automaton_worklist = nir_instr_worklist_create();
/* Walk through the tree of uses of our new instruction's SSA value,
* recursively updating the automaton state until it stabilizes.
*/
add_uses_to_worklist(new_instr, automaton_worklist, states, pass_op_table);
nir_instr *instr;
while ((instr = nir_instr_worklist_pop_head(automaton_worklist))) {
nir_instr_worklist_push_tail(algebraic_worklist, instr);
add_uses_to_worklist(instr, automaton_worklist, states, pass_op_table);
}
nir_instr_worklist_destroy(automaton_worklist);
}
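/* Try to match `search` against `instr` and, on success, build `replace` in
 * its place: construct the replacement value, rewrite all uses of the old
 * def, update the automaton state of the affected users, and retire the old
 * instruction onto dead_instrs. Returns the replacement def, or NULL if the
 * expression did not match.
 */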
static nir_def *
nir_replace_instr(nir_builder *build, nir_alu_instr *instr,
struct hash_table *range_ht,
struct util_dynarray *states,
const nir_algebraic_table *table,
const nir_search_expression *search,
const nir_search_value *replace,
nir_instr_worklist *algebraic_worklist,
struct exec_list *dead_instrs)
{
uint8_t swizzle[NIR_MAX_VEC_COMPONENTS] = { 0 };
for (unsigned i = 0; i < instr->def.num_components; ++i)
swizzle[i] = i;
struct match_state state;
state.inexact_match = false;
state.has_exact_alu = false;
state.range_ht = range_ht;
state.pass_op_table = table->pass_op_table;
state.table = table;
STATIC_ASSERT(sizeof(state.comm_op_direction) * 8 >= NIR_SEARCH_MAX_COMM_OPS);
unsigned comm_expr_combinations =
1 << MIN2(search->comm_exprs, NIR_SEARCH_MAX_COMM_OPS);
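   /* Each commutative expression in the pattern contributes one bit: iterating
    * over every value of `comb` tries every combination of original and
    * swapped source order (capped at NIR_SEARCH_MAX_COMM_OPS expressions).
    */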
bool found = false;
for (unsigned comb = 0; comb < comm_expr_combinations; comb++) {
/* The bitfield of directions is just the current iteration. Hooray for
* binary.
*/
state.comm_op_direction = comb;
state.variables_seen = 0;
if (match_expression(table, search, instr,
instr->def.num_components,
swizzle, &state)) {
found = true;
break;
}
}
if (!found)
return NULL;
#if 0
fprintf(stderr, "matched: ");
   dump_value(table, &search->value);
   fprintf(stderr, " -> ");
   dump_value(table, replace);
fprintf(stderr, " ssa_%d\n", instr->def.index);
#endif
/* If the instruction at the root of the expression tree being replaced is
* a unary operation, insert the replacement instructions at the location
* of the source of the unary operation. Otherwise, insert the replacement
* instructions at the location of the expression tree root.
*
* For the unary operation case, this is done to prevent some spurious code
* motion that can dramatically extend live ranges. Imagine an expression
    * like -(A+B) where the addition and the negation are separated by flow
* control and thousands of instructions. If this expression is replaced
* with -A+-B, inserting the new instructions at the site of the negation
    * could extend the live range of A and B dramatically. This could increase
* register pressure and cause spilling.
*
* It may well be that moving instructions around is a good thing, but
* keeping algebraic optimizations and code motion optimizations separate
* seems safest.
*/
nir_alu_instr *const src_instr = nir_src_as_alu_instr(instr->src[0].src);
if (src_instr != NULL &&
(instr->op == nir_op_fneg || instr->op == nir_op_fabs ||
instr->op == nir_op_ineg || instr->op == nir_op_iabs ||
instr->op == nir_op_inot)) {
/* Insert new instructions *after*. Otherwise a hypothetical
* replacement fneg(X) -> fabs(X) would insert the fabs() instruction
* before X! This can also occur for things like fneg(X.wzyx) -> X.wzyx
* in vector mode. A move instruction to handle the swizzle will get
* inserted before X.
*
* This manifested in a single OpenGL ES 2.0 CTS vertex shader test on
       * older Intel GPUs that use vector-mode vertex processing.
*/
build->cursor = nir_after_instr(&src_instr->instr);
} else {
build->cursor = nir_before_instr(&instr->instr);
}
state.states = states;
nir_alu_src val = construct_value(build, replace,
instr->def.num_components,
instr->def.bit_size,
&state, &instr->instr);
/* Note that NIR builder will elide the MOV if it's a no-op, which may
* allow more work to be done in a single pass through algebraic.
*/
nir_def *ssa_val =
nir_mov_alu(build, val, instr->def.num_components);
if (ssa_val->index == util_dynarray_num_elements(states, uint16_t)) {
util_dynarray_append(states, uint16_t, 0);
nir_algebraic_automaton(ssa_val->parent_instr, states, table->pass_op_table);
}
/* Rewrite the uses of the old SSA value to the new one, and recurse
* through the uses updating the automaton's state.
*/
nir_def_rewrite_uses(&instr->def, ssa_val);
nir_algebraic_update_automaton(ssa_val->parent_instr, algebraic_worklist,
states, table->pass_op_table);
/* Nothing uses the instr any more, so drop it out of the program. Note
* that the instr may be in the worklist still, so we can't free it
* directly.
*/
assert(instr->instr.pass_flags == 0);
instr->instr.pass_flags = 1;
nir_instr_remove(&instr->instr);
exec_list_push_tail(dead_instrs, &instr->instr.node);
return ssa_val;
}
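/* Recompute the automaton state for one instruction from the states of its
 * sources. Returns true if the stored state changed, meaning users of this
 * instruction may now match different patterns.
 */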
static bool
nir_algebraic_automaton(nir_instr *instr, struct util_dynarray *states,
const struct per_op_table *pass_op_table)
{
switch (instr->type) {
case nir_instr_type_alu: {
nir_alu_instr *alu = nir_instr_as_alu(instr);
nir_op op = alu->op;
uint16_t search_op = nir_search_op_for_nir_op(op);
const struct per_op_table *tbl = &pass_op_table[search_op];
if (tbl->num_filtered_states == 0)
return false;
/* Calculate the index into the transition table. Note the index
* calculated must match the iteration order of Python's
* itertools.product(), which was used to emit the transition
* table.
*/
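      /* For example, a two-source op over N filtered states maps sources in
       * states (s0, s1) to index filter[s0] * N + filter[s1].
       */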
unsigned index = 0;
for (unsigned i = 0; i < nir_op_infos[op].num_inputs; i++) {
index *= tbl->num_filtered_states;
if (tbl->filter)
index += tbl->filter[*util_dynarray_element(states, uint16_t,
alu->src[i].src.ssa->index)];
}
uint16_t *state = util_dynarray_element(states, uint16_t,
alu->def.index);
if (*state != tbl->table[index]) {
*state = tbl->table[index];
return true;
}
return false;
}
case nir_instr_type_load_const: {
nir_load_const_instr *load_const = nir_instr_as_load_const(instr);
uint16_t *state = util_dynarray_element(states, uint16_t,
load_const->def.index);
if (*state != CONST_STATE) {
*state = CONST_STATE;
return true;
}
return false;
}
default:
return false;
}
}
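/* Tries to rewrite a single ALU instruction, applying the first transform
 * whose condition holds and whose search pattern matches.
 */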
static bool
nir_algebraic_instr(nir_builder *build, nir_instr *instr,
struct hash_table *range_ht,
const bool *condition_flags,
const nir_algebraic_table *table,
struct util_dynarray *states,
nir_instr_worklist *worklist,
struct exec_list *dead_instrs)
{
if (instr->type != nir_instr_type_alu)
return false;
nir_alu_instr *alu = nir_instr_as_alu(instr);
unsigned bit_size = alu->def.bit_size;
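/* Transforms marked inexact ("~" in nir_algebraic) must be skipped when the
 * shader's float controls require signed-zero/inf/nan preservation or
 * denorm flush-to-zero at this bit size.
 */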
const unsigned execution_mode =
build->shader->info.float_controls_execution_mode;
const bool ignore_inexact =
nir_is_float_control_signed_zero_inf_nan_preserve(execution_mode, bit_size) ||
nir_is_denorm_flush_to_zero(execution_mode, bit_size);
int xform_idx = *util_dynarray_element(states, uint16_t,
alu->def.index);
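/* The automaton state of the instruction's destination selects the slice of
 * the transform table that can possibly match here.
 */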
for (const struct transform *xform = &table->transforms[table->transform_offsets[xform_idx]];
xform->condition_offset != ~0;
xform++) {
if (condition_flags[xform->condition_offset] &&
!(table->values[xform->search].expression.inexact && ignore_inexact) &&
nir_replace_instr(build, alu, range_ht, states, table,
&table->values[xform->search].expression,
&table->values[xform->replace].value, worklist, dead_instrs)) {
_mesa_hash_table_clear(range_ht, NULL);
return true;
}
}
return false;
}
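/* Runs the generated algebraic pass over one function implementation: seed
 * the per-SSA-def automaton states with a forward walk, then pop ALU
 * instructions off a worklist and rewrite them until no work remains.
 */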
bool
nir_algebraic_impl(nir_function_impl *impl,
const bool *condition_flags,
const nir_algebraic_table *table)
{
bool progress = false;
nir_builder build = nir_builder_create(impl);
/* Note: it's important here that we're allocating a zeroed array, since
* state 0 is the default state, which means we don't have to visit
* anything other than constants and ALU instructions.
*/
struct util_dynarray states = { 0 };
if (!util_dynarray_resize(&states, uint16_t, impl->ssa_alloc)) {
nir_metadata_preserve(impl, nir_metadata_all);
return false;
}
memset(states.data, 0, states.size);
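/* Cache for range analysis used by some transform conditions; it is cleared
 * after every successful replacement, since rewriting instructions can
 * invalidate previously computed ranges.
 */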
struct hash_table *range_ht = _mesa_pointer_hash_table_create(NULL);
nir_instr_worklist *worklist = nir_instr_worklist_create();
/* Walk top-to-bottom setting up the automaton state. */
nir_foreach_block(block, impl) {
nir_foreach_instr(instr, block) {
nir_algebraic_automaton(instr, &states, table->pass_op_table);
}
}
/* Put our instrs in the worklist such that we're popping the last instr
* first. This will encourage us to match the biggest source patterns when
* possible.
*/
nir_foreach_block_reverse(block, impl) {
nir_foreach_instr_reverse(instr, block) {
instr->pass_flags = 0;
if (instr->type == nir_instr_type_alu)
nir_instr_worklist_push_tail(worklist, instr);
}
}
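/* Instructions removed by a replacement are collected here rather than
 * freed immediately, since they may still be queued in the worklist; the
 * list is freed in one batch once the worklist is drained.
 */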
struct exec_list dead_instrs;
exec_list_make_empty(&dead_instrs);
nir_instr *instr;
while ((instr = nir_instr_worklist_pop_head(worklist))) {
/* The worklist can have an instr pushed to it multiple times if it was
* the src of multiple instrs that also got optimized, so make sure that
* we don't try to re-optimize an instr we already handled.
*/
if (instr->pass_flags)
continue;
progress |= nir_algebraic_instr(&build, instr,
range_ht, condition_flags,
table, &states, worklist, &dead_instrs);
}
nir_instr_free_list(&dead_instrs);
nir_instr_worklist_destroy(worklist);
ralloc_free(range_ht);
util_dynarray_fini(&states);
if (progress) {
nir_metadata_preserve(impl, nir_metadata_block_index |
nir_metadata_dominance);
} else {
nir_metadata_preserve(impl, nir_metadata_all);
}
return progress;
}