// third_party_mesa3d/src/intel/compiler/brw_fs_live_variables.cpp

/*
 * Copyright © 2012 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Eric Anholt <eric@anholt.net>
 *
 */

#include "brw_cfg.h"
#include "brw_fs_live_variables.h"

using namespace brw;
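
/* Sentinel instruction index used as the initial start of an empty live
 * interval; any real instruction position compared against it with MIN2()
 * will replace it.
 */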
#define MAX_INSTRUCTION (1 << 30)

/** @file brw_fs_live_variables.cpp
 *
 * Support for calculating liveness information about virtual GRFs.
 *
 * This produces a live interval for each whole virtual GRF. We could
 * choose to expose per-component live intervals for VGRFs of size > 1,
 * but we currently do not. It is easier for the consumers of this
 * information to work with whole VGRFs.
 *
 * However, we internally track use/def information at the per-GRF level for
 * greater accuracy. Large VGRFs may be accessed piecemeal over many
 * (possibly non-adjacent) instructions. In this case, examining a single
 * instruction is insufficient to decide whether a whole VGRF is ultimately
 * used or defined. Tracking individual components allows us to easily
 * assemble this information.
 *
 * See Muchnick's Advanced Compiler Design and Implementation, section
 * 14.1 (p444).
 */
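
/* The analysis runs in three phases, driven by the constructor:
 *
 *  1. setup_def_use():          collect per-block use[]/def[] bitsets and
 *                               seed each variable's start/end IP.
 *  2. compute_live_variables(): iterate livein/liveout (and defin/defout)
 *                               to a fixed point across the CFG.
 *  3. compute_start_end():      extend the per-variable start/end ranges
 *                               with the block-level results.
 */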

void
fs_live_variables::setup_one_read(struct block_data *bd, fs_inst *inst,
                                  int ip, const fs_reg &reg)
{
   int var = var_from_reg(reg);
   assert(var < num_vars);

   start[var] = MIN2(start[var], ip);
   end[var] = MAX2(end[var], ip);

   /* The use[] bitset marks when the block makes use of a variable (VGRF
    * channel) without having completely defined that variable within the
    * block.
    */
   if (!BITSET_TEST(bd->def, var))
      BITSET_SET(bd->use, var);
}

void
fs_live_variables::setup_one_write(struct block_data *bd, fs_inst *inst,
                                   int ip, const fs_reg &reg)
{
   int var = var_from_reg(reg);
   assert(var < num_vars);

   start[var] = MIN2(start[var], ip);
   end[var] = MAX2(end[var], ip);

   /* The def[] bitset marks when an initialization in a block completely
    * screens off previous updates of that variable (VGRF channel).
    */
   if (inst->dst.file == VGRF) {
      if (!inst->is_partial_reg_write() && !BITSET_TEST(bd->use, var))
         BITSET_SET(bd->def, var);

      BITSET_SET(bd->defout, var);
   }
}

/**
 * Sets up the use[] and def[] bitsets.
 *
 * The basic-block-level live variable analysis needs to know which
 * variables get used before they're completely defined, and which
 * variables are completely defined before they're used.
 *
 * These are tracked at the per-component level, rather than whole VGRFs.
 */
void
fs_live_variables::setup_def_use()
{
   int ip = 0;

   foreach_block (block, cfg) {
      assert(ip == block->start_ip);
      if (block->num > 0)
         assert(cfg->blocks[block->num - 1]->end_ip == ip - 1);

      struct block_data *bd = &block_data[block->num];

      foreach_inst_in_block(fs_inst, inst, block) {
         /* Set use[] for this instruction */
         for (unsigned int i = 0; i < inst->sources; i++) {
            fs_reg reg = inst->src[i];

            if (reg.file != VGRF)
               continue;

            for (unsigned j = 0; j < regs_read(inst, i); j++) {
               setup_one_read(bd, inst, ip, reg);
               reg.offset += REG_SIZE;
            }
         }
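
         /* A flag read counts as a use of the flag register unless some
          * earlier instruction in this block already wrote the flag.
          */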
         bd->flag_use[0] |= inst->flags_read(v->devinfo) & ~bd->flag_def[0];

         /* Set def[] for this instruction */
         if (inst->dst.file == VGRF) {
            fs_reg reg = inst->dst;
            for (unsigned j = 0; j < regs_written(inst); j++) {
               setup_one_write(bd, inst, ip, reg);
               reg.offset += REG_SIZE;
            }
         }
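
         /* Treat a flag write as a complete def only when the instruction is
          * unpredicated and at least 8 channels wide; narrower or predicated
          * writes may leave some flag bits holding their previous values, so
          * they must not screen off earlier flag reads.
          */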
         if (!inst->predicate && inst->exec_size >= 8)
            bd->flag_def[0] |= inst->flags_written() & ~bd->flag_use[0];

         ip++;
      }
   }
}

/**
 * The algorithm incrementally sets bits in liveout and livein,
 * propagating it through control flow. It will eventually terminate
 * because it only ever adds bits, and stops when no bits are added in
 * a pass.
 */
void
fs_live_variables::compute_live_variables()
{
   bool cont = true;
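
   /* Fixed-point iteration over the usual backwards liveness equations,
    * evaluated per basic block b:
    *
    *    liveout(b) = union of livein(s) over all successors s of b
    *    livein(b)  = use(b) | (liveout(b) & ~def(b))
    *
    * Blocks are walked in reverse order since liveness flows backwards
    * through the program.
    */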
   while (cont) {
      cont = false;

      foreach_block_reverse (block, cfg) {
         struct block_data *bd = &block_data[block->num];

         /* Update liveout */
         foreach_list_typed(bblock_link, child_link, link, &block->children) {
            struct block_data *child_bd = &block_data[child_link->block->num];

            for (int i = 0; i < bitset_words; i++) {
               BITSET_WORD new_liveout = (child_bd->livein[i] &
                                          ~bd->liveout[i]);
               if (new_liveout) {
                  bd->liveout[i] |= new_liveout;
                  cont = true;
               }
            }
            BITSET_WORD new_liveout = (child_bd->flag_livein[0] &
                                       ~bd->flag_liveout[0]);
            if (new_liveout) {
               bd->flag_liveout[0] |= new_liveout;
               cont = true;
            }
         }

         /* Update livein */
         for (int i = 0; i < bitset_words; i++) {
            BITSET_WORD new_livein = (bd->use[i] |
                                      (bd->liveout[i] &
                                       ~bd->def[i]));
            if (new_livein & ~bd->livein[i]) {
               bd->livein[i] |= new_livein;
               cont = true;
            }
         }
         BITSET_WORD new_livein = (bd->flag_use[0] |
                                   (bd->flag_liveout[0] &
                                    ~bd->flag_def[0]));
         if (new_livein & ~bd->flag_livein[0]) {
            bd->flag_livein[0] |= new_livein;
            cont = true;
         }
      }
   }

   /* Propagate defin and defout down the CFG to calculate the union of live
    * variables potentially defined along any possible control flow path.
    */
   do {
      cont = false;

      foreach_block (block, cfg) {
         const struct block_data *bd = &block_data[block->num];

         foreach_list_typed(bblock_link, child_link, link, &block->children) {
            struct block_data *child_bd = &block_data[child_link->block->num];

            for (int i = 0; i < bitset_words; i++) {
               const BITSET_WORD new_def = bd->defout[i] & ~child_bd->defin[i];
               child_bd->defin[i] |= new_def;
               child_bd->defout[i] |= new_def;
               cont |= new_def;
            }
         }
      }
   } while (cont);
}

/**
 * Extend the start/end ranges for each variable to account for the
 * new information calculated from control flow.
 *
 * A range is only extended into a block when the variable is both live
 * there and potentially defined along some path reaching that point
 * (defin/defout), so intervals are not stretched past every definition
 * up to the top of the program.
 */
void
fs_live_variables::compute_start_end()
{
   foreach_block (block, cfg) {
      struct block_data *bd = &block_data[block->num];

      for (int i = 0; i < num_vars; i++) {
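         /* If the variable is live into this block and some definition can
          * reach the block, its interval has to cover the block start.
          */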
         if (BITSET_TEST(bd->livein, i) && BITSET_TEST(bd->defin, i)) {
            start[i] = MIN2(start[i], block->start_ip);
            end[i] = MAX2(end[i], block->start_ip);
         }
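
         /* Likewise, if it is live out of the block and defined along some
          * path through it, the interval has to cover the block end.
          */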
         if (BITSET_TEST(bd->liveout, i) && BITSET_TEST(bd->defout, i)) {
            start[i] = MIN2(start[i], block->end_ip);
            end[i] = MAX2(end[i], block->end_ip);
         }
      }
   }
}

fs_live_variables::fs_live_variables(fs_visitor *v, const cfg_t *cfg)
   : v(v), cfg(cfg)
{
   mem_ctx = ralloc_context(NULL);

   num_vgrfs = v->alloc.count;
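
   /* Each VGRF of size n is assigned n consecutive variable slots, one per
    * allocated register, so use/def and liveness are tracked per register
    * within a VGRF.
    */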
   num_vars = 0;
   var_from_vgrf = rzalloc_array(mem_ctx, int, num_vgrfs);
   for (int i = 0; i < num_vgrfs; i++) {
      var_from_vgrf[i] = num_vars;
      num_vars += v->alloc.sizes[i];
   }

   vgrf_from_var = rzalloc_array(mem_ctx, int, num_vars);
   for (int i = 0; i < num_vgrfs; i++) {
      for (unsigned j = 0; j < v->alloc.sizes[i]; j++) {
         vgrf_from_var[var_from_vgrf[i] + j] = i;
      }
   }

   start = ralloc_array(mem_ctx, int, num_vars);
   end = rzalloc_array(mem_ctx, int, num_vars);
   for (int i = 0; i < num_vars; i++) {
      start[i] = MAX_INSTRUCTION;
      end[i] = -1;
   }

   block_data = rzalloc_array(mem_ctx, struct block_data, cfg->num_blocks);

   bitset_words = BITSET_WORDS(num_vars);
   for (int i = 0; i < cfg->num_blocks; i++) {
      block_data[i].def = rzalloc_array(mem_ctx, BITSET_WORD, bitset_words);
      block_data[i].use = rzalloc_array(mem_ctx, BITSET_WORD, bitset_words);
      block_data[i].livein = rzalloc_array(mem_ctx, BITSET_WORD, bitset_words);
      block_data[i].liveout = rzalloc_array(mem_ctx, BITSET_WORD, bitset_words);
      block_data[i].defin = rzalloc_array(mem_ctx, BITSET_WORD, bitset_words);
      block_data[i].defout = rzalloc_array(mem_ctx, BITSET_WORD, bitset_words);

      block_data[i].flag_def[0] = 0;
      block_data[i].flag_use[0] = 0;
      block_data[i].flag_livein[0] = 0;
      block_data[i].flag_liveout[0] = 0;
   }

   setup_def_use();
   compute_live_variables();
   compute_start_end();
}

fs_live_variables::~fs_live_variables()
{
   ralloc_free(mem_ctx);
}

void
fs_visitor::invalidate_live_intervals()
{
   ralloc_free(live_intervals);
   live_intervals = NULL;
}

/**
 * Compute the live intervals for each virtual GRF.
 *
 * This uses the per-component use/def data, but combines it to produce
 * information about whole VGRFs.
 */
void
fs_visitor::calculate_live_intervals()
{
   if (this->live_intervals)
      return;

   int num_vgrfs = this->alloc.count;
   ralloc_free(this->virtual_grf_start);
   ralloc_free(this->virtual_grf_end);
   virtual_grf_start = ralloc_array(mem_ctx, int, num_vgrfs);
   virtual_grf_end = ralloc_array(mem_ctx, int, num_vgrfs);
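
   /* Start each VGRF with an empty interval (start = MAX_INSTRUCTION,
    * end = -1); the merge loop below widens it with the per-component
    * ranges.
    */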
   for (int i = 0; i < num_vgrfs; i++) {
      virtual_grf_start[i] = MAX_INSTRUCTION;
      virtual_grf_end[i] = -1;
   }

   this->live_intervals = new(mem_ctx) fs_live_variables(this, cfg);

   /* Merge the per-component live ranges to whole VGRF live ranges. */
   for (int i = 0; i < live_intervals->num_vars; i++) {
      int vgrf = live_intervals->vgrf_from_var[i];
      virtual_grf_start[vgrf] = MIN2(virtual_grf_start[vgrf],
                                     live_intervals->start[i]);
      virtual_grf_end[vgrf] = MAX2(virtual_grf_end[vgrf],
                                   live_intervals->end[i]);
   }
}
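
/**
 * Check whether two per-component variables interfere: their live intervals
 * overlap, i.e. neither interval ends at or before the other one starts.
 */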
bool
fs_live_variables::vars_interfere(int a, int b)
{
   return !(end[b] <= start[a] ||
            end[a] <= start[b]);
}

bool
fs_visitor::virtual_grf_interferes(int a, int b)
{
   return !(virtual_grf_end[a] <= virtual_grf_start[b] ||
            virtual_grf_end[b] <= virtual_grf_start[a]);
}