This allows the caller to execute it in a loop rather than
hand-rolling a separate call for each stage.
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
These are replaced with
ctx->Const.Program[MESA_SHADER_{VERTEX,FRAGMENT,GEOMETRY}]. In
patches to follow, this will allow us to replace a lot of ad-hoc logic
with a variable index into the array.
With the exception of the changes to mtypes.h, this patch was
generated entirely by the command:
find src -type f '(' -iname '*.c' -o -iname '*.cpp' -o -iname '*.py' \
-o -iname '*.y' ')' -print0 | xargs -0 sed -i \
-e 's/Const\.VertexProgram/Const.Program[MESA_SHADER_VERTEX]/g' \
-e 's/Const\.GeometryProgram/Const.Program[MESA_SHADER_GEOMETRY]/g' \
-e 's/Const\.FragmentProgram/Const.Program[MESA_SHADER_FRAGMENT]/g'
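To illustrate the cleanup this enables (a hypothetical sketch, not part
of this patch; the helper name is made up), a variable stage index can
now replace a per-stage switch:

    /* Before: a switch returning &ctx->Const.VertexProgram,
     * &ctx->Const.GeometryProgram or &ctx->Const.FragmentProgram.
     * After: a single array lookup. */
    static const struct gl_program_constants *
    get_stage_consts(const struct gl_context *ctx, gl_shader_stage stage)
    {
       return &ctx->Const.Program[stage];
    }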
Suggested-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Brian Paul <brianp@vmware.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Commit eda21d2a30 fixed the rasterization
of points for Direct3D but ended up breaking the rasterization of OpenGL
non-sprite points, in particular conform's pntrast.c test.
The only way to get both working is to properly honour
pipe_rasterizer_state::point_quad_rasterization and follow the weird
OpenGL rule when it is false.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
We definitely want to fall through to the unsynchronized map case, instead
of wasting bandwidth on a copy. Prevents a -43.2407% +/- 1.06113% (n=49)
performance regression on aa10perf when teaching glamor to provide the
GL_INVALIDATE_RANGE_BIT information.
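For reference, the client-side usage this rewards looks like the
following (standard GL 3.0 API; the buffer binding, offset and size are
placeholders):

    /* GL_MAP_INVALIDATE_RANGE_BIT tells the driver the old contents of
     * the range are dead, so it can take the unsynchronized path instead
     * of paying for a copy. */
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                                 GL_MAP_WRITE_BIT |
                                 GL_MAP_INVALIDATE_RANGE_BIT);
    /* ... write new data ... */
    glUnmapBuffer(GL_ARRAY_BUFFER);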
This is a performance fix, which I usually wouldn't cherry-pick to stable.
But this really was just a bug in the code: its presence would discourage
developers from giving us the best information they can, and I think
we've got fairly high confidence in the unsynchronized map path already.
Cc: 10.0 9.2 <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
While incorrect, it probably wouldn't ever affect anyone: you'd have to do
an appropriately-formatted readpixels into a PBO, then overwrite the tail
end of the updated area of the PBO with glBufferSubData(), and in that
case you wouldn't get appropriate synchronization.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
The earlier assert made sure that our math didn't exceed our bounds, but
this makes sure that we don't overflow from the high bits of X into the
low bits of Y. We've already put checks in intel_miptree_blit(), but I've
wanted to expand the type in our prototype from short to uint32_t, and we
could get in trouble with intel_emit_linear_blit() if we did.
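A minimal sketch of the overflow being guarded against (the 16/16
packing is assumed here for illustration):

    #include <assert.h>
    #include <stdint.h>

    /* X and Y share one dword in the blit command, 16 bits each, so an X
     * wider than 16 bits would spill into the low bits of Y. */
    static uint32_t pack_xy(uint32_t x, uint32_t y)
    {
       assert(x < (1u << 16) && y < (1u << 16));
       return (y << 16) | x;
    }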
v2: Add Ken's comment about the funny language extension used.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com> (v1)
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com> (v1)
This was one of the things we always wanted to do to this macro, to make
it more useful than just (value << FIELD_SHIFT).
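A minimal sketch of the masking idea (the field names are examples only):

    /* The shifted value is masked so an out-of-range value cannot spill
     * into neighboring bitfields. */
    #define EXAMPLE_SHIFT 8
    #define EXAMPLE_MASK  (0xff << 8)
    #define SET_FIELD(value, field) \
       (((value) << field ## _SHIFT) & field ## _MASK)
    /* usage: dw |= SET_FIELD(x, EXAMPLE); */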
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
With all of the flipping and pitch twiddling and miptree layout involved
in our blits, there are lots of ways for us to scribble outside of a
buffer. Put in a check that we're not about to do so.
This catches a bug that glamor was running into.
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Ian Romanick <ian.d.romanick@intel.com>
Reviewed-by: Anuj Phogat <anuj.phogat@gmail.com>
This fixes the following compile error:
src\glsl\ir_constant_expression.cpp(1405) : error C2666: 'copysign' : 3
overloads have similar conversions
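For context, the kind of call that trips MSVC and one conventional way to
disambiguate (a sketch, not necessarily the exact fix applied here):

    #include <cmath>

    float sketch(float mag, double sign)
    {
       /* copysign(mag, sign) is ambiguous under MSVC: the float, double
        * and long double overloads all need a conversion. */
       return copysignf(mag, (float) sign);  /* consistent types fix it */
    }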
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
Add, for now, some simple/basic query support (i.e. things not actually
requiring the GPU). This might change around a bit when I actually add
GPU queries, but for now it enables some useful performance info
in the GALLIUM_HUD. For example:
GALLIUM_HUD=fps+batches+batches-sysmem+batches-gmem+restores,draw-calls
The driver-specific queries are:
+ draw-calls
+ batches - number of batches per second, sum of batches-sysmem
plus batches-gmem
+ batches-gmem - render a set of tiles in GMEM, for each tile
(optionally) system mem -> gmem (restore), plus N draws,
plus gmem -> system mem (resolve) per second
+ batches-sysmem - N draws to system memory (GMEM bypass) per
second
+ restores - number of GMEM batches that required restore per
second
Ideally for GMEM rendering, you want batches-gmem to equal fps. If
the app is doing something that triggers multiple passes (i.e. requires
an extra round trip gmem <-> system memory) then the number of batches
per second will go up relative to fps.
Signed-off-by: Rob Clark <robclark@freedesktop.org>
Since we now have the cmdstream patch mechanism needed for hw binning,
might as well also use it for RB_RENDER_CONTROL updates. This avoids
the need for a read-modify-write (RMW) and the associated wait-for-idle
(WFI) when updating RB_RENDER_CONTROL.
Signed-off-by: Rob Clark <robclark@freedesktop.org>
The binning pass sorts vertices into which bins/tiles they apply to.
The visibility information generated during the binning pass can be
used to speed up the rendering pass by filtering out vertices which
do not apply to the current tile. See:
https://github.com/freedreno/freedreno/wiki/Adreno-tiling#optimized-approach
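Conceptually, the rendering pass consumes the visibility data like this
(a sketch, not actual driver code; all names are hypothetical):

    /* Per-tile GMEM rendering; 'visible' was filled by the binning pass. */
    static void render_tiles(unsigned num_tiles, unsigned num_draws,
                             const bool *visible)
    {
       for (unsigned t = 0; t < num_tiles; t++) {
          restore_tile(t);                     /* sysmem -> GMEM, if needed */
          for (unsigned d = 0; d < num_draws; d++) {
             if (visible[t * num_draws + d])   /* skip draws the tile
                                                  can't see */
                emit_draw(t, d);
          }
          resolve_tile(t);                     /* GMEM -> sysmem */
       }
    }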
This brings a significant fps boost. A rough assortment of tests
(supertuxkart, etracer, tremulous, glmark2 'build' test, etc) seems
to yield a ~35-45% fps improvement.
For now, to be conservative, the binning pass is not enabled yet by
default. To enable it use:
FD_MESA_DEBUG=binning
So far I haven't found anything that breaks with binning enabled,
but I'd like a bit more testing before I enable it as default.
Signed-off-by: Rob Clark <robclark@freedesktop.org>
The hardware is broken with nonzero texel offsets and unnormalized
coordinates; instead of doing correct offsetting, we get garbage.
This just extends the existing workaround for ir_txf and
ir_tg4+gsampler2DRect to also consider ir_tex+gsampler2DRect.
Fixes broken rendering in 'tesseract' when 'mesa_texrectoffset_bug' is
not enabled; also fixes the new piglit test
'tests/spec/glsl-1.30/execution/fs-textureOffset-Rect'.
This has been broken ~forever; I suggest including this only in 10.0,
because the lowering pass doesn't exist in 9.2 or earlier and so would
require quite a different patch.
Signed-off-by: Chris Forbes <chrisf@ijw.co.nz>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Cc: Lee Salzman <lsalzman@gmail.com>
Cc: "10.0" <mesa-stable@lists.freedesktop.org>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
v2: Also rename "shaderType" param of is_varying_var() to "stage".
Reviewed-by: Brian Paul <brianp@vmware.com>
This reduces confusion since gl_shader::Type is sometimes
GL_SHADER_PROGRAM_MESA but is more frequently
GL_{VERTEX,GEOMETRY,FRAGMENT}_SHADER. It also has the advantage that
when switching on gl_shader::Stage, the compiler will warn if one of
the possible enum values is unhandled. Finally, many functions in
src/glsl (especially those dealing with linking) already use
gl_shader_stage to represent pipeline stages; using gl_shader::Stage
in those functions avoids the need for a conversion.
Note: in the process I changed _mesa_write_shader_to_file() so that if
it encounters an unexpected shader stage, it will use a file suffix of
"????" rather than "geom".
Reviewed-by: Brian Paul <brianp@vmware.com>
v2: Split from patch "mesa: Store gl_shader_stage enum in gl_shader objects."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Also move the related #define MESA_SHADER_STAGES. This will allow
gl_shader_stage to be used in struct gl_shader.
Reviewed-by: Brian Paul <brianp@vmware.com>
v2: Split from patch "mesa: Store gl_shader_stage enum in gl_shader objects."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Brian Paul <brianp@vmware.com>
v2: Split from patch "mesa: Store gl_shader_stage enum in gl_shader objects."
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
Previously, we had an enum called gl_shader_type which represented
pipeline stages in the order they occur in the pipeline
(i.e. MESA_SHADER_VERTEX=0, MESA_SHADER_GEOMETRY=1, etc), and several
inconsistently named functions for converting between it and other
representations:
- _mesa_shader_type_to_string: gl_shader_type -> string
- _mesa_shader_type_to_index: GLenum (GL_*_SHADER) -> gl_shader_type
- _mesa_program_target_to_index: GLenum (GL_*_PROGRAM) -> gl_shader_type
- _mesa_shader_enum_to_string: GLenum (GL_*_{SHADER,PROGRAM}) -> string
This patch tries to clean things up so that we use more consistent
terminology: the enum is now called gl_shader_stage (to emphasize that
it is in the order of pipeline stages), and the conversion functions are:
- _mesa_shader_stage_to_string: gl_shader_stage -> string
- _mesa_shader_enum_to_shader_stage: GLenum (GL_*_SHADER) -> gl_shader_stage
- _mesa_program_enum_to_shader_stage: GLenum (GL_*_PROGRAM) -> gl_shader_stage
- _mesa_progshader_enum_to_string: GLenum (GL_*_{SHADER,PROGRAM}) -> string
In addition, MESA_SHADER_TYPES has been renamed to MESA_SHADER_STAGES,
for consistency with the new name for the enum.
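For illustration, the shape such a helper takes (a sketch, not the exact
Mesa implementation):

    gl_shader_stage
    shader_enum_to_stage(GLenum type)
    {
       switch (type) {
       case GL_VERTEX_SHADER:   return MESA_SHADER_VERTEX;
       case GL_GEOMETRY_SHADER: return MESA_SHADER_GEOMETRY;
       case GL_FRAGMENT_SHADER: return MESA_SHADER_FRAGMENT;
       default: return MESA_SHADER_VERTEX;  /* unreachable for valid input */
       }
    }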
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
v2: Also rename the "target" field of _mesa_glsl_parse_state and the
"target" parameter of _mesa_shader_stage_to_string to "stage".
Reviewed-by: Brian Paul <brianp@vmware.com>
The adjustment needs to be applied to the y coordinates and not the x
coordinates, just like the equivalent code for lines and triangles in
lp_setup_line.c and lp_setup_tri.c.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
This was inadvertently forgotten when replacing gl_rasterization_rules
with lower_left_origin and half_pixel_center (commit
2737abb44e).
This makes a difference when lower_left_origin != half_pixel_center, e.g.,
D3D10.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Reviewed-by: Zack Rusin <zackr@vmware.com>
When the depth buffer is to be read, perform a Depth Buffer Resolve if it has
been rendered. When the depth buffer is to be rendered, perform a HiZ Buffer
Resolve when the depth buffer is modified externally.
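A conceptual sketch of those rules (struct and helper names hypothetical):

    static void prepare_depth_access(struct depth_tex *tex, bool for_read)
    {
       if (for_read && tex->rendered_to)
          depth_buffer_resolve(tex);   /* make depth values readable */
       else if (!for_read && tex->modified_externally)
          hiz_buffer_resolve(tex);     /* bring HiZ back in sync */
    }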
Add blitter functions to perform Depth Buffer Clear, Depth Buffer Resolve, and
Hierarchical Depth Buffer Resolve. Those functions set ilo_blitter up and
pass it to the pipelines to emit the commands.
Allow HiZ op to be specified in 3DSTATE_WM. Pass depth format directly in
gen7_emit_3DSTATE_SF. Use tex->hiz.bo to determine if HiZ exists. Fix
3DSTATE_SF for the case when there is no ilo_rasterizer_state. Fix
3DSTATE_PS for the case when there is no ilo_shader_state.
GEN6 has several requirements regarding the LOD/Depth/Width/Height of the
render targets and the depth buffer. We used to offset to the layers in
question unconditionally to meet the requirements. With this commit,
offsetting is done only when the requirements are not met.
On Haswell, POW takes 24 cycles, while EXP2 only takes 14. Plus, using
POW requires putting 2.0 in a register, while EXP2 doesn't.
I believe that EXP2 will be faster than POW on basically all GPUs, so
it makes sense to optimize it.
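In C terms, the transformation rests on the identity pow(2, x) == exp2(x):

    #include <math.h>

    /* pow needs the constant 2.0 loaded into a register and the slower
     * POW math unit; exp2 needs neither. */
    float two_to_the(float x)
    {
       return exp2f(x);   /* replaces powf(2.0f, x) */
    }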
Looking at the savage2 subset of shader-db:
total instructions in shared programs: 113225 -> 113179 (-0.04%)
instructions in affected programs: 2139 -> 2093 (-2.15%)
instances of 'math pow': 795 -> 749 (-6.14%)
instances of 'math exp': 389 -> 435 (11.8%)
Signed-off-by: Kenneth Graunke <kenneth@whitecape.org>
Reviewed-by: Matt Turner <mattst88@gmail.com>