Now gl_texture_image::TexFormat is a simple MESA_FORMAT_x enum.
ctx->Driver.ChooseTextureFormat also returns a MESA_FORMAT_x value.
gl_texture_format will go away next.
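Roughly, the new interface looks like this (a sketch; the exact signature in dd.h may differ slightly):

    /* core Mesa picks the hardware format once, as a plain enum */
    texImage->TexFormat = ctx->Driver.ChooseTextureFormat(ctx, internalFormat,
                                                          format, type);
    assert(texImage->TexFormat != MESA_FORMAT_NONE);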
This is about a 30% performance win in OA with high settings on my GM45,
and experiments with 915GM indicate that it'll be around a 20% win there.
Currently, 915-class hardware is seriously hurt by the fact that we use
fence regs to control the tiling even for 3D instructions that could live
without them, so we spend a bunch of time waiting on previous rendering in
order to pull fences off. Thus, the texture_tiling driconf option defaults
off there for now.
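For reference, the option goes through the stock driconf machinery; a sketch (macro and field names follow the usual driconf pattern and are illustrative):

    /* in the screen's __driConfigOptions table */
    DRI_CONF_SECTION_PERFORMANCE
       DRI_CONF_TEXTURE_TILING(false)
    DRI_CONF_SECTION_END

    /* queried once at screen init */
    intelScreen->tiling_ok = driQueryOptionb(&intelScreen->optionCache,
                                             "texture_tiling");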
Fixes a segfault in an fbo.c negative test for an FBO with texture width/height
of 0. Previously we just tested for border != 0 to work around this
segfault.
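The shape of the fix is roughly this (illustrative; not the exact test site in the driver):

    /* fall back to software for degenerate images, instead of only
     * checking the border, which let 0x0 images through and crashed */
    if (image->Border != 0 || image->Width == 0 || image->Height == 0)
       return GL_FALSE;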
Also enable them all regardless of screen bpp, as 32 bpp is what I've been
testing against, and I haven't been able to detect any screen-bpp-specific
trouble with them.
We can't render into just any texture format; only certain formats are
renderable. Check that the render-to-texture format is renderable in
intel_validate_framebuffer().
There seems to be a bug somewhere that causes rendering to rgb565 textures
to be corrupted, so disallow that for now. This will be revisited.
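The check has roughly this shape (a sketch with illustrative names; the renderable list here is not exhaustive):

    /* in intel_validate_framebuffer(): reject formats we can't render to */
    switch (irb->Base.Format) {
    case MESA_FORMAT_ARGB8888:
    case MESA_FORMAT_S8_Z24:
       break;                        /* known renderable */
    case MESA_FORMAT_RGB565:         /* output is corrupted; off for now */
    default:
       fb->_Status = GL_FRAMEBUFFER_UNSUPPORTED_EXT;
       return;
    }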
I was lured into a false sense of security by the fact that the spans code was
already there, and a bunch of tests didn't catch the problem. oglconform's
mask.c did, though.
Bug #19970.
This lets us avoid allocating new buffers for renderbuffers, finalized miptrees,
and PBO-uploaded textures when there's an unreferenced but still active one
cached, while also avoiding CPU waits for batchbuffers and CPU-uploaded
textures. The size of BOs allocated for a desktop running current GL
cairogears on i915 is cut in half with this.
Note that this means we require libdrm 2.4.5.
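Enabling the cache is a one-liner against the libdrm 2.4.5 API; a sketch of the screen-init path (BATCH_SZ stands in for the driver's batch size constant):

    #include <intel_bufmgr.h>

    bufmgr = drm_intel_bufmgr_gem_init(fd, BATCH_SZ);
    /* let libdrm hand back an unreferenced-but-still-active BO of the
     * right size instead of allocating a fresh one */
    drm_intel_bufmgr_gem_enable_reuse(bufmgr);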
We only allow combined depth+stencil renderbuffers, so the complicated code
for splitting and combining separate depth and stencil buffers is no longer
needed.
Take advantage of the GL_FRAMEBUFFER_UNSUPPORTED feature to disallow separate
depth and stencil renderbuffers; only allow combined depth/stencil buffers.
Next up: remove/simplify a bunch of the depth/stencil renderbuffer code.
Also, restore the previously disabled GL_DEPTH_COMPONENT16 case.
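The core of the check is just comparing the two attachments; a sketch against the core Mesa structures (treat as illustrative):

    /* in intel_validate_framebuffer(): depth and stencil must be one
     * interleaved buffer, since that's all the hardware supports */
    struct gl_renderbuffer *depthRb =
       fb->Attachment[BUFFER_DEPTH].Renderbuffer;
    struct gl_renderbuffer *stencilRb =
       fb->Attachment[BUFFER_STENCIL].Renderbuffer;

    if (depthRb && stencilRb && depthRb != stencilRb)
       fb->_Status = GL_FRAMEBUFFER_UNSUPPORTED_EXT;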
It's been broken and deprecated for a while, so it's time to die. This has the
wonderful benefit of cleaning up the code a fair amount, making it marginally
less twisty.
I'm unsure if the for loops in IntelWindowMoved are still needed.
OpenGL allows mixing and matching depth and stencil renderbuffers in
framebuffer objects while the hardware really only supports interleaved
depth/stencil buffers. This makes for some tricky buffer management.
An extra wrinkle is the situation where the user allocates a 16bpp depth
texture or renderbuffer then tries to render to it along with a stencil
buffer. We'd have to promote the 16bpp Z values to 24-bit Z values and
mix in the stencil values to set up the depth/stencil renderbuffer.
There's no support for that yet, so always allocate 32bpp depth
textures/renderbuffers for now.
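Concretely, the allocation path just promotes 16-bit requests; a sketch with illustrative names:

    /* always pick an interleaved 24/8 format, even for 16bpp requests,
     * so a stencil attachment can be satisfied later without copying */
    switch (internalFormat) {
    case GL_DEPTH_COMPONENT16:
    case GL_DEPTH_COMPONENT24:
    case GL_DEPTH_COMPONENT32:
       rb->Format = MESA_FORMAT_S8_Z24;
       break;
    }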
The boolean the server gives us for whether the region is tiled was being
used as the enum for which tiling mode the region uses. Instead, guess the
correct tiling in screen setup.
Also, fix the Y-tiling pitch setup. The pitch to the next tile in Y is
32 scanlines, not 8.
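In helper form (tile heights as above; the helper name is illustrative):

    #include <stdint.h>
    #include <drm/i915_drm.h>

    /* scanlines from one tile row to the next depend on the tiling mode:
     * X tiles are 8 rows high, Y tiles are 32 rows high */
    static int intel_tile_height(uint32_t tiling)
    {
       switch (tiling) {
       case I915_TILING_X: return 8;
       case I915_TILING_Y: return 32;
       default:            return 1;   /* I915_TILING_NONE */
       }
    }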
Each array element is now a BUFFER_x token rather than a BUFFER_BIT_x bitmask.
The number of active color buffers is specified by _NumColorDrawBuffers.
This builds on the previous DrawBuffer changes and will help with drivers
implementing GL_ARB_draw_buffers.
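Drivers then walk the active color buffers like this (a sketch following core Mesa naming):

    /* _ColorDrawBufferIndexes[] now holds BUFFER_x tokens directly */
    GLuint i;
    for (i = 0; i < ctx->DrawBuffer->_NumColorDrawBuffers; i++) {
       GLint idx = ctx->DrawBuffer->_ColorDrawBufferIndexes[i];
       struct gl_renderbuffer *rb =
          ctx->DrawBuffer->Attachment[idx].Renderbuffer;
       /* ... emit per-buffer state for rb ... */
    }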
Putting the bufmgr in the screen is not thread-safe since the emit_reloc
changes. It also led to a significant performance hit from the pthread usage
required for the attempted thread-safety (up to 12% of a CPU spent on
refcounting protection in single-threaded 965 use). The motivation had been to
allow multi-context bufmgr sharing in classic mode, but it wasn't worth the cost.