Change block_pool->bo to be a pointer, and update its usage everywhere.
This makes it simpler to switch it later to a list of BOs.
v3:
- Use a static "bos" field in the struct, instead of malloc'ing it.
This will be later changed to a fixed length array of BOs.
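A sketch of the struct change (layout assumed from the description above,
not copied from the tree):

    struct anv_block_pool {
       /* Static storage for now; later a fixed-length array of BOs. */
       struct anv_bo bos;

       /* Always points at &bos today, so call sites are already written
        * against a pointer and can switch to a list of BOs later. */
       struct anv_bo *bo;

       /* ... */
    };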
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
After switching to using anv_state_table, there are very few places left
still using pool->map directly. We want to avoid that because it won't
always be the right map once we split the pool into multiple BOs.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Use anv_state_pool_return_blocks() to return blocks to the pool, instead
of manually pushing them.
v3:
- return blocks from the end of the chunk (Jason).
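A sketch of the v3 behavior, with the helper's signature assumed for
illustration:

    /* Keep the first block of the chunk for the current allocation and
     * return the tail to the pool in one call, instead of pushing the
     * blocks one by one. */
    uint32_t nblocks = chunk_size / block_size;
    anv_state_pool_return_blocks(pool, chunk_offset + block_size,
                                 nblocks - 1, block_size);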
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The use of anv_state_table_add() combined with anv_state_table_push(),
especially when adding a bunch of states to the table, is very verbose.
So we add this helper to make things easier to digest; the pattern it
wraps is sketched below.
We also add the anv_state_table member to anv_state_pool already in this
commit, so that everything compiles, even though the member is not used
yet.
v2: assert that the states are always aligned to their size (Jason)
v3: Add "table" member to anv_state_pool in this commit.
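The two-step pattern the helper wraps, as a sketch (signatures assumed
for illustration):

    uint32_t idx;
    VkResult result = anv_state_table_add(&pool->table, &idx, count);
    if (result != VK_SUCCESS)
       return result;

    for (uint32_t i = 0; i < count; i++) {
       struct anv_state *state = anv_state_table_get(&pool->table, idx + i);
       state->offset = offset + i * size;
       state->alloc_size = size;
       /* v2: states are always aligned to their size. */
       assert(state->offset % size == 0);
    }

    anv_state_table_push(&pool->table, idx, count);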
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
We will need anv_block_pool_map() to find the map relative to some BO
that is not at the start of the block pool (sketched below).
v2: just return a pointer instead of a struct (Jason)
v4: Update comment (Jason)
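A sketch of the call pattern (exact signature assumed):

    /* Before: assumes one contiguous map for the whole pool. */
    void *ptr = pool->map + state.offset;

    /* After: ask the pool which map contains this offset, which keeps
     * working once the pool is backed by more than one BO. */
    ptr = anv_block_pool_map(pool, state.offset);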
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Add a structure to hold anv_states. This table will initially be used to
recycle anv_states, instead of relying on a linked list implemented in
GPU memory. Later it could be used so that all anv_states just point to
the content of this struct, instead of making copies of anv_states
everywhere.
One has to call anv_state_table_add(), which returns an index for the
state in the table, then get a pointer to that index, and finally fill
in the rest of the struct.
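As a sketch of that flow (signatures assumed from the description):

    uint32_t idx;
    VkResult result = anv_state_table_add(&table, &idx, 1);
    if (result != VK_SUCCESS)
       return result;

    /* The entry is only an index until we fill it in. */
    struct anv_state *state = anv_state_table_get(&table, idx);
    state->offset = offset;
    state->alloc_size = size;
    state->map = map;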
TODO:
1) There's a lot of common code between this table's backing-store
memory and the anv_block_pool buffer, due to how we grow it. I think
it's possible to refactor this and reuse code in both places.
2) Add unit tests.
v3:
- Rename state table memfd (Jason)
- Return VK_ERROR_OUT_OF_HOST_MEMORY in more places (Jason)
- anv_state_table_grow returns VkResult (Jason)
- Rename variables to be more informative (Jason)
- Return errors on state table grow.
- Rename anv_state_table_push/pop to anv_free_list_push2/pop2.
This will be renamed again to remove the trailing "2" later.
v4:
- Remove exit(-1) from anv_state_table (Jason).
- Use uint32_t "next" field in anv_free_entry (Jason).
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Replace calls to create hash tables and sets that use
_mesa_hash_pointer/_mesa_key_pointer_equal with the helpers
_mesa_pointer_hash_table_create() and _mesa_pointer_set_create().
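For example:

    /* Before: */
    struct hash_table *ht =
       _mesa_hash_table_create(mem_ctx, _mesa_hash_pointer,
                               _mesa_key_pointer_equal);
    struct set *set =
       _mesa_set_create(mem_ctx, _mesa_hash_pointer,
                        _mesa_key_pointer_equal);

    /* After: */
    struct hash_table *ht = _mesa_pointer_hash_table_create(mem_ctx);
    struct set *set = _mesa_pointer_set_create(mem_ctx);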
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Acked-by: Eric Engestrom <eric@engestrom.ch>
Not going to matter, but be consistent.
Found by Coverity.
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Fixes: caf41c78c (anv/allocator: Support softpin in the BO cache)
On Broadwell and above, we have to use different MOCS settings to allow
the kernel to take over and disable caching when needed for external
buffers. On Broadwell, this is especially important because the kernel
can't disable eLLC so we have to do it in userspace. We very badly
don't want to do that on everything so we need separate MOCS for
external and internal BOs.
In order to do this, we add an anv-specific BO flag for "external" and
use that to distinguish between buffers which may be shared with other
processes and/or display and those which are entirely internal. That,
together with an anv_mocs_for_bo helper lets us choose the right MOCS
settings for each BO use.
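A sketch of the helper (field names assumed, not the exact driver code):

    static inline uint32_t
    anv_mocs_for_bo(const struct anv_device *device, const struct anv_bo *bo)
    {
       /* External BOs get MOCS settings the kernel (and, on Broadwell,
        * userspace) can keep coherent with other processes and display;
        * internal BOs keep the fast, fully cached defaults. */
       if (bo->flags & ANV_BO_EXTERNAL)
          return device->external_mocs;
       else
          return device->default_mocs;
    }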
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=99507
Cc: mesa-stable@lists.freedesktop.org
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
It's safer to set them there because we have the opportunity to properly
handle combining flags if a BO is imported more than once.
Reviewed-by: Scott D Phillips <scott.d.phillips@intel.com>
The state pools reserve virtual address space for the full
BLOCK_POOL_MEMFD_SIZE but maintain the current behavior of
growing from the middle.
v2: - rename block_pool::offset to block_pool::start_address (Jason)
- assign state pool start_address statically (Jason)
v3: - remove unnecessary bo_flags tampering for the dynamic pool (Jason)
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
Reviewed-by: Jordan Justen <jordan.l.justen@intel.com>
The new name makes the zero-input behavior more obvious. The next
patch adds a new function with different zero-input behavior.
Signed-off-by: Ian Romanick <ian.d.romanick@intel.com>
Suggested-by: Matt Turner <mattst88@gmail.com>
Reviewed-by: Alejandro Piñeiro <apinheiro@igalia.com>
Ken suggested that we might be underallocating scratch space on HD
400. Allocating scratch space as though there were actually 8 EUs
seems to help with a GPU hang seen on synmark CSDof.
Cc: <mesa-stable@lists.freedesktop.org>
Signed-off-by: Jordan Justen <jordan.l.justen@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
This will allow setting the flags on any anv_bo created/filled from a
state pool or block pool later.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Kenneth Graunke <kenneth@whitecape.org>
While modern pthread mutexes are very fast, they still incur a call to an
external DSO and the overhead of the generality and features of pthread
mutexes. Most mutexes in mesa only need lock/unlock, and the idea here is
that we can inline the atomic operation and make the fast case just two
instructions. Mutexes are subtle and finicky to implement, so we carefully
copy the implementation from Ulrich Drepper's well-written and
well-reviewed paper:
"Futexes Are Tricky"
http://www.akkadia.org/drepper/futex.pdf
We implement "mutex3", which gives us a mutex that has no syscalls on
uncontended lock or unlock. Further, the uncontended case boils down to a
cmpxchg and an untaken branch, and the uncontended unlock is just a locked
decrement and an untaken branch. We use __builtin_expect() to indicate that contention
is unlikely so that gcc will put the contention code out of the main code
flow.
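A self-contained sketch of mutex3, assuming Linux futexes and GCC
builtins (the fmutex name and the futex wrappers are illustrative, not
the simple_mtx_t API):

    #include <stdint.h>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* State: 0 = unlocked, 1 = locked, 2 = locked with waiters. */
    typedef struct { uint32_t val; } fmutex;

    static void futex_wait(uint32_t *addr, uint32_t expected)
    {
       syscall(SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
    }

    static void futex_wake(uint32_t *addr, int count)
    {
       syscall(SYS_futex, addr, FUTEX_WAKE, count, NULL, NULL, 0);
    }

    static void fmutex_lock(fmutex *m)
    {
       /* Uncontended fast path: one cmpxchg and an untaken branch. */
       uint32_t c = __sync_val_compare_and_swap(&m->val, 0, 1);
       if (__builtin_expect(c != 0, 0)) {
          /* Contended: advertise a waiter, then sleep until we can take
           * the lock. We always retake it as "locked with waiters" since
           * we can't know whether we were the only sleeper. */
          if (c != 2)
             c = __sync_lock_test_and_set(&m->val, 2);
          while (c != 0) {
             futex_wait(&m->val, 2);
             c = __sync_lock_test_and_set(&m->val, 2);
          }
       }
    }

    static void fmutex_unlock(fmutex *m)
    {
       /* Uncontended fast path: a locked decrement, untaken branch. */
       if (__builtin_expect(__sync_fetch_and_sub(&m->val, 1) != 1, 0)) {
          /* There may be waiters: fully release, then wake one. */
          m->val = 0;
          futex_wake(&m->val, 1);
       }
    }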
A fast mutex only supports lock/unlock; it can't be recursive or used
with condition variables. We keep the pthread mutex implementation around
for the few places where we use condition variables or recursive locking.
For platforms or compilers where futex and atomics aren't available,
simple_mtx_t falls back to the pthread mutex.
The pthread mutex lock/unlock overhead shows up on benchmarks for CPU bound
applications. Most CPU bound cases are helped and some of our internal
bind_buffer_object heavy benchmarks gain up to 10%.
Signed-off-by: Kristian Høgsberg <krh@bitplanet.net>
Signed-off-by: Timothy Arceri <tarceri@itsqueeze.com>
Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
This change prepares for VK_ANDROID_native_buffer. When the user imports
a gralloc handle into a VkImage using VK_ANDROID_native_buffer, the user
provides no size. The driver must infer the size from the internals of
the gralloc buffer.
The patch is essentially a refactor patch, but it does change behavior
in some edge cases, described below. In what follows, the "nominal size"
of the bo refers to anv_bo::size, which may not match the bo's "actual
size" according to the kernel.
Post-patch, the nominal size of the bo returned from
anv_bo_cache_import() is always the size of the imported dma-buf
according to lseek(). Pre-patch, the bo's nominal size was difficult to
predict.
If the imported dma-buf's gem handle was not resident in the cache, then
the bo's nominal size was align(VkMemoryAllocateInfo::allocationSize,
4096). If it *was* resident, then the bo's nominal size was whatever
the cache returned. As a consequence, the first cache insert decided the
bo's nominal size, which could be significantly smaller compared to the
dma-buf's actual size, as the nominal size was determined by
VkMemoryAllocateInfo::allocationSize and not lseek().
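A sketch of the post-patch size query (vk_error() standing in for the
driver's error path):

    /* The bo's nominal size comes from the dma-buf itself, not from
     * VkMemoryAllocateInfo::allocationSize. */
    off_t size = lseek(fd, 0, SEEK_END);
    if (size == (off_t)-1)
       return vk_error(VK_ERROR_INVALID_EXTERNAL_HANDLE_KHR);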
I believe this patch cleans up that messy behavior. For an imported or
exported VkDeviceMemory, anv_bo::size should now be the true size of the
bo, if I correctly understand the problem (which I possibly don't).
v2:
- Preserve behavior of aligning size to 4096 before checking. [for
jekstrand]
- Check size with < instead of <=, to match behavior of commit c0a4f56
"anv: bo_cache: allow importing a BO larger than needed". [for
chadv]
This will allow us to implement VK_ANDROID_native_buffer without dup'ing
the fd. We must close the fd in VK_KHR_external_memory_fd, but we should
not in VK_ANDROID_native_buffer.
v2:
- Add missing close(fd) for case
VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHR, subcase
ANV_SEMAPHORE_TYPE_BO.
Reviewed-by: Jason Ekstrand <jason@jlekstrand.net>
The functions we're marking as UNUSED in genX_pipeline.c are used only
when compiling for particular generations.
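For instance, assuming UNUSED expands to the GCC attribute:

    #define UNUSED __attribute__((unused))

    /* Compiled into every per-gen variant of genX_pipeline.c, but only
     * called when emitting for the generations that need it. */
    static void UNUSED
    emit_gen_specific_state(struct anv_pipeline *pipeline)
    {
       (void)pipeline;
    }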
Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
Reviewed-by: Eric Engestrom <eric.engestrom@imgtec.com>
There is one small ANV change here because we used the
VK_ERROR_INVALID_EXTERNAL_HANDLE_KHX enum in the BO cache and that had
to be updated to have the _KHR suffix.
Reviewed-by: Samuel Iglesias Gonsálvez <siglesias@igalia.com>
Reviewed-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
The idea behind doing this was to make it easier to set various flags.
However, we have enough custom flag settings floating around the driver
that this is more of a nuisance than a help. This commit has the
following functional changes:
1) The workaround_bo created in anv_CreateDevice loses both flags.
This shouldn't matter because it's very small and entirely internal
to the driver.
2) The bo created in anv_CreateDmaBufImageINTEL loses the
EXEC_OBJECT_ASYNC flag. In retrospect, it never should have gotten
EXEC_OBJECT_ASYNC in the first place.
Reviewed-by: Nanley Chery <nanley.g.chery@intel.com>
Cc: "17.1" <mesa-stable@lists.freedesktop.org>
Previously, the maximum size of a state that could be allocated from a
state pool was a block. However, this has caused us various issues,
particularly with shaders, which are potentially very large. We've also
hit issues with render passes with a large number of attachments when we
go to allocate the block of surface state. This effectively removes the
restriction on the maximum size of a single state. (There's still a
limit of 1MB imposed by a fixed-length bucket array.)
For states larger than the block size, we just grab a large block off of
the block pool rather than sub-allocating. When we go to allocate some
chunk of state and the current bucket has no free states, we try to
pull a chunk from some larger bucket and split it up. This should
improve memory usage if a client occasionally allocates a large block of
state.
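A sketch of the bucket math under these assumptions (64B minimum state
size, power-of-two buckets up to the 1MB cap; constants illustrative):

    #include <assert.h>
    #include <stdint.h>

    #define MIN_STATE_SIZE_LOG2 6u   /* smallest bucket: 64B */
    #define MAX_STATE_SIZE_LOG2 20u  /* largest bucket:  1MB */

    static uint32_t bucket_for_size(uint32_t size)
    {
       /* log2 rounded up, so each bucket serves exactly one
        * power-of-two state size. */
       uint32_t log2 = size <= 1 ? 0 : 32 - __builtin_clz(size - 1);
       if (log2 < MIN_STATE_SIZE_LOG2)
          log2 = MIN_STATE_SIZE_LOG2;
       assert(log2 <= MAX_STATE_SIZE_LOG2);
       return log2 - MIN_STATE_SIZE_LOG2;
    }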
This commit is inspired by some similar work done by Juan A. Suarez
Romero <jasuarez@igalia.com>.
Reviewed-by: Juan A. Suarez Romero <jasuarez@igalia.com>
The old algorithm worked fine assuming a constant block size. We're
about to break that assumption, so we need an algorithm that's a bit more
robust against suddenly growing by a huge amount compared to the
currently allocated quantity of memory.
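As a sketch, one policy that stays robust when a single request jumps far
past the current size (names illustrative):

    /* Keep doubling, but never return less than what the allocation that
     * triggered the growth actually needs. */
    static uint32_t pool_grow_size(uint32_t old_size, uint32_t required)
    {
       uint32_t new_size = old_size * 2;
       while (new_size < required)
          new_size *= 2;
       return new_size;
    }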
Reviewed-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Now that the state stream is allocating off of the state pool, there's
no reason why we need the block pool to be separate.
Reviewed-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Now that everything is going through the state pools, the block pool no
longer needs to be able to handle re-use.
Reviewed-by: Juan A. Suarez Romero <jasuarez@igalia.com>
Since the state_stream is now pulling from a state_pool, the only thing
pulling directly off the block pool is the state pool so we can just
move the block_size there. The one exception is when we allocate
binding tables but we can just reference the state pool there as well.
The only functional change here is that we no longer grow the block pool
immediately upon creation so no BO gets allocated until our first state
allocation.
Reviewed-by: Juan A. Suarez Romero <jasuarez@igalia.com>
The helper functions aren't really gaining us as much as they claim and
are actually about to be in the way.
Reviewed-by: Juan A. Suarez Romero <jasuarez@igalia.com>
We should only use size_t when referring to sizes of bits of CPU memory.
Anything on the GPU or just a regular array length should be a type that
has the same size on both 32 and 64-bit architectures. For state
objects, we use a uint32_t because we'll never allocate a piece of
driver-internal GPU state larger than 2GB (more like 16KB).
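A sketch of the resulting shape (field set abridged):

    struct anv_state {
       int32_t  offset;      /* GPU offset: fixed width on 32- and 64-bit */
       uint32_t alloc_size;  /* was size_t; 2GB is far more than we need */
       void    *map;         /* CPU mapping, naturally pointer-sized */
    };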
Reviewed-by: Juan A. Suarez Romero <jasuarez@igalia.com>