Commit Graph

10 Commits

Author SHA1 Message Date
Jason Ekstrand
8c6bc1e85d anv/allocator: Create 2GB memfd up-front for the block pool 2015-09-17 17:44:20 -07:00
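The commit above reserves the whole 2GB memfd at pool-creation time so that later growth only has to extend mappings, never the file itself. A minimal sketch of that idea, with assumed names (the actual anv code differs); since the file is sparse, reserving 2GB does not commit 2GB of memory:

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

/* Assumed constant: a 2GB backing file for the block pool. */
#define BLOCK_POOL_MEMFD_SIZE (1ull << 31)

/* Create the memfd once, up-front, at full size.  Returns the fd, or -1
 * on error.  (Hypothetical helper name; not the actual anv function.) */
static int block_pool_create_memfd(void)
{
    int fd = memfd_create("block pool", MFD_CLOEXEC);
    if (fd < 0)
        return -1;

    /* Reserve the full 2GB immediately; the file stays sparse. */
    if (ftruncate(fd, BLOCK_POOL_MEMFD_SIZE) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```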
Jason Ekstrand
74bf7aa07c anv/allocator: Take the device mutex when growing a block pool
We don't have any locking issues yet because we use the pool size itself as
a mutex in block_pool_alloc to guarantee that only one thread is resizing
at a time.  However, we are about to add support for growing the block pool
at both ends.  This introduces two potential races:

 1) You could have two block_pool_alloc() calls that both try to grow the
    block pool, one from each end.

 2) The relocation handling code will now have to think about not only the
    bo that we use for the block pool but also the offset from the start of
    that bo to the center of the block pool.  It's possible that the block
    pool growing code could race with the relocation handling code and get
    a bo and offset out of sync.

Grabbing the device mutex solves both of these problems.  Thanks to (2), we
can't really do anything more granular.
2015-09-17 17:44:20 -07:00
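The locking scheme described in the commit above can be sketched as follows. All names and the doubling-at-one-end layout are assumptions for illustration, not the actual anv code; the point is that one device-level mutex serializes a grow from the front against a grow from the back, and keeps the (bo size, center offset) pair consistent for the relocation code:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the anv structures. */
struct device {
    pthread_mutex_t mutex;   /* the device-level mutex from the commit */
};

struct block_pool {
    struct device *device;
    uint64_t bo_size;        /* total size of the backing bo           */
    uint64_t center_offset;  /* start of bo -> center of the pool      */
};

/* Grow the pool at one end.  Taking the device mutex serializes a grow
 * from the front against a grow from the back (race 1), and prevents a
 * reader from seeing a bo size and center offset that are out of sync
 * (race 2). */
static void block_pool_grow(struct block_pool *pool, int at_front,
                            uint64_t amount)
{
    pthread_mutex_lock(&pool->device->mutex);
    pool->bo_size += amount;
    if (at_front)
        pool->center_offset += amount;  /* front growth shifts the center */
    pthread_mutex_unlock(&pool->device->mutex);
}
```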
Jason Ekstrand
6a7ca4ef2c Merge remote-tracking branch 'mesa-public/master' into vulkan 2015-08-17 11:25:03 -07:00
Jason Ekstrand
facf587dea vk/allocator: Solve a data race in anv_block_pool
The anv_block_pool data structure suffered from the exact same race as the
state pool.  Namely, that the uniqueness of the blocks handed out depends
on the next_block value increasing monotonically.  However, this invariant
did not hold thanks to our block "return" concept.
2015-08-03 01:19:34 -07:00
Jason Ekstrand
56ce219493 vk/allocator: Make block_pool_grow take and return a size
It takes the old size as an argument and returns the new size as the return
value.  On error, it returns a size of 0.
2015-08-03 01:06:45 -07:00
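The contract described above (old size in, new size out, 0 on error) might look like the following sketch. The name, the doubling policy, and the size cap are assumptions, not the actual anv implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed cap, e.g. the 2GB memfd backing the pool. */
#define BLOCK_POOL_MAX_SIZE (1ull << 31)

/* Takes the old size as an argument and returns the new size; on error,
 * returns a size of 0.  (Hypothetical simplification of block_pool_grow.) */
static uint64_t block_pool_grow(uint64_t old_size)
{
    uint64_t new_size = old_size * 2;   /* doubling policy is assumed */
    if (new_size == 0 || new_size > BLOCK_POOL_MAX_SIZE)
        return 0;                       /* error: size of 0 */
    return new_size;
}
```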
Jason Ekstrand
fd64598462 vk/allocator: Fix a data race in the state pool
The previous algorithm had a race because of the way we were using
__sync_fetch_and_add for everything.  In particular, the concept of
"returning" over-allocated states in the "next > end" case was completely
bogus.  If too many threads were hitting the state pool at the same time,
it was possible to have the following sequence:

A: Get an offset (next == end)
B: Get an offset (next > end)
A: Resize the pool (now next < end by a lot)
C: Get an offset (next < end)
B: Return the over-allocated offset
D: Get an offset

in which case D will get the same offset as C.  The solution to this race
is to get rid of the concept of "returning" over-allocated states.
Instead, the thread that gets a new block simply sets the next and end
offsets directly and threads that over-allocate don't return anything and
just futex-wait.  Since you can only ever hit the over-allocate case if
someone else hit the "next == end" case and hasn't resized yet, you're
guaranteed that the end value will get updated and the futex won't block
forever.
2015-08-03 00:38:48 -07:00
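The fixed scheme described above can be sketched in simplified form. All names are assumptions, and this sketch spins where the real code futex-waits; the actual anv code also packs next and end into a single atomic 64-bit word so they update together, which this per-field version omits. The key idea survives: the thread that hits next == end publishes the new next and end directly, and over-allocating threads return nothing and just wait:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical, simplified state pool. */
struct state_pool {
    _Atomic uint32_t next;   /* next offset to hand out   */
    _Atomic uint32_t end;    /* end of the current block  */
    uint32_t state_size;
    uint32_t block_size;
};

static uint32_t state_pool_alloc(struct state_pool *pool)
{
    for (;;) {
        uint32_t offset = atomic_fetch_add(&pool->next, pool->state_size);
        uint32_t end = atomic_load(&pool->end);

        if (offset + pool->state_size <= end)
            return offset;              /* common case: fits in the block */

        if (offset == end) {
            /* We hit the boundary exactly: grab a new block and set next
             * and end directly -- nothing is ever "returned" to the pool.
             * (Pretend the grow handed us [end, end + block_size).) */
            uint32_t block = end;
            atomic_store(&pool->next, block + pool->state_size);
            atomic_store(&pool->end, block + pool->block_size);
            return block;
        }

        /* offset > end: someone else hit next == end and is resizing.
         * They are guaranteed to advance end, so waiting cannot block
         * forever.  Real code: futex_wait(&pool->end, end).  Sketch: spin. */
        while (atomic_load(&pool->end) == end)
            ;
    }
}
```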
Jason Ekstrand
481122f4ac vk/allocator: Make a few things more consistent 2015-08-03 00:35:19 -07:00
Jason Ekstrand
e65953146c vk/allocator: Use memory pools rather than (MALLOC|FREE)LIKE
We have pools, so we should be using them.  Also, I think this will help
keep valgrind from getting confused when we end up fighting with system
allocations such as those from malloc/free and mmap/munmap.
2015-07-31 10:38:28 -07:00
Jason Ekstrand
1920ef9675 vk/allocator: Add an anv_state_pool_finish function
Currently this is a no-op but it gives us a place to put finalization
things in the future.
2015-07-31 10:38:28 -07:00
Chad Versace
2c2233e328 vk: Prefix most filenames with anv
Jason started the task by creating anv_cmd_buffer.c and anv_cmd_emit.c.
This patch finishes the task by renaming all other files except
gen*_pack.h and glsl_scraper.py.
2015-07-17 20:25:38 -07:00