If supported by the host virglrenderer and host kernel, use
userspace-allocated GPU virtual addresses. This lets us avoid stalling
to wait for a response from the host kernel until we actually need to
know the host handle (which is usually not until submit time).

Handling the async response from the host to get the host_handle is
done through the submit_queue, so that in the (hot) submit path we do
not need any additional synchronization to know that the host_handle
is valid.
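As a rough illustration of the flow (all names here, e.g. vdrm_bo and
vdrm_alloc_iova, are hypothetical and not the actual driver code):
allocation picks the GPU VA in guest userspace and returns immediately,
and the host handle is filled in later on the submit_queue thread,
strictly before any submit that uses it:

    /* Hypothetical sketch: userspace-allocated GPU VA, async host handle. */
    struct vdrm_bo {
       uint64_t iova;         /* GPU VA chosen by the guest userspace allocator */
       uint32_t host_handle;  /* zero until the host response is processed */
    };

    static struct vdrm_bo *
    vdrm_bo_new(struct vdrm_device *dev, size_t size)
    {
       struct vdrm_bo *bo = calloc(1, sizeof(*bo));
       bo->iova = vdrm_alloc_iova(dev, size);  /* no host round trip */
       vdrm_ccmd_bo_create(dev, bo, size);     /* async; response carries handle */
       return bo;
    }

    /* Runs on the submit_queue thread, ordered before any submit that
     * references the BO, so the hot path needs no extra locking:
     */
    static void
    vdrm_handle_bo_created(struct vdrm_bo *bo, uint32_t host_handle)
    {
       bo->host_handle = host_handle;
    }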
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/16086>
Add a new backend to enable using the native driver in a VM guest, via
a new virtgpu context type which (indirectly) makes the host kernel
interface available in the guest and handles the details of mapping
buffers into the guest, etc.
Note that fence fds are currently a bit awkward, in that they get
signaled by the guest kernel driver (drm/virtio) once virglrenderer in
the host has processed the execbuf, not when the host kernel has
signaled the submit fence. For passing buffers to the host (virtio-wl)
the EGL context in virglrenderer is used to create a fence on the host
side. But use of out-fence fds in the guest could have slightly
unexpected results. For this reason we limit all submitqueues to the
default priority (so they cannot be preempted by the host EGL
context). AFAICT virgl and venus have a similar problem, which will
eventually be solvable once we have RESOURCE_CREATE_SYNC.
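The priority clamp amounts to ignoring whatever priority the app asked
for when creating a submitqueue; a minimal sketch against the msm UAPI
(the clamping site and surrounding code are illustrative, not the
actual patch):

    /* Illustrative: always create guest submitqueues at the default
     * priority so they can never preempt the host EGL context that
     * signals our fences. */
    struct drm_msm_submitqueue req = {
       .flags = 0,
       .prio = default_prio,   /* requested priority deliberately ignored */
    };

    ret = drmCommandWriteRead(dev->fd, DRM_MSM_SUBMITQUEUE_NEW,
                              &req, sizeof(req));
    if (ret == 0)
       queue->id = req.id;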
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/14900>
Extends the command submission ioctls to support multiple semaphores
through the generic ioctl extension design. In this approach, a
multisync extension subclasses a generic ioctl extension struct (base)
and allows more than one wait and one signal semaphore. The multisync
extension also uses v3d_queue to specify the wait_stage, i.e. which
job should wait on the wait semaphores before it starts.
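For reference, the UAPI shape is roughly the following (a sketch after
include/uapi/drm/v3d_drm.h; consult the kernel header for the
authoritative layout):

    struct drm_v3d_extension {
       __u64 next;    /* next extension in the chain, 0 terminates it */
       __u32 id;      /* e.g. DRM_V3D_EXT_ID_MULTI_SYNC */
       __u32 flags;   /* mbz */
    };

    struct drm_v3d_multi_sync {
       struct drm_v3d_extension base;  /* subclasses the generic extension */

       __u64 in_syncs;        /* userspace array of wait semaphores */
       __u64 out_syncs;       /* userspace array of signal semaphores */
       __u32 in_sync_count;
       __u32 out_sync_count;

       __u32 wait_stage;      /* v3d_queue whose job waits before starting */
       __u32 pad;             /* mbz */
    };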
Signed-off-by: Melissa Wen <mwen@igalia.com>
Reviewed-by: Iago Toral Quiroga <itoral@igalia.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/13178>
This change allows creating contexts depending on a set of context
parameters. The meaning of each parameter is listed below (a combined
usage sketch follows the list):
1) VIRTGPU_CONTEXT_PARAM_CAPSET_ID
This determines the type of a context based on the capability set
ID. For example, the current capsets:
VIRTIO_GPU_CAPSET_VIRGL
VIRTIO_GPU_CAPSET_VIRGL2
define a Gallium, TGSI-based "virgl" context. We only need one capset
ID per context type, though virgl has two due to a bug that has since
been fixed.
The use case is the "gfxstream" rendering library and "venus"
renderer.
gfxstream doesn't do Gallium/TGSI translation and mostly relies on
auto-generated API streaming. Certain users prefer gfxstream over
virgl for GLES-on-GLES emulation. Either gfxstream (vk) or venus is
required for Vulkan emulation.
The goal is for guest userspace to choose the optimal context type
depending on the situation/hardware.
2) VIRTGPU_CONTEXT_PARAM_NUM_RINGS
This specifies the number of independent command rings that the
context will use. The value may be zero, and is inferred to be zero if
VIRTGPU_CONTEXT_PARAM_NUM_RINGS is not passed in. This preserves
backwards compatibility for virgl, which has one big, global command
ring for all commands.
The maximum number of rings is 32. In practice, multi-queue or
multi-ring submission is used for powerful dGPUs and virtio-gpu
may not be the best option in that case (see PCI passthrough or
rendernode forwarding).
3) VIRTGPU_CONTEXT_PARAM_POLL_RING_IDX_MASK
This is a mask of ring indices for which the DRM fd is pollable.
For example, if VIRTGPU_CONTEXT_PARAM_NUM_RINGS is 2, then the mask
may be:
    [ring idx] | [1 << ring_idx] | final mask
    -----------+-----------------+-----------
         0     |        1        |     1
         1     |        2        |     3
The "Sommelier" guest Wayland proxy uses this to poll for events
from the host compositor.
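Putting the three parameters together, context creation might look
like this (a sketch against the UAPI described above; the capset
constant VIRTIO_GPU_CAPSET_VENUS and the error handling are
illustrative assumptions):

    struct drm_virtgpu_context_set_param params[] = {
       { VIRTGPU_CONTEXT_PARAM_CAPSET_ID,          VIRTIO_GPU_CAPSET_VENUS },
       { VIRTGPU_CONTEXT_PARAM_NUM_RINGS,          2 },
       { VIRTGPU_CONTEXT_PARAM_POLL_RING_IDX_MASK, 0x3 }, /* rings 0+1 pollable */
    };

    struct drm_virtgpu_context_init init = {
       .num_params = 3,
       .ctx_set_params = (uintptr_t)params,
    };

    if (drmIoctl(drm_fd, DRM_IOCTL_VIRTGPU_CONTEXT_INIT, &init))
       return -errno;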
Reviewed-by: Anthoine Bourgeois <anthoine.bourgeois@gmail.com>
Tested-by: Anthoine Bourgeois <anthoine.bourgeois@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/7712>
This adds modifiers for GFX9+ AMD GPUs.
As the modifiers need a lot of parameters, I split things out into
getters and setters.
- Advantage: simplifies the code a lot
- Disadvantage: makes it harder to check that you're setting all
  the required fields.
The tiling modes seem to change every generation, but the structure of
what each tiling mode is good for stays quite similar. As such, the
core of the modifier is
- the tiling mode
- a version: not explicitly a GPU generation, but denoting a new set
  of tiling equations.
Sometimes one or two tiling modes stay the same, and for those we
specify a canonical version.
Then we have a bunch of parameters describing how the compression
works. Different HW units have different requirements for these, and
we actually have some conflicts here: e.g. the render backends need a
specific alignment, but the display unit only works with unaligned
compression surfaces. To work around that we have a DCC_RETILE option,
where both an aligned and an unaligned compression surface are
allocated, and a writer has to sync the aligned surface to the
unaligned surface on handoff.
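Concretely, these pieces get packed into the 64-bit modifier with the
AMD_FMT_MOD_SET helpers from drm_fourcc.h; a sketch for one
hypothetical GFX10 configuration (the exact fields present vary per
generation and per GPU):

    #include <stdint.h>
    #include <drm_fourcc.h>

    uint64_t mod =
       AMD_FMT_MOD |
       AMD_FMT_MOD_SET(TILE, AMD_FMT_MOD_TILE_GFX9_64K_R_X) |      /* tiling mode */
       AMD_FMT_MOD_SET(TILE_VERSION, AMD_FMT_MOD_TILE_VER_GFX10) | /* equations */
       AMD_FMT_MOD_SET(DCC, 1) |           /* compressed */
       AMD_FMT_MOD_SET(DCC_RETILE, 1) |    /* aligned + unaligned DCC surfaces */
       AMD_FMT_MOD_SET(PIPE_XOR_BITS, 3);  /* GPU parameter in the equations */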
Finally, there are some GPU parameters that participate in the tiling
equations. These are constant for each GPU on the rendering/texturing
side. The display unit, however, is very flexible and supports all
of them :|
Some estimates:
- Single GPU, render+texture: ~10 modifiers
- All possible configs in a gen, display: ~1000 modifiers
- Configs of actually existing GPUs in a gen: ~100 modifiers
For formats with a single plane, everything gets put in a separate
DRM plane. However, this doesn't fit for some YUV formats, so if the
format has more than one plane, we let the driver pack the surfaces
into one DRM plane per format plane.
This way we avoid X11 rendering onto the frontbuffer with DCC, but
still fit into 4 DRM planes.
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/6176>
There is only one queue for now, so for non-shared semaphores, the
implementation is basically a no-op. For shared semaphores, this
always uses syncobjs. This depends on syncobj support in the msm
kernel driver.
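The msm kernel side looks roughly like this (a sketch after
include/uapi/drm/msm_drm.h; the kernel header is authoritative):

    struct drm_msm_gem_submit_syncobj {
       __u32 handle;  /* syncobj handle */
       __u32 flags;   /* e.g. MSM_SUBMIT_SYNCOBJ_RESET for in-syncobjs */
       __u64 point;   /* timeline point, 0 for binary syncobjs */
    };

    struct drm_msm_gem_submit {
       /* ... existing fields (flags, queueid, bos, cmds, ...) ... */
       __u64 in_syncobjs;     /* array of syncobjs to wait on before running */
       __u64 out_syncobjs;    /* array of syncobjs to signal on completion */
       __u32 nr_in_syncobjs;
       __u32 nr_out_syncobjs;
       __u32 syncobj_stride;  /* lets the syncobj struct grow compatibly */
    };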
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/2769>
Pull new updates from drm-next as of the following commit:
commit f1b4a9217efd61d0b84c6dc404596c8519ff6f59
Merge: 400e91347e1d f3a36d469621
Author: Dave Airlie <airlied@redhat.com>
Date: Tue Oct 22 15:04:00 2019 +1000
Merge tag 'du-next-20191016' of git://linuxtv.org/pinchartl/media into drm-next
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
This adapts the v3d driver to the new CL submit ioctl interface that
allows the driver to request a flush of the caches after the render
job has completed. This seems to eliminate the kernel write violation
errors reported during CTS and Piglit executions, fixing some CTS
tests and GPU resets along the way.
v2:
- Adapt to changes in the kernel side.
- Disable shader storage and shader images if the kernel doesn't
  implement cache flushing (see the sketch below).
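A sketch of that gating (DRM_V3D_PARAM_SUPPORTS_CACHE_FLUSH is the
kernel parameter; the helper and the gating site are illustrative):

    #include <stdbool.h>
    #include <xf86drm.h>
    #include "drm-uapi/v3d_drm.h"

    static bool
    v3d_has_cache_flush(int fd)
    {
       struct drm_v3d_get_param p = {
          .param = DRM_V3D_PARAM_SUPPORTS_CACHE_FLUSH,
       };

       if (drmIoctl(fd, DRM_IOCTL_V3D_GET_PARAM, &p) != 0)
          return false;
       return p.value != 0;
    }

Without the post-render cache flush, SSBO and shader image writes
aren't coherent, so when the helper above returns false the driver
reports zero supported shader storage blocks and shader images.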
Fixes CTS tests:
KHR-GLES31.core.shader_image_size.basic-nonMS-fs-float
KHR-GLES31.core.shader_image_size.basic-nonMS-fs-int
KHR-GLES31.core.shader_image_size.basic-nonMS-fs-uint
KHR-GLES31.core.shader_image_size.advanced-nonMS-fs-float
KHR-GLES31.core.shader_image_size.advanced-nonMS-fs-int
KHR-GLES31.core.shader_image_size.advanced-nonMS-fs-uint
KHR-GLES31.core.shader_atomic_counters.advanced-usage-many-draw-calls2
KHR-GLES31.core.shader_atomic_counters.advanced-usage-draw-update-draw
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-int
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std140-matR
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std140-struct
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std430-matC-pad
KHR-GLES31.core.shader_storage_buffer_object.advanced-unsizedArrayLength-fs-std430-vec
Reviewed-by: Eric Anholt <eric@anholt.net>
Taken from drm-misc-next 268de6530aa1 ("drm: mst: Fix query_payload
ack reply struct")
Reviewed-by: Eric Engestrom <eric.engestrom@intel.com>
Reviewed-by: Kristian H. Kristensen <hoegsberg@google.com>