On embedded Linux, we can hardcode the driconf file (00-mesa-defaults.conf) with
no possibility of the file changing after the build. The static driconf
implementation, used on Windows and Android, suffices for that use case. It is
undesirable for these platforms to depend on expat or to spend time during app
start-up parsing driconf XML.
The static driconf is already implemented; all we need is a meson option to
opt out of runtime xmlconfig on Linux and use the static version instead.
To opt out of runtime xmlconfig, build Mesa with -Dxmlconfig=disabled.
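For illustration only, a minimal sketch of what the meson.build plumbing
could look like (variable, dependency and macro names are assumptions, not
the actual code):

    # Hypothetical sketch; the real option handling may differ.
    _xmlconfig = get_option('xmlconfig')
    if _xmlconfig.enabled() and not dep_expat.found()
      error('runtime xmlconfig requires expat')
    endif
    use_xmlconfig = not _xmlconfig.disabled() and dep_expat.found()
    if use_xmlconfig
      pre_args += '-DWITH_XMLCONFIG=1'   # macro name assumed
    endif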
v2: Expand out feature.require() since it was only added in meson 0.59.0.
v3: Use more concise Meson syntax (Dylan)
Signed-off-by: Alyssa Rosenzweig <alyssa@collabora.com>
Reviewed-by: Jesse Natalie <jenatali@microsoft.com> [v2]
Reviewed-by: Eric Engestrom <eric@igalia.com> [v2]
Reviewed-by: Emma Anholt <emma@anholt.net>
Tested-by: Chris Healy <healych@amazon.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/19626>
Specifying the name at build time, as opposed to renaming after
the build, serves two purposes:
1. The link from Mesa's OpenGL32.dll (and EGL/GLES) to the
megadriver is done by filename. If these frontends are used, the
megadriver can't be renamed afterwards, and Windows doesn't
have good enough symlink support for that to be a real option
either.
2. The symbol (PDB) filename is also embedded in the DLL using the
build-time expected filename. Renaming can produce odd artifacts
while debugging.
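As a sketch of the approach (the option, target and source names here are
assumptions, not necessarily the real ones):

    # Hypothetical meson fragment: pick the DLL name at configure time,
    # so the embedded PDB name and by-filename links match the output.
    dll_name = get_option('gallium-windows-dll-name')
    libgallium_wgl = shared_library(
      dll_name,
      files('stub.c'),     # placeholder source
      name_prefix : '',    # use the name exactly as given
    )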
Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/7115
Reviewed-by: Bill Kristiansen <billkris@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/18239>
This uses APIs that are not available on Windows 7. Since this is a
build-time configuration, and since the SDK version can't be used as an
indicator (newer SDKs can still target Windows 7), a new option is added
to allow disabling these APIs, to maintain Windows 7 support if desired.
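A hedged sketch of how such a knob can be wired up (the option and macro
names are assumptions, not the actual change):

    # meson_options.txt (hypothetical):
    option(
      'win7-compat',
      type : 'boolean',
      value : false,
      description : 'Avoid APIs unavailable on Windows 7',
    )

    # meson.build (hypothetical): compile the newer-API path in only
    # when Windows 7 support is not requested.
    if host_machine.system() == 'windows' and not get_option('win7-compat')
      pre_args += '-DHAVE_POST_WIN7_APIS'
    endif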
Reviewed-by: Jose Fonseca <jfonseca@vmware.com>
Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/17431>
This option no longer has any meaningful effect other than pointlessly
renaming the library. Let's introduce a new default value called
"unspecified", and complain if the option is set to anything else.
Reviewed-by: Adam Jackson <ajax@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/16213>
Add a new backend to enable using the native driver in a VM guest, via a
new virtgpu context type which (indirectly) makes the host kernel interface
available in the guest and handles the details of mapping buffers into the
guest, etc.
Note that fence fds are currently a bit awkward, in that they get
signaled by the guest kernel driver (drm/virtio) once virglrenderer in
the host has processed the execbuf, not when the host kernel has signaled
the submit fence. For passing buffers to the host (virtio-wl), the EGL
context in virglrenderer is used to create a fence on the host side,
so the use of out-fence fds in the guest could have slightly unexpected
results. For this reason we limit all submitqueues to default priority
(so they cannot be preempted by the host EGL context). AFAICT virgl and
venus have a similar problem, which will eventually be solvable once we
have RESOURCE_CREATE_SYNC.
Signed-off-by: Rob Clark <robdclark@chromium.org>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/14900>
MESA_SHADER_READ_PATH is handy but it's not usable in
all cases.
This commit allows implementing an alternative mechanism
without assuming too much about how it's done, or where/how
the shaders are stored.
When this is enabled, handling of the MESA_SHADER_DUMP_PATH,
MESA_SHADER_CAPTURE_PATH and MESA_GLSL environment variables is
disabled.
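A sketch of the build-time switch, with the option and macro names assumed
rather than taken from the actual change:

    # Hypothetical: building in a custom shader-replacement mechanism
    # compiles out the env-var based handling.
    if get_option('custom-shader-replacement') != ''
      pre_args += '-DCUSTOM_SHADER_REPLACEMENT'
    endif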
Reviewed-by: Marek Olšák <marek.olsak@amd.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/11621>
Add a moltenvk-dir build option to supply the location of MoltenVK in the
macOS Vulkan SDK.
Force the compiler, for zink only, into Objective-C mode when MoltenVK is
used, to allow for the macOS IOSurface and CAMetalLayer types that the
headers expose.
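A sketch of how the option can be consumed (everything below except
moltenvk-dir itself is an assumption):

    # Hypothetical meson.build fragment for the zink + MoltenVK case.
    moltenvk_dir = get_option('moltenvk-dir')
    if with_gallium_zink and moltenvk_dir != ''
      # Objective-C is needed for the IOSurface / CAMetalLayer types.
      add_languages('objc', required : true)
      zink_args = ['-x', 'objective-c', '-I' + moltenvk_dir / 'include']
    endif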
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/11129>
This change adds a gallium D3D10 state tracker that works as a WDDM UMD
software driver, similar to Microsoft WARP, but using llvmpipe/softpipe.
The final deliverable is a d3d10sw.dll, which is similar to WARP's
d3d10warp.dll.
This has been used to run Microsoft Windows HCK wgf11* tests with
llvmpipe, and they were at one point passing 100%.
Known limitations:
- TGSI (no NIR)
- D3D10 only (no D3D11 support yet)
- no WINE integration (WINE doesn't implement the WDDM DDI)
For further details see:
- src/gallium/frontends/d3d10umd/README.md
- src/gallium/targets/d3d10sw/README.md
v2: Drop the DXBC-based disassembly. Add missing break statements.
v3: Incorporate Jesse's feedback.
Reviewed-by: Roland Scheidegger <sroland@vmware.com>
Acked-by: Jesse Natalie <jenatali@microsoft.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/10687>
With classic device support branching into its own tree, now would be a
good time to also raise the minimum requirements to something that is more
"modern" on x86. SSE2 was introduced in 2000(!); let's make it the minimum
spec by default now. All the old hardware that is moving to the maintenance
branch will finally be out of the way.
On the 64-bit side of the discussion, not much changes:
* GCC already enables -msse and -msse2 by default
* Same with clang
* fpmath=sse might remove some extraneous x87 usage
** Clang implies fpmath=sse ALWAYS
The 32-bit side of things is where the exciting details change:
* GCC by default doesn't enable sse1 or sse2
** Does all `float`, `double`, and `long double` math with x87
** -msse2 enables sse2 and sse1, but gcc still uses x87 for float math even with those enabled
** -mfpmath=sse moves away from using x87 and instead uses sse1 and sse2
* Clang already enables sse1/sse2 by default, which in turn implies fpmath=sse (see the build sketch after this list)
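A sketch of forcing that baseline at configure time (the guard below is
illustrative, not the actual meson.build):

    # Hypothetical: make SSE2 plus SSE float math the 32-bit x86 default
    # for gcc/clang style compilers.
    cc = meson.get_compiler('c')
    if host_machine.cpu_family() == 'x86' and cc.get_argument_syntax() == 'gcc'
      add_project_arguments(['-msse2', '-mfpmath=sse'], language : ['c', 'cpp'])
    endif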
What does this mean for users?
On Linux, this raises the default minimum processor spec to SSE2-supporting CPUs:
* Intel requirements raise from P5 (1993) to Netburst (2000)
* AMD requirements raise from Athlon (1999/2000) to Athlon 64 (2003)
* Via requirements raise from C3 (2001) to C7 (2005)
What does it mean for package maintainers?
For x86-64 distributions that have i386/i686 multilib, nothing changes. You're already on a platform guaranteed to support SSE2.
i386/i686 distributions will need to weigh their min spec against this. Not sure how many still support classic processors.
Who is left out in the cold?
* Intel Quark (2013)
** Embedded board without a GPU; technically it has one PCIe 2.0 lane that someone could plug a GPU into
* Some older Transmeta CPUs, but they had a follow-up that also had SSE2.
** Anyone hacking on these with a modern GPU? I'm guessing they know how to turn this option off.
Reviewed-by: Erik Faye-Lund <erik.faye-lund@collabora.com>
Reviewed-by: Adam Jackson <ajax@redhat.com>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/9868>