radv: enable shaderInt16 unconditionally with LLVM and only GFX8+ with ACO

The Vulkan spec says:

"shaderInt16 specifies whether 16-bit integers (signed and unsigned)
are supported in shader code. If this feature is not enabled, 16-bit
integer types must not be used in shader code."

I think it's safe to enable it because 16-bit integers should be
fully supported with LLVM, and with ACO on GFX8+. On GFX8 and earlier
generations, the throughput of 16-bit integers is the same as 32-bit,
but that shouldn't change anything.
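
For reference, exposing the feature only matters once an application
opts in: below is a minimal sketch (not part of this commit; the helper
name and the omitted queue setup are illustrative) of querying
shaderInt16 and enabling it at device creation, which the spec requires
before 16-bit integer types may appear in shader code.

    #include <stddef.h>
    #include <vulkan/vulkan.h>

    /* Sketch: enable shaderInt16 only when the implementation reports
     * it. Queue creation and error handling are elided for brevity. */
    static VkResult create_device_with_int16(VkPhysicalDevice pdev,
                                             VkDevice *out_dev)
    {
        VkPhysicalDeviceFeatures supported;
        vkGetPhysicalDeviceFeatures(pdev, &supported);

        VkPhysicalDeviceFeatures enabled = {0};
        enabled.shaderInt16 = supported.shaderInt16; /* VK_FALSE otherwise */

        const VkDeviceCreateInfo info = {
            .sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
            .pEnabledFeatures = &enabled,
            /* .queueCreateInfoCount / .pQueueCreateInfos elided */
        };
        return vkCreateDevice(pdev, &info, NULL, out_dev);
    }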

For GFX6-GFX7 ACO support, we would have to implement the conversions
without SDWA (which is only available on GFX8+).

Signed-off-by: Samuel Pitoiset <samuel.pitoiset@gmail.com>
Reviewed-by: Daniel Schürmann <daniel@schuermann.dev>
Part-of: <https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/4874>
Author:    Samuel Pitoiset
Date:      2020-05-04 12:01:41 +02:00
Committed: Marge Bot
Parent:    64662dd5ba
Commit:    b0a7499d28

@@ -908,7 +908,7 @@ void radv_GetPhysicalDeviceFeatures(
     .shaderCullDistance = true,
     .shaderFloat64 = true,
     .shaderInt64 = true,
-    .shaderInt16 = pdevice->rad_info.chip_class >= GFX9,
+    .shaderInt16 = !pdevice->use_aco || pdevice->rad_info.chip_class >= GFX8,
     .sparseBinding = true,
     .variableMultisampleRate = true,
     .inheritedQueries = true,
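
As a reading aid (not part of the patch), the new expression reports
shaderInt16 unconditionally with LLVM (use_aco == false) and only on
GFX8 or newer with ACO. The enum values below are numeric stand-ins for
RADV's chip_class, used only to make the sketch self-contained.

    #include <stdbool.h>
    #include <stdio.h>

    /* Stand-in for the chip_class values used by RADV; only the
     * ordering matters for the comparison below. */
    enum chip_class { GFX6 = 6, GFX7, GFX8, GFX9, GFX10 };

    /* Mirrors the patched expression: always true for LLVM,
     * GFX8+ for ACO. */
    static bool shader_int16(bool use_aco, enum chip_class cc)
    {
        return !use_aco || cc >= GFX8;
    }

    int main(void)
    {
        printf("LLVM/GFX7: %d\n", shader_int16(false, GFX7)); /* 1 */
        printf("ACO/GFX7:  %d\n", shader_int16(true, GFX7));  /* 0 */
        printf("ACO/GFX8:  %d\n", shader_int16(true, GFX8));  /* 1 */
        return 0;
    }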