* rework how shader interface block naming rules are handled
* Fixes 2136
According to the spec, shader interfaces (uniform blocks, buffer
blocks, input blocks, output blocks) all should be matched up via
their block names across all compilation units, not instance names.
Also, all block names can be re-used between all 4 interface types
without conflict. This change matches and remaps all of these blocks
by block name rather than by instance name.
Additionally, the rule that matched uniform and buffer blocks must
either both be anonymous or both be named (though not necessarily
with the same name) is now enforced; see the sketch after this list.
* add warning if instance names differ between matched shader interfaces
* Add test cases from #2137, which is now fixed as well.
* replace some tab characters with spaces
* buffer blocks and uniform blocks now share the same block namespace
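For illustration only (an assumed example, not one of the shaders added by this change), a minimal sketch of matching by block name across two stages:

// vertex stage
#version 450
uniform Matrices { mat4 mvp; } vsMat;
void main() { gl_Position = vsMat.mvp * vec4(0.0); }

// fragment stage: matched to the vertex-stage block by the block name
// "Matrices", not the instance name; the differing instance name now
// only produces a warning
#version 450
uniform Matrices { mat4 mvp; } fsMat;
layout(location = 0) out vec4 color;
void main() { color = fsMat.mvp[0]; }

If only one of the two blocks had an instance name, the anonymous-vs-named rule described above would reject the match.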
When an unsized array is used in a shader storage buffer, glslang calculates member offsets while the block is being declared, which can produce incorrect block offsets once the implicit array size changes.
So here is what we do:
1. For GLSL, add an explicitOffset flag to TQualifier and set it when a layout offset is specified explicitly.
2. Use this flag to tell whether an offset is explicit, and recalculate the block member offsets conditionally for OpenGL (see the sketch below).
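For illustration only (an assumed example, not a test from this change), the kind of block this affects: an explicitly placed member next to an unsized array whose size is only implied later:

#version 450
layout(local_size_x = 1) in;
layout(std430, binding = 0) buffer Data {
    layout(offset = 16) float scale;  // explicit offset: explicitOffset is set,
                                      // so this placement is kept as written
    float values[];                   // unsized array at the end of the block;
                                      // its size is only implied later by use
};
void main() { values[0] = scale; }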
TPoolAllocator is not copy assignable, so this setter could never have
been used. After a recent change (878a24ee2), new versions of Clang
reject this code outright.
The changes to glslang/glslang/MachineIndependent/ParseHelper.cpp exist
purely to prevent even more instances of "warning: enumeration value
‘EOp...’ not handled in switch"
v2: Remove 8-bit types. Overzealous copy-and-paste led to adding
support for a bunch of types that the extension doesn't actually enable.
v3: Update expected test results file. Just changing an expected
results file to make a test pass seems sketchy to me, but I'm not sure
what else to do.
v4: Add missing entry for EOpAbsDifference in
TOutputTraverser::visitBinary. Noticed by JohnK.
Purpose:
According to the GLSL 4.60 spec (4.4.1.4 Compute Shader Inputs), the compute shader input layout qualifiers local_size_x, local_size_y, and local_size_z must be declared with the same values everywhere they appear in the same shader.
"If such a layout qualifier is declared more than once in the same shader, all those declarations must set the same set of local work-group sizes and set them to the same values; otherwise a compile-time error results."
Why this fix:
If we explicitly set "local_size_x = 1" and immediately follow it with a declaration like "local_size_x = 2", the conflict was not detected. That is because every '1' was treated as the default value, so we could not strictly tell whether a value had been declared explicitly or was merely the default.
Test case:
......
layout(local_size_x=1) in;
layout(local_size_x=2) in;
......
So I added test cases for this fix (sketched after the list):
1. set local_size_y = 1 => success
2. set local_size_y = 2 => error
3. set local_size_y = 1 => success
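For illustration only, my reading of how these cases might look in isolation (the exact shaders added are not reproduced here):
......
layout(local_size_x = 1, local_size_y = 1) in;
layout(local_size_x = 1, local_size_y = 1) in;   // same explicit values: accepted (cases 1 and 3)
// layout(local_size_x = 1, local_size_y = 2) in; // conflicting local_size_y: compile-time error (case 2)
......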
The order of error checking was not quite correct (maybe there is no correct
ordering when many checks must be done and they affect each other).
So, check for block-name reuse twice.
glslang/include/intermediate.h -> Add a new interface to set TIntermBranch's expression.
glslang/include/Types.h -> Add an interface to set a Type's basicType and an interface to get the basicType from a TSampler.
glslang/MachineIndependent/intermediate.cpp -> Part of the code in createConversion has been factored out into a new function called buildConvertOp.
glslang/MachineIndependent/localintermediate.h -> Export createConversion and
buildConvertOp as public functions.
glslang/Public/ShaderLang.h -> Add interface to get shader object and shader source.
Saved about 21K; MSVC x86 code size is down to 380K.
Fixed one bug that needs to be looked at on the master branch:
The test for needing a Vulkan binding uses "!layoutAttachment", which does not
actually mean "no layoutAttachment", because the value meaning "no layoutAttachment"
is non-zero. This is why some tests and test results changed.