- some paths didn't release 'res' (the leak pattern is sketched below)
- the token is always '\n' after proper acceptance of the directive itself,
so there is no need to test it, change it to '\n', etc.
- assuming setCurrentColumn(0) is not needed unless there are header tokens,
though it is not clear why it is ever needed
Note: much of the simplified code read as if the included header tokens had
actually been processed, rather than queued up for processing; maybe that explains
some things.
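A minimal sketch of the leak pattern behind the first item above, with
hypothetical names rather than glslang's actual code; owning the object with
std::unique_ptr releases it on every path:

    #include <memory>
    #include <string>

    // Hypothetical names, illustrative only. The old pattern released 'res'
    // only on the happy path, so early returns leaked it; with
    // std::unique_ptr every return path releases it automatically.
    std::string scanDirective(bool earlyError)
    {
        auto res = std::make_unique<std::string>("queued directive tokens");
        if (earlyError)
            return "";   // with a raw pointer, 'res' would leak here
        return *res;     // 'res' is freed on this and every other path
    }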
Also, eliminate the 'atom' field of TPpToken.
Parsing a real 300-line shader, through to making the AST, is about 10% faster.
Memory is slightly reduced (< 1%).
The whole google-test suite, inclusive of all testing overhead, SPIR-V generation,
etc., runs 3% faster.
Since this is a code *simplification* that leads to a performance improvement,
I'm not going to invest much more in measuring the performance than this. The
PP code is simply now in a better state to see how to further rationalize/improve it.
Removed the preprocessor memory pool.
Removed extra copies and unnecessary allocations of objects related to the ones
that were using the pool.
Replaced some allocated pointers with objects instead, generally using more
modern techniques. There end up being fewer memory allocations/deletions to get right.
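As a sketch of the pointer-to-object replacement, with illustrative names (not
glslang's actual classes):

    #include <string>

    // Before: a pool- or heap-allocated member, with a deletion to get right.
    struct TokenNameOld {
        std::string* name;
        TokenNameOld() : name(new std::string) {}
        ~TokenNameOld() { delete name; }   // one of the deletions eliminated
    };

    // After: a plain value member; no allocation or deletion at all.
    struct TokenNameNew {
        std::string name;
    };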
Overall combined effect of all changes is to use slightly less memory and
run slightly faster (< 1% for both, but noticeable).
As part of simplifying the code base, this change makes it easier to see
PP symbol tracking, where I suspect there is an even bigger run-time
simplification to be made.
Implement token pasting as per the C++ specification, within the current
style of the PP code.
Non-identifiers (turning 12 ## 10 into the numeral 1210) are not yet covered;
they should be a simple incremental change built on this one.
Addresses issue #255.
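For illustration, the pasting semantics now implemented (a generic example,
not taken from the glslang tests):

    // The ## operator pastes its operands into a single token.
    #define PASTE(a, b) a ## b

    int PASTE(foo, bar) = 1;   // expands to: int foobar = 1;

    // Not yet covered, per the note above: non-identifier pasting,
    // e.g. PASTE(12, 10) producing the numeral 1210.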
Code using atEndOfFile was dead; instead, do something useful with
the scanner's atEndOfInput(). This allows a better error message
on early termination, rather than cascading errors.
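A minimal sketch of the intent, with a stand-in scanner (glslang's real class
differs):

    #include <iostream>
    #include <string>

    struct Scanner {
        std::string text;
        size_t pos = 0;
        bool atEndOfInput() const { return pos >= text.size(); }
    };

    void parseNext(Scanner& scanner)
    {
        if (scanner.atEndOfInput()) {
            std::cerr << "ERROR: unexpected end of input\n";
            return;   // one clear message; no cascade of follow-on errors
        }
        // ... continue normal parsing ...
    }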
When preprocessing only, some tokens were emitted as <bad token>.
This fixes them to preserve their original content.
This supplants PR #182, with a correction and test results.
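As an illustration of the preserved content ('@' standing for any character
GLSL does not define):

    // Preprocess-only output, shown as before/after strings.
    const char* input        = "float f; @";
    const char* outputBefore = "float f; <bad token>";   // original text lost
    const char* outputAfter  = "float f; @";             // original text preserved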
This would look ahead for a second #, for token pasting, and if one was
not found, back up one token. That is fine, unless at the end of a line,
where it would back up the # itself rather than the look-ahead token.
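One way to avoid this class of bug is to peek rather than consume-and-back-up;
a sketch with std::istringstream standing in for the PP's stream (glslang's
actual fix differs in detail):

    #include <sstream>

    // After one '#', look ahead for a second '#'. peek() never consumes, so
    // when the second '#' is absent -- including at end of line -- there is
    // nothing to back up, and the first '#' is never re-scanned.
    bool sawTokenPaste(std::istringstream& in)
    {
        if (in.peek() == '#') {
            in.get();        // consume the second '#'
            return true;
        }
        return false;        // the look-ahead was never consumed
    }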
- Add new keywords int64_t/uint64_t/i64vec/u64vec (usage is illustrated after this list).
- Support 64-bit integer literals (dec/hex/oct).
- Support built-in operators for 64-bit integer type.
- Add implicit and explicit type conversion for 64-bit integer type.
- Add new built-in functions defined in this extension.
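An illustrative shader exercising the new features, held in a C++ string as it
would be handed to the compiler; the extension name GL_ARB_gpu_shader_int64 is
an assumption, as the list above does not spell it out:

    const char* shader64 = R"(
    #version 450
    #extension GL_ARB_gpu_shader_int64 : enable
    void main()
    {
        int64_t  a = 0x7fffffffffffffffL;    // hex 64-bit literal
        uint64_t b = 1844674407370955161UL;  // decimal 64-bit literal
        i64vec2  v = i64vec2(a, -1L);        // 64-bit vector type
        uint64_t c = b + uint64_t(a);        // 64-bit operator and explicit conversion
    }
    )";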
This plumbs both the current file path and the include depth
back up to the includer. This allows the includer to properly
support relative paths.
This also replaces the string copy that was done during include
with a zero-copy method of accomplishing the same thing. This
prevents extra copies of entire files.
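A sketch of what this enables on the includer side (hypothetical signature;
glslang's actual includer interface differs):

    #include <cstddef>
    #include <string>

    // The requesting file's path makes relative resolution possible, and the
    // result is a pointer/length into includer-owned storage, so whole files
    // are not copied.
    struct IncludedFile {
        const char* data;    // owned by the includer, not copied
        size_t      length;
    };

    IncludedFile includeFile(const std::string& requestedPath,
                             const std::string& requestingPath, // current file path
                             int includeDepth)                  // plumbed back up
    {
        (void)includeDepth;  // e.g., usable to cap nesting
        std::string resolved =
            requestingPath.substr(0, requestingPath.find_last_of('/') + 1)
            + requestedPath;
        static std::string cache;             // stands in for a real file cache
        cache = "// contents of " + resolved; // pretend read
        return { cache.data(), cache.length() };
    }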
Adds parseVersions.h as the base TParseVersions for versioning,
and splits the remainder between TParseContextBase (sharable across parsers)
and TParseContext (now the GLSL-specific part).
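The resulting shape, sketched with members elided:

    class TParseVersions { /* versioning and extension machinery */ };
    class TParseContextBase : public TParseVersions { /* sharable across parsers */ };
    class TParseContext : public TParseContextBase { /* the GLSL-specific part */ };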
There are no calls to the TPpContext that could change the current input,
so every call to pp->getChar and pp->ungetChar ultimately calls this->getch
and this->ungetch, while adding the overhead of virtual calls and vector::back.
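A sketch of the overhead being removed, with stand-in types:

    #include <vector>

    struct Input {
        virtual ~Input() = default;
        virtual int getch() { return 0; }  // what every call landed on anyway
    };

    struct PpContext {
        std::vector<Input*> inputStack;
        Input input;   // in this sketch, the only input there will ever be

        // Old path: vector::back() plus a virtual call.
        int getChar() { return inputStack.back()->getch(); }

        // Equivalent direct path: since nothing can change the current input,
        // getch() can be called non-virtually on the known object.
        int getCharDirect() { return input.Input::getch(); }
    };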
Now extensions required by the preprocessor should be checked via
the ppRequireExtensions method. This is clearer and more coherent
with the rest of the code.
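A sketch of the choke point, with an assumed argument shape mirroring the
requireExtensions convention elsewhere in the code (the real signature may
differ):

    #include <iostream>
    #include <string>

    struct ParseHelper {
        bool extensionEnabled(const std::string&) const { return false; }

        // All PP features that depend on extensions funnel through here.
        void ppRequireExtensions(int line, int count, const char* const exts[],
                                 const char* feature)
        {
            for (int i = 0; i < count; ++i)
                if (!extensionEnabled(exts[i]))
                    std::cerr << "ERROR: line " << line << ": " << feature
                              << " requires extension " << exts[i] << "\n";
        }
    };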
After parsing a #include directive, we push a TokenizableString
which contains the content of the included file into the input
stack. Henceforth, tokens will be read from the newly pushed
TokenizableString. However, the scanner in TParseContext still
points to the previous input stream. We need to update the scanner
to point to the new input stream inside TokenizableString. Thus,
the setCurrent{String|Line|..} methods in TParseContext update
the state of the correct input stream. After finishing the newly
pushed TokenizableString, we need to restore the scanner to the
previous input stream.
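A minimal sketch of that push/repoint/pop sequence (hypothetical classes; the
real ones are glslang's input stack and scanner):

    #include <stack>
    #include <string>
    #include <utility>

    struct InputState { std::string source; int line = 1; };

    struct Scanner {
        InputState* current = nullptr;
        void pointAt(InputState& s) { current = &s; } // setCurrent{String|Line|..}
    };

    struct IncludeStack {
        std::stack<InputState> inputs;
        Scanner scanner;

        void pushInclude(std::string contents)   // after parsing #include
        {
            inputs.push({std::move(contents), 1});
            scanner.pointAt(inputs.top());       // repoint to the new input
        }

        void popInclude()                        // included text exhausted
        {
            inputs.pop();
            if (!inputs.empty())
                scanner.pointAt(inputs.top());   // restore the previous input
        }
    };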