Spoc 481 (step 2): Extend DISCARDFILE coverage to UPDATE, DELETE, and SQL/DDL operations#396
Conversation
Preparation commit. In TRANSDISCARD mode, instead of executing each DML in a subtransaction during replay (and logging to exception_log), dump records directly to a per-database binary file under pg_spock/. This follows the Oracle GoldenGate approach: on error, skip re-execution and just record what was discarded. - Add discardfile_write() to serialize records as length-prefixed JSON to pg_spock/discard_<dboid>.log, using an atomic counter for IDs and a dedicated LWLock for concurrent append safety. - Add spock.discard_read() SQL function to read the file back as a table. - Wire TRANSDISCARD/SUB_DISABLE paths in handle_insert() to use the discardfile instead of subtransaction replay. Further commits should add this behaviour into other handle_* routines.
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/spock_apply.c`:
- Around line 1311-1327: The code forces failed = false after calling
discardfile_write() even though the comment says a logging failure should be
fatal; update the branches that call discardfile_write() (see
discardfile_write(...) invocations when exception_behaviour == TRANSDISCARD ||
exception_behaviour == SUB_DISABLE and the corresponding UPDATE/DELETE/SQL
branches) to check discardfile_write()'s return value, and if it indicates
failure set failed = true and propagate/handle the fatal condition instead of
continuing (e.g., abort current apply, bubble up an error or follow existing
fatal-handling code paths used elsewhere); ensure you reference
exception_behaviour, TRANSDISCARD, SUB_DISABLE, get_local_node,
discardfile_write, and the failed flag when making the change so the audit-write
failure stops normal discard/disable replay.
- Around line 2316-2321: The code currently uses JsonbToCString on
queued_message->message->root and writes that JSON-escaped string to
discardfile_write; instead extract the raw scalar SQL text the same way
handle_sql() does (using JsonbIterator to retrieve the unquoted, unescaped
string from queued_message->message->root) and pass that decoded C string to
discardfile_write (and any logging/exception recording) so the stored SQL is the
original statement text, not a JSON-encoded representation; replace the
JsonbToCString usage with the extraction logic and ensure variables passed to
discardfile_write use that decoded value.
In `@src/spock_exception_handler.c`:
- Around line 532-559: Validate the disk-provided reclen before calling palloc:
ensure reclen is positive and within a sane upper bound (e.g., not larger than a
configured/max allowed record size or a reasonable limit such as
MaxAllocSize/PG_INT32_MAX) and handle invalid values by logging an error/warning
with ereport(errcode(...), errmsg(..., path)) and breaking out (or skipping) to
avoid palloc wraparound/huge allocations; perform this check immediately after
the read of reclen in the loop (before palloc(reclen + 1)), and ensure any
early-exit frees allocated resources and does not call jsonb_in or
tuplestore_putvalues with corrupted data.
- Around line 343-460: The function discardfile_write leaks old_json, new_json,
buf.data and the string returned by quote_qualified_identifier(); fix by storing
the qualified relname in a local char * (e.g. qualified_rel =
quote_qualified_identifier(...)) and passing that to escape_json, then
pfree(qualified_rel) after building the JSON, and ensure you pfree(old_json),
pfree(new_json) and pfree(buf.data) on every exit path (all error branches
already free buf.data in some places—add where missing—and definitely free them
before the successful return at end of discardfile_write). Ensure these frees
occur after the data is no longer needed (after appendStringInfo calls) so no
use-after-free occurs.
In `@tests/regress/sql/replication_set.sql`:
- Around line 278-283: The UPDATE discard-file path isn't being exercised
because the suite switches spock.exception_behaviour away from 'discard' before
the UPDATE tests, so the later SELECT from spock.discard_read() only returns
earlier TRANSDISCARD rows; to fix, ensure the UPDATE test that should produce
discard records runs while spock.exception_behaviour = 'discard' (set
spock.exception_behaviour = 'discard' immediately before the UPDATE/DELETE/SQL
statements that are intended to produce discard records) and then call
spock.discard_read() (or explicitly invoke TRANSDISCARD-generating statements)
to capture the new UPDATE/DELETE/SQL discard entries so the spock.discard_read()
query covers the new code paths.
- Around line 217-223: The SELECT is reading rec->>'id' which the writer
(discardfile_write()) never emits; update the query to read the actual stored
field(s) — e.g., use rec->>'xid' (or COALESCE(rec->>'xid', rec->>'remote_xid')
if both variants must be supported) in place of rec->>'id' for the id column and
use (rec->>'xid')::bigint (or the COALESCE expression cast to bigint) in the
ORDER BY so ordering and payload validation are deterministic; keep
spock.discard_read() as the source and only change the selected/ordered field
names.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c4291293-fcd5-4a8b-b23d-be7ce8ca46b4
⛔ Files ignored due to path filters (1)
tests/regress/expected/replication_set.out is excluded by !**/*.out
📒 Files selected for processing (7)
include/spock_exception_handler.h, include/spock_worker.h, sql/spock--6.0.0-devel.sql, src/spock_apply.c, src/spock_exception_handler.c, src/spock_shmem.c, tests/regress/sql/replication_set.sql
sql = JsonbToCString(NULL,
                     &queued_message->message->root, 0);
discardfile_write(local_node->node->name, NULL,
                  remote_origin_id, local_node->node->id,
                  "SQL", NULL, NULL, remote_xid,
                  sql, queued_message->role);
🧩 Analysis chain
🌐 Web query:
PostgreSQL JsonbToCString raw-scalar string output: does it include JSON quotes/escaping for string scalars, compared with extracting the value through JsonbIterator as jbvString?
💡 Result:
Yes, PostgreSQL's JsonbToCString (via JsonbToCStringWorker) includes JSON quotes and escaping for string scalars. When processing a WJB_VALUE or WJB_ELEM token with a jbvString JsonbValue via the iterator, JsonbToCStringWorker calls jsonb_put_escaped_value, which for jbvString invokes escape_json on the raw string content (JsonbValue.val.string.val of length val.string.len). This produces properly quoted and escaped JSON string output like "hello\"world". In contrast, extracting the raw scalar via JsonbIterator returns a JsonbValue with type=jbvString, where val.string.val points to the unquoted, de-escaped original string bytes stored in jsonb (not null-terminated, length in val.string.len). This raw content lacks surrounding JSON quotes and has no JSON-level escaping (e.g., a quote character is stored as a plain " with no backslash).
Citations:
- 1: https://doxygen.postgresql.org/jsonb_8c_source.html
- 2: https://github.com/postgres/postgres/blob/master/src/backend/utils/adt/jsonb.c
- 3: https://www.highgo.ca/2024/08/23/how-to-utilize-postgresqls-jsonb-apis-as-a-simple-json-parser-for-your-extension/
- 4: https://doxygen.postgresql.org/jsonb_8h_source.html
- 5: https://doxygen.postgresql.org/structJsonbValue.html
Log the decoded SQL string, not the jsonb payload.
This code serializes the entire jsonb value using JsonbToCString(), which applies JSON quoting and escaping to string scalars (producing output like "\"hello\\\"world\""). In contrast, handle_sql() extracts the raw scalar directly via JsonbIterator, obtaining the original unquoted and de-escaped string content. Using JsonbToCString() introduces an extra JSON encoding layer, altering the stored statement text in both the discard file and spock.exception_log with unwanted JSON syntax.
-- Check discard file contents (TRANSDISCARD records go here)
SELECT rec->>'id' AS id, rec->>'node_name' AS node_name,
       rec->>'relname' AS relname, rec->>'operation' AS operation,
       rec->'remote_tuple' AS remote_tuple
FROM spock.discard_read() AS rec
ORDER BY (rec->>'id')::bigint;
This query is reading a field the writer never emits.
discardfile_write() currently stores xid/remote_xid, not id, so this SELECT returns NULL ids and orders every row by NULL. The regression won't actually validate the payload shape or deterministic ordering.
-- Check discard file contents for UPDATE tests
SELECT rec->>'id' AS id, rec->>'node_name' AS node_name,
       rec->>'relname' AS relname, rec->>'operation' AS operation,
       rec->'remote_tuple' AS remote_tuple
FROM spock.discard_read() AS rec
ORDER BY (rec->>'id')::bigint;
This block is not exercising the new UPDATE discard-file path.
After Line 166 the suite switches spock.exception_behaviour to discard, so the UPDATE scenario never writes discard records at all. This query is just rereading earlier TRANSDISCARD output, which leaves the new UPDATE/DELETE/SQL discard branches effectively uncovered.
In TRANSDISCARD and SUB_DISABLE modes, handle_commit already logs a single exception entry for the entire discarded transaction. The per-row log_insert_exception call in handle_insert was producing duplicate entries. Move the call inside the DISCARD-mode branch so only DISCARD mode logs per-row exceptions; TRANSDISCARD/SUB_DISABLE rely on the discard file for per-row detail and handle_commit for the audit-trail entry. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
In TRANSDISCARD and SUB_DISABLE modes, only INSERT was being written to the discard file; UPDATE, DELETE, and SQL/DDL operations went through the subtransaction path and were silently discarded. Add the same early-exit discardfile_write() path for all four operation types so that every discarded change is recorded. To support SQL/DDL operations (which have no SpockRelation), teach discardfile_write() to accept rel=NULL: when NULL, the relname field is written as an empty string, tuple serialization is skipped, and two new optional JSON fields (ddl_statement, ddl_user) carry the SQL text and executing role.
Previously, in SUB_DISABLE mode, the replication origin LSN was advanced even when a retried transaction succeeded, causing the transaction to be permanently skipped after the subscription was re-enabled. Now the origin LSN is not advanced for SUB_DISABLE retries, so the transaction will be delivered again and applied normally after the user fixes the root cause. Additionally, log full DML details (relation, tuples) to the exception log table for TRANSDISCARD/SUB_DISABLE modes. The LSN of the record that caused the initial error is saved in shared memory, and during the retry the DML handler identifies the failed record by LSN match and calls log_insert_exception with complete tuple data, matching the level of detail previously available only in DISCARD mode.
Replace the AbortCurrentTransaction + StartTransactionCommand pattern in the TRANSDISCARD/SUB_DISABLE retry path with a read-only transaction. Setting transaction_read_only at the start of the replay prevents any actual DML while still allowing direct catalog writes for exception_log entries via log_insert_exception. This eliminates the need to abort the replayed transaction and start a new one just for logging — the DML-level exception_log entries (with full relation/tuple data) are committed as part of the replay transaction itself. The transdiscard_skip_commit goto is no longer needed. For SUB_DISABLE, the subscription disable logic is consolidated into the post-commit section alongside the existing xact_had_exception handling.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/spock_apply.c (1)
814-855: ⚠️ Potential issue | 🟠 Major
Handle the WAL-less retry case when exception logging is off.
This branch now commits the read-only retry transaction on the assumption that spock.exception_log inserts will make it non-empty, but exception_logging can be LOG_NONE. In that case the transaction only appends to the discard file, which is outside PostgreSQL WAL, so CommitTransactionCommand() still behaves like the empty-transaction case handled later in this function and won't durably advance the replication origin. A crash after feedback/progress update can then replay the same remote transaction again. Please track whether any in-transaction write happened in discard mode and fall back to replorigin_session_advance() when it did not.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spock_apply.c` around lines 814 - 855, When handling TRANSDISCARD/SUB_DISABLE you must detect the WAL-less retry case (when spock.exception_log inserts are not WAL-logged because exception_logging == LOG_NONE) and advance the replication origin manually; add a boolean flag (e.g., wrote_in_transaction_write or wrote_exception_log) that log_insert_exception sets when it actually performs an in-transaction, WAL-visible write, and in the branch around exception_behaviour (where exception_log is read and CommitTransactionCommand() is called) check: if exception_behaviour is TRANSDISCARD or SUB_DISABLE and the flag is false (no in-transaction WAL write occurred) then call replorigin_session_advance(end_lsn) (using the same end_lsn used for the elog) before calling CommitTransactionCommand(); ensure the flag is reset (or stored per-transaction) and reference symbols: exception_behaviour, TRANSDISCARD, SUB_DISABLE, exception_log_ptr/exception_log, log_insert_exception, CommitTransactionCommand(), replorigin_session_advance(), and end_lsn.
♻️ Duplicate comments (4)
src/spock_apply.c (2)
2295-2300: ⚠️ Potential issue | 🟠 Major
Serialize the decoded SQL text, not the jsonb scalar.
JsonbToCString() re-encodes string scalars as JSON, so the discard file and spock.exception_log will store an escaped JSON literal instead of the original SQL statement text. handle_sql() already has the correct JsonbIterator extraction logic; reuse that here before calling discardfile_write() / add_entry_to_exception_log().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spock_apply.c` around lines 2295 - 2300, The code currently calls JsonbToCString on queued_message->message->root which re-encodes a string scalar as JSON and stores an escaped literal; instead reuse the JsonbIterator extraction used in handle_sql() to pull the raw SQL text from queued_message->message->root (extract the JB_STR/DATA scalar value) and pass that decoded C string into discardfile_write() and add_entry_to_exception_log() (replace the JsonbToCString usage and ensure the same variable names like sql, remote_xid, queued_message->role are used when calling discardfile_write/add_entry_to_exception_log).
1290-1308: ⚠️ Potential issue | 🟠 Major
Propagate discardfile_write() failures instead of continuing.
These branches all document discard logging as mandatory, but the false return is ignored and replay proceeds as if the audit record was written. In TRANSDISCARD/SUB_DISABLE, that can silently skip DML/SQL without leaving anything in the discard file.
⚙️ Minimal pattern to apply at each call site:
- discardfile_write(...);
+ if (!discardfile_write(...))
+     ereport(ERROR,
+             (errmsg("failed to append discard record for this operation")));
Also applies to: 1475-1494, 1599-1617, 2282-2301
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spock_apply.c` around lines 1290 - 1308, The discardfile_write() calls in the TRANSDISCARD/SUB_DISABLE branches (and the other similar sites) currently ignore failures; change each call site to check its boolean return and, on failure, immediately propagate the error instead of continuing replay — e.g., detect discardfile_write(...) == false and return a failure status to the caller (or raise the appropriate error path used by this apply worker) so the worker will follow exception_behaviour rather than continuing; update the handling around exception_log_ptr/my_exception_log_index (and the equivalent blocks at the other listed call sites) so you do not clear or advance state incorrectly when propagation occurs.
src/spock_exception_handler.c (2)
532-539: ⚠️ Potential issue | 🟠 Major
Reject invalid reclen values before palloc()/read().
reclen comes straight from disk and is used in both palloc(reclen + 1) and read(fd, rec, reclen). A negative or oversized value can wrap the allocation and then hand read() a huge byte count, so a corrupt discard file can crash this SQL-callable function instead of stopping cleanly. Guard reclen <= 0 || reclen > MaxAllocSize - 1 and warn/break before allocating.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spock_exception_handler.c` around lines 532 - 539, Reject malformed reclen values before allocating/reading: in the loop that reads reclen (the code that calls palloc(reclen + 1) and read(fd, rec, reclen)), validate reclen and bail out if reclen <= 0 or reclen > MaxAllocSize - 1 (or other appropriate MaxAllocSize threshold) to avoid integer wrap/oversized allocations; log or warn about the corrupt record and break/return instead of calling palloc() or read(). Ensure you reference the existing reclen, rec, palloc(...) and read(fd, ...) usages when making this change.
343-460: ⚠️ Potential issue | 🟠 Major
Release the per-record JSON buffers before returning.
old_json, new_json, buf.data, and the qualified relname returned by quote_qualified_identifier() are never freed on the success path, and the early returns only release buf.data. UPDATE/DELETE/SQL discard replay can hit this repeatedly from MessageContext, so memory grows with each discarded record until the transaction ends.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spock_exception_handler.c` around lines 343 - 460, Summary: discardfile_write leaks memory because old_json, new_json, the temporary qualified relname from quote_qualified_identifier(), and buf.data are not freed on the success path (and some early returns miss freeing the JSON buffers). Fix: in discardfile_write (referencing symbols old_json, new_json, buf.data, quote_qualified_identifier(rel->nspname, rel->relname) and function discardfile_write) ensure you store the result of quote_qualified_identifier in a local char * (e.g. qualified_relname), call escape_json using that, and pfree(qualified_relname) afterwards; always pfree(old_json) and pfree(new_json) if non-NULL before any return; and pfree(buf.data) in every exit path (both on errors and on the final successful return) so no allocations (old_json, new_json, qualified relname, buf.data) remain leaked.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: b63574c6-093e-4e1e-a404-f69b276793e8
⛔ Files ignored due to path filters (1)
tests/regress/expected/replication_set.out is excluded by !**/*.out
📒 Files selected for processing (3)
include/spock_exception_handler.h, src/spock_apply.c, src/spock_exception_handler.c
🚧 Files skipped from review as they are similar to previous changes (1)
- include/spock_exception_handler.h
Summary
- Teach discardfile_write() to accept rel=NULL for SQL/DDL operations that have no target relation — relname is written as an empty string, tuple serialisation is skipped, and two new optional JSON fields (ddl_statement, ddl_user) carry the SQL text and the executing role
- Remove !failed blocks that tested for TRANSDISCARD/SUB_DISABLE despite already being inside the else-branch that excludes those modes