Configuration parameters (GUCs) v17
These Grand Unified Configuration (GUC) parameters are available with EDB Postgres Extended Server.
Backend parameters
Backend parameters introduce a test probe point infrastructure for injecting sleeps or errors into PostgreSQL and extensions.
Any `PROBE_POINT` defined throughout the Postgres code marks an important code path. These probe points might be activated to signal the current backend or to `elog(...)` a `LOG`/`ERROR`/`FATAL`/`PANIC`. They might also, or instead, add a delay at that point in the code.
Unless explicitly activated, probe points have no effect and add only a single optimizer-hinted branch, so they're safe on hot paths.
When an active probe point is hit and the counter is satisfied, after any specified sleep interval, a log message is always emitted at `DEBUG1` or higher.
pg2q.probe_point
The name of a `PROBE_POINT` in the code of 2ndQPostgres or in an extension that defines a `PROBE_POINT`. This parameter isn't validated. If a nonexistent probe point is named, it's never hit.
Only one probe point can be active at a time. This parameter doesn't accept a list; if you supply one, nothing matches.
Probe points generally have a unique name, given as the argument to the `PROBE_POINT` macro in the code where it's defined. It's also possible to use the same `PROBE_POINT` name in multiple code paths that trigger the same action of interest. The probe then fires when any of those paths is taken.
pg2q.probe_counter
You might need to act on a probe only after a loop runs the number of times specified with this parameter. In such cases, set this GUC to the number of iterations at which the probe point fires; the counter then resets.
The default value is `1`, meaning the probe point always fires when the name matches.
pg2q.probe_sleep
Sleep for `pg2q.probe_sleep` milliseconds after hitting the probe point, then fire the action in `pg2q.probe_action`.
pg2q.probe_action
Action to take when the named `pg2q.probe_point` is hit. Available actions are:

- `sleep` — Emit a `DEBUG` message with the probe name.
- `log` — Emit a `LOG` message with the probe name.
- `error` — `elog(ERROR, ...)` to raise an `ERROR` condition.
- `fatal` — `elog(FATAL, ...)`.
- `panic` — `elog(PANIC, ...)`, which generally then calls `abort()` and delivers a `SIGABRT` (signal 6) to cause the backend to core dump. The probe point tries to set the core file limit to enable core dumps if the hard ulimit permits.
- `sigint`, `sigterm`, `sigquit`, `sigkill` — Deliver the named signal to the backend that hit the probe point.
pg2q.probe_backend_pid
If nonzero, the probe sleep and action are skipped for all backends except the backend with this PID.
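As a minimal sketch of how these probe GUCs combine, the following makes one backend sleep and then raise an error the fifth time it hits a probe point. The probe name and PID are placeholders, and it assumes the probe GUCs can be changed with `ALTER SYSTEM` and picked up on reload:

```sql
-- Hypothetical debugging session. 'my_probe_name' and the PID are
-- placeholders, not identifiers taken from the source tree.
ALTER SYSTEM SET pg2q.probe_point = 'my_probe_name';
ALTER SYSTEM SET pg2q.probe_counter = 5;          -- fire on the fifth hit
ALTER SYSTEM SET pg2q.probe_sleep = 1000;         -- sleep 1000 ms first
ALTER SYSTEM SET pg2q.probe_action = 'error';     -- then raise an ERROR
ALTER SYSTEM SET pg2q.probe_backend_pid = 12345;  -- limit to this backend
SELECT pg_reload_conf();
```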
server_2q_version_num and server_2q_version
The `server_2q_version_num` and `server_2q_version` configuration parameters allow the 2ndQuadrant-specific version number and version substring, respectively, to be accessible to external modules.
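Both values can be read from SQL like any other GUC, for example:

```sql
-- Inspect the 2ndQuadrant-specific version information
SHOW server_2q_version_num;
SHOW server_2q_version;
```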
Table-level compression control option
You can set the table-level option `compress_tuple_target` to decide when to trigger compression on a tuple. Previously, the `toast_tuple_target` (or the compile-time default) decided whether to compress a tuple. However, this was detrimental when a tuple was large enough and had a good compression ratio but not large enough to cross the TOAST threshold.
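As an illustration, `compress_tuple_target` uses the standard storage-parameter syntax; the table name here is hypothetical:

```sql
-- Compress tuples once they exceed 512 bytes, even when they're
-- still below the TOAST threshold ('measurements' is hypothetical).
ALTER TABLE measurements SET (compress_tuple_target = 512);
```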
pg2q.max_tuple_field_size
Restricts the maximum uncompressed size of the internal representation of any one field that can be written to a table, in bytes.
The default `pg2q.max_tuple_field_size` is 1073740799 bytes, which is 1024 bytes less than the 1 GiB maximum field size usually imposed by PostgreSQL. This margin helps prevent cases where tuples are committed to disk but can't then be processed by logical decoding output plugins and sent to downstream servers.
Set `pg2q.max_tuple_field_size` to `1GB` or `1073741823` to disable the feature.
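For example, a sketch that disables the restriction cluster-wide, assuming the parameter can be changed with `ALTER SYSTEM`:

```sql
-- Raise the limit to the usual PostgreSQL maximum, disabling the check
ALTER SYSTEM SET pg2q.max_tuple_field_size = '1GB';
SELECT pg_reload_conf();
```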
If your application doesn't rely on inserting large fields, consider setting `pg2q.max_tuple_field_size` to a much smaller value, such as `100MB` or even less, as shown in the sketch after this list. Among other issues, large fields can:
- Cause surprising application behavior
- Increase memory consumption for the database engine during queries and replication
- Slow down logical replication
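For example, under the same `ALTER SYSTEM` assumption as the previous sketch:

```sql
-- Cap the uncompressed size of any single field at 100 MB
ALTER SYSTEM SET pg2q.max_tuple_field_size = '100MB';
SELECT pg_reload_conf();
```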
While this parameter is enabled, queries that `INSERT` or `UPDATE` an oversized field fail with an `ERROR` such as: