You use the following Grand Unified Configuration (GUC) parameters to optimize the behavior of your Tiger Cloud service.

The namespace of each GUC is `timescaledb`. To set a GUC, specify `<namespace>.<GUC name>`. For example:

```sql
SET timescaledb.enable_tiered_reads = true;
```
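These GUCs follow the standard PostgreSQL configuration scoping rules. A sketch of the common operations, assuming a placeholder database named `tsdb`:

```sql
-- Set for the current session only
SET timescaledb.enable_tiered_reads = true;

-- Persist for every new connection to a database (requires appropriate privileges);
-- `tsdb` is a placeholder database name
ALTER DATABASE tsdb SET timescaledb.enable_tiered_reads = true;

-- Inspect the current value
SHOW timescaledb.enable_tiered_reads;
SELECT current_setting('timescaledb.enable_tiered_reads');

-- Revert to the default
RESET timescaledb.enable_tiered_reads;
```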
| Name | Type | Default | Description |
|------|------|---------|-------------|
| `auto_sparse_indexes` | BOOLEAN | true | Hypertable columns used as index keys get suitable sparse indexes when compressed. Must be set at the moment of chunk compression, for example when `compress_chunk()` is called. |
| `bgw_log_level` | ENUM | WARNING | Log level for the scheduler and workers of the background worker subsystem. Requires a configuration reload to change. |
| `compress_truncate_behaviour` | ENUM | truncate_only | Defines how truncate behaves at the end of compression: `truncate_only` forces truncation, `truncate_disabled` deletes rows instead of truncating, and `truncate_or_delete` allows falling back to deletion. |
| `compression_batch_size_limit` | INTEGER | 1000 | A value between 1 and 999 forces compression to limit compressed batches to that many uncompressed tuples. Setting this to 0 uses the maximum batch size of 1000. Min: 1, max: 1000. |
| `compression_orderby_default_function` | STRING | `_timescaledb_functions.get_orderby_defaults` | Function used to calculate the default `order_by` setting for compression. |
| `compression_segmentby_default_function` | STRING | `_timescaledb_functions.get_segmentby_defaults` | Function used to calculate the default `segment_by` setting for compression. |
| `current_timestamp_mock` | STRING | NULL | For debugging purposes. |
| `debug_allow_cagg_with_deprecated_funcs` | BOOLEAN | false | For debugging and testing purposes. |
| `debug_bgw_scheduler_exit_status` | INTEGER | 0 | For debugging purposes. Min: 0, max: 255. |
| `debug_compression_path_info` | BOOLEAN | false | For debugging and informational purposes. |
| `debug_have_int128` | BOOLEAN | true when built with 128-bit integer support | For debugging purposes. |
| `debug_require_batch_sorted_merge` | BOOLEAN | false | For debugging purposes. |
| `debug_require_vector_agg` | ENUM | DRO_Allow | For debugging purposes. |
| `debug_require_vector_qual` | ENUM | DRO_Allow | For debugging purposes, to check whether vectorized quals are used. |
| `debug_toast_tuple_target` | INTEGER | 128 | For debugging purposes. Min: 1, max: 65535. |
| `default_hypercore_use_access_method` | BOOLEAN | false | Sets the global default for using the Hypercore table access method (TAM) when compressing chunks. |
| `enable_bool_compression` | BOOLEAN | true | Enable bool compression. |
| `enable_bulk_decompression` | BOOLEAN | true | Increases decompression throughput, but might increase query memory usage. |
| `enable_cagg_reorder_groupby` | BOOLEAN | true | Enable GROUP BY clause reordering for continuous aggregates. |
| `enable_cagg_sort_pushdown` | BOOLEAN | true | Enable pushdown of the ORDER BY clause for continuous aggregates. |
| `enable_cagg_watermark_constify` | BOOLEAN | true | Enable constifying the cagg watermark for real-time caggs. |
| `enable_cagg_window_functions` | BOOLEAN | false | Allow window functions in continuous aggregate views. |
| `enable_chunk_append` | BOOLEAN | true | Enable use of the chunk append node. |
| `enable_chunk_skipping` | BOOLEAN | false | Enable using chunk column stats to filter chunks based on column filters. |
| `enable_chunkwise_aggregation` | BOOLEAN | true | Enable pushdown of aggregations to the chunk level. |
| `enable_columnarscan` | BOOLEAN | true | A columnar scan replaces sequential scans for columnar-oriented storage and enables storage-specific optimizations such as vectorized filters. Disabling columnar scan makes PostgreSQL fall back to regular sequential scans. |
| `enable_compressed_direct_batch_delete` | BOOLEAN | true | Enable direct batch deletion in compressed chunks. |
| `enable_compressed_skipscan` | BOOLEAN | true | Enable SkipScan for distinct inputs over compressed chunks. |
| `enable_compression_indexscan` | BOOLEAN | false | Enable index scan during compression if a matching index is found. |
| `enable_compression_ratio_warnings` | BOOLEAN | true | Enable warnings for a poor compression ratio. |
| `enable_compression_wal_markers` | BOOLEAN | true | Enable generation of markers in the WAL stream that mark the start and end of compression operations. |
| `enable_compressor_batch_limit` | BOOLEAN | false | Enable the batch limit for compressors that can exceed the allocation limit (1 GB). This feature limits those compressors by reducing the batch size, and thus avoids hitting the limit. |
| `enable_constraint_aware_append` | BOOLEAN | true | Enable constraint exclusion at execution time. |
| `enable_constraint_exclusion` | BOOLEAN | true | Enable planner constraint exclusion. |
| `enable_custom_hashagg` | BOOLEAN | false | Enable creation of custom hash aggregation plans. |
| `enable_decompression_sorted_merge` | BOOLEAN | true | Enable merging of compressed batches to preserve the compression order by. |
| `enable_delete_after_compression` | BOOLEAN | false | Delete all rows after compression instead of truncating. |
| `enable_deprecation_warnings` | BOOLEAN | true | Enable warnings when using deprecated functionality. |
| `enable_direct_compress_copy` | BOOLEAN | false | Enable experimental support for direct compression during COPY. |
| `enable_direct_compress_copy_client_sorted` | BOOLEAN | false | Correct handling of data sorting by the user is required for this option. |
| `enable_direct_compress_copy_sort_batches` | BOOLEAN | true | Enable batch sorting during direct compress COPY. |
| `enable_dml_decompression` | BOOLEAN | true | Enable DML decompression when modifying a compressed hypertable. |
| `enable_dml_decompression_tuple_filtering` | BOOLEAN | true | Recheck tuples during DML decompression to decompress only batches with matching tuples. |
| `enable_event_triggers` | BOOLEAN | false | Enable event triggers for chunk creation. |
| `enable_exclusive_locking_recompression` | BOOLEAN | false | Enable taking an exclusive lock on the chunk during segmentwise recompression. |
| `enable_foreign_key_propagation` | BOOLEAN | true | Adjust foreign key lookup queries to target the whole hypertable. |
| `enable_hypercore_scankey_pushdown` | BOOLEAN | true | Enabling this setting might lead to faster scans when query qualifiers match Hypercore segmentby and orderby columns. |
| `enable_job_execution_logging` | BOOLEAN | false | Retain job run status in the logging table. |
| `enable_merge_on_cagg_refresh` | BOOLEAN | false | Enable the MERGE statement on cagg refresh. |
| `enable_now_constify` | BOOLEAN | true | Enable constifying `now()` in query constraints. |
| `enable_null_compression` | BOOLEAN | true | Enable null compression. |
| `enable_optimizations` | BOOLEAN | true | Enable TimescaleDB query optimizations. |
| `enable_ordered_append` | BOOLEAN | true | Enable the ordered append optimization for queries that are ordered by the time dimension. |
| `enable_parallel_chunk_append` | BOOLEAN | true | Enable use of the parallel-aware chunk append node. |
| `enable_qual_propagation` | BOOLEAN | true | Enable propagation of qualifiers in JOINs. |
| `enable_rowlevel_compression_locking` | BOOLEAN | false | Enable row-level locking during compression. Use only if you know what you are doing. |
| `enable_runtime_exclusion` | BOOLEAN | true | Enable runtime chunk exclusion in the ChunkAppend node. |
| `enable_segmentwise_recompression` | BOOLEAN | true | Enable segmentwise recompression. |
| `enable_skipscan` | BOOLEAN | true | Enable SkipScan for DISTINCT queries. |
| `enable_skipscan_for_distinct_aggregates` | BOOLEAN | true | Enable SkipScan for DISTINCT aggregates. |
| `enable_sparse_index_bloom` | BOOLEAN | true | This sparse index speeds up equality queries on compressed columns; disable it when not desired. |
| `enable_tiered_reads` | BOOLEAN | true | Enable reading of tiered data by including a foreign table representing the data in object storage in the query plan. |
| `enable_transparent_decompression` | ENUM | 1 | Enable transparent decompression when querying hypertables. |
| `enable_tss_callbacks` | BOOLEAN | true | Enable ts_stat_statements callbacks. |
| `enable_vectorized_aggregation` | BOOLEAN | true | Enable vectorized aggregation for compressed data. |
| `hypercore_arrow_cache_max_entries` | INTEGER | 25000 | The maximum number of decompressed arrow segments that can be cached before entries are evicted. This mainly affects the performance of index scans on the Hypercore TAM when segments are accessed in non-sequential order. Min: 1, max: 2147483647. |
| `hypercore_copy_to_behavior` | ENUM | no_compressed_data | Set to `all_data` to return both compressed and uncompressed data via the Hypercore table when using COPY TO. Set to `no_compressed_data` to skip compressed data. |
| `hypercore_indexam_whitelist` | STRING | `btree,hash` | List of index access method names supported by Hypercore. |
| `last_tuned` | STRING | NULL | Records the last time timescaledb-tune ran. |
| `last_tuned_version` | STRING | NULL | Version of timescaledb-tune used to tune. |
| `license` | STRING | TS_LICENSE_DEFAULT | Determines which features are enabled. |
| `materializations_per_refresh_window` | INTEGER | 10 | The maximum number of individual refreshes per cagg refresh. If more refreshes are needed, they are merged into a single larger refresh. Min: 0, max: 2147483647. |
| `max_cached_chunks_per_hypertable` | INTEGER | 1024 | Maximum number of chunks stored in the cache. Min: 0, max: 65536. |
| `max_open_chunks_per_insert` | INTEGER | 1024 | Maximum number of open chunk tables per insert. Min: 0, max: 32767. |
| `max_tuples_decompressed_per_dml_transaction` | INTEGER | 100000 | If the number of tuples decompressed by a DML transaction exceeds this value, an error is thrown and the transaction is rolled back. Setting this to 0 allows an unlimited number of decompressed tuples. Min: 0, max: 2147483647. |
| `restoring` | BOOLEAN | false | In restoring mode, all TimescaleDB internal hooks are disabled. This mode is required for restoring logical dumps of databases that use TimescaleDB. |
| `shutdown_bgw_scheduler` | BOOLEAN | false | For debugging purposes. |
| `skip_scan_run_cost_multiplier` | REAL | 1.0 | Multiplier applied to the estimated SkipScan run cost: the default 1.0 uses the regular estimate, and 0.0 makes the SkipScan run cost zero. Min: 0.0, max: 1.0. |
| `telemetry_level` | ENUM | TELEMETRY_DEFAULT | Level used to determine which telemetry to send. |
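As an illustration only, not a recommended configuration, several of the parameters above can be combined in one session while investigating compression behavior:

```sql
-- Allow at most 500000 tuples to be decompressed by a single DML transaction;
-- exceeding the limit throws an error and rolls the transaction back (0 = unlimited)
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 500000;

-- Temporarily disable SkipScan while comparing query plans
SET timescaledb.enable_skipscan = false;

-- Print compression path details to the log while testing (debug GUC)
SET timescaledb.debug_compression_path_info = true;

-- Return everything to the defaults afterwards
RESET timescaledb.max_tuples_decompressed_per_dml_transaction;
RESET timescaledb.enable_skipscan;
RESET timescaledb.debug_compression_path_info;
```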

Version: 2.21.0
