Old API since TimescaleDB v2.18.0. Replaced by `convert_to_rowstore`.
Important

Compressing your data reduces the amount of storage space used, but you should always leave some additional storage capacity. This gives you the flexibility to decompress chunks when necessary, for actions such as bulk inserts.

This section describes the commands used to decompress chunks. You can filter by time to select the chunks you want to decompress.

Before decompressing chunks, stop any compression policy on the hypertable you are decompressing. When you re-enable the policy, the database automatically recompresses your chunks in the next scheduled job. If you accumulate a large number of chunks that need to be compressed, the troubleshooting guide shows how to compress a backlog of chunks. For more information on how to stop and run compression policies using alter_job(), see the API reference.
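For example, a minimal sketch of pausing and later re-enabling a compression policy with alter_job(), assuming a hypothetical hypertable named metrics; the job ID on your system will differ:

-- Find the job ID of the compression policy on the hypothetical metrics hypertable
SELECT job_id
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression'
  AND hypertable_name = 'metrics';

-- Pause the policy, replacing 1000 with the job ID returned above
SELECT alter_job(1000, scheduled => false);

-- Re-enable the policy after you have finished decompressing
SELECT alter_job(1000, scheduled => true);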

There are several methods for selecting chunks and decompressing them.

To decompress a single chunk by name, run this command:

SELECT decompress_chunk('_timescaledb_internal.<chunk_name>');

where <chunk_name> is the name of the chunk you want to decompress.
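For example, assuming a hypothetical chunk named _hyper_1_2_chunk:

SELECT decompress_chunk('_timescaledb_internal._hyper_1_2_chunk');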

To decompress a set of chunks based on a time range, you can use the output of show_chunks to decompress each one:

SELECT decompress_chunk(c, true)
FROM show_chunks('table_name', older_than, newer_than) c;

For more information about the decompress_chunk function, see the decompress_chunk API reference.
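For example, a sketch that decompresses the chunks for a specific time window, assuming a hypothetical hypertable named metrics. The second argument to decompress_chunk avoids an error if a chunk in the range is not compressed:

-- Select chunks older than 2024-02-01 and newer than 2024-01-01, and decompress each one
SELECT decompress_chunk(c, true)
FROM show_chunks(
  'metrics',
  older_than => TIMESTAMPTZ '2024-02-01',
  newer_than => TIMESTAMPTZ '2024-01-01'
) c;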

If you want to use more precise matching constraints, for example filtering on a space-partitioning column, you can find the chunks to decompress with a query like this:

SELECT tableoid::regclass FROM metrics
WHERE time = '2000-01-01' AND device_id = 1
GROUP BY tableoid;
tableoid
------------------------------------------
_timescaledb_internal._hyper_72_37_chunk
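You can then pass the returned chunk name to decompress_chunk. For example, using the chunk name from the output above:

SELECT decompress_chunk('_timescaledb_internal._hyper_72_37_chunk');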
