Applies to: Tiger Cloud (Performance, Scale, Enterprise), self-hosted products, MST
In TimescaleDB v2.11.0 and later, you can use the `UPDATE` and `DELETE` commands to modify existing rows in compressed chunks. This works in a similar way to `INSERT` operations. To reduce the amount of decompression, TimescaleDB only decompresses data where necessary. However, if there are no qualifiers, or if the qualifiers cannot be used as filters, `UPDATE` and `DELETE` calls may convert large amounts of data to the rowstore and back to the columnstore.

To avoid large-scale conversion, filter on the `segmentby` and `orderby` columns. This filters out as much data as possible before any data is modified, and reduces the number of data conversions.
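As a sketch, assume a hypothetical hypertable `metrics(ts, device_id, value)` compressed with `device_id` as the `segmentby` column and `ts` as the `orderby` column. An update that filters on both columns lets TimescaleDB decompress only the batches that can contain matching rows:

```sql
-- Hypothetical hypertable: metrics(ts, device_id, value),
-- compressed with segmentby = 'device_id' and orderby = 'ts'.
-- Filtering on both columns limits decompression to the
-- batches that can contain matching rows.
UPDATE metrics
SET value = value * 1.1
WHERE device_id = 'device_17'      -- segmentby filter
  AND ts >= '2024-01-01'          -- orderby filter
  AND ts <  '2024-02-01';
```

Without the `WHERE` clause, the same statement would convert every compressed batch in the affected chunks to the rowstore and back.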
DML operations on the columnstore work even if the data you are inserting is subject to unique constraints. Constraints are preserved during the insert operation. TimescaleDB uses a Postgres function that decompresses the relevant data during the insert to check whether the new data breaks unique checks. This means that any time you insert data into the columnstore, a small amount of data is decompressed to allow a speculative insertion and to block any inserts that could violate constraints.
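A minimal sketch, using the hypothetical `metrics` table with a unique constraint on `(ts, device_id)`: the speculative decompression happens automatically, so no extra syntax is needed:

```sql
-- Hypothetical hypertable with a unique constraint on (ts, device_id).
-- TimescaleDB decompresses only the batches that could conflict,
-- checks the constraint, and rejects the insert if it is violated.
INSERT INTO metrics (ts, device_id, value)
VALUES ('2024-01-15 10:00:00', 'device_17', 42.0);

-- Repeating the same (ts, device_id) fails with a unique-violation
-- error, exactly as it would on an uncompressed table.
```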
In TimescaleDB v2.17.0 and later, delete performance is improved on compressed hypertables when a large amount of data is affected. When you delete whole segments of data, filter your deletes by the `segmentby` column(s) instead of issuing separate deletes. This considerably increases performance by skipping the decompression step.
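For example, with the hypothetical `metrics` table above, a delete that filters only on the `segmentby` column can drop entire compressed batches without decompressing them first:

```sql
-- Filtering only on the segmentby column deletes whole compressed
-- segments directly, skipping decompression (hypothetical names).
DELETE FROM metrics
WHERE device_id = 'device_17';
```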
In TimescaleDB v2.21.0 and later, `DELETE` operations on the columnstore are executed at the batch level, which allows more performant deletion of data filtered on non-`segmentby` columns and reduces I/O usage.
Warning
This feature requires Postgres 14 or later.