In modern applications, data grows exponentially. As data gets older, it often becomes less useful in day-to-day operations. However, you still need it for analysis. TimescaleDB elegantly solves this problem with automated data retention policies.

Data retention policies delete old raw data for you on a schedule that you define. By combining retention policies with continuous aggregates, you can downsample your data and keep useful summaries of it instead. This lets you analyze historical data while also saving on storage.
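A minimal sketch of this downsampling pattern, assuming a hypothetical hypertable named `conditions` with `time`, `device_id`, and `temperature` columns: a continuous aggregate keeps daily summaries, while retention policies drop raw data after 30 days and the summaries after a year.

```sql
-- Continuous aggregate with daily averages per device
CREATE MATERIALIZED VIEW conditions_daily
WITH (timescaledb.continuous) AS
SELECT time_bucket(INTERVAL '1 day', time) AS day,
       device_id,
       avg(temperature) AS avg_temp
FROM conditions
GROUP BY day, device_id;

-- Keep raw data for 30 days, but retain the daily summaries for a year
SELECT add_retention_policy('conditions', INTERVAL '30 days');
SELECT add_retention_policy('conditions_daily', INTERVAL '1 year');
```

The intervals here are illustrative; choose them based on how far back your raw-data queries actually reach.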

TimescaleDB data retention works on chunks, not on rows. Deleting data row by row, for example with the PostgreSQL DELETE command, can be slow. Dropping data by the chunk is faster, because it deletes an entire file from disk. It needs no garbage collection or defragmentation.

Whether you use a policy or manually drop chunks, TimescaleDB drops data by the chunk. It only drops chunks where all the data is within the specified time range.
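Both approaches described above use standard TimescaleDB calls; the hypertable name `conditions` below is a hypothetical example.

```sql
-- Scheduled: a background job drops chunks whose data is entirely
-- older than the given interval
SELECT add_retention_policy('conditions', INTERVAL '24 hours');

-- Manual, one-off drop of qualifying chunks
SELECT drop_chunks('conditions', older_than => INTERVAL '24 hours');
```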

For example, consider a hypertable with three chunks containing data:

  1. More than 36 hours old
  2. Between 12 and 36 hours old
  3. From the last 12 hours

You manually drop chunks older than 24 hours. Only the oldest chunk is deleted. The middle chunk is retained, because it contains some data newer than 24 hours. No individual rows are deleted from that chunk.
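To preview this behavior before dropping anything, you can list the affected chunks first. This sketch assumes the same hypothetical `conditions` hypertable:

```sql
-- List only the chunks whose data is entirely older than 24 hours;
-- these are the chunks a drop would remove
SELECT show_chunks('conditions', older_than => INTERVAL '24 hours');
```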
