Hypertables are designed for real-time analytics: they are Postgres tables that automatically partition your data by time. Typically, you partition hypertables on columns that hold time values. Best practice is to use the timestamptz column type; however, you can also partition on date, integer, and timestamp types.
To follow the steps on this page, create a target Tiger Cloud service with time-series and analytics enabled, and have your connection details ready. This procedure also works for self-hosted TimescaleDB.
Create a hypertable for your time-series data using CREATE TABLE. For efficient queries on data in the columnstore, set tsdb.segmentby to the column you will use most often to filter your data:
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    location    TEXT             NOT NULL,
    device      TEXT             NOT NULL,
    temperature DOUBLE PRECISION NULL,
    humidity    DOUBLE PRECISION NULL
) WITH (
    tsdb.hypertable,
    tsdb.partition_column = 'time',
    tsdb.segmentby        = 'device',
    tsdb.orderby          = 'time DESC'
);
If you are self-hosting TimescaleDB v2.19.3 or below, create a Postgres relational table, then convert it using create_hypertable. You then enable hypercore with a call to ALTER TABLE. To convert an existing table that already contains data, call create_hypertable on that table with migrate_data set to true. However, if you have a lot of data, this may take a long time.
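As a sketch, the conversion of an existing table could look like this; the table and column names are illustrative:

```sql
-- Create a regular Postgres table (or start from an existing one with data in it)
CREATE TABLE sensor_readings (
    time        TIMESTAMPTZ      NOT NULL,
    device      TEXT             NOT NULL,
    temperature DOUBLE PRECISION NULL
);

-- Convert it to a hypertable partitioned on the time column;
-- migrate_data => true moves any existing rows into chunks
SELECT create_hypertable('sensor_readings', 'time', migrate_data => true);
```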
When you set timescaledb.enable_direct_compress_copy, your data is compressed in memory during ingestion with COPY statements. Because the compressed batches are written straight to the columnstore, the I/O footprint is significantly lower. The columnstore policy you set also matters less, because ingestion already produces compressed chunks.
Note
This feature is a tech preview and not production-ready. It can lead to regressed query performance and/or a worse storage ratio if the ingested batches are not correctly ordered or are of too high cardinality.
To enable in-memory data compression during ingestion:
SET timescaledb.enable_direct_compress_copy=on;
Important facts
- High-cardinality use cases do not produce good batches and lead to degraded query performance.
- The columnstore is optimized to store 1,000 records per batch, which is the optimal format for ingestion per segmentby value.
- WAL records are written for the compressed batches rather than for the individual tuples.
- Currently only COPY is supported; INSERT will eventually follow.
- Best results are achieved with batch ingestion of 1,000 records or more; the upper boundary is 10,000 records.
- Continuous aggregates are not supported at the moment.
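Putting this together, a bulk load with direct compression might be sketched as follows; the file path is hypothetical:

```sql
-- Tech preview: compress batches in memory during COPY ingestion
SET timescaledb.enable_direct_compress_copy = on;

-- Bulk-load pre-sorted rows; batches of 1,000+ records per segmentby value work best
COPY conditions (time, location, device, temperature, humidity)
FROM '/path/to/conditions.csv' WITH (FORMAT csv, HEADER true);
```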
As the data cools and becomes more suited for analytics, add a columnstore policy so your data is automatically converted to the columnstore after a specific time interval. In the columnstore conversion, hypertable chunks are compressed by up to 98% and organized for efficient, large-scale queries. This columnar format enables fast scanning and aggregation, optimizing performance for analytical workloads while also saving significant storage space.
To optimize your data, add a columnstore policy:
CALL add_columnstore_policy('conditions', after => INTERVAL '1d');
You can also manually convert chunks in a hypertable to the columnstore.
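For example, a manual conversion can be sketched as follows; the chunk name shown is hypothetical, so list your real chunks with show_chunks first:

```sql
-- List chunks that are old enough to convert
SELECT show_chunks('conditions', older_than => INTERVAL '1 day');

-- Convert one chunk to the columnstore (chunk name is hypothetical)
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');
```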
You can alter a hypertable, for example to add a column, using the Postgres ALTER TABLE command. This works for both regular and distributed hypertables.
You add a column to a hypertable using the ALTER TABLE command. In this example, the hypertable is named conditions and the new column is named humidity:
ALTER TABLE conditions
ADD COLUMN humidity DOUBLE PRECISION NULL;
If the column you are adding has its default value set to NULL, or has no default value, adding the column is relatively fast. If you set the default to a non-null value, it takes longer, because the value must be filled in for all existing rows of all existing chunks.
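To illustrate the difference, assuming hypothetical column names:

```sql
-- Fast: no default value needs to be backfilled
ALTER TABLE conditions
ADD COLUMN notes TEXT NULL;

-- Slower: the default is written for every existing row in every chunk
ALTER TABLE conditions
ADD COLUMN source TEXT NOT NULL DEFAULT 'sensor';
```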
You can change the name of a hypertable using the ALTER TABLE command. In this example, the hypertable called conditions is renamed to weather:
ALTER TABLE conditions
RENAME TO weather;
Drop a hypertable using a standard Postgres DROP TABLE command:
DROP TABLE weather;
All data chunks belonging to the hypertable are deleted.