
Hypercore is the hybrid row-columnar storage engine in TimescaleDB used by hypertables. Traditional databases force a trade-off between fast inserts (row-based storage) and efficient analytics (columnar storage). Hypercore eliminates this trade-off, allowing real-time analytics without sacrificing transactional capabilities.

Hypercore dynamically stores data in the most efficient format for its lifecycle:

  • Row-based storage for recent data: the most recent chunk (and possibly more) is always stored in the rowstore, ensuring fast inserts, updates, and low-latency single-record queries. Additionally, row-based storage is used as a write-through for inserts and updates to columnar storage.
  • Columnar storage for analytical performance: chunks are automatically compressed into the columnstore, optimizing storage efficiency and accelerating analytical queries.

Unlike traditional columnar databases, hypercore allows data to be inserted or modified at any stage, making it a flexible solution for both high-ingest transactional workloads and real-time analytics—within a single database.

When you convert chunks from the rowstore to the columnstore, multiple records are grouped into a single row. The columns of this row hold an array-like structure that stores all the data. For example, data in the following rowstore chunk:

Timestamp   Device ID   Device Type   CPU     Disk IO
12:00:01    A           SSD           70.11   13.4
12:00:01    B           HDD           69.70   20.5
12:00:02    A           SSD           70.12   13.2
12:00:02    B           HDD           69.69   23.4
12:00:03    A           SSD           70.14   13.0
12:00:03    B           HDD           69.70   25.2

Is converted and compressed into arrays in a row in the columnstore:

Timestamp     [12:00:01, 12:00:01, 12:00:02, 12:00:02, 12:00:03, 12:00:03]
Device ID     [A, B, A, B, A, B]
Device Type   [SSD, HDD, SSD, HDD, SSD, HDD]
CPU           [70.11, 69.70, 70.12, 69.69, 70.14, 69.70]
Disk IO       [13.4, 20.5, 13.2, 23.4, 13.0, 25.2]

Because this compressed row takes up much less disk space, you can reduce your chunk size by more than 90% and also speed up your queries. This saves on storage costs and keeps your queries operating at lightning speed.
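
A columnstore policy, described later on this page, normally handles this conversion for you. As a rough sketch, assuming the crypto_ticks hypertable used in the examples on this page already has the columnstore enabled, and using the convert_to_columnstore procedure available in recent TimescaleDB releases, you can also convert a single chunk manually:

    -- Sketch only: list the chunks of a hypertable, then convert one by hand.
    -- The chunk name below is illustrative; use a name returned by show_chunks.
    SELECT show_chunks('crypto_ticks');
    CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk');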

This page shows you how to get the best results when you set a policy to automatically convert chunks in a hypertable from the rowstore to the columnstore.

To follow the steps on this page, you need a Tiger Cloud service to connect to. The code samples on this page use data from the key features tutorial.

The compression ratio and query performance of data in the columnstore depend on the order and structure of your data. Rows that change over a dimension should be stored close to each other. With time-series data, you orderby the time dimension. For example, Timestamp:

Timestamp   Device ID   Device Type   CPU     Disk IO
12:00:01    A           SSD           70.11   13.4

This ensures that records are compressed and accessed in the same order. However, you would always have to access the data using the time dimension, then filter all the rows using other criteria. To make your queries more efficient, you segment your data based on the following:

  • The way you want to access it. For example, to rapidly access data about a single device, you segmentby the Device ID column. This enables you to run much faster analytical queries on data in the columnstore.
  • The compression rate you want to achieve. The lower the cardinality of the segmentby column, the better compression results you get.

When TimescaleDB converts a chunk to the columnstore, it automatically creates a different schema for your data. It also creates and uses custom indexes to incorporate the segmentby and orderby parameters when you write to and read from the columnstore.
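
As a sketch of how these parameters look in practice, assuming a metrics hypertable shaped like the tables above and the timescaledb.segmentby and timescaledb.orderby settings available in recent TimescaleDB releases:

    -- Sketch only: the metrics table and column names are assumptions.
    ALTER TABLE metrics SET (
        timescaledb.enable_columnstore = true,
        timescaledb.segmentby = 'device_id',
        timescaledb.orderby = 'time DESC'
    );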

To set up your hypercore automation:

  1. Connect to your Tiger Cloud service

    In Tiger Cloud Console, open an SQL editor. You can also connect to your service using psql.

  2. Enable columnstore on a hypertable

    Create a hypertable for your time-series data using CREATE TABLE. For efficient queries on data in the columnstore, remember to segmentby the column you will use most often to filter your data. For example:
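
    This is a minimal sketch: the column names and types assume the crypto_ticks dataset from the tutorial, and the WITH (timescaledb.hypertable, ...) options require a recent TimescaleDB release. Adjust the schema and the segmentby column to your own data:

    -- Sketch only: schema assumes the tutorial's crypto_ticks dataset.
    CREATE TABLE crypto_ticks (
        "time" TIMESTAMPTZ NOT NULL,
        symbol TEXT,
        price DOUBLE PRECISION,
        day_volume NUMERIC
    ) WITH (
        timescaledb.hypertable,
        timescaledb.partition_column = 'time',
        timescaledb.segmentby = 'symbol',
        timescaledb.orderby = 'time DESC'
    );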

  3. Add a policy to convert chunks to the columnstore at a specific time interval

    Create a columnstore_policy that automatically converts chunks in a hypertable to the columnstore at a specific time interval. For example, convert yesterday's crypto trading data to the columnstore:

    CALL add_columnstore_policy('crypto_ticks', after => INTERVAL '1d');

    TimescaleDB is optimized for fast updates on compressed data in the columnstore. To modify data in the columnstore, use standard SQL.
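
    For example, a regular UPDATE works even on data that has already been converted to the columnstore (the symbol and price adjustment below are illustrative):

    -- Standard SQL against columnstore data; the values are illustrative.
    UPDATE crypto_ticks
    SET price = price * 1.01
    WHERE symbol = 'BTC/USD' AND "time" > now() - INTERVAL '1 day';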

  4. Check the columnstore policy

    1. View your data space saving:

      When you convert data to the columnstore, it is not only optimized for analytics, it is also compressed by more than 90%. This helps you save on storage costs and keeps your queries operating at lightning speed. To see the amount of space saved:

      SELECT
      pg_size_pretty(before_compression_total_bytes) as before,
      pg_size_pretty(after_compression_total_bytes) as after
      FROM hypertable_columnstore_stats('crypto_ticks');

      You see something like:

      before    after
      194 MB    24 MB
    2. View the policies that you set or the policies that already exist:

      SELECT * FROM timescaledb_information.jobs
      WHERE proc_name='policy_compression';

      See timescaledb_information.jobs.

  5. Pause a columnstore policy

    SELECT * FROM timescaledb_information.jobs
    WHERE proc_name = 'policy_compression' AND hypertable_name = 'crypto_ticks';
    -- Select the JOB_ID from the results
    SELECT alter_job(JOB_ID, scheduled => false);

    See alter_job.

  6. Restart a columnstore policy

    SELECT alter_job(JOB_ID, scheduled => true);

    See alter_job.

  7. Remove a columnstore policy

    CALL remove_columnstore_policy('crypto_ticks');

    See remove_columnstore_policy.

  8. Disable columnstore

    If your table has chunks in the columnstore, you have to convert the chunks back to the rowstore before you disable the columnstore.
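
    As a sketch, assuming the convert_to_rowstore procedure available in recent TimescaleDB releases, convert each chunk back first, then disable the columnstore as shown below:

    -- Sketch only: the chunk name is illustrative; use the names returned by
    -- show_chunks('crypto_ticks').
    CALL convert_to_rowstore('_timescaledb_internal._hyper_1_1_chunk');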

    ALTER TABLE crypto_ticks SET (timescaledb.enable_columnstore = false);

    See alter_table_hypercore.

For integers, timestamps, and other integer-like types, data is compressed using delta encoding, delta-of-delta, simple-8b, and run-length encoding. For floats, XOR-based (Gorilla) compression is used. For columns with few repeated values, dictionary compression is used, and for all other types, LZ-based array compression is used.
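
As an illustration of why delta-of-delta encoding compresses regular time-series data so well, the following plain SQL query (not part of the compression API, just standing in for the internal encoder) computes deltas and delta-of-deltas over a perfectly regular series; the second differences are all zero, which run-length encoding reduces to almost nothing:

    -- Illustration only: regular timestamps have constant deltas and zero
    -- delta-of-deltas, which is why integer-like columns compress so well.
    WITH samples AS (
        SELECT ts,
               ts - lag(ts) OVER (ORDER BY ts) AS delta
        FROM generate_series(
                 '2025-01-01 12:00:01'::timestamptz,
                 '2025-01-01 12:00:06'::timestamptz,
                 INTERVAL '1 second') AS s(ts)
    )
    SELECT ts,
           delta,
           delta - lag(delta) OVER (ORDER BY ts) AS delta_of_delta
    FROM samples;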
