You can migrate your data from self-hosted TimescaleDB to Managed Service for TimescaleDB and automate most of the common operational tasks.
Each MST service has a database named `defaultdb` and a default user account named `tsdbadmin`. You use MST Console to create additional users and databases using the `Users` and `Databases` tabs.
You can switch between different plan sizes in Managed Service for TimescaleDB. However, during the migration process, choose a plan with the same storage size as, or slightly larger than, your currently allocated plan. This limits downtime during the migration and ensures sufficient compute and storage resources.
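To choose a plan size, it helps to know how big the source database actually is. A minimal sketch, assuming `SOURCE` holds the source connection string used later in this procedure:

```shell
# Hedged sketch: report the on-disk size of the source database so you can
# pick an MST plan with at least this much storage. Assumes SOURCE is the
# connection string used in the migration steps; prints a hint otherwise.
if [ -n "${SOURCE:-}" ]; then
  psql -d "$SOURCE" -t -c "SELECT pg_size_pretty(pg_database_size(current_database()));"
else
  echo "Set SOURCE to your source connection string first" >&2
fi
```

The dump file itself is usually smaller than the on-disk size, since it is compressed and excludes indexes, so this gives a conservative upper bound.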
Information
Depending on your database size and network speed, migration can take a long time. Any writes made to the source database during migration are not included in the dump. To prevent data loss, stop all writes to the source self-hosted TimescaleDB database before you start the migration.
Before migrating in production, do a cold run without turning off writes to the source self-hosted TimescaleDB database. This gives you an estimate of how long the migration takes, and lets you practice the procedure without causing downtime for your customers.
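One way to do such a cold run is to time a trial dump. A minimal sketch, assuming `SOURCE` is set to the source connection string used later in this procedure, and `trial_dump.bak` is a hypothetical scratch file name:

```shell
# Hedged sketch of a cold run: time a trial pg_dump to estimate how long
# the real migration will take. Assumes SOURCE is set as in the migration
# steps; prints a hint and continues cleanly otherwise.
if [ -n "${SOURCE:-}" ]; then
  time pg_dump -d "$SOURCE" --no-owner -Fc -f trial_dump.bak
else
  echo "Set SOURCE to your source connection string first" >&2
fi
```

The trial dump also tells you approximately how much disk space the real dump file will need.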
If you prefer the features of Tiger Cloud, you can easily migrate your data from an MST service to a Tiger Cloud service.
Before you migrate your data, do the following:
Set up the migration machine:
You run the migration commands on the migration machine. It must have enough disk space to hold the dump file.
Install the Postgres `pg_dump` and `pg_restore` utilities on the migration machine.
Install a client to connect to self-hosted TimescaleDB and Managed Service for TimescaleDB.
These instructions use `psql`, but any client works.
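Before continuing, you can verify the setup on the migration machine. A minimal sketch that checks the client tools are on the `PATH` and shows free disk space in the working directory:

```shell
# Check that the required client tools are installed on the migration
# machine, and show free disk space where the dump file will be written.
for TOOL in pg_dump pg_restore psql; do
  if command -v "$TOOL" > /dev/null 2>&1; then
    "$TOOL" --version
  else
    echo "missing: $TOOL" >&2
  fi
done
df -h .
```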
Create a target MST service:
For more information, see Install Managed Service for TimescaleDB. Provision your target MST service with enough storage for all your data.
On the source self-hosted TimescaleDB and the target MST service, ensure that you are running:
The same major version of Postgres. For more information, see Upgrade Postgres.
The same major version of TimescaleDB. For more information, see Upgrade TimescaleDB to a major version.
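One way to compare versions is to query both instances directly. A hedged sketch, assuming `SOURCE` and `TARGET` are set to the connection strings used in the migration steps:

```shell
# Hedged sketch: print the Postgres and TimescaleDB versions on each
# instance so you can confirm the major versions match. Assumes SOURCE
# and TARGET are set; prints a hint for any that is not.
for CONN in "${SOURCE:-}" "${TARGET:-}"; do
  if [ -n "$CONN" ]; then
    psql -d "$CONN" -t -c "SELECT version();"
    psql -d "$CONN" -t -c "SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';"
  else
    echo "Set SOURCE and TARGET to your connection strings first" >&2
  fi
done
```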
To move your data from your self-hosted TimescaleDB instance to an MST service, run the following commands from your migration machine:
Take offline the applications that connect to the source self-hosted TimescaleDB instance.

The duration of the migration is proportional to the amount of data stored in your database. By disconnecting your applications from your database, you avoid possible data loss.
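To confirm that nothing is still writing to the source, you can count the remaining client connections. A hedged sketch, assuming `SOURCE` is set as in the next step:

```shell
# Hedged sketch: count client backends still connected to the source
# database, excluding the session this query runs in. Assumes SOURCE is
# set; prints a hint otherwise.
if [ -n "${SOURCE:-}" ]; then
  psql -d "$SOURCE" -t -c "SELECT count(*) - 1 FROM pg_stat_activity WHERE datname = current_database();"
else
  echo "Set SOURCE to your source connection string first" >&2
fi
```

A result of 0 suggests all applications have disconnected.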
Set your connection strings

These variables hold the connection information for the source self-hosted TimescaleDB instance and the target MST service:

```shell
export SOURCE="postgres://<user>:<password>@<source host>:<source port>/<db_name>"
export TARGET="postgres://tsdbadmin:<password>@<host>:<port>/defaultdb?sslmode=require"
```

Dump the data from your source self-hosted TimescaleDB instance

```shell
pg_dump -d "$SOURCE" --no-owner -Fc -v -f dump.bak
```

Put your target MST service in the right state for restoring

```shell
psql -d "$TARGET" -c "SELECT timescaledb_pre_restore();"
```

Upload your data to the target MST service

```shell
pg_restore -d "$TARGET" --jobs 4 -Fc dump.bak
```

The `--jobs` option specifies the number of CPUs to use to restore the database concurrently.

Return your target MST service to normal operations

```shell
psql -d "$TARGET" -c "SELECT timescaledb_post_restore();"
```

Connect to your new database and update your table statistics by running `ANALYZE` on your entire dataset:

```shell
psql -d "$TARGET"
defaultdb=> ANALYZE;
```
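After the restore, a quick sanity check can confirm that your hypertables arrived. A hedged sketch, assuming `TARGET` is set as above and TimescaleDB 2.x, where the `timescaledb_information.hypertables` view is available:

```shell
# Hedged sanity check: list the hypertables on the target MST service
# after the restore. Assumes TARGET is set; prints a hint otherwise.
if [ -n "${TARGET:-}" ]; then
  psql -d "$TARGET" -c "SELECT hypertable_schema, hypertable_name FROM timescaledb_information.hypertables;"
else
  echo "Set TARGET to your target connection string first" >&2
fi
```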
To migrate multiple databases, repeat this migration procedure for each database in turn.
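The per-database repetition can be scripted. A hypothetical sketch with placeholder database names `db1` and `db2`; the dump and restore commands are shown commented out because each target database must first be created in MST Console:

```shell
# Hypothetical sketch: loop the migration over several source databases.
# db1 and db2 are placeholder names, not part of the documented procedure.
for DB in db1 db2; do
  echo "Would migrate database: $DB"
  # pg_dump -d "postgres://<user>:<password>@<source host>:<source port>/$DB" \
  #   --no-owner -Fc -v -f "$DB.bak"
  # Run timescaledb_pre_restore() on the target database, then:
  # pg_restore -d "postgres://tsdbadmin:<password>@<host>:<port>/$DB?sslmode=require" \
  #   --jobs 4 -Fc "$DB.bak"
  # Finish with timescaledb_post_restore() and ANALYZE, as above.
done
```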
If you see the following errors during migration, you can safely ignore them. The migration still runs successfully.
For `pg_dump`:

```
pg_dump: warning: there are circular foreign-key constraints on this table:
pg_dump: hypertable
pg_dump: You might not be able to restore the dump without using --disable-triggers or temporarily dropping the constraints.
pg_dump: Consider using a full dump instead of a --data-only dump to avoid this problem.
pg_dump: NOTICE: hypertable data are in the chunks, no data will be copied
DETAIL: Data for hypertables are stored in the chunks of a hypertable so COPY TO of a hypertable will not copy any data.
HINT: Use "COPY (SELECT * FROM <hypertable>) TO ..." to copy all data in hypertable, or copy each chunk individually.
```

For `pg_restore`:

```
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 4142; 0 0 COMMENT EXTENSION timescaledb
pg_restore: error: could not execute query: ERROR: must be owner of extension timescaledb
Command was: COMMENT ON EXTENSION timescaledb IS 'Enables scalable inserts and complex queries for time-series data';
```