
Harnessing the Power of TimescaleDB

Updated: Nov 26

TimescaleDB is an open-source database designed to make SQL scalable for time-series data. It is built on PostgreSQL and packaged as a PostgreSQL extension, retaining full SQL support.


TimescaleDB

TimescaleDB's features like hypertables, continuous aggregation, and partitioning make it a powerful choice for managing time-series data. These features help in efficiently storing, querying, and analyzing large volumes of time-series data, providing both scalability and performance benefits.


Hypertables:

A hypertable is a virtual table in TimescaleDB that automatically partitions data into smaller chunks across time and space dimensions. It allows you to interact with your data as if it were a single table, while benefiting from the performance optimizations of partitioning.


Key Differences:


| Feature | Hypertables (TimescaleDB) | Partitioning (Other Databases) |
| --- | --- | --- |
| Ease of Use | Automatic, transparent partitioning | Manual setup and management |
| Optimization | Designed for time-series workloads | General-purpose partitioning |
| Query Interface | Single-table abstraction | May require partition-specific logic |
| Compression | Native support for time-series data | Varies by database |


Hypertables simplify working with time-series data by automating partitioning and optimizing performance, whereas partitioning in most other databases requires manual setup and is more general-purpose.


Structure: Hypertables are composed of many smaller tables called chunks. Each chunk contains a subset of the data, partitioned by time and optionally by another dimension (e.g., DeviceID).

Benefits:

Scalability: Efficiently manage large volumes of time-series data.

Performance: Optimized for both write and read operations.

Ease of Use: Interact with hypertables using standard SQL queries.
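
As a concrete illustration, here is a minimal sketch of creating a hypertable from Python. The connection string, table name (sensor_data), and columns are assumptions for this example, not part of the tests described later.

```python
# Minimal sketch: turning a regular PostgreSQL table into a hypertable.
# Assumes psycopg2 is installed and a TimescaleDB instance is reachable;
# the DSN, table, and column names below are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    # Plain PostgreSQL table holding time-series measurements.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS sensor_data (
            time      TIMESTAMPTZ NOT NULL,
            device_id TEXT        NOT NULL,
            metric    TEXT        NOT NULL,
            value     DOUBLE PRECISION
        );
    """)
    # Convert it into a hypertable chunked by the time column.
    # Inserts and queries continue to target sensor_data as a single table.
    cur.execute(
        "SELECT create_hypertable('sensor_data', 'time', if_not_exists => TRUE);"
    )
```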



Continuous Aggregation:

Continuous aggregation is a feature in TimescaleDB that allows you to automatically compute and store aggregates of your data over time, reducing the need to recompute aggregates on-the-fly.

Mechanism: It uses materialized views to store precomputed results. These views are automatically updated as new data is inserted into the hypertable.


Use Cases:

Real-time Analytics: Quickly access aggregated metrics without the overhead of recomputation.

Historical Analysis: Maintain and query historical aggregates efficiently.
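
To make this concrete, here is a minimal sketch of a continuous aggregate over the illustrative sensor_data hypertable from the previous example; the view name and refresh intervals are assumptions.

```python
# Minimal sketch: an hourly continuous aggregate with an automatic refresh policy.
# Assumes the illustrative sensor_data hypertable already exists.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")  # assumed DSN
conn.autocommit = True  # continuous aggregates cannot be created inside a transaction
with conn.cursor() as cur:
    # Materialized view holding hourly averages per device, maintained incrementally.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS sensor_data_hourly
        WITH (timescaledb.continuous) AS
        SELECT time_bucket('1 hour', time) AS bucket,
               device_id,
               avg(value) AS avg_value
        FROM sensor_data
        GROUP BY bucket, device_id;
    """)
    # Background policy that keeps the view up to date as new rows arrive.
    cur.execute("""
        SELECT add_continuous_aggregate_policy('sensor_data_hourly',
            start_offset      => INTERVAL '1 day',
            end_offset        => INTERVAL '1 hour',
            schedule_interval => INTERVAL '1 hour');
    """)
```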


Partitioning:

Partitioning in TimescaleDB refers to the automatic division of data into smaller, more manageable pieces, based on specified criteria such as time intervals.

Types:

Time-based Partitioning: Data is divided into chunks based on time intervals (e.g., daily, weekly).

Space-based Partitioning: Data can also be partitioned by another dimension, such as a device or location.

Advantages:

Improved Query Performance: Queries can target specific partitions, reducing the amount of data scanned.

Efficient Data Management: Easier to manage data retention policies and perform maintenance tasks.
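
A minimal sketch of both partitioning styles, again using the illustrative sensor_data hypertable (the one-day chunk interval and four space partitions are assumptions):

```python
# Minimal sketch: tuning time-based chunks and adding a space dimension.
# Assumes the illustrative sensor_data hypertable exists; space dimensions are
# typically added while the hypertable is still empty.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    # Time-based partitioning: size each chunk to roughly one day of data.
    cur.execute("SELECT set_chunk_time_interval('sensor_data', INTERVAL '1 day');")
    # Space-based partitioning: also hash-partition chunks by device_id.
    cur.execute("""
        SELECT add_dimension('sensor_data', 'device_id',
                             number_partitions => 4, if_not_exists => TRUE);
    """)
```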


Feature table of TimescaleDB:

| Feature | TimescaleDB |
| --- | --- |
| Data Volume and Ingestion Rate | Handles large volumes efficiently |
| Query Complexity | Better choice for complex queries, joins, or aggregations due to its SQL support |
| Scaling Requirements | Offers vertical scaling and some horizontal scaling capabilities; simpler if vertical scaling is sufficient |
| Consistency Requirements | Provides strong consistency and ACID compliance |
| Python Ecosystem Integration | SQL interface integrates more smoothly with pandas, NumPy, and other data science libraries |
| IoT and Distributed Systems | Suitable for IoT data, but may require additional setup for highly distributed scenarios |
| Data Model | Relational model with time-series optimizations |
| Write Performance | Optimized for high-speed inserts, especially for time-series data |
| Query Performance | Fast for time-based aggregations and range queries; efficient for simple lookups as well as complex queries |
| Use Cases | IoT and sensor data, monitoring and metrics, financial data analysis |
| Data Insertion | COPY command for bulk inserts, parallel insertion, disabled synchronous commits (see the sketch below) |
| Implementation Considerations | Easier implementation with standard SQL; designed for vertical scaling |
| Hypertables | Automatic partitioning by time, transparent query optimization, improved insert and query performance |
| Partitioning | Automatic time-based partitioning with hypertables, option for space partitioning, improved query performance on large datasets |
| Continuous Aggregation | Automatic, incremental aggregation of data; significantly faster queries on pre-aggregated data; reduced storage requirements for aggregates; real-time aggregation updates |
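
The bulk-insertion settings in the Data Insertion row can be sketched as follows; the sample row, DSN, and table name are illustrative.

```python
# Minimal sketch: COPY-based bulk loading with synchronous commits disabled.
# The DSN, table name, and sample row are illustrative assumptions.
import io
import psycopg2

# One tab-separated row matching the illustrative sensor_data schema.
rows = "2024-01-01T00:00:00Z\tdevice-1\ttemperature\t21.5\n"

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    # Trade a little durability for throughput on this session only.
    cur.execute("SET synchronous_commit TO OFF;")
    # Stream rows into the table with COPY, far faster than row-by-row INSERTs.
    cur.copy_expert(
        "COPY sensor_data (time, device_id, metric, value) FROM STDIN",
        io.StringIO(rows),
    )
```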

Use Case:

About the Data

The test data consists of time-series measurements with associated metadata. Each record contains timestamp, identifiers, and measurement values.


Test Data Generation

A script generates a large number of records, each containing multiple data points (a generation sketch follows the schema below).

Testing Datapoints:

A robust time-series schema consists of a timestamp (DateTime/Integer) for temporal tracking, a unique identifier (String/Integer) for data source recognition, and measurement values (Numeric) with their associated metric types (String/Enum).
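
A minimal generation sketch matching that schema; the record count, device count, and file name are illustrative, not the exact values used in the tests.

```python
# Minimal sketch: generate test records with a timestamp, identifier,
# metric type, and measurement value, then write them to a CSV for bulk loading.
import csv
import random
from datetime import datetime, timedelta, timezone

METRICS = ["temperature", "humidity", "pressure"]  # illustrative metric types

def generate_rows(n_records, n_devices=100):
    """Yield (timestamp, device_id, metric, value) tuples at one-second spacing."""
    start = datetime(2024, 1, 1, tzinfo=timezone.utc)
    for i in range(n_records):
        ts = start + timedelta(seconds=i)
        device = f"device-{random.randrange(n_devices)}"
        metric = random.choice(METRICS)
        yield ts.isoformat(), device, metric, round(random.uniform(0, 100), 3)

# Write a CSV that can later be bulk-loaded with COPY.
with open("test_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(generate_rows(1_000_000))
```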



Testing Execution and Results:

Azure Virtual Machine Information:

System Information:

Architecture: x86_64

CPU op-mode(s): 32-bit, 64-bit

Address sizes: 48 bits physical, 48 bits virtual

CPU(s): 8

Thread(s) per core: 2

Core(s) per socket: 4

Socket(s): 1

VM Details:

Size: Standard L8as v3

vCPUs: 8

RAM: 64 GiB




Test Set 1: TimescaleDB vs. InfluxDB for time-series data


Test Execution Details (unique timestamps):

Data Volume Benchmark

  • Test Dataset Size: 10M+ records

  • Data Complexity: Mixed (structured and semi-structured)

  • Time Period: Single test run under controlled conditions


| Performance Metric | TimescaleDB | InfluxDB |
| --- | --- | --- |
| Write Speed (records/sec) | ~60k | ~15k |
| Query Planning (ms) | 1-2 | 50,000+ |
| Query Execution (ms) | <1 | 8,000+ |

Conclusion: Based on the test results with a dataset of over 10M records, TimescaleDB significantly outperformed InfluxDB in both query performance and handling mixed data complexity, proving to be a superior choice for time series data workloads.
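
For reference, planning and execution times of this kind can be read from EXPLAIN ANALYZE output on the TimescaleDB side; the query below is an illustrative aggregation, not the exact benchmark query.

```python
# Minimal sketch: collecting planning/execution times with EXPLAIN ANALYZE.
# The DSN and query are illustrative, not the benchmark query itself.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        EXPLAIN (ANALYZE, TIMING)
        SELECT time_bucket('1 hour', time) AS bucket, avg(value)
        FROM sensor_data
        WHERE time >= now() - INTERVAL '1 day'
        GROUP BY bucket;
    """)
    for (line,) in cur.fetchall():
        # The final lines report "Planning Time" and "Execution Time" in ms.
        print(line)
```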


Test Set 2: Performance Analysis of TimescaleDB with Optimization Techniques on Time-Series Data

In this test, 30% of the records share duplicate timestamps, so they can be collected together to reduce the number of rows written to the database.


Performance comparison of various database optimization techniques:


| Performance Metric | Basic Setup | Time-Partitioned | Compressed Storage | Parallel Processing |
| --- | --- | --- | --- | --- |
| Write Speed (K records/sec) | 170 | 175 | 150 | 200 |
| Processing Time (sec) | 75 | 72 | 89 | 63 |
| Data Volume (records) | 10M+ | 10M+ | 10M+ | 10M+ |
| Resource Usage | Low | Medium | High | High |
| Complexity | Simple | Medium | Complex | Complex |
| Best Use Case | Development | Production | Archival Data | High-throughput |

  • Basic Setup: Good for development and testing

  • Time-Partitioned: Balanced performance with manageable complexity

  • Compressed Storage: Trade-off between storage and speed (see the sketch below)

  • Parallel Processing: Highest performance but requires more resources
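
As a point of reference, the compressed-storage setup can be sketched like this; the segment-by column and 7-day threshold are assumptions, not the exact test configuration.

```python
# Minimal sketch: enabling native compression and a compression policy.
# Assumes the illustrative sensor_data hypertable; thresholds are assumptions.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    # Enable compression, segmenting compressed data by device for better pruning.
    cur.execute("""
        ALTER TABLE sensor_data SET (
            timescaledb.compress,
            timescaledb.compress_segmentby = 'device_id'
        );
    """)
    # Background policy: compress chunks once they are older than 7 days.
    cur.execute("SELECT add_compression_policy('sensor_data', INTERVAL '7 days');")
```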


Conclusion and Recommendations:


Based on our extensive testing:

1. Best Performance Setup

  • Use Parallel Processing for maximum throughput.

  • Implement hypertables with proper time partitioning.

  • Optimize batch sizes based on your hardware capabilities.

2. Future Considerations

  • Monitor long-term compression effectiveness.

  • Consider implementing automated scaling.

  • Plan for data retention strategies (a retention-policy sketch follows these recommendations).

3. When to Choose Different Approaches

  • Use Hypertable: For time-series specific optimizations.

  • Use Compression: When storage is a primary concern.

  • Use Multi Ingest (parallel insertion): For the highest performance requirements.
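
A minimal sketch of a retention strategy on the illustrative sensor_data hypertable (the 90-day window is an assumption):

```python
# Minimal sketch: a background policy that drops chunks older than 90 days.
# The table name and retention window are illustrative assumptions.
import psycopg2

conn = psycopg2.connect("dbname=tsdb user=postgres host=localhost")  # assumed DSN
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT add_retention_policy('sensor_data', INTERVAL '90 days',
                                    if_not_exists => TRUE);
    """)
```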



References and Resources:


TimescaleDB Documentation:

Official Documentation: https://docs.timescale.com/



