TiKV

A distributed, transactional key-value database written in Rust — the storage layer powering TiDB, designed for petabyte-scale data with strong consistency.

TiKV is a distributed, transactional key-value database written in Rust. Originally developed by PingCAP as the storage layer for TiDB, it has since graduated to a top-level CNCF project and is used independently as a general-purpose distributed storage engine. It provides ACID transactions, horizontal scalability, and strong consistency through the Raft consensus protocol — at petabyte scale.

Features

  • Distributed transactions — full ACID transaction support using an optimistic concurrency model inspired by Google's Percolator paper
  • Strong consistency — data is replicated across nodes via the Raft consensus algorithm; reads and writes are consistent across the cluster
  • Horizontal scalability — data is automatically sharded into Regions (default 96MB each) and balanced across nodes; add capacity by adding nodes
  • High availability — as long as a majority of replicas are available, the cluster continues to serve requests; node failures are tolerated automatically
  • Coprocessor framework — push computations (aggregations, predicates) down to the storage layer to reduce the data transferred to the compute layer
  • Titan — an optional RocksDB plugin that stores large values separately from the LSM tree (key-value separation), reducing write amplification for large-value workloads
  • TLS and encryption — supports TLS for client and inter-node communication, and encryption at rest
  • CNCF graduated project — production-proven, vendor-neutral, and governed by the Cloud Native Computing Foundation
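The optimistic, Percolator-inspired model mentioned above can be illustrated with a toy sketch (hypothetical types, not the tikv-client API): a transaction records the timestamp at which it started, and its commit is rejected if any key it writes was committed by another transaction after that start timestamp.

```rust
use std::collections::HashMap;

// Toy illustration of optimistic conflict detection. Real TiKV runs the full
// Percolator protocol (prewrite locks, two-phase commit); this only shows the
// core validate-then-write idea.
#[derive(Default)]
struct ToyStore {
    data: HashMap<String, (String, u64)>, // key -> (value, commit_ts)
}

impl ToyStore {
    /// Try to commit a transaction that started at `start_ts`.
    /// Fails if any written key was committed by someone else after `start_ts`.
    fn try_commit(&mut self, start_ts: u64, commit_ts: u64,
                  writes: &[(String, String)]) -> bool {
        // Validation phase: detect write-write conflicts.
        if writes.iter().any(|(k, _)| {
            self.data.get(k).map_or(false, |&(_, ts)| ts > start_ts)
        }) {
            return false; // conflict: caller must retry with a new start_ts
        }
        // Write phase: apply all writes.
        for (k, v) in writes {
            self.data.insert(k.clone(), (v.clone(), commit_ts));
        }
        true
    }
}

fn main() {
    let mut store = ToyStore::default();
    // Txn A (start_ts = 1) commits first.
    assert!(store.try_commit(1, 2, &[("k".into(), "a".into())]));
    // Txn B also started at ts 1, so its write to "k" now conflicts.
    assert!(!store.try_commit(1, 3, &[("k".into(), "b".into())]));
    println!("conflict detected");
}
```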

Architecture

TiKV uses a two-component architecture:

  • TiKV nodes — store the actual data in RocksDB, handle Raft replication, and serve read/write requests
  • PD (Placement Driver) — a cluster manager that stores metadata, allocates transaction timestamps, schedules Region leaders, and balances load across TiKV nodes

Each piece of data belongs to a Region. Regions are replicated to multiple TiKV nodes (typically 3 or 5) using Raft. The PD monitors load and splits hot Regions, merges cold ones, and moves replicas to maintain balance.
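The split behavior can be sketched with a toy model (hypothetical types and simplified sizing; PD's real scheduling is considerably more involved): a Region covers a contiguous key range, and once it outgrows the size threshold it splits into two Regions at a chosen key.

```rust
// Toy model of Region splitting, not TiKV's implementation.
const REGION_SPLIT_SIZE: u64 = 96 * 1024 * 1024; // 96 MiB, the default Region size

#[derive(Debug)]
struct Region {
    start_key: Vec<u8>, // inclusive
    end_key: Vec<u8>,   // exclusive; empty = unbounded
    size: u64,          // approximate bytes stored
}

impl Region {
    /// Split this Region at `split_key` if it exceeds the threshold,
    /// returning the resulting Region(s). Sizes are approximated as an
    /// even split.
    fn maybe_split(self, split_key: Vec<u8>) -> Vec<Region> {
        if self.size <= REGION_SPLIT_SIZE {
            return vec![self];
        }
        let left = Region {
            start_key: self.start_key,
            end_key: split_key.clone(),
            size: self.size / 2,
        };
        let right = Region {
            start_key: split_key,
            end_key: self.end_key,
            size: self.size - left.size,
        };
        vec![left, right]
    }
}

fn main() {
    // A 200 MiB Region is over the threshold, so it splits in two.
    let hot = Region { start_key: b"a".to_vec(), end_key: vec![], size: 200 << 20 };
    let halves = hot.maybe_split(b"m".to_vec());
    assert_eq!(halves.len(), 2);
    assert_eq!(halves[0].end_key, b"m".to_vec());
    println!("{:?}", halves);
}
```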

Installation

TiUP is the official deployment and management tool for the TiKV ecosystem:

# No apt or dnf package is available. Use TiUP, the official installer
# (works on all Linux distributions, including Debian, Ubuntu, and Fedora):
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh

# Start a local TiKV cluster (1 PD + 1 TiKV node)
tiup playground --mode tikv-slim

# Deploy a production cluster
tiup cluster deploy my-cluster v8.5.0 topology.yaml

Building from source

git clone https://github.com/tikv/tikv.git
cd tikv
cargo build --release

Docker

# Start a minimal local cluster with docker-compose
git clone https://github.com/tikv/tikv.git
cd tikv/docker
docker-compose up -d

Interacting with TiKV

TiKV exposes a gRPC API, which applications use through the client libraries listed below. For command-line administration and debugging, use the tikv-ctl control tool:

# Check cluster status
tikv-ctl --host 127.0.0.1:20160 store

# Scan key-value pairs in a range
tikv-ctl --host 127.0.0.1:20160 scan --from 'key_start' --to 'key_end'

# Get a specific key
tikv-ctl --host 127.0.0.1:20160 raw-get --key 'my_key'

# Check region information
tikv-ctl --host 127.0.0.1:20160 region-properties -r 1

# Compact the RocksDB store
tikv-ctl --host 127.0.0.1:20160 compact -d kv

Using TiKV from Rust

# Add the client crate to your project
cargo add tikv-client

use tikv_client::{RawClient, TransactionClient};

#[tokio::main]
async fn main() -> tikv_client::Result<()> {
    // Raw (non-transactional) client
    let raw = RawClient::new(vec!["127.0.0.1:2379"]).await?;

    raw.put("hello".to_owned(), "world".to_owned()).await?;
    let val = raw.get("hello".to_owned()).await?;
    println!("{:?}", val); // Some(..) holding the bytes of "world"

    // Transactional client
    let txn_client = TransactionClient::new(vec!["127.0.0.1:2379"]).await?;
    let mut txn = txn_client.begin_optimistic().await?;
    txn.put("key1".to_owned(), "value1".to_owned()).await?;
    txn.put("key2".to_owned(), "value2".to_owned()).await?;
    txn.commit().await?;

    Ok(())
}
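For reference, the snippet above needs an async runtime in addition to the client crate. A minimal dependency section in Cargo.toml might look like this (version numbers are illustrative; check crates.io for current releases):

```toml
[dependencies]
tikv-client = "0.3"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```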

Client libraries

TiKV provides official clients for several languages:

Language    Package
Rust        tikv-client
Go          client-go
Java        client-java
Python      client-py
C           client-c

Use cases

  • As TiDB's storage layer — TiKV was originally designed as the distributed storage backend for TiDB, a MySQL-compatible distributed SQL database
  • Standalone KV store — use TiKV directly as a high-availability key-value store for applications that need distributed transactions without a SQL layer
  • Building block — distributed systems such as JuiceFS use TiKV as a reliable, scalable metadata store
  • Cache with durability — unlike Redis, TiKV provides durability and ACID transactions, making it suitable for use cases where data loss is not acceptable