# oha

A tiny HTTP load testing tool written in Rust, with a live TUI showing real-time stats, a latency histogram, and requests-per-second charts.

oha is an HTTP load testing tool inspired by hey and wrk, written in Rust on top of Tokio. It bombards a target URL with requests and displays a live, animated TUI showing real-time throughput, latency percentiles, a histogram, and a status code breakdown — all while the test is running.
## Features

- Live TUI — a real-time terminal dashboard updates as requests are sent, showing RPS, latency distribution, and a bar chart of response times
- Async I/O — built on Tokio; handles high concurrency with minimal OS thread overhead
- HTTP/1.1 and HTTP/2 — supports both protocols; force HTTP/2 with `--http2`
- Custom headers, bodies, and methods — full control over the request shape
- Latency percentiles — reports p50, p75, p90, p95, p99, and p99.9 in the summary
- Duration or count mode — run for a fixed duration (`-z 30s`) or a fixed number of requests (`-n 1000`)
- Redirect following — configurable redirect behaviour
- Unix socket support — send requests over a Unix domain socket
- JSON output — machine-readable results for use in scripts and CI benchmarks
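
Many of these features are easiest to explore against a server you control rather than a public site. If you need a disposable local target, here is a minimal sketch using only the Python standard library — the file name `throwaway_server.py`, the port, and the response body are all arbitrary choices for illustration, not part of oha:

```python
# throwaway_server.py — a minimal local target to point oha at.
# Illustrative helper only; uses nothing beyond the Python stdlib.
from http.server import HTTPServer, BaseHTTPRequestHandler


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"ok": true}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):
        # Silence per-request logging; it would flood the console under load.
        pass


def serve(port=8000):
    # Blocks forever; run it in a separate terminal or background it.
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()
```

Start it with `python3 -c "from throwaway_server import serve; serve()"`, then point oha at it, e.g. `oha -z 10s -c 50 http://127.0.0.1:8000`.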
## Installation

```sh
cargo install oha
```

Or via your system package manager:

```sh
# macOS
brew install oha

# Arch Linux
pacman -S oha

# Nix
nix-env -iA nixpkgs.oha
```

For Debian, Fedora, and other Linux distributions, pre-built binaries are available on the [releases page](https://github.com/hatoo/oha/releases).

## Usage
```sh
# Basic load test — 200 requests, 50 concurrent
oha -n 200 -c 50 https://example.com

# Run for 30 seconds instead of a fixed request count
oha -z 30s -c 100 https://example.com

# POST request with a JSON body
oha -n 500 -c 20 \
  -m POST \
  -H "Content-Type: application/json" \
  -d '{"key":"value"}' \
  https://api.example.com/endpoint

# Force HTTP/2
oha --http2 -n 1000 -c 50 https://example.com

# Disable TLS certificate verification
oha --insecure -n 100 https://localhost:8443

# Output results as JSON (useful in CI)
oha -n 1000 -c 50 --json https://example.com

# Test with a specific timeout per request
oha --timeout 5 -n 500 https://example.com

# Disable the live TUI (plain output only)
oha --no-tui -n 200 https://example.com
```

## Example Output
After a run completes, oha prints a summary like:
```
Summary:
  Success rate: 100.00%
  Total:        5.0123 secs
  Slowest:      0.2341 secs
  Fastest:      0.0041 secs
  Average:      0.0501 secs
  Requests/sec: 1995.09

  Total data:   4.77 MiB
  Size/request: 2.44 KiB

Response time histogram:
  0.004 [1]    |
  0.027 [6185] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.050 [2898] |■■■■■■■■■■■■■■■
  0.073 [619]  |■■■
  0.096 [196]  |■
  0.119 [72]   |
  0.142 [20]   |
  0.165 [7]    |
  0.188 [1]    |
  0.211 [0]    |
  0.234 [1]    |

Latency distribution:
  10% in 0.0181 secs
  25% in 0.0224 secs
  50% in 0.0312 secs
  75% in 0.0621 secs
  90% in 0.0871 secs
  95% in 0.1021 secs
  99% in 0.1389 secs

Status code distribution:
  [200] 10000 responses
```

## Tips
- Start with a modest concurrency level (`-c 10`) and ramp up to find where your service starts degrading
- Use `--json` in CI to capture benchmark baselines and fail the build if p99 latency regresses
- Pair `hyperfine` for command-line benchmarks with `oha` for HTTP service benchmarks — they complement each other well
- The live TUI is particularly useful for spotting latency spikes in real time that would be invisible in a post-run summary
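
The CI tip above can be scripted. Below is a minimal sketch of a p99 gate that reads a report saved with `oha --json … > report.json`; note that the JSON field names used here (`latencyPercentiles.p99`) are an assumption about oha's report layout — check the actual `--json` output of your installed version before relying on them:

```python
# check_p99.py — fail a CI build if p99 latency exceeds a budget.
# Sketch only: the "latencyPercentiles"/"p99" keys below are assumptions
# about oha's --json report layout; verify against your oha version.
import json
import sys

P99_BUDGET_SECS = 0.150  # example budget; tune for your service


def p99_from_report(report: dict) -> float:
    # Assumed layout: {"latencyPercentiles": {"p99": <seconds>, ...}, ...}
    return float(report["latencyPercentiles"]["p99"])


def main(path: str) -> int:
    with open(path) as f:
        p99 = p99_from_report(json.load(f))
    if p99 > P99_BUDGET_SECS:
        print(f"FAIL: p99 {p99:.4f}s exceeds budget {P99_BUDGET_SECS:.3f}s")
        return 1
    print(f"OK: p99 {p99:.4f}s within budget")
    return 0


if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(main(sys.argv[1]))
```

Example wiring in CI: `oha -n 1000 -c 50 --json --no-tui https://your-service.example > report.json && python3 check_p99.py report.json`.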