Let's benchmark an API endpoint in just a few commands.
Run a 10-second benchmark with 10 concurrent connections (the defaults):

```shell
burl https://api.example.com/health
```
Increase concurrency and duration:

```shell
burl https://api.example.com/users -c 50 -d 30s
```
| Flag | Description |
|---|---|
| `-c 50` | Use 50 concurrent connections |
| `-d 30s` | Run for 30 seconds |
Send a POST request with a JSON body:

```shell
burl https://api.example.com/users \
  -m POST \
  -b '{"name":"test","email":"test@example.com"}' \
  -T application/json
```
```shell
# JSON output
burl https://api.example.com/health -f json -o results.json

# LLM-optimized output with recommendations
burl https://api.example.com/health --llm json
```
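A saved JSON result can be post-processed with standard tools such as `jq`. As a sketch, assuming the output contains a top-level `requests_per_sec` field (the field name is an assumption, not confirmed documentation — check your actual `results.json` for the real schema):

```shell
# Extract a single metric from a saved JSON result.
# NOTE: "requests_per_sec" is a hypothetical field name; inspect
# results.json to find the schema burl actually emits.
jq '.requests_per_sec' results.json
```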
When you run a benchmark, burl displays results like this:
```
════════════════════════════════════════════════════════════
            burl - HTTP Benchmark Results
════════════════════════════════════════════════════════════

Target
  URL:          https://api.example.com/health
  Method:       GET
  Connections:  10
  Duration:     10.02s

Summary
  Total Requests:  1,234
  Successful:      1,234
  Requests/sec:    123.15
  Throughput:      45.23 KB/s

Latency
  P50:  12.4ms   ← 50% of requests completed in under 12.4ms
  P90:  32.1ms   ← 90% of requests completed in under 32.1ms
  P95:  45.2ms   ← 95% of requests completed in under 45.2ms
  P99:  89.3ms   ← 99% of requests completed in under 89.3ms
```
| Metric | Description |
|---|---|
| Requests/sec | Throughput - how many requests completed per second |
| P50 (Median) | Half of all requests were faster than this |
| P99 | 99% of requests were faster than this - important for SLAs |
| Success Rate | Percentage of requests that returned 2xx status |
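Percentiles like P99 can be sanity-checked against raw latency samples (one millisecond value per line) with the nearest-rank method — a generic illustration, not necessarily the exact estimator burl uses:

```shell
# P99 via nearest-rank: sort the samples, then take the value at
# position ceil(0.99 * N). Reads one latency per line from stdin.
sort -n | awk '{ v[NR] = $1 }
END {
  idx = int(NR * 0.99); if (idx < NR * 0.99) idx++
  print "P99:", v[idx]
}'
```

For example, piping in the integers 1 through 100 yields a P99 of 99.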
::card
---
icon: i-lucide-settings
title: Request Configuration
to: /guide/request-configuration
---
Learn about headers, bodies, and authentication options.
::

::card
---
icon: i-lucide-file-output
title: Output Formats
to: /guide/output-formats
---
Explore JSON, CSV, Markdown, and LLM output modes.
::

::card
---
icon: i-lucide-terminal
title: CLI Reference
to: /cli/reference
---
Complete documentation of all command-line flags.
::