Reference for burl's LLM-optimized output formats (--llm json and --llm markdown).
The --llm json output extends the standard JSON output with metadata and interpretation:

```json
{
  "$schema": "string",
  "meta": {
    "tool": "string",
    "version": "string",
    "timestamp": "string"
  },
  "config": { ... },
  "summary": { ... },
  "latency_ms": { ... },
  "status_codes": { ... },
  "errors": { ... },
  "interpretation": {
    "performance": "string",
    "issues": ["string"],
    "recommendations": ["string"]
  }
}
```
The meta object contains the following fields:

| Field | Type | Description |
|---|---|---|
| tool | string | Always "burl" |
| version | string | burl version |
| timestamp | string | ISO 8601 timestamp |
The $schema field points to the JSON schema:

```json
{
  "$schema": "https://burl.wania.app/schema/v1/result.json"
}
```
The interpretation object provides automatic analysis of the benchmark results.
The performance rating is one of four values:

| Value | Criteria |
|---|---|
| excellent | P99 < 50ms AND success rate = 100% |
| good | P99 < 200ms AND success rate > 99% |
| acceptable | P99 < 500ms AND success rate > 95% |
| poor | P99 >= 500ms OR success rate <= 95% |
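As a sketch, these thresholds can be applied to a result with jq. The field names (summary.success_rate, latency_ms.p99) follow this reference, but the cutoffs below are transcribed from the table and may drift from burl's own implementation:

```sh
# Classify a burl --llm json result read on stdin, using the
# thresholds from the rating table. Illustrative sketch only.
rate_result() {
  jq -r '
    .latency_ms.p99 as $p99
    | .summary.success_rate as $sr
    | if   $p99 < 50  and $sr == 1    then "excellent"
      elif $p99 < 200 and $sr > 0.99  then "good"
      elif $p99 < 500 and $sr > 0.95  then "acceptable"
      else "poor"
      end'
}

# Usage:
#   burl https://api.example.com --llm json | rate_result
```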
Automatically detected problems:
```json
{
  "issues": [
    "High P99 latency (523ms) indicates tail latency problems",
    "5.2% error rate exceeds acceptable threshold",
    "Large gap between P50 (45ms) and P99 (523ms) suggests inconsistent performance",
    "Frequent timeout errors (23) detected"
  ]
}
```
| Condition | Issue Generated |
|---|---|
| P99 > 500ms | "High P99 latency indicates tail latency problems" |
| P99 > 5 * P50 | "Large gap between P50 and P99 suggests inconsistent performance" |
| Error rate > 1% | "Error rate exceeds acceptable threshold" |
| Timeout errors > 0 | "Frequent timeout errors detected" |
| Status 5xx > 0 | "Server errors (5xx) detected" |
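The conditions in this table can be sketched as a jq filter over the same result object. Message strings here are the abbreviated forms from the table; burl's real output interpolates the measured values, and the error-rate check below assumes error rate = 1 - success_rate:

```sh
# Emit the issue strings from the conditions table for a
# --llm json result read on stdin. Illustrative sketch only.
detect_issues() {
  jq -r '
    [ (if .latency_ms.p99 > 500
         then "High P99 latency indicates tail latency problems"
         else empty end),
      (if .latency_ms.p99 > 5 * .latency_ms.p50
         then "Large gap between P50 and P99 suggests inconsistent performance"
         else empty end),
      (if (1 - .summary.success_rate) > 0.01
         then "Error rate exceeds acceptable threshold"
         else empty end),
      (if (.errors.timeout // 0) > 0
         then "Frequent timeout errors detected"
         else empty end),
      (if ([(.status_codes // {}) | to_entries[]
             | select(.key | startswith("5")) | .value] | add // 0) > 0
         then "Server errors (5xx) detected"
         else empty end)
    ] | .[]'
}
```

Running the full example result later in this document through this filter triggers all five conditions.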
Actionable suggestions based on detected issues:
```json
{
  "recommendations": [
    "Investigate server-side processing for slow requests",
    "Check for resource contention under load",
    "Consider implementing request timeouts",
    "Review error responses for root cause",
    "Add circuit breakers to prevent cascade failures"
  ]
}
```
A complete example of the --llm json output:

```json
{
  "$schema": "https://burl.wania.app/schema/v1/result.json",
  "meta": {
    "tool": "burl",
    "version": "0.1.0",
    "timestamp": "2024-12-29T15:30:00.000Z"
  },
  "config": {
    "url": "https://api.example.com/users",
    "method": "GET",
    "connections": 50,
    "duration_ms": 60000,
    "timeout_ms": 30000,
    "warmup_requests": 100
  },
  "summary": {
    "total_requests": 28456,
    "successful_requests": 27012,
    "failed_requests": 1444,
    "requests_per_second": 474.3,
    "bytes_per_second": 1567890,
    "total_bytes": 94073400,
    "duration_ms": 60002,
    "success_rate": 0.949
  },
  "latency_ms": {
    "min": 23.4,
    "max": 2345.6,
    "mean": 156.7,
    "stddev": 234.5,
    "p50": 98.2,
    "p75": 145.6,
    "p90": 234.5,
    "p95": 456.7,
    "p99": 1234.5,
    "p999": 2100.3
  },
  "status_codes": {
    "200": 27012,
    "503": 1432,
    "504": 12
  },
  "errors": {
    "timeout": 12
  },
  "interpretation": {
    "performance": "poor",
    "issues": [
      "High P99 latency (1234.5ms) indicates tail latency problems",
      "5.1% error rate exceeds acceptable threshold",
      "Large gap between P50 (98.2ms) and P99 (1234.5ms) suggests inconsistent performance",
      "High rate of 503 errors (1432) indicates server overload",
      "Timeout errors (12) detected"
    ],
    "recommendations": [
      "Investigate server-side processing for slow requests",
      "Check for resource contention under load",
      "Consider implementing request timeouts client-side",
      "Review server capacity - 503 errors suggest overload",
      "Add rate limiting to prevent server overload",
      "Implement circuit breakers to prevent cascade failures",
      "Consider horizontal scaling to handle the load"
    ]
  }
}
```
The --llm markdown format produces:

```markdown
# HTTP Benchmark Results

**Generated:** 2024-12-29T15:30:00.000Z
**Tool:** burl v0.1.0

## Configuration

| Setting | Value |
|---------|-------|
| URL | https://api.example.com/users |
| Method | GET |
| Connections | 50 |
| Duration | 60s |

## Summary

| Metric | Value |
|--------|-------|
| Total Requests | 28,456 |
| Successful | 27,012 |
| Failed | 1,444 |
| Success Rate | 94.9% |
| Requests/sec | 474.3 |
| Throughput | 1.49 MB/s |

## Latency Distribution

| Percentile | Latency |
|------------|---------|
| P50 | 98.2ms |
| P90 | 234.5ms |
| P95 | 456.7ms |
| P99 | 1234.5ms |

## Status Codes

| Code | Count | Percentage |
|------|-------|------------|
| 200 | 27,012 | 94.9% |
| 503 | 1,432 | 5.0% |
| 504 | 12 | 0.04% |

## Analysis

**Performance Rating:** Poor

### Issues Detected

- High P99 latency (1234.5ms) indicates tail latency problems
- 5.1% error rate exceeds acceptable threshold
- Large gap between P50 and P99 suggests inconsistent performance
- High rate of 503 errors indicates server overload

### Recommendations

1. Investigate server-side processing for slow requests
2. Check for resource contention under load
3. Review server capacity - 503 errors suggest overload
4. Add rate limiting to prevent server overload
5. Consider horizontal scaling to handle the load
```
```sh
# Generate report and analyze with AI
burl https://api.example.com --llm markdown | pbcopy
# Paste into ChatGPT/Claude with: "Analyze this benchmark and suggest optimizations"
```

```sh
# Check performance programmatically
result=$(burl https://api.example.com --llm json)
performance=$(echo "$result" | jq -r '.interpretation.performance')

case "$performance" in
  "excellent"|"good")
    echo "Performance is acceptable"
    ;;
  "acceptable")
    echo "Performance needs attention"
    echo "$result" | jq -r '.interpretation.recommendations[]'
    ;;
  "poor")
    echo "CRITICAL: Poor performance!"
    echo "$result" | jq -r '.interpretation.issues[]'
    exit 1
    ;;
esac
```
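Beyond the interpreted rating, the raw numbers support stricter gates. A minimal sketch that fails a pipeline when P99 exceeds a latency budget; the 300ms budget and the check_p99 helper name are illustrative, not part of burl:

```sh
# check_p99 BUDGET_MS reads a --llm json result on stdin and
# returns nonzero when latency_ms.p99 exceeds the budget.
check_p99() {
  p99=$(jq '.latency_ms.p99')
  # awk handles the floating-point comparison; exit 0 means "over budget".
  if awk -v p="$p99" -v b="$1" 'BEGIN { exit !(p > b) }'; then
    echo "FAIL: P99 ${p99}ms exceeds ${1}ms budget" >&2
    return 1
  fi
  echo "OK: P99 ${p99}ms within ${1}ms budget"
}

# Usage:
#   burl https://api.example.com --llm json | check_p99 300
```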