fapilog has been tested against structlog and loguru — each using their recommended async configuration — with a 300ms network sink. Here's what happened.
A FastAPI application writes 5 structured log events per request to a simulated network sink with 300ms base latency (typical for HTTP-based log destinations like Loki or CloudWatch). k6 generates constant-rate request traffic for 60 seconds at four load levels: 10, 100, 1,000, and 3,000 RPS. We measure throughput, request latency (p50 and p99), and event preservation.
Each library uses its default async configuration: structlog's await logger.ainfo() (thread pool offload), loguru's enqueue=True (background drain thread), and fapilog's out-of-the-box production preset. Same app, same sink, same load — no tuning for anyone.
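To make the setup concrete, here is a minimal sketch of a simulated network sink and a naive handler that awaits each of its 5 writes inline. The class and function names are invented for illustration, and the latency is a parameter so the example runs quickly:

```python
import asyncio
import time


class SimulatedNetworkSink:
    """Hypothetical stand-in for an HTTP log destination (e.g. Loki,
    CloudWatch): every write pays a fixed base latency before the
    event is accepted."""

    def __init__(self, base_latency_s: float = 0.3):
        self.base_latency_s = base_latency_s
        self.events: list[dict] = []

    async def write(self, event: dict) -> None:
        await asyncio.sleep(self.base_latency_s)  # simulate the network round-trip
        self.events.append(event)


async def handle_request(sink: SimulatedNetworkSink, n_events: int = 5) -> float:
    """Emit n_events through the sink sequentially, as a naive handler
    would, and return how long the request spent waiting on logging."""
    start = time.perf_counter()
    for i in range(n_events):
        await sink.write({"seq": i, "message": "Request completed"})
    return time.perf_counter() - start
```

With the real 300ms latency, a handler that awaits its 5 writes inline would spend 1.5 seconds per request on logging alone, which is exactly the coupling the benchmark exposes.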
Light load — ~50 log events per second. Even here, architectural differences show.
At just 10 RPS, fapilog responds in under 1ms at p50, while structlog's await ainfo() adds 600ms of latency per request — the full sink round-trip. loguru's single drain thread can't keep up even at this load: p50 latency exceeds 12 seconds as its queue grows without bound. All three libraries preserve every event at this level, but the latency gap already spans orders of magnitude.
Moderate load — ~500 log events per second. The gap widens dramatically.
structlog's await ainfo() offloads each log write to a thread pool via run_in_executor. It helps — the event loop isn't directly blocked — but each request still awaits the thread to finish. Under a 300ms sink, the default thread pool saturates quickly, pushing p99 to 2.5 seconds. loguru's enqueue=True uses a single background thread that drains at roughly 3 events/sec against a 300ms sink, causing unbounded backpressure and 60-second p99 latency.
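The thread-pool offload pattern can be reproduced in miniature with `run_in_executor`. This is an illustrative sketch of the mechanism, not structlog's actual code; it shows why each caller still waits for its own write, and why a small pool serializes under load:

```python
import asyncio
import concurrent.futures
import time


def blocking_write(event: dict, latency_s: float) -> None:
    """Synchronous sink write: the thread is occupied for the full latency."""
    time.sleep(latency_s)


async def log_via_executor(pool, event: dict, latency_s: float) -> None:
    """Offload the write to a thread, freeing the event loop -- but this
    coroutine (and therefore the request) still awaits the thread's result."""
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(pool, blocking_write, event, latency_s)


async def demo() -> float:
    """8 concurrent writes through a 2-thread pool serialize into ~4 rounds."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        start = time.perf_counter()
        await asyncio.gather(
            *(log_via_executor(pool, {"i": i}, 0.05) for i in range(8))
        )
        return time.perf_counter() - start
```

Scale the same arithmetic up: with a 300ms sink, a default-sized pool can only complete a few dozen writes per second in total, so request latency climbs as soon as the log rate exceeds that ceiling.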
Heavy load — ~5,000 log events per second. loguru can't complete the test. structlog is 408x slower.
At 1,000 RPS, loguru's server errors out entirely — unable to serve any requests. structlog manages only 12 actual RPS with a 24 s p99 latency as its thread pool is overwhelmed. fapilog serves 999 actual RPS with a 59 ms p99 and sub-millisecond p50, activating load shedding to protect request latency.
Extreme load — ~15,000 log events per second. Beyond the production preset's design point.
At 3,000 RPS, both structlog and loguru error out — their servers become completely unresponsive. fapilog still serves 2,996 actual RPS with a 70 ms p99 and sub-millisecond p50 — request latency remains excellent. However, event preservation drops significantly as the default queue overflows. This is the expected behaviour when the log event rate exceeds the preset's design point: fapilog prioritises application responsiveness over event completeness. Tuning the queue and worker settings for your specific throughput and memory budget restores full protected-level preservation.
Teams running beyond the preset's design point can tune max_queue_size, protected_queue_size, and sink_concurrency to match their workload.

fapilog events average 439 bytes; structlog and loguru events average ~130 bytes. The extra bytes aren't bloat: they're operational context that the other libraries don't provide.
A fapilog event:

```json
{
  "timestamp": "2026-02-07T16:56:09.854Z",
  "level": "INFO",
  "message": "Request completed",
  "diagnostics": {
    "host": "prod-web-01",
    "pid": 86909,
    "python": "3.11.10",
    "service": "api"
  },
  "context": { "message_id": "d594...37ca" },
  "data": {
    "method": "POST",
    "path": "/api/v1/orders",
    "status_code": 200,
    "correlation_id": "b38f...55d6",
    "latency_ms": 0.62
  }
}
```
A structlog/loguru event:

```json
{
  "level": "info",
  "message": "Request completed",
  "method": "POST",
  "path": "/api/v1/orders",
  "status_code": 200,
  "correlation_id": "b38f...55d6"
}
```

No timestamp. No host. No PID. No service name. No message ID. No structured envelope.
fapilog automatically enriches every event with runtime diagnostics (host, PID, Python version, service name), per-event message IDs, ISO timestamps, and a structured envelope that separates operational metadata from business data. All of this is built-in — zero custom code.
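A sketch of that enrichment step, using only the standard library. The field names mirror the sample event above, but this illustrates the envelope shape, not fapilog's implementation (the "service" lookup in particular is an assumption):

```python
import datetime
import os
import platform
import socket
import uuid


def enrich(message: str, level: str, data: dict) -> dict:
    """Wrap business data in an envelope with runtime diagnostics,
    a per-event message ID, and an ISO-8601 UTC timestamp."""
    timestamp = (
        datetime.datetime.now(datetime.timezone.utc)
        .isoformat(timespec="milliseconds")
        .replace("+00:00", "Z")
    )
    return {
        "timestamp": timestamp,
        "level": level,
        "message": message,
        "diagnostics": {
            "host": socket.gethostname(),
            "pid": os.getpid(),
            "python": platform.python_version(),
            "service": os.environ.get("SERVICE_NAME", "api"),  # assumed lookup
        },
        "context": {"message_id": uuid.uuid4().hex},
        "data": data,  # business fields stay separate from metadata
    }
```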
On top of this, fapilog's production preset enables three native redactors — URL credential stripping, field masking, and regex-based pattern matching — recursively scanning every log event for sensitive data. structlog and loguru have no equivalent built-in capability. In these benchmarks, fapilog is performing more work per event than the other two libraries.
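Recursive redaction of that kind can be sketched in a few lines. The field names and patterns here are assumptions for illustration, not fapilog's actual rules:

```python
import re

SENSITIVE_FIELDS = {"password", "token", "secret"}       # field masking
CARD_PATTERN = re.compile(r"\b\d{13,16}\b")              # regex pattern matching
URL_CREDENTIALS = re.compile(r"(//)[^/@\s]+:[^/@\s]+@")  # URL credential stripping


def redact(value):
    """Recursively walk dicts, lists, and strings, applying all three
    redactors to every level of the event."""
    if isinstance(value, dict):
        return {
            k: "***" if k.lower() in SENSITIVE_FIELDS else redact(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(v) for v in value]
    if isinstance(value, str):
        value = URL_CREDENTIALS.sub(r"\1***:***@", value)
        return CARD_PATTERN.sub("***", value)
    return value
```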
To get the same metadata and safety from structlog or loguru, you'd write custom processors, context managers, formatters, and redaction logic. fapilog does 3x more work per event, writes 3x more bytes to the sink, runs three redactors on every event, and is still 408x faster.
At 1,000 RPS with a slow sink, fapilog can't write every event. But it chooses what to shed. With protected_levels, ERROR and CRITICAL events get priority retention while INFO is shed first.
The workload emits a realistic level mix: 90% INFO, 5% WARNING, 4% ERROR, 1% CRITICAL. At 100 RPS, fapilog preserves 100% of all levels — no shedding is needed. At 1,000 RPS, the pipeline sheds INFO and WARNING events while protecting every ERROR and CRITICAL. The other libraries have no shedding mechanism — they simply error out under load.
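A minimal model of level-aware shedding, assuming a bounded queue that drops unprotected events first. This is an illustration of the policy, not fapilog's actual mechanism:

```python
PROTECTED_LEVELS = {"ERROR", "CRITICAL"}


class SheddingQueue:
    """Bounded queue that sheds unprotected events first: when full, an
    incoming protected event evicts the oldest unprotected one; an
    incoming unprotected event is dropped and counted."""

    def __init__(self, max_size: int):
        self.max_size = max_size
        self.items: list[dict] = []
        self.shed = 0

    def offer(self, event: dict) -> bool:
        if len(self.items) < self.max_size:
            self.items.append(event)
            return True
        if event["level"] in PROTECTED_LEVELS:
            for i, queued in enumerate(self.items):
                if queued["level"] not in PROTECTED_LEVELS:
                    del self.items[i]          # evict oldest unprotected event
                    self.shed += 1
                    self.items.append(event)
                    return True
        self.shed += 1                         # nothing evictable: drop newcomer
        return False
```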
Thread-based async isn't the same as purpose-built async.
structlog's await ainfo() offloads the write to a thread, freeing the event loop. But each request still awaits completion of that thread — the response doesn't return until the 300ms write finishes. At scale, the thread pool becomes the bottleneck. loguru's enqueue=True uses a single background drain thread, which processes writes sequentially at ~3/sec — the queue grows without bound.
fapilog decouples completely: the log event goes into an async queue in under 1ms, the response returns immediately, and a pool of async workers drains to the sink with 64 concurrent writes. Under pressure, adaptive backpressure scales workers and sheds low-priority traffic rather than blocking requests.
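The decoupled design can be sketched with an `asyncio.Queue`, a worker pool, and a semaphore capping in-flight sink writes. Parameter names echo the article's settings, but the implementation is a simplified illustration, not fapilog's code:

```python
import asyncio


async def run_pipeline(events, sink_write, queue_size=1000,
                       workers=4, sink_concurrency=64):
    """Enqueue is near-instant for the producer; a pool of workers drains
    the queue, with a semaphore bounding concurrent sink writes."""
    queue: asyncio.Queue = asyncio.Queue(maxsize=queue_size)
    sem = asyncio.Semaphore(sink_concurrency)
    written = []

    async def worker():
        while True:
            event = await queue.get()
            async with sem:                    # cap in-flight writes
                await sink_write(event)
            written.append(event)
            queue.task_done()

    tasks = [asyncio.create_task(worker()) for _ in range(workers)]
    for e in events:
        queue.put_nowait(e)                    # producer returns immediately
    await queue.join()                         # wait for the drain to finish
    for t in tasks:
        t.cancel()
    await asyncio.gather(*tasks, return_exceptions=True)
    return written
```

The key property is that the producer's cost is a queue insert, independent of sink latency; the sink's 300ms round-trip is paid only by the workers, concurrently.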
Each library's documented, first-party mechanism for non-blocking log writes.
fapilog:

```python
# built-in preset
profile = "production"
# enables:
#   async queue + workers
#   concurrent sink writes
#   protected ERROR/CRITICAL
#   3 native redactors
```
structlog:

```
# replace every log call:
- logger.info("msg", **kw)
+ await logger.ainfo("msg", **kw)
# requires async context
# each call site must change
```
loguru:

```python
# add enqueue to sink:
logger.add(
    sink,
    enqueue=True
)
# background drain thread
# single-threaded writes
```
Reproducible, automated, verified.
The 300ms sink simulates HTTP-based log destinations like Loki, CloudWatch, or Datadog.
Every log event carries a unique correlation ID and sequence number. A post-test harness validates event count, uniqueness, JSON structure integrity, and per-level preservation rates.
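A post-test check of this kind might look like the following sketch; the field names (`correlation_id`, `seq`) are assumptions based on the events shown earlier, not the harness's actual schema:

```python
import json


def verify(raw_lines, expected_count):
    """Parse each line as JSON (structure check), require unique
    correlation IDs, and report missing events and sequence gaps."""
    events = [json.loads(line) for line in raw_lines]
    ids = [e["correlation_id"] for e in events]
    assert len(set(ids)) == len(ids), "duplicate correlation IDs"
    seqs = sorted(e["seq"] for e in events)
    gaps = [b for a, b in zip(seqs, seqs[1:]) if b != a + 1]
    return {
        "received": len(events),
        "missing": expected_count - len(events),
        "gaps": gaps,
    }
```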
Each library runs in an isolated process with the identical FastAPI app, sink, and load profile. k6 uses constant-arrival-rate to maintain steady request pressure regardless of server response time. All libraries use their recommended async configuration.