Performance Highlights

| Metric | T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 |
|---|---|---|---|---|---|---|---|---|
| File Count (response) | 1 | 2 | 1 | 2 | 1 | 2 | 1 | 2 |
| File Size (each) | 8 KB | 4 KB | 64 KB | 32 KB | 256 KB | 128 KB | 512 KB | 256 KB |
| Payload Size (total) | 8 KB | 8 KB | 64 KB | 64 KB | 256 KB | 256 KB | 512 KB | 512 KB |
| Burst Transfer (MB/s) | 13 | 8 | 97 | 77 | 225 | 171 | 237 | 184 |
| Sustained Transfer (MB/s) | 13 | 9 | 97 | 65 | 219 | 159 | 229 | 194 |
| Burst Requests (req/s) | 1653 | 1021 | 1548 | 1223 | 902 | 682 | 473 | 367 |
| Sustained Requests (req/s) | 1705 | 1128 | 1552 | 1043 | 876 | 635 | 454 | 387 |
| Burst Latency (ms) | 30 | 54 | 32 | 41 | 57 | 79 | 119 | 142 |
| Sustained Latency (ms) | 31 | 46 | 32 | 53 | 60 | 88 | 121 | 131 |
I built Axon, an open-source WSGI core that accepts dynamic batch requests: the client names the files it wants in the query string, and the server bundles them into a single multipart response stream. Axon is synchronous, implemented in 507 lines of Python, and has zero dependencies. I designed it for rapid prototyping of experimental applications that need granular control over the request lifecycle, so I wanted something simple enough to retool quickly but performant enough to scale.
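To make the request model concrete, here is a minimal sketch of the same idea in plain WSGI, using only the standard library. This is not Axon's code; the `/batch` path, the `file` query parameter, and the `./public` document root are assumptions for illustration.

```python
import mimetypes
import os
from urllib.parse import parse_qs
from wsgiref.simple_server import make_server

BOUNDARY = "batch-demo"             # multipart boundary for the demo
ROOT = os.path.abspath("./public")  # assumed document root

def app(environ, start_response):
    # e.g. GET /batch?file=logo.png&file=styles.css
    names = parse_qs(environ.get("QUERY_STRING", "")).get("file", [])
    parts = []
    for name in names:
        path = os.path.abspath(os.path.join(ROOT, name))
        # skip anything outside the document root or missing on disk
        if not path.startswith(ROOT + os.sep) or not os.path.isfile(path):
            continue
        ctype = mimetypes.guess_type(path)[0] or "application/octet-stream"
        with open(path, "rb") as fh:
            body = fh.read()
        header = (
            f"--{BOUNDARY}\r\n"
            f"Content-Type: {ctype}\r\n"
            f'Content-Disposition: attachment; filename="{os.path.basename(path)}"\r\n'
            "\r\n"
        ).encode()
        parts.append(header + body + b"\r\n")
    payload = b"".join(parts) + f"--{BOUNDARY}--\r\n".encode()
    start_response("200 OK", [
        ("Content-Type", f"multipart/mixed; boundary={BOUNDARY}"),
        ("Content-Length", str(len(payload))),
    ])
    return [payload]

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, app).serve_forever()
```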
You can reproduce these results with the included deployment tools. The Ubuntu deployment script gets a live demo running in under five minutes, and the included client script handles stress testing. The tests reveal Axon's architectural character: on single-core hardware, CPU saturation is reached before network limits; on multicore deployments the bottleneck may shift to network throughput, depending on available processing power and bandwidth. Because performance scales with compute rather than I/O, the architecture lends itself to distributed strategies that specialize nodes by file type and size.
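For reference, a rough sketch of what such a stress-test client could look like, again using only the standard library. The endpoint URL and query parameters are assumptions, and the bundled client script may work differently; this only illustrates how the throughput and latency figures above can be measured.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://127.0.0.1:8000/batch?file=a.bin&file=b.bin"  # hypothetical endpoint
WORKERS = 16       # concurrent connections
REQUESTS = 1000    # total requests to issue

def fetch(_):
    # time a single batch request and record the payload size
    t0 = time.perf_counter()
    with urlopen(URL) as resp:
        size = len(resp.read())
    return time.perf_counter() - t0, size

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(fetch, range(REQUESTS)))
elapsed = time.perf_counter() - start

latencies = [r[0] for r in results]
total_bytes = sum(r[1] for r in results)
print(f"requests/sec : {REQUESTS / elapsed:.0f}")
print(f"throughput   : {total_bytes / elapsed / 1e6:.1f} MB/s")
print(f"mean latency : {statistics.mean(latencies) * 1000:.1f} ms")
```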