Patent-pending · India 2026

Compression built for the smallest payloads

AEE is purpose-built for IoT sensor data — 50 to 200 byte payloads where zlib, LZ4, and Huffman coding fall apart. Zero heap. Zero malloc. Runs on Cortex-M0.

0.44 Compression ratio
1.7 KB Decode RAM
3 cy/B Decode speed
0 Heap allocations

General-purpose compression fails below 200 bytes

IoT sensors transmit 50–200 byte payloads over cellular and LoRa. At this scale, zlib's 256KB RAM footprint is prohibitive, LZ4 produces no compression at all, and Huffman trees cost more than the data they encode. AEE uses pre-shared dictionaries tuned to your sensor alphabet — no overhead per packet, no handshake, just smaller packets from day one.

Pre-shared Dictionaries
Sensor and cloud share a rank-ordered dictionary. Zero per-packet overhead — compression starts at byte one.
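A minimal sketch of the idea. The dictionary contents and ordering here are illustrative (built from the 11-value benchmark alphabet below), and `rank_of`/`value_of` are hypothetical names, not the shipping API:

```c
#include <stddef.h>
#include <stdint.h>

/* HYPOTHETICAL pre-shared dictionary: the sensor alphabet ordered by
 * assumed frequency. Sensor and cloud both compile in the same table,
 * so no dictionary bytes and no handshake travel with any packet. */
static const uint8_t RANK_DICT[11] = {25, 24, 26, 23, 27, 22, 28, 21, 29, 20, 30};

/* Encoder side: map a raw reading to its rank. Frequent values get small
 * ranks, which a later entropy stage can spend fewer bits on.
 * Returns -1 for values outside the shared alphabet. */
static int rank_of(uint8_t value) {
    for (size_t i = 0; i < sizeof RANK_DICT; i++)
        if (RANK_DICT[i] == value) return (int)i;
    return -1;
}

/* Decoder side (cloud): rank back to raw value. */
static uint8_t value_of(int rank) { return RANK_DICT[rank]; }
```

Because both ends agree on the table at build time, compression really does start at byte one of the first packet.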
Zero Heap, Zero Malloc
Entire encode and decode path runs on caller-provided stack buffers. No allocation, no fragmentation, deterministic timing.
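The zero-heap contract can be sketched as an API shape. `toy_encode` below is a stand-in (it just copies bytes) and its name and signature are assumptions, not the shipping AEE call; the point is that every buffer belongs to the caller:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Caller supplies input, output, and capacity; the codec never calls
 * malloc, so there is no fragmentation and timing is deterministic.
 * This toy body just copies bytes in place of the real encoder. */
static size_t toy_encode(const uint8_t *in, size_t in_len,
                         uint8_t *out, size_t out_cap) {
    if (in_len > out_cap) return 0;   /* caller's buffer too small */
    memcpy(out, in, in_len);
    return in_len;                    /* bytes written */
}

/* Typical MCU call site: fixed stack buffers, zero heap allocations. */
static size_t demo(void) {
    uint8_t payload[64] = {21, 21, 22, 25}; /* rest zero-initialized */
    uint8_t packet[64];                     /* stack-resident output */
    return toy_encode(payload, sizeof payload, packet, sizeof packet);
}
```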
Cortex-M0 Ready
1.7 KB decode RAM fits the smallest MCUs. No FPU, no 64-bit divide, no platform dependencies. Pure portable C.
Adaptive Mode Selection
Three encoding modes: fixed-width, single-tier, and multi-tier. The encoder automatically picks the smallest output for each payload.
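One way such selection can work, sketched with two toy modes standing in for the real fixed-width and tiered encoders (all names and the one-byte mode tag are assumptions): encode into a scratch buffer per mode, keep the smallest.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef size_t (*encode_fn)(const uint8_t *, size_t, uint8_t *, size_t);

/* Toy mode 0: raw copy. */
static size_t mode_raw(const uint8_t *in, size_t n, uint8_t *out, size_t cap) {
    if (n > cap) return 0;
    memcpy(out, in, n);
    return n;
}

/* Toy mode 1: keep every other byte (only valid for pairwise-repeating
 * input; validity checks are omitted in this sketch). */
static size_t mode_half(const uint8_t *in, size_t n, uint8_t *out, size_t cap) {
    size_t k = 0;
    for (size_t i = 0; i < n; i += 2) {
        if (k >= cap) return 0;
        out[k++] = in[i];
    }
    return k;
}

/* Run every candidate mode into a stack scratch buffer, keep the
 * smallest, and prefix a one-byte mode tag so the decoder knows
 * which mode won. Returns total bytes written, 0 on failure. */
static size_t encode_best(const uint8_t *in, size_t n,
                          uint8_t *out, size_t cap,
                          const encode_fn *modes, size_t n_modes) {
    uint8_t scratch[256];
    size_t best_len = 0;
    int best_mode = -1;
    for (size_t m = 0; m < n_modes; m++) {
        size_t len = modes[m](in, n, scratch, sizeof scratch);
        if (len && (best_mode < 0 || len < best_len)) {
            best_mode = (int)m;
            best_len = len;
        }
    }
    if (best_mode < 0 || best_len + 1 > cap) return 0;
    out[0] = (uint8_t)best_mode;                /* mode tag */
    modes[best_mode](in, n, out + 1, cap - 1);  /* re-run the winner */
    return best_len + 1;
}
```

The worst case is therefore never worse than the best single mode plus one tag byte.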
Per-Value RLE
Run-length encoding on tier indices compresses repetitive sensor readings further without a separate pass.
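A minimal sketch of the mechanism, assuming the encoder has already reduced the payload to a stream of tier indices (that intermediate format and the `(run, index)` pair layout are assumptions for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* Collapse consecutive equal tier indices into (run, index) pairs, so a
 * burst of identical sensor readings costs two bytes instead of one byte
 * per reading. Returns bytes written, 0 if the output buffer is full. */
static size_t rle_indices(const uint8_t *idx, size_t n,
                          uint8_t *out, size_t cap) {
    size_t o = 0;
    for (size_t i = 0; i < n; ) {
        size_t run = 1;
        while (i + run < n && idx[i + run] == idx[i] && run < 255) run++;
        if (o + 2 > cap) return 0;
        out[o++] = (uint8_t)run;  /* run length, capped at 255 */
        out[o++] = idx[i];        /* repeated tier index */
        i += run;
    }
    return o;
}
```

Since it operates on indices the encoder already produced, no separate compression pass over the raw payload is needed.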
Patent-Pending
Filed March 2026 with 43 claims covering the encoding method, stream architecture, and hardware implementation.

Head-to-head on real sensor data

11 distinct sensor values (temperature/humidity range, 20–30). 100,000 iterations, averaged, measured on x86 (timings on a 16 MHz Cortex-M are correspondingly longer). Ratio = compressed size ÷ original size, so lower is better.

Codec    Payload   Ratio   Encode    Decode     RAM
AEE      64 B      0.53    1.4 µs    0.18 µs    1.7 KB
zlib-1   64 B      0.78    4.9 µs    2.8 µs     256 KB
LZ4      64 B      1.03    0.2 µs    0.01 µs    16 KB
AEE      1 KB      0.44    12 µs     6.3 µs     1.7 KB
zlib-6   1 KB      0.51    23 µs     6.1 µs     256 KB
LZ4      1 KB      0.98    1.4 µs    0.14 µs    16 KB

Cost savings for a 100K device fleet

Two paths to value — immediate firmware-only savings, or deeper hardware redesign savings for your next-gen platform.

Existing Fleet — Firmware Update Only

₹1.3 Cr/yr
Cellular data reduction + battery life extension. No hardware changes, no procurement cycle. Deploy in weeks.

Next-Gen Hardware Design

₹5+ Cr/yr
Add MCU downgrades, SRAM elimination, and deeper power savings. Design AEE into your next sensor revision.

Based on ₹15/device/year license. Savings estimates assume 100K active devices transmitting hourly over cellular.
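The license arithmetic behind the fine print, treating the quoted ₹1.3 Cr/yr as gross of the license fee (the page does not say, so that is an assumption):

```c
/* Worked license arithmetic from the figures above (₹1 Cr = ₹10,000,000).
 * 100,000 devices × ₹15/device/year = ₹15,00,000 ≈ ₹0.15 Cr/yr in fees,
 * leaving roughly ₹1.15 Cr/yr net on the firmware-only path if the
 * quoted saving is gross of the fee (assumption). */
static long license_cost_inr(long devices, long fee_per_device_yr) {
    return devices * fee_per_device_yr;
}
```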

Protected and published

Two Indian patent applications covering the encoding algorithm and a novel ternary weight encoding for AI inference.

Patent #1 — AEE Compression · Filed March 31, 2026 · Mumbai Patent Office
43 claims (8 independent) covering encoding, stream architecture, and hardware implementation
Patent #2 — NativeTernary · Filed April 1, 2026 · Provisional
arXiv 2604.03336 — 460× smaller framing overhead vs GGUF on BitNet b1.58

Ready to compress smarter?

We license AEE for IoT platforms, chipmakers, satellite constellations, and edge computing. Let's talk about your use case.