
Deferred Publishing

Deferred publishing moves work out of the immediate task path and into a bounded queue. This page explains when to use it and how to keep it from turning into a hidden backlog.

Why this page matters

This page explains how Deferred Publishing fits into the wider ZeroKernel execution model, what problem it is meant to solve, and what trade-off you are actually accepting when you use it in production firmware. The goal is not to treat Deferred Publishing as an isolated API call, but to understand where it sits inside bounded scheduling, queue discipline, fault visibility, and profile selection.

Read this topic as an operational contract. Start from the smallest working path, wire it into a lean profile first, and only expand into richer routing, diagnostics, or transport state after you can prove that the timing outcome is still worth the extra flash and RAM. That mindset is what keeps ZeroKernel useful on small boards instead of turning it into another bloated abstraction.

The safest pattern is always the same: define the runtime boundary, keep the hot path short, measure the effect with compare scripts, and only then scale complexity. The examples below are not filler; they show the smallest repeatable patterns you can lift into real firmware when you need clean integration instead of ad-hoc loops.

Three practical patterns

Core cadence pattern

Use one bounded task for the hot path, then let the scheduler keep the phase aligned over time.

C++
    ZeroKernel.begin(boardMillis);
    ZeroKernel.addTask("Fast", fastTask, 10, 0, true);
    ZeroKernel.tick();
Deferred work pattern

Move non-critical routing and transport out of the immediate task body so fast paths stay predictable.

C++
    const auto key = ZeroKernel.makeTopicKey("telemetry.sample");
    ZeroKernel.publishDeferredFast(key, sampleValue);
    ZeroKernel.flushEvents();
Runtime visibility pattern

Read the timing report and stats together so you can prove the cost of each abstraction layer.

C++
    const auto stats = ZeroKernel.getStats();
    const auto timing = ZeroKernel.getTimingReport();
    Serial.println(timing.maxTickMs);

What to verify while you use it

  • Validate timing before you validate aesthetics. A cleaner API is not a win if fast-path deadline misses rise.
  • Prefer the smallest profile that still matches the workload, then add optional modules only when the measured payoff is obvious.
  • Keep callbacks and transport steps bounded so watchdog, panic flow, and queue limits remain meaningful.

Common mistakes that make results misleading

  • Do not copy a demo pattern into production firmware without measuring it on the real board and real build profile you plan to ship.
  • Do not read success counters without reading queue depth, timing, and workload label next to them.
  • Do not enable heavier diagnostics and compatibility flags in a lean target just because the defaults looked convenient.

Recommended working sequence

Start from the smallest valid path

Boot the runtime, register the minimum useful task set, and prove that the baseline timing is clean before adding optional layers.

Add one layer, then measure it

Introduce routing, diagnostics, or transport one layer at a time so the cost and payoff remain obvious.

Publish only repeatable results

Update docs, charts, or public claims only after the same workload survives the same validation path more than once.

Why deferred routing exists

Deferred publishing exists to protect the hot path. A fast producer task should not be forced to pay the entire cost of whatever consumers might do next, especially if those consumers may format data, touch transport state, or trigger slower follow-up actions.

By placing the publication into a bounded queue, the producer can remain short while the runtime drains the follow-up work in measured slices. This is one of the most practical ways ZeroKernel keeps firmware responsive without pretending the work is free.

C++
    const auto telemetryKey = ZeroKernel.makeTopicKey("telemetry.sample");
    ZeroKernel.publishDeferredFast(telemetryKey, 42);

When deferred is the correct choice

  • When a fast sampling task should not directly trigger formatting or transport work.
  • When you need queue depth to become visible and measurable instead of hidden in callback chains.
  • When the consumer can afford a scheduler-turn delay but the producer cannot afford to block.

Deferred routing is not “more advanced direct publish.” It is isolation. Use it when that isolation has measurable runtime value.

Three practical deferred patterns

C++
    const auto sampleKey = ZeroKernel.makeTopicKey("sample.fast");
    ZeroKernel.publishDeferredFast(sampleKey, sampleValue);

This is the classic producer-consumer split: a fast task emits and returns immediately.

C++
    const auto alarmKey = ZeroKernel.makeTopicKey("alarm.edge");
    if (thresholdCrossed) {
        ZeroKernel.publishDeferredFast(alarmKey, 1);
    }

Use this when the event is important but the response should still remain budgeted and observable.

C++
    ZeroKernel.publishDeferredFast(key, payload);
    ZeroKernel.flushEvents();

This explicit flush pattern is useful in controlled contexts where you want to force queue progress at a known point, but still through the bounded event machinery.

Failure modes and queue pressure

If producers publish faster than the drain budget can consume, the queue will grow, coalescing will start to matter, and eventually the runtime will expose that pressure through queue metrics and drop counters. That is not a flaw in the queue; it is a signal that the publication rate and the drain budget are no longer in balance.

The fix is usually one of three things: reduce the producer cadence, raise the drain budget deliberately, or split heavy consumers so the queue is not paying for too much work per message.

Deferred publish FAQ

Is deferred always better than direct publish?

No. It is better when it buys isolation. If a signal is tiny and immediate, direct publish may be simpler and cheaper.

What metric should I watch first?

Watch queue depth and drain behavior first. A growing queue is the clearest sign that the design is out of balance.

What usually causes a deferred queue to become unhealthy?

Either the producer fires too often, or the consumers are doing too much per item. Treat that as a workload design issue, not just a queue setting problem.