Why this page matters
This page explains how Scheduler Tick and Drain Flow fits into the wider ZeroKernel execution model, what problem it is meant to solve, and what trade-off you are actually accepting when you use it in production firmware. The goal is not to treat Scheduler Tick and Drain Flow as an isolated API call, but to understand where it sits inside bounded scheduling, queue discipline, fault visibility, and profile selection.
Read this topic as an operational contract. Start from the smallest working path, wire it into a lean profile first, and only expand into richer routing, diagnostics, or transport state after you can prove that the timing outcome is still worth the extra flash and RAM. That mindset is what keeps ZeroKernel useful on small boards instead of turning it into another bloated abstraction.
The safest pattern is always the same: define the runtime boundary, keep the hot path short, measure the effect with compare scripts, and only then scale complexity. The examples below are not filler; they show the smallest repeatable patterns you can lift into real firmware when you need clean integration instead of ad-hoc loops.
Three practical patterns
Use one bounded task for the hot path, then let the scheduler keep the phase aligned over time.
ZeroKernel.begin(boardMillis);                      // bind the runtime to the board clock source
ZeroKernel.addTask("Fast", fastTask, 10, 0, true);  // register the bounded hot-path task
ZeroKernel.tick();                                  // advance one scheduler turn each loop pass
Move non-critical routing and transport out of the immediate task body so fast paths stay predictable.
const auto key = ZeroKernel.makeTopicKey("telemetry.sample");  // resolve the topic once, outside the hot path
ZeroKernel.publishDeferredFast(key, sampleValue);              // enqueue only; no transport work here
ZeroKernel.flushEvents();                                      // drain deferred events in a bounded step
Read the timing report and stats together so you can prove the cost of each abstraction layer.
const auto stats = ZeroKernel.getStats();          // counters and queue depths
const auto timing = ZeroKernel.getTimingReport();  // per-tick timing summary
Serial.println(timing.maxTickMs);                  // worst-case tick duration observed
What to verify while you use it
- Validate timing before you validate aesthetics. A cleaner API is not a win if the fast_miss count rises.
- Prefer the smallest profile that still matches the workload, then add optional modules only when the measured payoff is obvious.
- Keep callbacks and transport steps bounded so watchdog, panic flow, and queue limits remain meaningful.
Common mistakes that make results misleading
- Do not copy a demo pattern into production firmware without measuring it on the real board and real build profile you plan to ship.
- Do not read success counters without reading queue depth, timing, and workload label next to them.
- Do not enable heavier diagnostics and compatibility flags in a lean target just because the defaults looked convenient.
Recommended working sequence
1. Boot the runtime, register the minimum useful task set, and prove that the baseline timing is clean before adding optional layers.
2. Introduce routing, diagnostics, or transport one layer at a time so the cost and payoff remain obvious.
3. Update docs, charts, or public claims only after the same workload survives the same validation path more than once.
Why tick is the operational heartbeat
tick() is the one call that advances the runtime. Every major property you care about—task cadence, watchdog inspection, queue draining, signal emission, and state transitions—depends on that call happening frequently and consistently.
Because of that, a clean firmware loop usually stays very small. The closer your main loop remains to “call tick, then return,” the more accurately the runtime can enforce its own guarantees. As soon as code around it starts adding manual loops, blocking waits, or ad-hoc retries, you are reintroducing the exact chaos the runtime was meant to remove.
Think of tick() as the execution metronome. You do not want unrelated code improvising around it.
What happens in one scheduler turn
- The runtime reads the active clock source.
- It checks which tasks are due using wrap-safe time arithmetic.
- It picks the best eligible task according to due time, priority, and task state.
- It runs one bounded callback.
- It drains bounded slices of event, command, and work queues.
- It updates watchdog counters, timing reports, and kernel state transitions.
This structure is what keeps the runtime deterministic. It avoids both extremes: doing too little to be useful, and doing so much in one call that a single late turn creates a second hidden scheduler inside the first.
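The wrap-safe due check in the second step can be sketched in a few lines of plain C++. This illustrates the general unsigned-arithmetic technique, not ZeroKernel's actual internals:

```cpp
#include <cstdint>
#include <cassert>

// Wrap-safe "is this task due?" check for a 32-bit millisecond clock.
// Subtracting in unsigned space and reinterpreting the difference as signed
// keeps the comparison correct across the ~49-day rollover, provided task
// intervals stay below 2^31 ms.
inline bool isDue(uint32_t nowMs, uint32_t dueMs) {
    return static_cast<int32_t>(nowMs - dueMs) >= 0;
}
```

With this shape, a task whose due time was scheduled just before the counter wraps is still recognized as due once the clock rolls past zero, without any special-case branch.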
What tick should never become
A healthy tick() path is short and repetitive. It should never become the place where the firmware secretly waits for sockets, retries network calls in a loop, or spins on hardware readiness until a peripheral finally responds.
- Do not block on I/O inside code wrapped around tick().
- Do not call tick() recursively from inside a task.
- Do not treat queue draining as free; it is budgeted work with real timing cost.
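The budgeted-drain idea behind that last rule can be sketched generically. The queue type and handler here are placeholders for illustration, not ZeroKernel API:

```cpp
#include <cstddef>
#include <queue>
#include <cassert>

// Drain at most `budget` items per scheduler turn so one busy queue cannot
// consume the whole tick. Leftover depth stays visible to the caller as
// queue pressure rather than silently stretching the turn.
template <typename T, typename Handler>
std::size_t drainBounded(std::queue<T>& q, std::size_t budget, Handler handle) {
    std::size_t processed = 0;
    while (processed < budget && !q.empty()) {
        handle(q.front());
        q.pop();
        ++processed;
    }
    return processed;
}
```

The return value and the remaining queue size together tell you whether the budget is keeping up with the arrival rate.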
Three loop patterns to recognize
void loop() {
  ZeroKernel.tick();
}
This is the preferred steady-state loop. If you can keep the loop at this size, you preserve the clearest runtime behavior.
void loop() {
  sampleFrontPanelButtons();
  ZeroKernel.tick();
}
This is acceptable only when the extra function is tiny, bounded, and does not start its own blocking work. Use this shape sparingly.
void loop() {
  while (!client.connected()) {
    reconnectClient();
  }
  ZeroKernel.tick();
}
This is the shape to avoid. It turns the main loop into a manual retry loop and starves the scheduler while connectivity is bad.
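A non-blocking replacement turns the retry into a rate-limited state check instead of a loop. In this sketch, `connected` and `tryReconnect` are stand-ins for your transport, and the struct name is illustrative:

```cpp
#include <cstdint>
#include <cassert>

// One bounded reconnect attempt per interval; every call returns immediately,
// so loop() can still reach ZeroKernel.tick() while connectivity is bad.
struct Reconnector {
    uint32_t intervalMs;
    uint32_t nextAttemptMs = 0;

    template <typename Connected, typename TryConnect>
    void poll(uint32_t nowMs, Connected connected, TryConnect tryReconnect) {
        if (connected()) return;
        if (static_cast<int32_t>(nowMs - nextAttemptMs) < 0) return;  // wrap-safe wait
        tryReconnect();                        // a single bounded attempt
        nextAttemptMs = nowMs + intervalMs;
    }
};
```

In the main loop this becomes one `poll` call followed by `ZeroKernel.tick()`, so the scheduler keeps running at full cadence even while the link is down.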
How to tell if tick is still healthy
Use runtime numbers instead of intuition. If fast_miss remains zero, queue depth stays bounded, and maximum tick time does not drift upward after changes, the loop is still behaving as intended. If those numbers degrade, the fix is usually to move work out of the immediate path, not to call tick() more aggressively.
The compare scripts exist to make this visible. They are the fastest way to tell whether a change improved runtime behavior or simply moved cost somewhere harder to see.
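A minimal before/after comparison can codify "healthy" so the judgment lives in a script instead of intuition. The field names mirror the metrics above but are illustrative, not the library's actual structs:

```cpp
#include <cstdint>
#include <cassert>

// Snapshot of the three health signals named above, captured per run.
struct TickHealth {
    uint32_t fastMiss;
    uint32_t maxQueueDepth;
    uint32_t maxTickMs;
};

// Healthy = no fast misses, bounded queue depth, and no upward drift in the
// worst-case tick time relative to the previous accepted run.
inline bool stillHealthy(const TickHealth& before, const TickHealth& after,
                         uint32_t queueLimit) {
    return after.fastMiss == 0
        && after.maxQueueDepth <= queueLimit
        && after.maxTickMs <= before.maxTickMs;
}
```

Running this against each change makes regressions show up as a failed check rather than a vague feeling that the board got slower.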
Tick FAQ
Should I call tick from inside another task?
No. Call it once from the main loop. Recursive or nested ticking makes the runtime hard to reason about and can break the bounded model.
Why is tick still one of the most important APIs?
Because every claimed property—timing, queue bounds, watchdog updates—depends on tick being called frequently and cleanly.
How do I reduce tick pressure when the project gets bigger?
Move heavy work to deferred queues or modules, reduce unnecessary publish frequency, and keep only the most time-sensitive logic in the immediate task path.
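The split between the immediate path and deferred work can be sketched generically: the hot path only records a raw value, and the heavy step (string formatting here stands in for any expensive work) runs later off the time-sensitive path. All names are illustrative, not ZeroKernel API:

```cpp
#include <cstdint>
#include <queue>
#include <string>
#include <cassert>

// Pending raw samples recorded by the hot path, processed later.
std::queue<int32_t> pending;

// Immediate path: O(1), no formatting, no I/O.
void fastSample(int32_t raw) { pending.push(raw); }

// Deferred path: the expensive step, run when the schedule allows it.
std::string formatOnePending() {
    if (pending.empty()) return {};
    std::string line = "sample=" + std::to_string(pending.front());
    pending.pop();
    return line;
}
```

Keeping the immediate path this small is what lets the fast task hit its cadence even when the deferred side temporarily falls behind.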