Architecture and Execution Stack

ZeroKernel stays small by splitting the runtime into a lean core and optional modules. This page explains that stack so you can decide where code belongs before the project grows.

Why this page matters

This page explains how the architecture and execution stack fit into the wider ZeroKernel execution model, what problem the core-versus-module split is meant to solve, and what trade-off you actually accept when you rely on it in production firmware. The goal is not to treat the stack as a set of isolated API calls, but to understand where each layer sits inside bounded scheduling, queue discipline, fault visibility, and profile selection.

Read this topic as an operational contract. Start from the smallest working path, wire it into a lean profile first, and only expand into richer routing, diagnostics, or transport state after you can prove that the timing outcome is still worth the extra flash and RAM. That mindset is what keeps ZeroKernel useful on small boards instead of turning it into another bloated abstraction.

The safest pattern is always the same: define the runtime boundary, keep the hot path short, measure the effect with compare scripts, and only then scale complexity. The examples below are not filler; they show the smallest repeatable patterns you can lift into real firmware when you need clean integration instead of ad-hoc loops.

Three practical patterns

Repository-first setup

Use this when you want a clean local source of truth and explicit control over updates.

Shell
    git clone git@github.com:ZeroBitsTech/ZeroKernel.git
    cd ZeroKernel
    bash scripts/run_desktop_tests.sh

Smallest valid runtime boot

Start from one bounded task and a visible board clock before you add queue work or network modules.

C++
    ZeroKernel.begin(boardMillis);
    ZeroKernel.addTask("Sample", sampleTask, 100, 0, true);
    ZeroKernel.tick();

PlatformIO profile lock

Pin the profile in build flags so footprint drift is intentional instead of accidental.

INI
    [env:esp32]
    platform = espressif32
    board = esp32dev
    framework = arduino
    build_flags =
      -DZEROKERNEL_PROFILE_LEAN_NET

What to verify while you use it

  • Validate timing before you validate aesthetics. A cleaner API is not a win if timing misses rise.
  • Prefer the smallest profile that still matches the workload, then add optional modules only when the measured payoff is obvious.
  • Keep callbacks and transport steps bounded so watchdog, panic flow, and queue limits remain meaningful.

Common mistakes that make results misleading

  • Do not copy a demo pattern into production firmware without measuring it on the real board and real build profile you plan to ship.
  • Do not read success counters without reading queue depth, timing, and workload label next to them.
  • Do not enable heavier diagnostics and compatibility flags in a lean target just because the defaults looked convenient.

Recommended working sequence

Start from the smallest valid path

Boot the runtime, register the minimum useful task set, and prove that the baseline timing is clean before adding optional layers.

Add one layer, then measure it

Introduce routing, diagnostics, or transport one layer at a time so the cost and payoff remain obvious.

Publish only repeatable results

Update docs, charts, or public claims only after the same workload survives the same validation path more than once.

Execution layers

From top to bottom:

  • Application Tasks: your bounded business logic, such as sampling, alarms, telemetry formatting, and control loops.
  • Optional Modules: transport helpers such as the HTTP pump, MQTT pump, WiFi maintainer, and transport metrics.
  • ZeroKernel Core: scheduler, watchdog, routing, command queue, state model, panic path, timing reports.
  • Adapters and HAL: clock sources, idle hints, board glue, watchdog bridges.
  • Microcontroller: ESP8266, ESP32, RP2040, STM32, or another supported target family.

What belongs in core

  • Anything every firmware instance depends on: task cadence, bounded routing, supervision, fault escalation.
  • Anything that must remain portable and cheap even on small boards.
  • Anything that should not silently disappear when an optional service module is removed.

What belongs in modules

  • Library-specific orchestration such as HTTP or MQTT pacing.
  • State that only makes sense when transport or diagnostics are actually enabled.
  • Workflows that should be opt-in so unused builds do not pay extra footprint.

Architecture FAQ

Why not put network behavior directly in the core?

Because every build would pay for it. The design goal is no meaningful cost unless a module is used.

Can modules become mandatory later?

Only if they solve a truly universal runtime problem. Otherwise they should stay optional.

What is the safest way to validate this layering on real hardware?

Start from the leanest profile that still matches the topic, run the narrowest compare script for this behavior, and only then move to heavier mixed workloads. Do not jump straight to a fully loaded build if the base timing is not yet proven.