Docs

Profiles

Use the smallest profile that still fits the workload. ZeroKernel is designed so you only pay for what you actually enable.

Why this page matters

This page explains how profiles fit into the wider ZeroKernel execution model, what problem they solve, and what trade-off you are actually accepting when you rely on them in production firmware. The goal is not to treat profile selection as an isolated build flag, but to understand where it sits inside bounded scheduling, queue discipline, and fault visibility.

Read this topic as an operational contract. Start from the smallest working path, wire it into a lean profile first, and only expand into richer routing, diagnostics, or transport state after you can prove that the timing outcome is still worth the extra flash and RAM. That mindset is what keeps ZeroKernel useful on small boards instead of turning it into another bloated abstraction.

The safest pattern is always the same: define the runtime boundary, keep the hot path short, measure the effect with compare scripts, and only then scale complexity. The examples below are not filler; they show the smallest repeatable patterns you can lift into real firmware when you need clean integration instead of ad-hoc loops.

Three practical patterns

Full validation sequence

Use this when you need a credible regression pass before publishing numbers or changing docs.

Shell
    bash scripts/run_desktop_tests.sh
    bash scripts/run_desktop_benchmark.sh --enforce-performance
    bash scripts/run_resource_matrix.sh --enforce-budget
  
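The three-step sequence above only gives a credible pass if a failure in an early step actually stops the run. A minimal sketch of that gating pattern is below; the `run_step` helper is ours, not a ZeroKernel API, and the `true` placeholders stand in for the real scripts shown above.

```shell
#!/usr/bin/env bash
# Gate each validation stage: stop at the first failure so a later
# "pass" can never mask an earlier regression. run_step is a local
# helper, not part of ZeroKernel.
set -euo pipefail

run_step() {
  local label="$1"; shift
  echo "== ${label}"
  if "$@"; then
    echo "ok: ${label}"
  else
    echo "FAIL: ${label}" >&2
    exit 1
  fi
}

# Replace the placeholder 'true' commands with the real scripts, e.g.:
#   run_step "desktop tests" bash scripts/run_desktop_tests.sh
run_step "desktop tests"     true
run_step "desktop benchmark" true
run_step "resource matrix"   true
echo "validation pass complete"
```

The point of the wrapper is that the final "complete" line is only ever printed when every stage before it succeeded, so a log ending without it is unambiguous.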
Hardware compare pass

Run a focused hardware compare instead of guessing whether a change helped or hurt.

Shell
    bash scripts/run_esp32_modules_compare.sh /dev/ttyUSB1
    bash scripts/run_esp32_real_project_demo.sh /dev/ttyUSB1
  
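A hardware compare that dies halfway through a flaky cable produces numbers that look like a regression. One way to avoid that is to fail fast before launching the run; the `require_port` helper below is a local sketch of that check, not a ZeroKernel API.

```shell
#!/usr/bin/env bash
# Fail fast if the serial device is absent before starting a long
# hardware compare. require_port is a local helper, not a ZeroKernel API.
set -euo pipefail

require_port() {
  local port="$1"
  if [ ! -e "${port}" ]; then
    echo "missing serial device: ${port}" >&2
    return 1
  fi
  echo "using ${port}"
}

# Typical use before a compare pass, e.g.:
#   require_port /dev/ttyUSB1
#   bash scripts/run_esp32_modules_compare.sh /dev/ttyUSB1
```
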
Lean build guard

Lock the build into the intended profile before treating a benchmark or compare as authoritative.

Text
    -DZEROKERNEL_PROFILE_LEAN_NET
    -DZEROKERNEL_ENABLE_DIAGNOSTICS=0
    -DZEROKERNEL_ENABLE_LEGACY_LABEL_API=0
  
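A benchmark on a build that silently re-enabled diagnostics is not a lean benchmark. A sketch of a guard that checks the defines above is shown below; the flags-file approach and the `check_lean_flags` name are our assumptions, so point it at wherever your build system records its defines.

```shell
#!/usr/bin/env bash
# Refuse to treat a build as "lean" unless the expected defines are
# present and diagnostics are explicitly off. The flags-file layout
# is an assumption; adapt it to your build system's output.
set -euo pipefail

check_lean_flags() {
  local flags_file="$1"
  grep -q -- '-DZEROKERNEL_PROFILE_LEAN_NET' "${flags_file}" || {
    echo "lean profile define missing" >&2; return 1; }
  grep -q -- '-DZEROKERNEL_ENABLE_DIAGNOSTICS=0' "${flags_file}" || {
    echo "diagnostics not disabled" >&2; return 1; }
  echo "lean guard ok"
}
```

Run the guard as a pre-step in the same script that launches the benchmark, so a misconfigured build aborts before it produces numbers.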

What to verify while you use it

  • Validate timing before you validate aesthetics. A cleaner API is not a win if fast-path misses rise.
  • Prefer the smallest profile that still matches the workload, then add optional modules only when the measured payoff is obvious.
  • Keep callbacks and transport steps bounded so watchdog, panic flow, and queue limits remain meaningful.

Common mistakes that make results misleading

  • Do not copy a demo pattern into production firmware without measuring it on the real board and real build profile you plan to ship.
  • Do not read success counters without reading queue depth, timing, and workload label next to them.
  • Do not enable heavier diagnostics and compatibility flags in a lean target just because the defaults looked convenient.

Recommended working sequence

Start from the smallest valid path

Boot the runtime, register the minimum useful task set, and prove that the baseline timing is clean before adding optional layers.

Add one layer, then measure it

Introduce routing, diagnostics, or transport one layer at a time so the cost and payoff remain obvious.

Publish only repeatable results

Update docs, charts, or public claims only after the same workload survives the same validation path more than once.
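
One mechanical way to enforce the "more than once" rule is to run the same workload twice and refuse to report a result unless both runs agree. The sketch below does a strict byte-for-byte comparison of the summaries; the `repeatable` helper is ours, and in practice you would compare a stable summary line rather than raw benchmark output.

```shell
#!/usr/bin/env bash
# Treat a result as publishable only when the same workload produces
# the same summary twice in a row. 'repeatable' is a local helper,
# not a ZeroKernel API.
set -euo pipefail

repeatable() {
  local a b
  a="$("$@")"
  b="$("$@")"
  if [ "${a}" = "${b}" ]; then
    echo "repeatable: ${a}"
  else
    echo "NOT repeatable: '${a}' vs '${b}'" >&2
    return 1
  fi
}

# e.g. repeatable bash scripts/run_desktop_benchmark.sh --enforce-performance
```
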

Profile matrix

Profile        Purpose
POWER_SAVE     Small always-on nodes, low overhead, key-first routing
LEAN_NET       Network-capable nodes with tighter queue and metrics defaults
NETWORK_NODE   Feature-rich network builds with capability support
EXTENDED       Larger firmware with diagnostics and richer observability
DIAGNOSTIC     Bring-up, deep debugging, maximum tracing

How to choose

  • POWER_SAVE: smallest always-on nodes, simple sensing, minimal routing.
  • LEAN_NET: network workloads where HTTP/MQTT behavior matters but footprint still needs discipline.
  • NETWORK_NODE: richer network orchestration, capabilities, and more supervision.
  • EXTENDED: larger firmware where diagnostics and richer observability justify the footprint.
  • DIAGNOSTIC: bring-up and investigation, not the first choice for tight production builds.

Recommended rollout

  1. Begin on the leanest profile that compiles and passes your compare script.
  2. Only raise the profile after a measured reason: queue pressure, visibility needs, or module requirements.
  3. Document the chosen profile in firmware notes so resource changes stay explainable later.

Profiles FAQ

Should I choose a larger profile just to be safe?

No. Start lean and only scale up when you have a measured reason.

Can different firmware targets use different profiles?

Yes. That is expected and often the correct way to keep resource usage disciplined.

What is the safest way to validate this guidance on real hardware?

Start from the leanest profile that still matches the workload, run the narrowest compare script for this behavior, and only then move to heavier mixed workloads. Do not jump straight to a fully loaded build until the base timing is proven.