distributed-counter

A counter shared across cells. Two tasklets running on different machines hold leases on the same memory region and fetch_add against the same AtomicU64. Lease expiry is the synchronization point: once both leases expire and are renewed, each writer sees the other's increments.
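The no-lost-updates property is just what AtomicU64::fetch_add guarantees. A minimal single-machine sketch, using two threads as stand-ins for the two tasklets and an Arc'd atomic as a stand-in for the leased fabric memory (the lease mechanics themselves are not modeled here):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

fn main() {
    // One shared counter, two concurrent writers.
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // Read-modify-write as one atomic step: no increment is lost.
                    c.fetch_add(1, Ordering::SeqCst);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // Both writers' increments are visible: 2 writers x 1_000 adds.
    assert_eq!(counter.load(Ordering::SeqCst), 2_000);
}
```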

Source

cookbook/distributed-counter/ in the source tree.

The recipe extracts the deterministic core: take a { "current": N } JSON document, return { "previous": N, "new_value": N+1 }. In production you’d replace the local checked addition with AtomicU64::fetch_add against memory obtained from FabricBuffer::as_atomic_u64(). The wire shape is identical.

use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
pub struct CounterInput { pub current: u64 }

#[derive(Serialize)]
pub struct CounterOutput { pub previous: u64, pub new_value: u64 }

pub fn compute(input: &[u8]) -> Result<CounterOutput, &'static str> {
    let parsed: CounterInput = serde_json::from_slice(input).map_err(|_| "invalid_input")?;
    let new_value = parsed.current.checked_add(1).ok_or("counter_overflow")?;
    Ok(CounterOutput { previous: parsed.current, new_value })
}
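The arithmetic core can be exercised without the JSON layer. A minimal sketch using a hypothetical helper, bump(), that mirrors compute()'s checked increment, covering both the happy path and the overflow error:

```rust
fn bump(current: u64) -> Result<(u64, u64), &'static str> {
    // Mirrors compute(): previous value plus a checked increment.
    let new_value = current.checked_add(1).ok_or("counter_overflow")?;
    Ok((current, new_value))
}

fn main() {
    assert_eq!(bump(41), Ok((41, 42)));
    // u64::MAX has no successor, so the typed error surfaces instead of wrapping.
    assert_eq!(bump(u64::MAX), Err("counter_overflow"));
}
```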

What’s interesting

  1. Typed errors. invalid_input and counter_overflow are returned as &'static str error codes rather than Box<dyn Error>. The runtime trap path lets you map any Err(_) to a non-zero return code.
  2. The compute() separation. The host-testable function does no I/O; the wasm32 entry point handles pointer marshaling. Same pattern as hello-tasklet.
  3. Production drop-in. Replace the checked_add(1) with AtomicU64::fetch_add(1, Ordering::SeqCst) against memory obtained from grafos_std::FabricBuffer::as_atomic_u64() and you have a real distributed counter.
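A sketch of that drop-in, assuming as_atomic_u64() hands back a &AtomicU64 (the exact FabricBuffer signature may differ); a local atomic stands in for the fabric memory so the sketch runs on its own:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// In production, `counter` would come from
// grafos_std::FabricBuffer::as_atomic_u64() (assumed shape).
fn increment(counter: &AtomicU64) -> (u64, u64) {
    // fetch_add returns the value before the add, which is exactly
    // the `previous` field of the wire shape.
    let previous = counter.fetch_add(1, Ordering::SeqCst);
    (previous, previous + 1)
}

fn main() {
    let counter = AtomicU64::new(41);
    let (previous, new_value) = increment(&counter);
    println!("previous={previous} new_value={new_value}"); // previous=41 new_value=42
}
```

The (previous, new_value) pair maps one-to-one onto CounterOutput, which is why the wire shape needs no change.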