distributed-counter
A counter shared across cells. Two tasklets running on different machines hold leases on the same memory region and `fetch_add` against the same `AtomicU64`. Lease expiry is the synchronization point: once both leases expire and are renewed, each writer sees the other's increments.
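The lost-update-free property of `fetch_add` can be sketched locally. This is a minimal illustration, not the recipe itself: threads on one machine stand in for tasklets on two machines, and lease mechanics are not modeled.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Spawn `writers` threads that each fetch_add `per_writer` times
// on one shared AtomicU64, then return the final value.
fn run_writers(writers: usize, per_writer: u64) -> u64 {
    let counter = Arc::new(AtomicU64::new(0));
    let handles: Vec<_> = (0..writers)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_writer {
                    c.fetch_add(1, Ordering::SeqCst);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::SeqCst)
}

fn main() {
    // No increment is lost: every writer's updates are visible.
    assert_eq!(run_writers(2, 10_000), 20_000);
}
```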
Source
cookbook/distributed-counter/ in the source tree.
The recipe extracts the deterministic core: take a `{ "current": N }` JSON document, return `{ "previous": N, "new_value": N+1 }`. In production you'd replace the checked addition with `AtomicU64::fetch_add` against memory you got from `FabricBuffer::as_atomic_u64()`. The wire shape is identical.
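The `CounterInput` and `CounterOutput` wire types are implied by the JSON shapes above. A minimal sketch of them and of the deterministic core on already-parsed input; in the real recipe the structs would carry serde `Deserialize`/`Serialize` derives, which are omitted here to keep the sketch dependency-free:

```rust
// Wire types implied by { "current": N } and
// { "previous": N, "new_value": N+1 }.
#[derive(Debug, PartialEq)]
pub struct CounterInput {
    pub current: u64,
}

#[derive(Debug, PartialEq)]
pub struct CounterOutput {
    pub previous: u64,
    pub new_value: u64,
}

// The deterministic core after parsing: increment with overflow
// detection via checked_add. `step` is a hypothetical name for
// this sketch, not the recipe's entry point.
pub fn step(input: &CounterInput) -> Result<CounterOutput, &'static str> {
    let new_value = input.current.checked_add(1).ok_or("counter_overflow")?;
    Ok(CounterOutput { previous: input.current, new_value })
}

fn main() {
    assert_eq!(
        step(&CounterInput { current: 41 }),
        Ok(CounterOutput { previous: 41, new_value: 42 })
    );
    // Saturated counter: checked_add reports overflow as an error.
    assert!(step(&CounterInput { current: u64::MAX }).is_err());
}
```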
```rust
pub fn compute(input: &[u8]) -> Result<CounterOutput, &'static str> {
    let parsed: CounterInput = serde_json::from_slice(input).map_err(|_| "invalid_input")?;
    let new_value = parsed.current.checked_add(1).ok_or("counter_overflow")?;
    Ok(CounterOutput { previous: parsed.current, new_value })
}
```
What’s interesting
- Typed errors. `invalid_input` and `counter_overflow` are returned as typed strings rather than `Box<dyn Error>`. The runtime trap path lets you map any `Err(_)` to a non-zero return code.
- The `compute()` separation. The host-testable function does no I/O; the wasm32 entry point handles pointer marshaling. Same pattern as hello-tasklet.
- Production drop-in. Replace `parsed.current.checked_add(1)` with `AtomicU64::fetch_add(1, Ordering::SeqCst)` against `grafos-std::FabricBuffer::as_atomic_u64()` and you have a real distributed counter.
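A sketch of that drop-in, under the assumption that the fabric memory is exposed as a plain `&AtomicU64` (the `FabricBuffer` API itself is not shown here, and `compute_shared` is a hypothetical name). One semantic difference worth noting: `fetch_add` wraps on overflow, so the `counter_overflow` error from the pure version has no direct atomic equivalent.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Shared-memory variant: in production `counter` would come from
// FabricBuffer::as_atomic_u64(); any &AtomicU64 works for the
// sketch. fetch_add returns the value *before* the increment.
fn compute_shared(counter: &AtomicU64) -> (u64, u64) {
    let previous = counter.fetch_add(1, Ordering::SeqCst);
    // Unlike checked_add, fetch_add wraps silently on overflow.
    (previous, previous.wrapping_add(1))
}

fn main() {
    let c = AtomicU64::new(41);
    assert_eq!(compute_shared(&c), (41, 42));
    assert_eq!(c.load(Ordering::SeqCst), 42);
}
```

The wire shape `(previous, new_value)` is unchanged; only the source of truth moves from a parsed JSON field to fabric memory.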