pub struct CoordinatorCtx<'a, S, W> { /* private fields */ }
Programmer-facing coordinator context.
The coordinator is the only role that may perform authority-bearing hostcalls (FBMU, FBBU, leases, services, logging). Workers may compute on shared state and their own scratch, but must not call into those host modules.
Implementations

impl<'a, S, W> CoordinatorCtx<'a, S, W>
pub fn worker_count(&self) -> u16
Number of worker lanes that will run.
Target-specific. Bytes available in the shared region (host mock only; the wasm32 guest exposes length only through SharedRegion::len_bytes).
pub fn scratch_bytes_per_worker(&self) -> u32
Target-specific. Bytes available in each worker’s scratch region (host mock only).
Read access to shared state.
pub fn scratch(&self, worker_index: u32) -> &W
Read-only access to a specific worker’s scratch slot.
Safety / discipline
Only call this AFTER a barrier that the worker has reached. Reading
a worker’s scratch before that worker has finished writing to it is
a race the SDK cannot detect. Typical pattern: workers write into
their scratch slots and then call WorkerCtx::barrier; the
coordinator calls its own barrier and only then reads
coord.scratch(i) for each worker.
Worker scratch is NOT a general shared-memory channel between
workers. For ongoing communication during a phase, use the Shared
region with atomics. Worker scratch is for results that get
collected by the coordinator at well-defined phase boundaries.
Takes u32 to match the wasm32 guest signature so the same
source compiles unchanged against both targets.
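A minimal host-mock sketch of that collect-at-a-boundary pattern, using parallel_for_workers as the phase boundary. The worker-side accessors (w.index(), w.scratch_mut()) and the choice of W = u64 are illustrative assumptions, not API documented on this page:

```rust
// Sketch only: assumes W = u64 plus hypothetical WorkerCtx accessors
// `index()` and `scratch_mut()`.
coord.parallel_for_workers(|w| {
    // Each data worker writes its partial result into its own scratch slot.
    *w.scratch_mut() = partial_result(w.index()); // hypothetical helper
    Ok(())
})?;
// parallel_for_workers has already joined every data worker (the implicit
// barrier), so reading their scratch slots here is race-free.
let mut total = 0u64;
for i in 0..coord.data_worker_count() {
    total += *coord.data_scratch(i);
}
```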
pub fn scratch_mut(&mut self, worker_index: u32) -> &mut W
Target-specific. Mutable access to a single worker’s scratch
(coordinator-only). The host mock takes a worker index because
the coordinator owns &mut references to every lane’s scratch
slot; the wasm32 guest’s CoordinatorCtx::scratch_mut takes no
arguments and returns worker 0’s scratch slot only. Portable
source that needs cross-lane mutable scratch access must
#[cfg]-gate this call.
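A sketch of the #[cfg] gate this calls for, assuming a hypothetical seeding phase and W = u64; the lane bounds follow the coordinator-lane model documented on this page:

```rust
// Host mock: the coordinator may seed any lane's scratch slot by index.
#[cfg(not(target_arch = "wasm32"))]
for i in 1..u32::from(coord.worker_count()) {
    *coord.scratch_mut(i) = seed_for(i); // hypothetical helper
}
// wasm32 guest: scratch_mut() takes no index and reaches worker 0's slot only.
#[cfg(target_arch = "wasm32")]
{
    *coord.scratch_mut() = seed_for(0);
}
```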
pub fn data_worker_count(&self) -> u32
Number of data workers (excluding the coordinator lane).
Equal to worker_count() - 1. Mirrors the wasm32 guest helper
of the same name; programmer code calling this compiles and runs
identically on both targets.
pub fn data_scratch(&self, i: u32) -> &W
Read-only access to a specific data worker’s scratch slot.
i is the data worker index in 0..data_worker_count().
Internally this maps to wasm worker index i + 1 (skipping the
coordinator’s own scratch slot at wasm index 0, which is unused
under the coordinator-lane model).
Same barrier discipline as CoordinatorCtx::scratch: only call
after a barrier the target data worker has reached.
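The index mapping, spelled out as a sketch (assuming both accessors hand out references into the same backing slots):

```rust
// For every data worker i, data_scratch(i) addresses the slot that
// scratch(i + 1) does; wasm index 0 is the coordinator's own, unused slot.
for i in 0..coord.data_worker_count() {
    debug_assert!(std::ptr::eq(coord.data_scratch(i), coord.scratch(i + 1)));
}
```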
pub fn cancel(&self)
Target-specific. Mark the tasklet as cancelled. Host mock
only — on wasm32 the runtime drives cancellation via
fb_tasklet_cancelled() and the guest has no cancel() lever.
Workers will observe this on their next
WorkerCtx::cancelled check or barrier wait.
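A host-mock sketch of that flow; beyond the two cancellation points named above, the worker-side shapes are assumptions:

```rust
// Host mock only: commit cancellation. Data workers unwind at their next
// cancellation point instead of computing to completion.
coord.cancel();
// Worker side, inside the phase body (shapes are assumptions):
//   if w.cancelled() { return Ok(()); }  // explicit poll in a compute loop
//   w.barrier()?;                        // or: barrier wait yields Err(Cancelled)
```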
pub fn barrier(&self) -> Result<(), Cancelled>
Behavior diverges across targets. See the “Source portability”
section of docs/grafos/shared-memory-tasklet-programming-model.md.
On the host mock this call:
- Returns Err(Cancelled) if cancellation has already been committed at the call site.
- Does NOT actually block on anything. The coordinator lane is not a participant in the std::sync::Barrier used inside CoordinatorCtx::parallel_for_workers (that barrier sizes to the data-worker count only).
On wasm32 ([super::guest::CoordinatorCtx::barrier]) this
same call IS a real cross-lane barrier with the data workers,
because wasm32 worker 0 re-enters tasklet_run alongside
every other worker and all lanes rendezvous at
fb_barrier_wait().
The recommended portable pattern: use
CoordinatorCtx::parallel_for_workers as the phase boundary
on host (which already performs the join), and use
coord.barrier()? between phases on wasm32. A single source
body gated with #[cfg(target_arch = "wasm32")] is the honest
tool here — the SDK does not paper this over.
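A sketch of that split, with hypothetical phase bodies standing in for real work:

```rust
#[cfg(not(target_arch = "wasm32"))]
{
    // Host mock: each parallel_for_workers call already joins the data
    // workers, so its return IS the phase boundary.
    coord.parallel_for_workers(phase_one)?;
    coord.parallel_for_workers(phase_two)?;
}
#[cfg(target_arch = "wasm32")]
{
    // wasm32: the coordinator lane does its share of each phase, then
    // rendezvous with the data workers at a real barrier.
    phase_one_coordinator_part(coord);
    coord.barrier()?;
    phase_two_coordinator_part(coord);
    coord.barrier()?;
}
```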
Phase 48.14 closes Phase 48.3 follow-up #18 (cancellable-barrier upgrade) as WONTFIX: the divergence is documented as an intentional target-specific behavior, not slated for semantic unification.
pub fn set_output(&mut self, bytes: &[u8])
Write the coordinator’s output bytes. Mirrors the wasm32 guest
CoordinatorCtx::set_output: programmer calls this to emit the
tasklet output and then returns Ok(()) from the coordinator
closure.
On wasm32 the runtime copies bytes into the host-managed output
region and the coordinator’s tasklet_run returns the number of
bytes written. Here on the host mock, launch collects the
recorded output and returns it inside SharedTaskletResult.
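A minimal sketch of the emit-and-return shape (the u64 result and little-endian encoding are illustrative choices):

```rust
// End of the coordinator closure: emit the result bytes, then return Ok(()).
let total: u64 = 12_345; // stands in for a real reduction result
coord.set_output(&total.to_le_bytes());
Ok(())
```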
pub fn fuel_checkpoint(&mut self, amount: u64) -> Result<(), FuelExhausted>
Cooperatively reconcile fuel with the lease-wide shared pool.
Returns Ok(()) on a successful debit. Returns
Err(FuelExhausted) when the shared pool is exhausted, in which
case the coordinator MUST stop computing immediately — there is
no recovery path; a subsequent fuel-consuming instruction will
trap fail-closed.
V2 (shared-pool) lanes MUST call this at safe points within any compute loop that may consume more fuel than the initial precharge. On V1 (per-worker-slice) launches this method always returns Err(FuelExhausted) because no shared pool is installed; V1 source should not call it.
amount == 0 is a silent no-op returning Ok(()).
Mirrors [super::guest::CoordinatorCtx::fuel_checkpoint] —
source compiled for either target observes identical
success/failure semantics.
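A sketch of the V2 checkpointed loop, assuming a chunked slice workload and a hypothetical per-chunk cost estimate:

```rust
const CHUNK: usize = 4_096; // illustrative safe-point granularity
for chunk in items.chunks(CHUNK) {
    // Debit an estimate before computing the chunk. On Err(FuelExhausted)
    // the `?` stops the loop at once; continuing would trap fail-closed.
    coord.fuel_checkpoint(fuel_cost_estimate(chunk.len()))?;
    process_chunk(chunk); // hypothetical work
}
```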
pub fn parallel_for_workers<F>(&mut self, body: F) -> Result<(), TaskletError>
Target-specific. Run a worker phase: invokes body once per data worker lane
in parallel, each with its own WorkerCtx. Returns when every
data worker has completed (the implicit tasklet-wide barrier).
Phase 48.3 — runs the body on data worker lanes
1..worker_count(). Worker 0 is the coordinator lane and does
not run the worker closure on either host mock or wasm32
targets — this matches the wasm32 SDK contract in
[super::guest::run_shared_memory_tasklet]. The number of body
invocations is therefore data_worker_count() == worker_count() - 1.
body may compute on shared state (via interior mutability such as
atomics) and on its own scratch. It may NOT perform authority-bearing
hostcalls — those are coordinator-only by contract.
This method is host-mock-only. The wasm32 guest drives workers by
having the runtime call tasklet_run once per lane; a program
writing source for both targets must #[cfg]-gate the outer
entry-point structure (see “Cross-target source patterns” in
the build guide). The worker closure body can still be shared
verbatim between the two entry points.
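A sketch of a body shared verbatim between the two entry points; the shared-state accessor w.shared(), the AtomicU64 field, and the scratch assignment are illustrative assumptions:

```rust
use std::sync::atomic::Ordering;

#[cfg(not(target_arch = "wasm32"))]
coord.parallel_for_workers(|w| {
    // Shared-state work through interior mutability (field is illustrative):
    w.shared().done_counter.fetch_add(1, Ordering::Relaxed);
    // Lane-local work in this worker's own scratch slot (W = u64 assumed):
    *w.scratch_mut() = 1;
    Ok(())
})?;
// The same closure body can be pasted verbatim into the wasm32 tasklet_run
// entry point; only the surrounding structure is #[cfg]-gated.
```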