Coming soon. The Jobs dashboard UI is not yet exposed in the sidebar. The API described on this page is live and you can submit jobs programmatically today — see the API overview for authentication and endpoint details.
A job is a single execution of a Python script on a specific box. Jobs are Stout’s unit of reproducible, auditable work: every run has an owner, a timestamp, a full log transcript, and any artifacts it produced.

Submitting a job

From Jobs → New Job:
  1. Pick a target box. Only boxes you have access to appear in the picker.
  2. Paste (or upload) a Python script. You can include supporting files and environment variables.
  3. Optionally set a timeout (default 3600 seconds) and a resource lock (lock the box for the duration of the job so nothing else can dispatch to it).
  4. Click Run.
Stout uploads the script to the box’s Python service on port 5000, spawns it inside the Lager container, and begins streaming output. A minimal script:
```python
from lager import Net, NetType

supply = Net.get("supply1", type=NetType.PowerSupply)
supply.set_voltage(3.3)
supply.enable()
print("Supply enabled at 3.3 V")
```
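The same submission can be made programmatically, as the banner above notes. As a sketch of what the request body might look like, here is a helper that assembles a job payload from the inputs the New Job form collects; the field names (`box_id`, `script`, `timeout_seconds`, `lock_box`) are illustrative assumptions, not the confirmed schema, so check the API overview for the real one:

```python
import json


def build_job_payload(box_id: str, script: str,
                      timeout_s: int = 3600, lock_box: bool = False) -> str:
    """Assemble a JSON body for a job submission request.

    Field names are illustrative; the API overview documents the
    actual schema and the authenticated endpoint to POST it to.
    """
    return json.dumps({
        "box_id": box_id,           # target box, as picked in step 1
        "script": script,           # the Python script text from step 2
        "timeout_seconds": timeout_s,  # defaults to 3600, matching the UI
        "lock_box": lock_box,       # resource lock for the job's duration
    })


payload = build_job_payload("box-lab-01", 'print("hello")')
print(json.loads(payload)["timeout_seconds"])  # 3600
```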

Job lifecycle

Every job moves through a defined set of states:
| State | Meaning |
| --- | --- |
| queued | Accepted by the control plane, waiting for a worker to pick it up. |
| waiting_for_box | A worker has it, but the target box is busy or offline. |
| dispatching | Currently uploading the script to the box. |
| running | Box is actively executing. |
| uploading_artifacts | Script exited; artifacts are being uploaded to object storage. |
| completed | Final success state. |
| failed | Script exited non-zero, or an infrastructure error occurred. |
| cancelled | A user clicked Cancel, or the run was superseded. |
| timed_out | Exceeded its configured timeout. |
Transitions are one-way — a completed job never re-enters running.
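The one-way lifecycle can be sketched as a small state machine. The set of states comes from the table above; the specific edges are assumptions inferred from the state descriptions, not a confirmed transition graph:

```python
from enum import Enum


class JobState(str, Enum):
    QUEUED = "queued"
    WAITING_FOR_BOX = "waiting_for_box"
    DISPATCHING = "dispatching"
    RUNNING = "running"
    UPLOADING_ARTIFACTS = "uploading_artifacts"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELLED = "cancelled"
    TIMED_OUT = "timed_out"


# Assumed edges; terminal states have no outgoing transitions,
# which is what makes the lifecycle one-way.
TRANSITIONS = {
    JobState.QUEUED: {JobState.WAITING_FOR_BOX, JobState.DISPATCHING, JobState.CANCELLED},
    JobState.WAITING_FOR_BOX: {JobState.DISPATCHING, JobState.CANCELLED},
    JobState.DISPATCHING: {JobState.RUNNING, JobState.FAILED, JobState.CANCELLED},
    JobState.RUNNING: {JobState.UPLOADING_ARTIFACTS, JobState.FAILED,
                       JobState.CANCELLED, JobState.TIMED_OUT},
    JobState.UPLOADING_ARTIFACTS: {JobState.COMPLETED, JobState.FAILED},
    JobState.COMPLETED: set(),
    JobState.FAILED: set(),
    JobState.CANCELLED: set(),
    JobState.TIMED_OUT: set(),
}


def can_transition(src: JobState, dst: JobState) -> bool:
    return dst in TRANSITIONS[src]


print(can_transition(JobState.COMPLETED, JobState.RUNNING))  # False
```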

Watching a job live

The job detail page streams stdout and stderr via server-sent events. You’ll see every line as it appears, interleaved by timestamp. Lines longer than 1 MB are truncated with a notice. Logs are persisted in the database in sequenced chunks, so refreshing the page never loses output — you just replay from the beginning.
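A client consuming this stream only needs a minimal server-sent-events parser: accumulate `data:` lines and emit an event at each blank line. The sketch below parses a pre-recorded sample rather than opening a live connection, and the log-line format inside the events is an illustrative assumption:

```python
def iter_sse_events(lines):
    """Minimal SSE parser: collect `data:` lines, yield on blank line.

    `lines` is an iterable of decoded text lines from the event stream.
    """
    data = []
    for line in lines:
        if line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and data:
            # Blank line terminates the event; multi-line data joins with \n.
            yield "\n".join(data)
            data = []


# Sample stream; the timestamp/stream-name layout is hypothetical.
sample = [
    "data: [12:00:01] stdout: Supply enabled at 3.3 V",
    "",
    "data: [12:00:02] stderr: warning: supply current near limit",
    "",
]
print(list(iter_sse_events(sample))[0])
```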

Artifacts

If your script writes files to a well-known path (or uses the Lager Python API’s artifact helpers), Stout packages them during the uploading_artifacts phase:
  1. Control plane issues a presigned S3 URL.
  2. Box streams the file up directly (never through the control plane).
  3. Control plane receives a confirmation with the file’s SHA-256 checksum.
Artifacts appear as download links on the job detail page and are retained according to your org’s storage policy.
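Step 3's checksum can be produced on the box while the file streams to the presigned URL, so no second pass over the data is needed. A sketch of that incremental hashing, with the chunked input standing in for a file being streamed:

```python
import hashlib


def sha256_of_stream(chunks) -> str:
    """Hash an artifact incrementally, chunk by chunk, the way a box
    could while streaming it to a presigned S3 URL.

    Returns the hex digest the control plane would receive in the
    upload confirmation.
    """
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()


digest = sha256_of_stream([b"measurement,", b"3.3"])
print(digest == hashlib.sha256(b"measurement,3.3").hexdigest())  # True
```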

Retries and timeouts

Stout does not auto-retry failed jobs by default — a hardware test that fails is usually signaling something real, not a transient glitch. To retry manually, click Run again on a completed run; Stout creates a new job with the same script and settings. If you want automatic retries, put the job inside a workflow and configure per-step retry policy there.
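What a workflow's per-step retry policy does can be sketched client-side: resubmit the same script and settings until a run completes or attempts run out. Both callables here are stand-ins, not real Stout APIs; `submit()` would create a job and `get_state()` would poll it to a final state:

```python
def run_with_retries(submit, get_state, max_attempts=3):
    """Resubmit a job until it completes or attempts are exhausted.

    submit() returns a job id; get_state(job_id) returns the job's
    final state as a string. Both are hypothetical stand-ins for
    the real submission and polling calls.
    """
    for attempt in range(1, max_attempts + 1):
        job_id = submit()
        if get_state(job_id) == "completed":
            return job_id, attempt
    return None, max_attempts


# Simulated outcomes: the first submission fails, the second completes.
outcomes = iter(["failed", "completed"])
job_id, attempts = run_with_retries(lambda: "job-1", lambda _: next(outcomes))
print(attempts)  # 2
```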

Debugging a failed job

  1. Read the last 50 lines of stderr — most failures are straightforward Python errors.
  2. Check the Box column. If several recent jobs from different users failed on the same box, it’s probably a hardware issue, not your code.
  3. Open the audit log and filter by the failing job’s ID to see any infrastructure events (lock acquisitions, maintenance-window rejections) around the same timestamp.
  4. If the box’s heartbeat stopped during the job, the job is marked failed with a “box heartbeat lost” reason. Reach the box over SSH to inspect the container.
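Step 1 is easy to script against a saved transcript. Assuming log entries can be represented as (stream, line) pairs, which is an illustration rather than the real log schema, a bounded deque pulls the tail of stderr without holding the whole transcript in memory:

```python
from collections import deque


def last_stderr_lines(transcript, n=50):
    """Return the final n stderr lines from a job's log transcript.

    `transcript` is an iterable of (stream, line) pairs; this shape
    is assumed for illustration, not the actual stored log format.
    """
    tail = deque(maxlen=n)  # keeps only the most recent n matches
    for stream, line in transcript:
        if stream == "stderr":
            tail.append(line)
    return list(tail)


log = [
    ("stdout", "starting test"),
    ("stderr", "Traceback (most recent call last):"),
    ("stderr", "ValueError: supply reading out of range"),
]
print(last_stderr_lines(log)[-1])
```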