**Coming soon.** The Jobs dashboard UI is not yet exposed in the sidebar. The API described on this page is live, and you can submit jobs programmatically today; see the API overview for authentication and endpoint details.
## Submitting a job
From Jobs → New Job:

- Pick a target box. Only boxes you have access to appear in the picker.
- Paste (or upload) a Python script. You can include supporting files and environment variables.
- Optionally set a timeout (default 3600 seconds) and a resource lock (lock the box for the duration of the job so nothing else can dispatch to it).
- Click Run.
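The same fields can be sent over the API. A minimal sketch of assembling the request body; the field names (`box_id`, `timeout_seconds`, `lock_box`) and the commented-out endpoint are assumptions here, so check the API overview for the authoritative schema:

```python
import json

def build_job_payload(script: str, box_id: str,
                      timeout: int = 3600, lock_box: bool = False) -> dict:
    """Assemble a job-submission payload.

    Field names are illustrative, not the real schema.
    """
    return {
        "box_id": box_id,            # target box; must be one you have access to
        "script": script,            # the Python script to run
        "timeout_seconds": timeout,  # default matches the UI: 3600 s
        "lock_box": lock_box,        # hold the box for the duration of the job
    }

payload = build_job_payload("print('hello')", box_id="box-42")
body = json.dumps(payload)
# e.g. requests.post(f"{BASE_URL}/jobs", data=body, headers=auth_headers)
```

Supporting files and environment variables would ride along in the same request; the defaults above mirror the UI's.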
## Job lifecycle

Every job moves through a defined set of states:

| State | Meaning |
|---|---|
| queued | Accepted by the control plane, waiting for a worker to pick it up. |
| waiting_for_box | Worker has it, but the target box is busy or offline. |
| dispatching | Currently uploading the script to the box. |
| running | Box is actively executing. |
| uploading_artifacts | Script exited; artifacts are being uploaded to object storage. |
| completed | Final success state. |
| failed | Script exited non-zero, or an infrastructure error occurred. |
| cancelled | A user clicked Cancel, or the run was superseded. |
| timed_out | Exceeded its configured timeout. |
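When polling over the API, it helps to mirror these states client-side and to know which ones are terminal. A small sketch; the state strings come from the table above, everything else is illustrative:

```python
from enum import Enum

class JobState(str, Enum):
    QUEUED = "queued"
    WAITING_FOR_BOX = "waiting_for_box"
    DISPATCHING = "dispatching"
    RUNNING = "running"
    UPLOADING_ARTIFACTS = "uploading_artifacts"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELLED = "cancelled"
    TIMED_OUT = "timed_out"

# The four states a job can never leave.
TERMINAL = {JobState.COMPLETED, JobState.FAILED,
            JobState.CANCELLED, JobState.TIMED_OUT}

def is_done(state: str) -> bool:
    """True once a job has reached a terminal state."""
    return JobState(state) in TERMINAL
```

A polling loop would sleep and re-fetch status until `is_done` returns true, then branch on which terminal state it landed in.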
## Watching a job live

The job detail page streams stdout and stderr via server-sent events. You’ll see every line as it appears, interleaved by timestamp. Lines longer than 1 MB are truncated with a notice. Logs are persisted in the database in sequenced chunks, so refreshing the page never loses output; you just replay from the beginning.

## Artifacts
If your script writes files to a well-known path (or uses the Lager Python API’s artifact helpers), Stout packages them during the uploading_artifacts phase:

- Control plane issues a presigned S3 URL.
- Box streams the file up directly (never through the control plane).
- Control plane receives a confirmation with the file’s SHA-256 checksum.
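The confirmation checksum is just a SHA-256 over the artifact bytes. A sketch of how the box side could compute it chunk by chunk while streaming, so large artifacts never sit in memory (the function name and chunk size are illustrative, not part of the Stout agent):

```python
import hashlib
import io

CHUNK = 1024 * 1024  # hash 1 MB at a time

def sha256_stream(fileobj) -> str:
    """Hash an artifact incrementally as it streams to the presigned URL."""
    h = hashlib.sha256()
    for chunk in iter(lambda: fileobj.read(CHUNK), b""):
        h.update(chunk)
    return h.hexdigest()

digest = sha256_stream(io.BytesIO(b"hello"))
```

Verifying the same digest locally against the downloaded artifact is a quick integrity check.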
## Retries and timeouts

Stout does not auto-retry failed jobs by default; a hardware test that fails is usually signaling something real, not a transient glitch. To retry manually, click Run again on a completed run; Stout creates a new job with the same script and settings. If you want automatic retries, put the job inside a workflow and configure a per-step retry policy there.

## Debugging a failed job
- Read the last 50 lines of stderr — most failures are straightforward Python errors.
- Check the Box column. If several recent jobs from different users failed on the same box, it’s probably a hardware issue, not your code.
- Open the audit log and filter by the failing job’s ID to see any infrastructure events (lock acquisitions, maintenance-window rejections) around the same timestamp.
- If the box’s heartbeat stopped during the job, the job is marked failed with a “box heartbeat lost” reason. SSH into the box to inspect the container.
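The first two checks are easy to script against logs and audit events you have already downloaded. A small sketch with hypothetical helper names; nothing here is part of the Stout API:

```python
def last_stderr_lines(log_text: str, n: int = 50) -> list[str]:
    """Last n lines of a captured stderr stream; the traceback usually lives here."""
    return log_text.splitlines()[-n:]

def events_for_job(audit_events: list[dict], job_id: str) -> list[dict]:
    """Filter audit-log entries by job ID (assumes each entry carries a 'job_id' field)."""
    return [e for e in audit_events if e.get("job_id") == job_id]
```

Pointing `last_stderr_lines` at a downloaded log and `events_for_job` at an exported audit log covers the first and third checks above without opening the UI.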