Run a Worker
Workers are the supply side of the Lightchain AI network. A worker runs whitelisted models via Ollama alongside the Worker Sidecar, serves encrypted inference jobs, and earns fees for every completed job. This guide walks through the end-to-end lifecycle — from installing the sidecar to registering on-chain and maintaining uptime.
Worker operations are still evolving as the network approaches mainnet. Minimum stake amounts and a few contract addresses are still being finalized — the placeholders below will be filled in as the flow stabilizes.
Network reference
Use these values everywhere the worker tools, scripts, or wallets ask for an RPC endpoint and chain ID:
- RPC endpoint: https://rpc.testnet.lightchain.ai/
- Chain ID: 8200

1. Prerequisites
Before registering as a worker, make sure you have:
- Hardware capable of running the models you plan to serve. Requirements vary per model; see the Model Governance directory for the current whitelisted set and their operating profiles.
- Ollama installed and reachable locally, with the target models pulled.
- LCAI to stake. Each whitelisted model has its own minimum stake defined on-chain in AIConfig — check the Model Governance page for current values. (Placeholder — final per-model minimums TBD.)
- An RPC endpoint for the Lightchain AI network — use https://rpc.testnet.lightchain.ai/ (chain ID 8200).
- A self-custody wallet that holds your LCAI and will be registered as the worker address.
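Before moving on, it can help to confirm the RPC endpoint and your wallet with Foundry's cast (assuming you have Foundry installed and that LCAI is the chain's native token; if LCAI is an ERC-20, query the token contract instead):

```shell
# Should print 8200 if the endpoint is healthy
cast chain-id --rpc-url https://rpc.testnet.lightchain.ai/

# Check your worker wallet's native balance (returned in wei)
cast balance <your-worker-address> --rpc-url https://rpc.testnet.lightchain.ai/
```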
Install Ollama and pull a model
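The exact commands were lost from this page; on Linux, a typical install and model pull looks like the following (llama3 is an example only — substitute a model from the current whitelist):

```shell
# Install Ollama (Linux convenience script; see ollama.com for macOS/Windows)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model you intend to serve
ollama pull llama3

# Confirm the local Ollama API is reachable (default port 11434)
curl http://localhost:11434/api/tags
```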
Create a wallet with cast if you do not already have one you want to use:
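With Foundry installed, generating a fresh keypair is one command:

```shell
# Prints a new address and private key — store the key securely
cast wallet new
```

Fund the new address with LCAI before registering; it will be your on-chain worker address.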
2. Install the Worker Tools
The worker stack ships as two binaries built from the lightchain-worker repository:
- lightchain-worker — CLI used for keygen, registration, and status checks.
- lightchain-worker-sidecar — long-running service that pulls encrypted prompts, drives Ollama, and returns encrypted responses.
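The install commands were lost from this page. A from-source build might look like the sketch below — the repository URL and the Rust/cargo toolchain are assumptions, so check the lightchain-worker README for the authoritative steps:

```shell
# Clone and build both binaries (repo URL and build system are assumptions)
git clone <lightchain-worker repository URL>
cd lightchain-worker
cargo build --release   # expected to produce lightchain-worker and lightchain-worker-sidecar
```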
3. Prepare Your Worker
Point the CLI at your keystore, set the password so it can unlock the key, and generate the worker's encryption keypair. Every worker advertises an encryption public key on-chain so users can encrypt prompts for it — keygen produces that pair locally.
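A sketch of this step — the environment-variable names are assumptions (run lightchain-worker --help for the real interface); only the keygen subcommand is documented above:

```shell
# Point the CLI at your keystore and let it unlock the key
# (variable names are assumptions, not the confirmed interface)
export WORKER_KEYSTORE=~/.lightchain/keystore
export WORKER_KEYSTORE_PASSWORD=<your password>

# Generate the worker's encryption keypair locally
lightchain-worker keygen
```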
4. Register on the Network
Declare the models you plan to serve and submit registration through WorkerRegistry. The CLI wraps the on-chain call for you — it attaches your stake and records the model IDs your worker commits to serving.
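A hedged sketch of the registration call — the subcommand and flag names are assumptions based on the description above, so consult the CLI help for the real ones:

```shell
# Register on WorkerRegistry, attaching stake and the model IDs you will serve
# (subcommand and flags are assumptions)
lightchain-worker register \
  --rpc-url https://rpc.testnet.lightchain.ai/ \
  --stake <amount> \
  --models <model-id>[,<model-id>...]
```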
After registering, you can top up your stake at any time (emits StakeTopUp) and add or remove models via ModelAdded / ModelRemoved.
You can only serve models that are currently whitelisted on AIConfig. Attempting to register for a delisted or non-existent model will revert.
5. Go Live and Monitor
Start the sidecar only after registration succeeds and your Ollama endpoint is reachable. Check status from the CLI to confirm the worker is healthy and visible to the dispatcher.
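The binary names come from the repository description above; any additional flags they take are not shown here, so treat this as a minimal sketch:

```shell
# Start the long-running sidecar (only after registration succeeds
# and Ollama is reachable)
lightchain-worker-sidecar

# From another shell, confirm the worker is healthy and visible
lightchain-worker status
```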
A healthy worker:
- responds to liveness checks used by the dispatcher
- acknowledges (JobAcknowledged) and completes (JobCompleted) assigned jobs within their deadline
- maintains an up-to-date online status so the dispatcher can route work to you
Jobs that miss their deadline emit JobTimedOut.
6. Slashing and Jailing
Workers are held accountable for correctness and availability. See AIVM EL Architecture — Verification for the full model; in short:
- Timeouts (missed job deadlines) → slashed stake and a temporary jail
- Lost disputes (semantic similarity check indicates a bad response) → slashed stake
- Repeat offenses → escalating slashing and eventual long-term jailing
Jailed workers are released (WorkerUnjailed) once the penalty window passes, assuming stake remains above the minimum.
7. Deregister and Withdraw
When you are ready to stop serving:
- Finish or time out any outstanding jobs.
- Call WorkerRegistry.deregister(...) (emits WorkerDeregistered).
- Withdraw your remaining stake (WorkerWithdrawal).
Related Guides
- AIVM EL Architecture — how inference, verification, and slashing fit together.
- Worker API — the indexer and data model for worker/job activity.
- Model Governance — how models get whitelisted and priced.
- Run a Node — consensus-layer node operation (separate role from a worker).