The Worker Explorer is the clearest product view of Lightchain's decentralized AI network. In the Lightchain whitepaper, the network is designed around Proof of Intelligence (PoI), where useful AI execution is paired with verifiability, economic incentives, and transparent governance. This explorer is where those ideas become visible.
Instead of showing workers as anonymous infrastructure, the explorer shows them as active economic participants: they support specific models, commit stake, process AI jobs, earn rewards, and can be penalized when performance fails. That makes the page useful for operators, users, and anyone trying to understand whether the network is healthy.
Main page
The main page gives a network-wide view of supply. It shows how many workers exist, how many are currently active, what model coverage looks like, and how the worker set is distributed across stake and status. In whitepaper terms, this is the public-facing view of Lightchain's decentralized AI capacity.
Network overview
The hero section summarizes the size and readiness of the network:
Total workers shows the full operator base available on the selected network.
Active workers shows how much of that base is currently available to process AI demand.
Models supported shows how broad the network's execution capability is.
This reflects the whitepaper's view of Lightchain as AI-native infrastructure rather than a traditional chain with AI added on top. Workers are the supply side of that system, and the explorer turns that supply into something measurable at a glance.
The Testnet/Mainnet switch is also important. Testnet is where capacity, reliability, and explorer UX can be observed before activity becomes economically meaningful on mainnet.
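The hero metrics above can be derived directly from a worker list. The sketch below assumes a hypothetical worker record shape; the field names (`address`, `supportedModels`, `status`) are illustrative, not Lightchain's actual API:

```typescript
// Hypothetical worker record; field names are illustrative,
// not taken from the actual Lightchain API.
interface Worker {
  address: string;
  supportedModels: string[];
  status: "active" | "paused" | "failed";
}

// Derive the three hero metrics from a list of workers.
function networkOverview(workers: Worker[]) {
  // Models supported is the union of each worker's model list.
  const models = new Set<string>();
  for (const w of workers) {
    for (const m of w.supportedModels) models.add(m);
  }
  return {
    totalWorkers: workers.length,
    activeWorkers: workers.filter((w) => w.status === "active").length,
    modelsSupported: models.size,
  };
}
```

The key point is that "models supported" is a union across workers, not a sum, so two workers serving the same model count once toward coverage.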
Worker directory
This table is the marketplace view of the worker set. Each row combines identity, capability, economic commitment, and availability:
Worker address identifies the operator on-chain.
Supported models shows which AI workloads the worker can serve.
Stake amount shows how much economic backing the operator has committed.
Status shows whether the worker is currently available, paused, or in a failed state.
This matters because the whitepaper ties network trust to incentives and visibility. A worker is not just compute capacity; it is a staked participant whose behavior affects reliability, routing confidence, and settlement outcomes.
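One way to make that concrete is how the directory might rank rows: availability first, then economic commitment. This is a minimal sketch under assumed field names, not the explorer's real sorting logic:

```typescript
// Hypothetical directory row; stake is kept as bigint to avoid
// precision loss in the token's smallest unit.
interface WorkerRow {
  address: string;
  supportedModels: string[];
  stake: bigint;
  status: "active" | "paused" | "failed";
}

// Rank the directory: active workers first, then by stake descending.
function rankDirectory(rows: WorkerRow[]): WorkerRow[] {
  return [...rows].sort((a, b) => {
    const aActive = a.status === "active";
    const bActive = b.status === "active";
    if (aActive !== bActive) return aActive ? -1 : 1;
    // bigint cannot be subtracted into a number, so compare explicitly.
    return b.stake > a.stake ? 1 : b.stake < a.stake ? -1 : 0;
  });
}
```

Sorting on a copy (`[...rows]`) keeps the original table data untouched, which matters when the same list backs multiple views.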
What you're seeing
The small explainer cards on the main page define the core concepts in plain language:
Workers are the independent operators supplying decentralized AI compute.
Stake is the locked economic commitment that aligns behavior with network reliability.
Status is the fast UX signal for whether a worker is available, temporarily offline, or operationally restricted.
That framing fits the whitepaper's emphasis on transparent AI infrastructure. Lightchain is not only trying to run AI workloads; it is trying to make participation inspectable and understandable.
Worker details page
The worker details page moves from network discovery to operator inspection. It shows how one worker is performing over time, where it is allocating stake, which models it serves, and whether it has attracted penalties or disputes. This is where the explorer turns protocol trust into operator-level evidence.
Worker summary
The top section establishes identity and credibility:
the worker address anchors the operator on-chain
the status badge shows whether it is currently active
First seen and Last seen help users judge continuity
the summary cards compress the key health indicators: total stake, uptime, jobs completed, and total earned
This matches the whitepaper's core idea that decentralized AI work should be both useful and auditable. A worker that remains online, completes jobs consistently, and accumulates earnings over time gives a stronger trust signal than one with weak uptime or low throughput.
The full page view also includes a Quick Facts sidebar, which acts as a fast trust summary for status, model count, recent success rate, and issue reporting.
Supported models
This section breaks the worker down by model instead of treating it as a generic node. For each supported model, the explorer shows:
completed job volume
completion rate
disputes
stake allocated to that model
That is important because Lightchain's AI execution layer is model-specific. The whitepaper presents the network as AI-native infrastructure, so capacity has to be understood in terms of actual model support and model-level performance, not just total node count.
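The per-model stats above reduce to simple ratios. The sketch below assumes a hypothetical stats record; the field names are illustrative:

```typescript
// Hypothetical per-model stats record for one worker.
interface ModelStats {
  model: string;
  completedJobs: number;
  totalJobs: number;
  disputes: number;
  stakeAllocated: bigint;
}

// Completion rate: completed over total, guarding the empty case.
function completionRate(s: ModelStats): number {
  return s.totalJobs === 0 ? 0 : s.completedJobs / s.totalJobs;
}

// Dispute rate: disputes relative to completed work.
function disputeRate(s: ModelStats): number {
  return s.completedJobs === 0 ? 0 : s.disputes / s.completedJobs;
}
```

Keeping these as separate ratios per model, rather than one blended score, is what lets the explorer show that a worker may be reliable for one model and weak on another.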
Job history
Job history is the direct activity log for useful AI work. It shows:
which model handled the task
whether the job completed, remained pending, or failed
when execution started and finished
what payout was attached to the job
This is one of the most concrete ways the explorer reflects the whitepaper. Lightchain's pitch is that network participation should result in meaningful AI execution rather than wasted consensus effort. A populated, inspectable job ledger makes that claim legible.
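A job ledger like this also feeds the summary cards: "total earned" should only count completed work. A minimal sketch, assuming a hypothetical job record shape:

```typescript
// Hypothetical job record; timestamps are unix seconds and the
// payout is in the token's smallest unit.
type JobStatus = "completed" | "pending" | "failed";

interface Job {
  model: string;
  status: JobStatus;
  startedAt: number;
  finishedAt?: number; // absent while the job is still pending
  payout: bigint;
}

// Only completed jobs contribute to a worker's total earned.
function totalEarned(jobs: Job[]): bigint {
  return jobs
    .filter((j) => j.status === "completed")
    .reduce((sum, j) => sum + j.payout, 0n);
}
```

Excluding pending and failed jobs keeps the earnings figure consistent with the whitepaper's framing: payouts attach to useful, finished AI execution, not to attempts.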
Stake breakdown
The stake breakdown shows how a worker's economic commitment is distributed across its supported models. Instead of only exposing one total stake number, the explorer shows where that commitment is concentrated.
That matters because, in the whitepaper, stake is part of the mechanism that aligns incentives and absorbs penalties. A worker that allocates more backing to a model is signaling stronger commitment to serving that part of the network reliably.
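Concentration is easiest to read as a share of the total. This is a sketch of that calculation, assuming per-model allocations keyed by model name (an illustrative shape, not the real data model):

```typescript
// Given per-model stake allocations, return each model's share of
// the worker's total stake as a fraction in [0, 1].
function stakeShares(alloc: Record<string, number>): Record<string, number> {
  const total = Object.values(alloc).reduce((a, b) => a + b, 0);
  const shares: Record<string, number> = {};
  for (const [model, amount] of Object.entries(alloc)) {
    shares[model] = total === 0 ? 0 : amount / total;
  }
  return shares;
}
```

A worker with shares like `{ llama: 0.75, sdxl: 0.25 }` reads very differently from an even split, even though both expose the same total stake number.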
Slash history
Slash history is the accountability layer of the page. It records:
when a worker was penalized
which model was affected
what triggered the penalty
how much stake was slashed
whether the case is finalized or still under dispute
This is closely aligned with the whitepaper's trust model. Lightchain depends on verifiable execution, transparent settlement, and economic consequences when performance breaks down. The slashing table turns that design into something users can inspect directly instead of taking it on faith.
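The finalized-versus-disputed distinction also affects how much stake is actually left backing a worker. A minimal sketch under assumed field names, in which disputed slashes are recorded but not yet deducted:

```typescript
// Hypothetical slash event; `finalized` is false while the case
// is still under dispute.
interface SlashEvent {
  model: string;
  reason: string;
  amount: bigint;
  finalized: boolean;
}

// Effective stake deducts only finalized penalties. Disputed
// slashes stay visible in the table but do not reduce stake yet.
function effectiveStake(initial: bigint, slashes: SlashEvent[]): bigint {
  return slashes
    .filter((s) => s.finalized)
    .reduce((remaining, s) => remaining - s.amount, initial);
}
```

Separating the two states is what lets a reader distinguish a worker with a settled penalty history from one whose stake is still at risk in open disputes.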
Why this explorer matters
Taken together, the explorer expresses the whitepaper in UI form:
the main page shows the breadth of decentralized AI supply
the worker page shows how one operator earns trust over time
stake, performance, and slashing are shown side by side so transparency is tied directly to economics
That combination is important for Lightchain. The protocol is not only building AI execution; it is building verifiable and inspectable AI execution. The Worker Explorer is where that design becomes visible to the community.