WasmEdge Uptime Monitor
Fast, lightweight, portable, and OpenAI-compatible WebAssembly runtime for edge AI and LLM inference
Last 30 Days Performance
Average Uptime
100%
Based on a 30-day monitoring period
Average Response Time
95.3ms
Mean response time across all checks
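The two headline metrics above are simple aggregates over individual checks: uptime is the share of checks that succeeded, and response time is the arithmetic mean of the measured latencies. A minimal sketch of that computation, using invented sample data (the `checks` list and its field names are assumptions, not the monitor's actual data model):

```python
# Hypothetical check results; each entry records whether the check
# succeeded and how long it took. Values here are illustrative only.
checks = [
    {"ok": True, "response_ms": 92.1},
    {"ok": True, "response_ms": 98.4},
    {"ok": True, "response_ms": 95.4},
]

# Uptime: percentage of successful checks over the monitoring period.
uptime_pct = 100.0 * sum(c["ok"] for c in checks) / len(checks)

# Response time: mean latency across all checks.
mean_response_ms = sum(c["response_ms"] for c in checks) / len(checks)

print(f"Average Uptime: {uptime_pct:.0f}%")
print(f"Average Response Time: {mean_response_ms:.1f}ms")
```

With this sample data the sketch reproduces the figures shown above (100% uptime, 95.3ms mean).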
Daily Status Overview
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- LlamaEdge: Operational
The easiest, smallest, and fastest local LLM runtime and API server.
LlamaEdge is a lightweight and fast local LLM runtime and API server, powered by Rust & WasmEdge, designed for creating cross-platform LLM agents and web services.
Last checked: 2 hours ago
- WebLLM: Operational
High-Performance In-Browser LLM Inference Engine
WebLLM enables running large language models (LLMs) directly within a web browser using WebGPU for hardware acceleration, reducing server costs and enhancing privacy.
Last checked: 5 hours ago