LMSYS Org Uptime Monitor
Developing open, accessible, and scalable large model systems
Last 30 Days Performance
Average Uptime
100%
Based on a 30-day monitoring period
Average Response Time
126.7ms
Mean response time across all checks
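The two headline figures above are plain aggregates over individual health checks. A minimal sketch of how a monitor typically computes them (the Check record and its field names are illustrative assumptions, not LMSYS's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool            # endpoint returned a healthy status
    response_ms: float  # round-trip time of this check, in milliseconds

def summarize(checks: list[Check]) -> tuple[float, float]:
    """Return (uptime percentage, mean response time in ms) for a window."""
    uptime_pct = 100.0 * sum(c.ok for c in checks) / len(checks)
    mean_ms = sum(c.response_ms for c in checks) / len(checks)
    return uptime_pct, mean_ms

# Example: three successful checks averaging 126.7 ms
print(summarize([Check(True, 120.0), Check(True, 130.0), Check(True, 130.1)]))
# (100.0, 126.7)
```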
Daily Status Overview
Historical Performance
Month       Monthly Uptime   Monthly Response Time
Dec-2025    100%             87ms
Nov-2025    100%             88ms
Oct-2025    100%             86ms
Sep-2025    100%             93ms
Aug-2025    99.85%           103ms
Jul-2025    100%             96ms
Jun-2025    99.93%           93ms
May-2025    100%             91ms
Apr-2025    100%             176ms
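The two sub-100% months correspond to only minutes of downtime. A back-of-the-envelope conversion, assuming uptime is measured against the full calendar month (the monitor's exact check interval and counting rules aren't published here):

```python
def downtime_minutes(uptime_pct: float, days_in_month: int) -> float:
    """Convert a monthly uptime percentage into total downtime in minutes."""
    return (1 - uptime_pct / 100) * days_in_month * 24 * 60

print(round(downtime_minutes(99.85, 31)))  # Aug-2025: ~67 minutes
print(round(downtime_minutes(99.93, 30)))  # Jun-2025: ~30 minutes
```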
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
ModelBench (Operational)
No-Code LLM Evaluations
ModelBench enables teams to rapidly deploy AI solutions with no-code LLM evaluations. It allows users to compare over 180 models, design and benchmark prompts, and trace LLM runs, accelerating AI development.
Last checked: 1 hour ago
LLM Explorer (Operational)
Discover and Compare Open-Source Language Models
LLM Explorer is a comprehensive platform for discovering, comparing, and accessing over 46,000 open-source Large Language Models (LLMs) and Small Language Models (SLMs).
Last checked: 2 hours ago
Compare AI Models (Issues)
AI Model Comparison Tool
Compare AI Models is a platform providing comprehensive comparisons and insights into various large language models, including GPT-4o, Claude, Llama, and Mistral.
Last checked: 1 hour ago
BenchLLM (Operational)
The best way to evaluate LLM-powered apps
BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
Last checked: 1 hour ago
Lega (Operational)
Large Language Model Governance
Lega empowers law firms and enterprises to safely explore, assess, and implement generative AI technologies. It provides enterprise guardrails for secure LLM exploration and a toolset to capture and scale critical learnings.
Last checked: 1 hour ago
LangChain (Operational)
Build and Deploy LLM-Powered Applications and Agents
LangChain is a comprehensive framework for developing and deploying applications powered by large language models (LLMs), enabling the creation of sophisticated AI agents, chatbots, and data analysis tools. It facilitates building context-aware and reasoning applications.
Last checked: 2 hours ago