phoenix.arize.com Uptime Monitor
Open-source LLM tracing and evaluation
Last 30 Days Performance
Average Uptime: 100% (based on a 30-day monitoring period)
Average Response Time: 388.87ms (mean response time across all checks)
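The headline figures above are plain aggregates over individual checks. As an illustrative sketch only (the check data below is made up, not the monitor's real output), they can be derived like this:

```python
# Hedged sketch: how headline uptime and response-time metrics could be
# computed from raw check results. The sample data is illustrative only.

def uptime_percent(checks):
    """Share of checks that succeeded, as a percentage."""
    ok = sum(1 for c in checks if c["up"])
    return 100.0 * ok / len(checks)

def mean_response_ms(checks):
    """Mean response time across all checks, in milliseconds."""
    return sum(c["ms"] for c in checks) / len(checks)

# Hypothetical check results (all up, mean ~388.87ms):
checks = [
    {"up": True, "ms": 380.1},
    {"up": True, "ms": 400.2},
    {"up": True, "ms": 386.3},
]

print(f"Average Uptime: {uptime_percent(checks):.0f}%")
print(f"Average Response Time: {mean_response_ms(checks):.2f}ms")
```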
Daily Status Overview
Historical Performance
Month     Monthly Uptime   Monthly Response Time
Dec-2025  99.83%           224ms
Nov-2025  100%             258ms
Oct-2025  99.86%           219ms
Sep-2025  100%             222ms
Aug-2025  99.71%           239ms
Jul-2025  100%             215ms
Jun-2025  99.72%           213ms
May-2025  100%             213ms
Apr-2025  98.52%           273ms
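A monthly uptime percentage translates directly into a downtime budget: Dec-2025's 99.83% over a 31-day month implies roughly 76 minutes of downtime, while Apr-2025's 98.52% over 30 days implies about 10.7 hours. A quick sketch of that conversion (the function name is ours, not part of the monitoring service):

```python
def downtime_minutes(uptime_percent, days_in_month):
    """Minutes of downtime implied by a monthly uptime percentage."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - uptime_percent / 100.0)

# Dec-2025: 99.83% uptime over 31 days -> ~76 minutes down
print(round(downtime_minutes(99.83, 31)))
# Apr-2025: 98.52% uptime over 30 days -> ~639 minutes (~10.7 hours) down
print(round(downtime_minutes(98.52, 30)))
```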
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
- Arize (Operational)
  Unified Observability and Evaluation Platform for AI
  Arize is a comprehensive platform designed to accelerate the development and improve the production of AI applications and agents.
  Last checked: 1 hour ago

- Langfuse (Operational)
  Open Source LLM Engineering Platform
  Langfuse provides an open-source platform for tracing, evaluating, and managing prompts to debug and improve LLM applications.
  Last checked: 19 hours ago

- Langtrace (Operational)
  Transform AI Prototypes into Enterprise-Grade Products
  Langtrace is an open-source observability and evaluations platform designed to help developers monitor, evaluate, and enhance AI agents for enterprise deployment.
  Last checked: 7 hours ago

- Gentrace (Operational)
  Intuitive evals for intelligent applications
  Gentrace is an LLM evaluation platform designed for AI teams to test and automate evaluations of generative AI products and agents. It facilitates collaborative development and ensures high-quality LLM applications.
  Last checked: 1 hour ago

- OpenLIT (Operational)
  Open Source Platform for AI Engineering
  OpenLIT is an open-source observability platform designed to streamline AI development workflows, particularly for Generative AI and LLMs, offering features like prompt management, performance tracking, and secure secrets management.
  Last checked: 1 hour ago

- Agenta (Operational)
  End-to-End LLM Engineering Platform
  Agenta is an LLM engineering platform offering tools for prompt engineering, versioning, evaluation, and observability in a single, collaborative environment.
  Last checked: 1 hour ago