Freeplay Uptime Monitor
The All-in-One Platform for AI Experimentation, Evaluation, and Observability
Last 30 Days Performance
Average Uptime: 100% (based on the 30-day monitoring period)
Average Response Time: 201.77ms (mean across all checks)
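These two headline figures can be reproduced from raw check results. A minimal sketch follows; the sample data and field names are illustrative, not Freeplay's actual monitoring API:

```python
# Sketch: deriving average uptime and mean response time from a
# list of monitoring checks. The check records here are made up.

checks = [
    {"up": True, "response_ms": 198.2},
    {"up": True, "response_ms": 205.4},
    {"up": True, "response_ms": 201.7},
]

# Average uptime: share of checks that succeeded, as a percentage.
uptime_pct = 100.0 * sum(c["up"] for c in checks) / len(checks)

# Mean response time across all checks, in milliseconds.
mean_ms = sum(c["response_ms"] for c in checks) / len(checks)

print(f"{uptime_pct:.2f}% uptime, {mean_ms:.2f}ms mean response")
```

With every check succeeding, the uptime works out to 100% and the mean response time to the simple average of the per-check latencies.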
Daily Status Overview
Historical Performance
Month      Uptime    Avg Response Time
Dec-2025   99.75%    244ms
Nov-2025   99.41%    228ms
Oct-2025   99.87%    223ms
Sep-2025   100%      192ms
Aug-2025   100%      211ms
Jul-2025   99.87%    176ms
Jun-2025   100%      197ms
May-2025   100%      202ms
Apr-2025   100%      260ms
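To put the monthly percentages in perspective, an uptime figure converts directly into implied downtime. A rough sketch, assuming a 30-day month for simplicity (real months vary):

```python
# Sketch: converting a monthly uptime percentage into downtime minutes.
# Assumes a 30-day month (43,200 minutes); actual month lengths differ.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200

def downtime_minutes(uptime_pct: float) -> float:
    """Minutes of downtime implied by a monthly uptime percentage."""
    return (1 - uptime_pct / 100.0) * MINUTES_PER_MONTH

# e.g. Dec-2025's 99.75% uptime implies about 108 minutes of downtime.
print(f"{downtime_minutes(99.75):.0f} minutes")  # prints "108 minutes"
```

By the same arithmetic, Nov-2025's 99.41% corresponds to roughly four hours of downtime, while the 100% months imply none at all.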
Related Uptime Monitors
Explore uptime status for similar tools with monitoring enabled.
- HoneyHive (Operational)
  AI Observability and Evaluation Platform for Building Reliable AI Products
  HoneyHive is a comprehensive platform that provides AI observability, evaluation, and prompt management tools to help teams build and monitor reliable AI applications.
  Last checked: 2 hours ago

- Reprompt (Issues)
  Collaborative prompt testing for confident AI deployment
  Reprompt is a developer-focused platform that enables efficient testing and optimization of AI prompts with real-time analysis and comparison capabilities.
  Last checked: 5 hours ago

- Humanloop (Operational)
  The LLM evals platform for enterprises to ship and scale AI with confidence
  Humanloop is an enterprise-grade platform that provides tools for LLM evaluation, prompt management, and AI observability, enabling teams to develop, evaluate, and deploy trustworthy AI applications.
  Last checked: 2 hours ago

- MLflow (Operational)
  ML and GenAI made simple
  MLflow is an open-source, end-to-end MLOps platform for building better models and generative AI apps. It simplifies complex ML and generative AI projects, offering comprehensive management from development to production.
  Last checked: 2 hours ago

- Arize (Operational)
  Unified Observability and Evaluation Platform for AI
  Arize is a comprehensive platform designed to accelerate the development and improve the production of AI applications and agents.
  Last checked: 2 hours ago

- Maxim (Operational)
  Simulate, evaluate, and observe your AI agents
  Maxim is an end-to-end evaluation and observability platform designed to help teams ship AI agents reliably and more than 5x faster.
  Last checked: 2 hours ago