Langtail Uptime Monitor
The low-code platform for testing AI apps
Last 30 Days Performance
Average Uptime
100%
Based on 30-day monitoring period
Average Response Time
217.8ms
Mean response time across all checks
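The two headline figures can be derived directly from raw check results. A minimal sketch in Python, assuming each check records a success flag and a latency in milliseconds (the `Check` type and field names are hypothetical, not Langtail's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Check:
    ok: bool           # did the probe succeed?
    latency_ms: float  # measured response time for this check

def uptime_percent(checks: list[Check]) -> float:
    """Share of successful checks over the monitoring window."""
    return 100.0 * sum(c.ok for c in checks) / len(checks)

def mean_response_ms(checks: list[Check]) -> float:
    """Mean response time across all checks.

    This naive version averages every check; a real monitor might
    exclude failed checks or use percentiles instead of the mean.
    """
    return sum(c.latency_ms for c in checks) / len(checks)

# Example: three successful checks averaging 217.8ms
checks = [Check(True, 210.0), Check(True, 220.0), Check(True, 223.4)]
print(f"{uptime_percent(checks):.2f}% uptime")
print(f"{mean_response_ms(checks):.1f}ms mean response time")
```

A monitor with one failed check out of 365 daily probes would report 99.73% uptime, matching the figures in the monthly history below.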
Daily Status Overview
Historical Performance
Month     Monthly Uptime   Monthly Response Time
Dec-2025  100%             204ms
Nov-2025  99.71%           193ms
Oct-2025  99.73%           329ms
Sep-2025  100%             191ms
Aug-2025  99.87%           202ms
Jul-2025  99.73%           182ms
Jun-2025  100%             201ms
May-2025  100%             196ms
Apr-2025  99.62%           193ms
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
Braintrust (Operational)
The end-to-end platform for building world-class AI apps.
Braintrust provides an end-to-end platform for developing, evaluating, and monitoring Large Language Model (LLM) applications. It helps teams build robust AI products through iterative workflows and real-time analysis.
Last checked: 3 hours ago

BenchLLM (Operational)
The best way to evaluate LLM-powered apps
BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
Last checked: 15 minutes ago

Reprompt (Issues)
Collaborative prompt testing for confident AI deployment
Reprompt is a developer-focused platform that enables efficient testing and optimization of AI prompts with real-time analysis and comparison capabilities.
Last checked: 3 minutes ago

Hegel AI (Operational)
Developer Platform for Large Language Model (LLM) Applications
Hegel AI provides a developer platform for building, monitoring, and improving large language model (LLM) applications, featuring tools for experimentation, evaluation, and feedback integration.
Last checked: 19 minutes ago

Prompt Hippo (Issues)
Test and Optimize LLM Prompts with Science.
Prompt Hippo is an AI-powered testing suite for Large Language Model (LLM) prompts, designed to improve their robustness, reliability, and safety through side-by-side comparisons.
Last checked: 21 minutes ago

LangWatch (Operational)
Monitor, Evaluate & Optimize your LLM performance with 1-click
LangWatch empowers AI teams to ship 10x faster with quality assurance at every step. It provides tools to measure, maximize, and easily collaborate on LLM performance.
Last checked: 9 minutes ago