Reva Uptime Monitor
Use the right LLM for your task
Last 30 Days Performance
Average Uptime
100%
Based on 30-day monitoring period
Average Response Time
227.08ms
Mean response time across all checks
Daily Status Overview
Historical Performance
Month      Monthly Uptime   Monthly Response Time
Jan-2026   100%             193ms
Dec-2025   100%             204ms
Nov-2025   100%             215ms
Oct-2025   99.46%           218ms
Sep-2025   100%             209ms
Aug-2025   99.43%           203ms
Jul-2025   100%             189ms
Jun-2025   100%             194ms
May-2025   100%             187ms
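The monthly figures above can be aggregated with a short script. This is an illustrative sketch using simple unweighted means over the nine listed months; the dashboard's own 30-day averages are computed per-check, so they will differ from these numbers.

```python
# Monthly (uptime %, mean response time ms) as listed on this page.
monthly = {
    "Jan-2026": (100.00, 193),
    "Dec-2025": (100.00, 204),
    "Nov-2025": (100.00, 215),
    "Oct-2025": (99.46, 218),
    "Sep-2025": (100.00, 209),
    "Aug-2025": (99.43, 203),
    "Jul-2025": (100.00, 189),
    "Jun-2025": (100.00, 194),
    "May-2025": (100.00, 187),
}

uptimes = [u for u, _ in monthly.values()]
latencies = [ms for _, ms in monthly.values()]

# Unweighted means across months (not per-check, unlike the 30-day stats).
avg_uptime = sum(uptimes) / len(uptimes)
avg_latency = sum(latencies) / len(latencies)

print(f"9-month uptime: {avg_uptime:.2f}%, latency: {avg_latency:.2f}ms")
```

Note that a per-check average would weight each month by its number of checks; the month-level mean here treats every month equally.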
Related Uptime Monitors
Explore uptime status for similar tools that also have monitoring enabled.
EvalsOne (Operational)
Evaluate LLMs & RAG Pipelines Quickly
EvalsOne is a platform for rapidly evaluating Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) pipelines using various metrics.
Last checked: 2 weeks ago
Adaptive ML (Operational)
AI, Tuned for Production.
Adaptive ML provides a platform to evaluate, tune, and serve the best LLMs for your business. It uses reinforcement learning to optimize models based on measurable metrics.
Last checked: 2 weeks ago
Intura (Operational)
Compare, Choose, and Save on AI & LLMs
Intura helps businesses experiment with, compare, and deploy AI and LLM models side-by-side to optimize performance and cost before full-scale implementation.
Last checked: 2 weeks ago
Compare AI Models (Issues)
AI Model Comparison Tool
Compare AI Models is a platform providing comprehensive comparisons and insights into various large language models, including GPT-4o, Claude, Llama, and Mistral.
Last checked: 2 weeks ago
BenchLLM (Operational)
The best way to evaluate LLM-powered apps
BenchLLM is a tool for evaluating LLM-powered applications. It allows users to build test suites, generate quality reports, and choose between automated, interactive, or custom evaluation strategies.
Last checked: 2 weeks ago
Conviction (Operational)
The Platform to Evaluate & Test LLMs
Conviction is an AI platform for evaluating, testing, and monitoring Large Language Models (LLMs), helping developers build reliable AI applications faster. It focuses on detecting hallucinations, optimizing prompts, and ensuring security.
Last checked: 2 weeks ago