
vastai-observability

Set up comprehensive observability for Vast.ai integrations with metrics, traces, and alerts. Use when implementing monitoring for Vast.ai operations, setting up dashboards, or configuring alerting for Vast.ai integration health. Trigger with phrases like "vastai monitoring", "vastai metrics", "vastai observability", "monitor vastai", "vastai alerts", "vastai tracing".


Install this agent skill in your project:

npx add-skill https://github.com/majiayu000/claude-skill-registry/tree/main/skills/data/vastai-observability

SKILL.md

Vast.ai Observability

Overview

Set up comprehensive observability for Vast.ai integrations: Prometheus metrics, OpenTelemetry traces, structured logs, and alert rules.

Prerequisites

  • Prometheus or compatible metrics backend
  • OpenTelemetry SDK installed
  • Grafana or similar dashboarding tool
  • AlertManager configured

Metrics Collection

Key Metrics

Metric                            Type       Description
vastai_requests_total             Counter    Total API requests
vastai_request_duration_seconds   Histogram  Request latency
vastai_errors_total               Counter    Error count by type
vastai_rate_limit_remaining       Gauge      Rate limit headroom

Prometheus Metrics

typescript
import { Registry, Counter, Histogram, Gauge } from 'prom-client';

const registry = new Registry();

const requestCounter = new Counter({
  name: 'vastai_requests_total',
  help: 'Total Vast.ai API requests',
  labelNames: ['method', 'status'],
  registers: [registry],
});

const requestDuration = new Histogram({
  name: 'vastai_request_duration_seconds',
  help: 'Vast.ai request duration',
  labelNames: ['method'],
  buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5],
  registers: [registry],
});

const errorCounter = new Counter({
  name: 'vastai_errors_total',
  help: 'Vast.ai errors by type',
  labelNames: ['error_type'],
  registers: [registry],
});

const rateLimitGauge = new Gauge({
  name: 'vastai_rate_limit_remaining',
  help: 'Remaining Vast.ai API rate limit allowance',
  registers: [registry],
});
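
Optionally, prom-client can also export Node.js runtime metrics into the same registry:

typescript
import { collectDefaultMetrics } from 'prom-client';

// Adds process-level series (event loop lag, heap usage, GC) alongside the vastai_* metrics.
collectDefaultMetrics({ register: registry });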

Instrumented Client

typescript
async function instrumentedRequest<T>(
  method: string,
  operation: () => Promise<T>
): Promise<T> {
  const timer = requestDuration.startTimer({ method });

  try {
    const result = await operation();
    requestCounter.inc({ method, status: 'success' });
    return result;
  } catch (error: any) {
    requestCounter.inc({ method, status: 'error' });
    errorCounter.inc({ error_type: error.code || 'unknown' });
    throw error;
  } finally {
    timer();
  }
}
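
A usage sketch follows; client and its searchOffers method are hypothetical stand-ins for whatever Vast.ai client wrapper your integration exposes:

typescript
// Wrap a (hypothetical) Vast.ai call so it is counted and timed.
const offers = await instrumentedRequest('searchOffers', () =>
  client.searchOffers({ gpuName: 'RTX 4090' })
);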

Distributed Tracing

OpenTelemetry Setup

typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('vastai-client');

async function tracedVastAiCall<T>(
  operationName: string,
  operation: () => Promise<T>
): Promise<T> {
  return tracer.startActiveSpan(`vastai.${operationName}`, async (span) => {
    try {
      const result = await operation();
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (error: any) {
      span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
      span.recordException(error);
      throw error;
    } finally {
      span.end();
    }
  });
}
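
The tracer above assumes an OpenTelemetry SDK has already been registered at process startup. A minimal Node bootstrap might look like this; the OTLP endpoint URL is an assumption and should point at your collector:

typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// Registers a global tracer provider so trace.getTracer() returns real spans.
const sdk = new NodeSDK({
  serviceName: 'vastai-client',
  traceExporter: new OTLPTraceExporter({
    url: 'http://localhost:4318/v1/traces', // assumed collector endpoint
  }),
});

sdk.start();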

Logging Strategy

Structured Logging

typescript
import pino from 'pino';

const logger = pino({
  name: 'vastai',
  level: process.env.LOG_LEVEL || 'info',
});

function logVastAiOperation(
  operation: string,
  data: Record<string, any>,
  duration: number
) {
  logger.info({
    service: 'vastai',
    operation,
    duration_ms: duration,
    ...data,
  });
}
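
A call site sketch, pairing the logger with a simple wall-clock timer (listInstances is a hypothetical operation):

typescript
const start = Date.now();
const instances = await listInstances(); // hypothetical Vast.ai call
logVastAiOperation('listInstances', { count: instances.length }, Date.now() - start);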

Alert Configuration

Prometheus AlertManager Rules

yaml
# vastai_alerts.yaml
groups:
  - name: vastai_alerts
    rules:
      - alert: VastAiHighErrorRate
        expr: |
          sum(rate(vastai_errors_total[5m])) /
          sum(rate(vastai_requests_total[5m])) > 0.05
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Vast.ai error rate > 5%"

      - alert: VastAiHighLatency
        expr: |
          histogram_quantile(0.95,
            sum by (le) (rate(vastai_request_duration_seconds_bucket[5m]))
          ) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Vast.ai P95 latency > 2s"

      - alert: VastAiDown
        expr: up{job="vastai"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Vast.ai integration is down"

Dashboard

Grafana Panel Queries

json
{
  "panels": [
    {
      "title": "Vast.ai Request Rate",
      "targets": [{
        "expr": "rate(vastai_requests_total[5m])"
      }]
    },
    {
      "title": "Vast.ai Latency P50/P95/P99",
      "targets": [
        { "expr": "histogram_quantile(0.50, sum by (le) (rate(vastai_request_duration_seconds_bucket[5m])))", "legendFormat": "p50" },
        { "expr": "histogram_quantile(0.95, sum by (le) (rate(vastai_request_duration_seconds_bucket[5m])))", "legendFormat": "p95" },
        { "expr": "histogram_quantile(0.99, sum by (le) (rate(vastai_request_duration_seconds_bucket[5m])))", "legendFormat": "p99" }
      ]
    }
  ]
}

Instructions

Step 1: Set Up Metrics Collection

Implement Prometheus counters, histograms, and gauges for key operations.

Step 2: Add Distributed Tracing

Integrate OpenTelemetry for end-to-end request tracing.

Step 3: Configure Structured Logging

Set up JSON logging with consistent field names.

Step 4: Create Alert Rules

Define Prometheus alerting rules for error rates and latency.

Output

  • Metrics collection enabled
  • Distributed tracing configured
  • Structured logging implemented
  • Alert rules deployed

Error Handling

Issue              Cause                 Solution
Missing metrics    No instrumentation    Wrap client calls
Trace gaps         Missing propagation   Check context headers
Alert storms       Wrong thresholds      Tune alert rules
High cardinality   Too many labels       Reduce label values (see sketch below)
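
To keep cardinality bounded, normalize label values before incrementing. A small guard like this, with an assumed allow-list of error codes, avoids creating one time series per raw error string:

typescript
// Assumed set of known error codes; everything else collapses into 'other'.
const KNOWN_ERRORS = new Set(['rate_limited', 'timeout', 'auth', 'network']);

function recordError(code: string) {
  // Caps the number of vastai_errors_total series at the allow-list size + 1.
  errorCounter.inc({ error_type: KNOWN_ERRORS.has(code) ? code : 'other' });
}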

Examples

Quick Metrics Endpoint

typescript
import express from 'express';

const app = express();

// Expose the registry for Prometheus scraping.
app.get('/metrics', async (_req, res) => {
  res.set('Content-Type', registry.contentType);
  res.send(await registry.metrics());
});

Next Steps

For incident response, see vastai-incident-runbook.
