
AI Module Overview

The Antarctica AI Module is your one-stop shop for monitoring how your apps use Large Language Models (LLMs). We provide a set of simple endpoints designed to capture, track, and aggregate your AI telemetry data.

What’s in it for you?

By piping your AI usage data to us, you can finally answer those tricky observability questions:

  • Token Spend: See exactly how many tokens you’re burning on input and output across different models and providers.
  • Performance: Track “Time to First Token” (TTFT) and total latency to see if your AI feels snappy or sluggish.
  • Cost Control: Get a clear breakdown of your spend whether you’re using OpenAI, Claude, Gemini, or all three.
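To make the metrics above concrete, here is a minimal sketch of the kind of event you might assemble before shipping it to the module. The field names (`model`, `provider`, `input_tokens`, `ttft_ms`, and so on) are illustrative assumptions, not the module's actual schema; check the Setup Guide for the real field list.

```python
import json
import time
import uuid

def build_llm_event(model, provider, input_tokens, output_tokens,
                    ttft_ms, total_latency_ms):
    """Assemble one LLM usage event ready for ingestion.

    Field names here are hypothetical; consult the Setup Guide
    for the module's actual schema.
    """
    return {
        "event_id": str(uuid.uuid4()),       # unique per LLM call
        "timestamp": time.time(),            # when the call finished
        "model": model,                      # e.g. "gpt-4o"
        "provider": provider,                # e.g. "openai"
        "usage": {
            "input_tokens": input_tokens,    # tokens sent to the model
            "output_tokens": output_tokens,  # tokens generated
        },
        "performance": {
            "ttft_ms": ttft_ms,              # Time to First Token
            "total_latency_ms": total_latency_ms,
        },
    }

event = build_llm_event("gpt-4o", "openai", 1200, 350, 180, 2400)
print(json.dumps(event, indent=2))
```

One event per LLM call with both usage and performance attached is what lets the module break spend down per model and provider while also tracking TTFT.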

Why use our AI Module?

  1. Dead Simple Ingestion: Our OTM API is built to handle millions of events without breaking a sweat, running entirely in the background.
  2. Clean Data: We use strict schemas so your analytics dashboards stay reliable and error-free.
  3. Idempotency Built-In: Don’t worry about duplicate data. If a network blip causes a retry, we handle it gracefully.
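The idempotency point can be sketched from the client side: if a retried request carries the same key as the original, the server can recognize and drop the duplicate. The approach below, deriving a stable key from the event payload itself, is one common pattern and an assumption on our part, not the module's documented mechanism.

```python
import hashlib
import json

def idempotency_key(event: dict) -> str:
    """Derive a deterministic key from an event payload.

    The same payload always yields the same key, so a network-blip
    retry of the same event carries the same key and can be
    deduplicated server-side. (Illustrative pattern only.)
    """
    # Canonicalize: sorted keys and compact separators so logically
    # identical payloads serialize to identical bytes.
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

event = {"model": "claude-3", "input_tokens": 500, "output_tokens": 120}
key_first = idempotency_key(event)
key_retry = idempotency_key(event)  # a retry of the same event
assert key_first == key_retry       # same payload, same key
```

In practice you would send the key alongside the event (for example in a request header) so the server, not the client, decides whether it has already seen that event.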

Ready to get started? Head over to the Setup Guide to grab your API keys and start shipping data.