Technical Setup Guide
Getting your AI data into Antarctica is straightforward. You just need to set up a reporting pipeline between your code and our ingestion servers.
It’s a quick two-step process: get your API keys and hook up the telemetry.
Step 1: Grab your API Keys
Before you can send any telemetry, you need a secret key to authenticate your requests.
- Navigate to your Antarctica Dashboard and log in using an Administrator or Developer account.
- Under your selected workspace, locate the AI Module heading in the primary navigation sidebar.
- Access the Configure sub-menu and click on API Keys.
- Click Create API Key. You will be prompted to provide the following configuration:
  - Name (Required): Use a strict nomenclature identifying the service (e.g., `prod-inference-collector-1`).
  - Server location (Required): Select the physical location or region where your server is hosted.
  - Environment (Required): Tie the key strictly to a specific environment (e.g., `prod`, `development`) to segment your analytics correctly.
  - IP allow list (Optional): Specify your trusted server IP addresses, separated by commas. Our Edge network will automatically drop telemetry requests mapped to this key that do not originate from these addresses.
- Store the generated key securely in a secret manager (e.g., AWS Secrets Manager, HashiCorp Vault) or your application’s `.env` configuration.
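Once the key is stored, your service should load it at startup rather than hard-coding it. A minimal sketch, assuming the key was saved under a hypothetical environment variable named `ANTARCTICA_API_KEY` (use whatever name matches your secret manager or `.env` entry):

```python
import os

# Hypothetical variable name -- match whatever you stored in your
# secret manager or .env file.
api_key = os.environ.get("ANTARCTICA_API_KEY", "")

if not api_key:
    # In real code you would fail fast here; this sketch just flags it.
    print("warning: ANTARCTICA_API_KEY is not set")

# The key is sent as a bearer token on every telemetry request.
headers = {"Authorization": f"Bearer {api_key}"}
```

Keeping the key in the environment (rather than in source control) also lets you rotate it without a code change.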
> [!WARNING]
> Don’t lose your key! For security reasons, we only show you the API key once. If you lose it, you’ll have to revoke it and create a new one.
Step 2: Hook up the Telemetry (Integration)
Once you’ve got your key, it’s time to start sending us data.
Every time your app makes an LLM call (OpenAI, Claude, Gemini, etc.), you’ll want to grab the metadata from the response and ship it our way.
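As a sketch of what “grab the metadata” looks like in practice, here is how token counts could be pulled from an OpenAI-style response object. The attribute names below follow the OpenAI SDK; other providers expose the same numbers under different names (e.g., Anthropic’s `usage.input_tokens` / `usage.output_tokens`), so adapt accordingly:

```python
from types import SimpleNamespace


def extract_usage(response):
    """Pull prompt/completion token counts from an OpenAI-style response.

    Other SDKs differ: e.g., Anthropic reports usage.input_tokens and
    usage.output_tokens instead.
    """
    return {
        "prompt": response.usage.prompt_tokens,
        "completion": response.usage.completion_tokens,
    }


# Stand-in object mimicking the OpenAI SDK response shape, for illustration.
fake_response = SimpleNamespace(
    usage=SimpleNamespace(prompt_tokens=812, completion_tokens=164)
)
print(extract_usage(fake_response))  # {'prompt': 812, 'completion': 164}
```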
What you’ll need to send:
To get the most out of your dashboard, make sure your app can:
- Make standard `POST` requests.
- Extract token counts (`prompt` and `completion`).
- Track timing (start time, time to first token, and total latency).
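Putting the three requirements together, a telemetry report might be assembled and posted as below. This is a sketch only: the endpoint URL, payload field names, and auth header scheme are assumptions for illustration — the real schema is in the APIs Documentation.

```python
import json
import time
import urllib.request

# Hypothetical endpoint and key; substitute the real values from your
# dashboard and secret manager.
ANTARCTICA_ENDPOINT = "https://ingest.antarctica.example/v1/telemetry"
API_KEY = "your-api-key"


def build_payload(model, prompt_tokens, completion_tokens, start, first_token, end):
    """Assemble one telemetry record.

    Field names here are illustrative, not the documented schema.
    """
    return {
        "model": model,
        "tokens": {"prompt": prompt_tokens, "completion": completion_tokens},
        "timing_ms": {
            "time_to_first_token": round((first_token - start) * 1000),
            "total_latency": round((end - start) * 1000),
        },
    }


def send_telemetry(payload):
    """POST one record to the ingestion endpoint with bearer auth."""
    req = urllib.request.Request(
        ANTARCTICA_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Timestamps captured around an LLM call (stand-in offsets shown; record
# real clock readings before the call, at the first streamed token, and
# after the final token).
start = time.monotonic()
first_token = start + 0.42
end = start + 1.87
payload = build_payload("gpt-4o", 812, 164, start, first_token, end)
# send_telemetry(payload)  # uncomment once the endpoint and key are real
```

Using a monotonic clock for the timing fields avoids skew from system clock adjustments mid-request.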
For specific code snippets and examples on how to do this in Node.js, Python, or Go, check out our APIs Documentation.