# Usage
Once you have a NilAI API key and node base URL, you can start using SecretLLM with any OpenAI-compatible library.
## Getting Started with SecretLLM
Getting started with SecretLLM is straightforward:
- Query the `/v1/models` endpoint to check available models
- Select a model and use it with the `/v1/chat/completions` endpoint
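The first step can be sketched with nothing but the Python standard library. This is a hedged sketch, not official client code: the bearer-token header and the standard OpenAI-compatible model-list response shape (`{"data": [{"id": ...}, ...]}`) are assumptions, and `<node>` stands in for whatever node identifier you were assigned:

```python
import json
import urllib.request

def node_base_url(node: str) -> str:
    """Base URL for a NilAI node, following the pattern used in the examples below."""
    return f"https://nilai-{node}.nillion.network/v1/"

def list_model_ids(node: str, api_key: str) -> list[str]:
    """GET /v1/models and return the IDs of the models the node advertises."""
    req = urllib.request.Request(
        node_base_url(node) + "models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [model["id"] for model in payload["data"]]

# Usage (requires a real node identifier and API key):
# print(list_model_ids("<node>", "YOUR_API_KEY"))
```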
Since SecretLLM is OpenAI-compatible, you can use any OpenAI client library. Here's an example that queries the Llama-3.1-8B model:
**Python**
```python
from openai import OpenAI

# Initialize the OpenAI client.
# Replace <node> with your node identifier and YOUR_API_KEY with your NilAI API key.
client = OpenAI(
    base_url="https://nilai-<node>.nillion.network/v1/",
    api_key="YOUR_API_KEY"
)

# Send a chat completion request
response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant."
        },
        {
            "role": "user",
            "content": "What is your name?"
        }
    ],
    stream=False
)

# Print the assistant's reply
print(response.choices[0].message.content)
```
**Node**

```javascript
const OpenAI = require('openai');

// Initialize the OpenAI client.
// Replace <node> with your node identifier and YOUR_API_KEY with your NilAI API key.
const client = new OpenAI({
  baseURL: 'https://nilai-<node>.nillion.network/v1/',
  apiKey: 'YOUR_API_KEY'
});

// Send a chat completion request
async function getChatCompletion() {
  try {
    const response = await client.chat.completions.create({
      model: 'meta-llama/Llama-3.1-8B-Instruct',
      messages: [
        {
          role: 'system',
          content: 'You are a helpful assistant.'
        },
        {
          role: 'user',
          content: 'What is your name?'
        }
      ],
      stream: false
    });
    // Print the assistant's reply
    console.log(response.choices[0].message.content);
  } catch (error) {
    console.error('Error:', error);
  }
}

// Call the function
getChatCompletion();
```
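With `stream=False`, both clients return a standard OpenAI-style completion object, and the assistant's reply sits in the first choice. A minimal sketch of pulling it out of the raw JSON payload (the `openai` client libraries expose the same fields as attributes, e.g. `response.choices[0].message.content`; the sample payload below is illustrative, not real model output):

```python
def extract_reply(response: dict) -> str:
    """Return the assistant message from an OpenAI-style chat completion payload."""
    return response["choices"][0]["message"]["content"]

# Abbreviated example of the response shape (illustrative only):
sample = {
    "choices": [
        {"message": {"role": "assistant", "content": "I'm an AI assistant."}}
    ]
}

print(extract_reply(sample))  # → I'm an AI assistant.
```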
## SecretLLM Endpoints
SecretLLM provides the following endpoints:
| Name | Endpoint | Description |
|---|---|---|
| Chat | `/v1/chat/completions` | Generate AI responses |
| Models | `/v1/models` | List available models |
| Attestation | `/v1/attestation/report` | Get cryptographic proof of the execution environment |
| Usage | `/v1/usage` | Track your token usage |
| Health | `/v1/health` | Check service status |
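The non-chat endpoints are simple GETs against the same base URL. As a hedged, stdlib-only sketch (bearer-token authentication and a JSON response body are assumptions beyond the paths listed in the table):

```python
import json
import urllib.request

def endpoint_url(node: str, path: str) -> str:
    """Build a full URL for one of the /v1 endpoints listed in the table above."""
    return f"https://nilai-{node}.nillion.network/v1/{path}"

def get_json(node: str, path: str, api_key: str) -> dict:
    """GET a SecretLLM endpoint and decode the JSON response."""
    req = urllib.request.Request(
        endpoint_url(node, path),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a real node identifier and API key):
# report = get_json("<node>", "attestation/report", "YOUR_API_KEY")
# usage = get_json("<node>", "usage", "YOUR_API_KEY")
```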