🚀 Knowledge for Your Vector Robot
✅ Server Status: Running
We provide a knowledge backend for your Vector robot so that you can have wonderful conversations with it.
📡 API Endpoint
POST /v1/chat/completions
This endpoint accepts OpenAI-compatible chat completion requests.
Wirepod Configuration
Configure your Wirepod server according to the screenshot below:
Demonstration on a Vector Robot
With our knowledge backend, your Vector robot can answer questions as shown in the video below:
🔧 Sample API Request
Here's how to make an API request. Use this when you need to test our API.
curl -X POST "http://knowledge.learnwitharobot.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN_HERE" \
  -d '{
    "model": "sonar",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how are you today?"
      }
    ],
    "max_tokens": 100,
    "temperature": 0.7
  }'
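If you prefer Python, the same request can be made with the requests library. This is only a sketch of the curl call above translated to Python: YOUR_TOKEN_HERE is the same placeholder, and the requests package is assumed to be installed.

import requests

url = "http://knowledge.learnwitharobot.com/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_TOKEN_HERE",  # placeholder: substitute your token
}
payload = {
    "model": "sonar",
    "messages": [
        {"role": "user", "content": "Hello, how are you today?"}
    ],
    "max_tokens": 100,
    "temperature": 0.7,
}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json())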
📋 Request Format
The API accepts standard OpenAI-compatible requests with the following structure:
{
  "model": "model-name",
  "messages": [
    {"role": "user", "content": "Your message here"}
  ],
  "max_tokens": 100,
  "temperature": 0.7,
  "stream": false
}
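Because the endpoint is OpenAI-compatible, setting "stream": true should return the reply incrementally. The sketch below assumes the standard OpenAI streaming format (Server-Sent Events: "data: {...}" lines ending with "data: [DONE]"), which this page does not explicitly confirm.

import json
import requests

payload = {
    "model": "sonar",
    "messages": [{"role": "user", "content": "Tell me about Vector robots."}],
    "max_tokens": 100,
    "temperature": 0.7,
    "stream": True,
}
headers = {"Authorization": "Bearer YOUR_TOKEN_HERE"}  # placeholder token

with requests.post(
    "http://knowledge.learnwitharobot.com/v1/chat/completions",
    headers=headers, json=payload, stream=True, timeout=60,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        # Each SSE event arrives as a line beginning with "data: ".
        if not line or not line.startswith(b"data: "):
            continue
        chunk = line[len(b"data: "):]
        if chunk == b"[DONE]":
            break
        # Assumes the standard OpenAI streaming chunk layout.
        delta = json.loads(chunk)["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)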
🔐 Authentication
All requests must include a valid Bearer token in the Authorization header:
Authorization: Bearer YOUR_TOKEN_HERE
Tokens can be requested from Amitabha Banerjee, the editor of www.learnwitharobot.com; send him a DM on Substack.
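Once you have a token, avoid hard-coding it in scripts; reading it from an environment variable is one simple option. The variable name LWR_API_TOKEN below is just an example, not something the service defines.

import os

token = os.environ["LWR_API_TOKEN"]  # example variable name; export it yourself
headers = {"Authorization": f"Bearer {token}"}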
📊 Available Models
The following models are supported (a short comparison sketch follows the list):
- XAI: grok-4-latest, grok-3-latest
- Perplexity: sonar
- Sambanova: Meta-Llama-3.3-70B-Instruct
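As a quick way to compare them, the sketch below sends the same question to each supported model. It assumes the standard OpenAI response schema (choices[0].message.content), which is not spelled out on this page.

import requests

MODELS = ["grok-4-latest", "grok-3-latest", "sonar", "Meta-Llama-3.3-70B-Instruct"]
headers = {"Authorization": "Bearer YOUR_TOKEN_HERE"}  # placeholder token

for model in MODELS:
    resp = requests.post(
        "http://knowledge.learnwitharobot.com/v1/chat/completions",
        headers=headers,
        json={
            "model": model,
            "messages": [{"role": "user", "content": "What is a Vector robot?"}],
            "max_tokens": 100,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Assumes the standard OpenAI chat completion response layout.
    print(model, "->", resp.json()["choices"][0]["message"]["content"])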
🚨 Error Codes
- 401: Invalid or expired authorization token
- 429: Rate limit exceeded
- 400: Invalid request format or unsupported model
- 500: Internal server error
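A hedged sketch of client-side handling for these codes: back off and retry on 429, fail fast on 400 and 401, and surface anything else. The retry policy here is a common convention, not something this service prescribes.

import time
import requests

def chat(payload, token, retries=3):
    url = "http://knowledge.learnwitharobot.com/v1/chat/completions"
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(retries):
        resp = requests.post(url, headers=headers, json=payload, timeout=30)
        if resp.status_code == 429:
            # Rate limited: wait with exponential backoff, then retry.
            time.sleep(2 ** attempt)
            continue
        if resp.status_code in (400, 401):
            # Bad request or bad/expired token: retrying will not help.
            raise RuntimeError(f"{resp.status_code}: {resp.text}")
        resp.raise_for_status()  # covers 500 and anything unexpected
        return resp.json()
    raise RuntimeError("Rate limit retries exhausted")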