# Rate Limits
Rate limits protect the API from abuse and ensure fair usage for all users. Limits are applied per API key (or per IP for unauthenticated requests).
## Rate Limit Tiers
| Tier | Per Minute | Per Day | Burst | Use Case |
|---|---|---|---|---|
| Public Key (pk_) | 100 | 10,000 | 150 | Client-side apps |
| Secret Key (sk_) | 1,000 | 100,000 | 1,500 | Server-side apps |
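The tier table can be mirrored in client code as data so throttling logic never hardcodes magic numbers. A minimal sketch (the `TIERS` map and `minDelayMs` helper are illustrative, not part of any SDK), deriving the slowest-safe pacing interval from the per-minute limit:

```javascript
// The published tiers as data; values mirror the table above.
const TIERS = {
  public_key: { perMinute: 100, perDay: 10000, burst: 150 },
  secret_key: { perMinute: 1000, perDay: 100000, burst: 1500 },
};

// Smallest delay between requests that stays under the per-minute limit.
function minDelayMs(tier) {
  const limits = TIERS[tier];
  if (!limits) throw new Error(`Unknown tier: ${tier}`);
  return Math.ceil(60000 / limits.perMinute);
}
```

Spacing requests at least `minDelayMs(tier)` milliseconds apart keeps a steady stream under the per-minute cap; short bursts up to the Burst column are still permitted.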
## Auth Endpoint Limits

Authentication endpoints have stricter limits to prevent brute-force attacks:

| Endpoint | Limit | Burst |
|---|---|---|
| POST /auth/token | 10 req/s | 20 |
| POST /auth/validate | 10 req/s | 20 |
| POST /auth/refresh | 10 req/s | 20 |
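Because these endpoints are tightly limited, avoid calling POST /auth/token on every API request. A minimal sketch of client-side token caching; the response field names `access_token` and `expires_in` are assumptions here, so adjust them to the actual token payload:

```javascript
// Cache the token so /auth/token is only hit when the cached token is
// missing or about to expire. Field names are assumptions, not confirmed.
let cachedToken = null;
let tokenExpiresAt = 0;

async function getToken(fetchToken, now = Date.now()) {
  // Refresh 30s early to avoid sending a token that expires mid-request.
  if (cachedToken && now < tokenExpiresAt - 30000) return cachedToken;
  const body = await fetchToken(); // e.g. POST /auth/token
  cachedToken = body.access_token;
  tokenExpiresAt = now + body.expires_in * 1000;
  return cachedToken;
}
```

With this in place, a burst of API calls costs a single auth request instead of one per call.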
## Rate Limit Headers

Every response includes rate limit information:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1707480060
X-RateLimit-Tier: public_key
```
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp (seconds) when the limit resets |
| X-RateLimit-Tier | Current tier (public_key or secret_key) |
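Before making throttling decisions it helps to read these headers into a plain object. A hypothetical helper (`readRateLimit` is not part of any SDK) that also converts X-RateLimit-Reset from seconds to a JavaScript millisecond timestamp:

```javascript
// Pull rate-limit state out of a response's headers.
// Works with any object exposing .get(name), e.g. a fetch Response's headers.
// Returns numbers (resetAt in ms), or null when a header is absent.
function readRateLimit(headers) {
  const num = (name) => {
    const v = headers.get(name);
    return v == null ? null : parseInt(v, 10);
  };
  const reset = num('X-RateLimit-Reset');
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    resetAt: reset == null ? null : reset * 1000, // convert seconds to ms
    tier: headers.get('X-RateLimit-Tier') ?? null,
  };
}
```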
## When You Hit the Limit

When rate limited, you’ll receive a 429 response:

```json
{
  "code": 429,
  "status": "error",
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "tier": "public_key"
  }
}
```
Additional headers on 429 responses:
| Header | Description |
|---|---|
| Retry-After | Seconds to wait before retrying |
## Handling Rate Limits

### 1. Monitor Headers

Track remaining requests and slow down before hitting limits:

```javascript
// Small helper used by the examples in this section.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

class RateLimitedClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.remaining = Infinity;
    this.resetAt = 0;
  }

  async fetch(path) {
    // Wait if we're rate limited
    if (this.remaining <= 0) {
      const waitMs = Math.max(0, this.resetAt - Date.now());
      if (waitMs > 0) {
        console.log(`Rate limited. Waiting ${waitMs}ms`);
        await sleep(waitMs);
      }
    }

    const res = await fetch(`https://api.skakio.com${path}`, {
      headers: { 'Authorization': `Bearer ${this.apiKey}` }
    });

    // Update rate limit state from the response headers
    this.remaining = parseInt(res.headers.get('X-RateLimit-Remaining') || '100', 10);
    this.resetAt = parseInt(res.headers.get('X-RateLimit-Reset') || '0', 10) * 1000;

    if (res.status === 429) {
      const retryAfter = parseInt(res.headers.get('Retry-After') || '60', 10);
      await sleep(retryAfter * 1000);
      return this.fetch(path); // Retry
    }

    return res.json();
  }
}
```
### 2. Implement Backoff

Use exponential backoff for 429 responses:

```javascript
// Assumes a sleep helper: const sleep = ms => new Promise(r => setTimeout(r, ms));
async function fetchWithBackoff(url, options, maxRetries = 5) {
  let delay = 1000; // Start with 1 second
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, options);
    if (res.status === 429) {
      // Prefer the server's Retry-After hint over our own delay
      const retryAfter = res.headers.get('Retry-After');
      const waitTime = retryAfter ? parseInt(retryAfter, 10) * 1000 : delay;
      console.log(`Rate limited (attempt ${attempt + 1}). Waiting ${waitTime}ms`);
      await sleep(waitTime);
      delay *= 2; // Double the delay for next attempt
      continue;
    }
    return res;
  }
  throw new Error('Max retries exceeded');
}
```
### 3. Queue Requests

For batch operations, queue requests with delays:

```javascript
// Assumes a sleep helper: const sleep = ms => new Promise(r => setTimeout(r, ms));
class RequestQueue {
  constructor(requestsPerSecond = 1) {
    this.interval = 1000 / requestsPerSecond;
    this.queue = [];
    this.processing = false;
  }

  async add(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;
    while (this.queue.length > 0) {
      const { fn, resolve, reject } = this.queue.shift();
      try {
        const result = await fn();
        resolve(result);
      } catch (err) {
        reject(err);
      }
      await sleep(this.interval);
    }
    this.processing = false;
  }
}

// Usage: 2 requests per second
const queue = new RequestQueue(2);
const results = await Promise.all(
  listingIds.map(id =>
    queue.add(() => api.fetch(`/api/listing/${id}`))
  )
);
```
## Best Practices

### Cache Responses

Reduce API calls by caching responses:

```javascript
const cache = new Map();
const CACHE_TTL = 60 * 1000; // 1 minute

async function cachedFetch(path) {
  const cached = cache.get(path);
  if (cached && Date.now() < cached.expiresAt) {
    return cached.data;
  }
  const data = await api.fetch(path);
  cache.set(path, { data, expiresAt: Date.now() + CACHE_TTL });
  return data;
}
```
### Batch Where Possible

Use expand parameters to reduce requests:

```javascript
// Instead of:
const listing = await api.fetch('/api/listing/123');
const publications = await api.fetch('/api/listing/123/publications');

// Do this:
const expanded = await api.fetch('/api/listing/123?expand=publication');
// expanded.publications is included
```
### Use Pagination Wisely

Request only what you need:

```javascript
// Don't over-fetch
const page1 = await api.fetch('/api/store-listings?limit=10');

// Paginate lazily
async function* paginateListings() {
  let page = 1;
  let hasMore = true;
  while (hasMore) {
    const res = await api.fetch(
      `/api/store-listings?page=${page}&limit=20`
    );
    yield* res.data;
    hasMore = page < res.meta.pagination.total_pages;
    page++;
  }
}

// Usage
for await (const listing of paginateListings()) {
  console.log(listing.name);
}
```
## Increasing Limits

If you need higher limits:

- Upgrade to a secret key: 10x the rate limits of public keys
- Contact support: request custom limits for enterprise use
- Optimize: cache, batch, and eliminate unnecessary calls
## Pagination Limits
In addition to rate limits, there are data limits:
| Limit | Value | Description |
|---|---|---|
| Max items per request | 50 | Maximum value of the limit parameter |
| Default items per request | 20 | Used when limit is not specified |
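When the limit value comes from user input, it can be clamped to the documented bounds before building the query string. A small sketch (`clampLimit` is illustrative, not part of the API):

```javascript
// Keep the limit query parameter inside the documented bounds:
// default 20 when unspecified, maximum 50.
function clampLimit(limit) {
  if (limit == null) return 20; // default when not specified
  return Math.min(Math.max(1, Math.floor(limit)), 50);
}
```

Requests that exceed the maximum are a common source of surprises, so clamping client-side makes the behavior explicit.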