# Rate Limits

Rate limits protect the API from abuse and ensure fair usage for all users. Limits are applied per API key.

## Rate Limit Tiers

All `/api/*` endpoints require authentication. Rate limits vary by key type:

| Tier | Per Minute | Per Hour | Per Day | Use Case |
| --- | --- | --- | --- | --- |
| Public Key (`pk_`) | 100 | 1,000 | 10,000 | Client-side apps |
| Secret Key (`sk_`) | 500 | 5,000 | 50,000 | Server-side apps |

## Rate Limit Headers

Every response includes rate limit information:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1707480060
```

| Header | Description |
| --- | --- |
| `X-RateLimit-Limit` | Maximum requests allowed in the current window |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp (in seconds) when the limit resets |
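These headers can be folded into a small state object in one place. A minimal sketch, assuming the standard Fetch API `Headers` interface; `parseRateLimit` is an illustrative name, not part of the API:

```javascript
// Read rate limit state from a fetch Response's headers.
// Header names match the table above; missing headers fall back to 0.
function parseRateLimit(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit') ?? '0', 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining') ?? '0', 10),
    // X-RateLimit-Reset is a Unix timestamp in seconds
    resetAt: new Date(parseInt(headers.get('X-RateLimit-Reset') ?? '0', 10) * 1000),
  };
}
```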

## When You Hit the Limit

When rate limited, you'll receive a 429 response:

```json
{
  "code": 429,
  "status": "error",
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Retry after 60 seconds.",
    "tier": "public_key"
  }
}
```

Additional headers on 429 responses:

| Header | Description |
| --- | --- |
| `Retry-After` | Seconds to wait before retrying |

## Handling Rate Limits

### 1. Monitor Headers

Track remaining requests and slow down before hitting limits:

```javascript
// Resolves after ms milliseconds.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

class RateLimitedClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.remaining = Infinity;
    this.resetAt = 0;
  }

  async fetch(path) {
    // Wait if we're rate limited
    if (this.remaining <= 0) {
      const waitMs = Math.max(0, this.resetAt - Date.now());
      if (waitMs > 0) {
        console.log(`Rate limited. Waiting ${waitMs}ms`);
        await sleep(waitMs);
      }
    }

    const res = await fetch(`https://api.skakio.com${path}`, {
      headers: { 'Authorization': `Bearer ${this.apiKey}` }
    });

    // Update rate limit state from the response headers
    this.remaining = parseInt(res.headers.get('X-RateLimit-Remaining') || '100', 10);
    this.resetAt = parseInt(res.headers.get('X-RateLimit-Reset') || '0', 10) * 1000;

    if (res.status === 429) {
      const retryAfter = parseInt(res.headers.get('Retry-After') || '60', 10);
      await sleep(retryAfter * 1000);
      return this.fetch(path); // Retry
    }

    return res.json();
  }
}
```

### 2. Implement Backoff

Use exponential backoff for 429 responses:

```javascript
async function fetchWithBackoff(url, options, maxRetries = 5) {
  let delay = 1000; // Start with 1 second

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const res = await fetch(url, options);

    if (res.status === 429) {
      // Prefer the server-provided Retry-After; fall back to our own delay
      const retryAfter = res.headers.get('Retry-After');
      const waitTime = retryAfter ? parseInt(retryAfter, 10) * 1000 : delay;

      console.log(`Rate limited (attempt ${attempt + 1}). Waiting ${waitTime}ms`);
      await sleep(waitTime);

      delay *= 2; // Double the delay for next attempt
      continue;
    }

    return res;
  }

  throw new Error('Max retries exceeded');
}
```

### 3. Queue Requests

For batch operations, queue requests with delays:

```javascript
class RequestQueue {
  constructor(requestsPerSecond = 1) {
    this.interval = 1000 / requestsPerSecond;
    this.queue = [];
    this.processing = false;
  }

  async add(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing) return;
    this.processing = true;

    while (this.queue.length > 0) {
      const { fn, resolve, reject } = this.queue.shift();

      try {
        const result = await fn();
        resolve(result);
      } catch (err) {
        reject(err);
      }

      // Space out requests to stay under the limit
      await sleep(this.interval);
    }

    this.processing = false;
  }
}

// Usage: 2 requests per second
const queue = new RequestQueue(2);

const results = await Promise.all(
  storeIds.map(id =>
    queue.add(() => api.fetch(`/api/store/${id}`))
  )
);
```

## Best Practices

### Cache Responses

Reduce API calls by caching responses:

```javascript
const cache = new Map();
const CACHE_TTL = 60 * 1000; // 1 minute

async function cachedFetch(path) {
  const cached = cache.get(path);
  if (cached && Date.now() < cached.expiresAt) {
    return cached.data;
  }

  const data = await api.fetch(path);
  cache.set(path, { data, expiresAt: Date.now() + CACHE_TTL });

  return data;
}
```

### Batch Where Possible

Use expand parameters to reduce requests:

```javascript
// Instead of two requests:
const listing = await api.fetch('/api/listing/123');
const publications = await api.fetch('/api/listing/123/publications');

// Do this:
const listing = await api.fetch('/api/listing/123?expand=publication');
// listing.publications is included
```

### Use Pagination Wisely

Request only what you need:

```javascript
// Don't over-fetch
const page1 = await api.fetch('/api/store/123/listings?limit=10');

// Paginate lazily
async function* paginateListings(storeId) {
  let page = 1;
  let hasMore = true;

  while (hasMore) {
    const res = await api.fetch(
      `/api/store/${storeId}/listings?page=${page}&limit=20`
    );
    yield* res.data;

    hasMore = page < res.pagination.totalPages;
    page++;
  }
}

// Usage
for await (const listing of paginateListings('store_123')) {
  console.log(listing.name);
}
```

## Increasing Limits

If you need higher limits:

1. **Upgrade to a Secret Key** - 5x the rate limit of public keys
2. **Contact Support** - Request custom limits for enterprise use
3. **Optimize** - Cache, batch, and reduce unnecessary calls

## Pagination Limits

In addition to rate limits, there are data limits:

| Limit | Value | Description |
| --- | --- | --- |
| Max items per request | 50 | Maximum `limit` parameter |
| Default items per request | 20 | Applied when `limit` is not specified |
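Because the server caps `limit` at 50 and defaults to 20, a client can clamp page sizes before sending a request. A minimal sketch; `pageSize` is a hypothetical helper, and the 50/20 bounds come from the table above:

```javascript
// Clamp a requested page size to the documented bounds:
// default 20 when unspecified, maximum 50, minimum 1.
function pageSize(requested) {
  if (requested == null) return 20; // server default
  return Math.min(Math.max(1, requested), 50);
}
```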