
Rate Limits

API rate limits are based on your subscription plan to ensure fair usage and system stability.

Handling Rate Limit Errors

When you exceed your rate limit, you’ll receive a 429 Too Many Requests response:
{
  "code": 429,
  "msg": "Rate limit reached for requests.",
  "status": "ERROR",
  "success": false
}
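In client code, it helps to recognize this error body before deciding to retry. A minimal sketch (the field names mirror the error body above; `isRateLimited` is an illustrative helper, not part of any Bitrace SDK):

```javascript
// Illustrative helper: detect the rate-limit error body shown above
function isRateLimited(body) {
  return body.code === 429 && body.success === false;
}

const errorBody = {
  code: 429,
  msg: 'Rate limit reached for requests.',
  status: 'ERROR',
  success: false
};

console.log(isRateLimited(errorBody)); // true
```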

Rate Limit Tiers

Your rate limit allowance depends on your subscription plan:
| Plan       | Requests/Minute | Requests/Day |
|------------|-----------------|--------------|
| Free       | N/A             | N/A          |
| Basic      | 100             | 100,000      |
| Enterprise | Custom          | Custom       |
Check your dashboard at aml.bitrace.io for your current rate limits and usage statistics.
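To stay under a per-minute quota proactively (for example the Basic plan's 100 requests/minute), you can throttle on the client side before sending requests. A minimal sliding-window sketch (the class and its parameters are illustrative, not part of any Bitrace SDK):

```javascript
// Illustrative client-side limiter: tracks request timestamps in a sliding window
class SlidingWindowLimiter {
  constructor(limit, windowMs) {
    this.limit = limit;       // max requests per window
    this.windowMs = windowMs; // window length in milliseconds
    this.timestamps = [];
  }

  // Returns true if a request may be sent now, false if the quota is used up
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have aged out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}

// Example: Basic plan quota of 100 requests/minute (assumed from the table above)
const limiter = new SlidingWindowLimiter(100, 60_000);
```

Call `limiter.tryAcquire()` before each request and queue or delay the call when it returns false.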

Best Practices

1. Implement Exponential Backoff

Use exponential backoff when handling rate limit errors to avoid overwhelming the API:
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        // Honor the server's Retry-After header when present; otherwise back off exponentially
        const retryAfter = Number(response.headers.get('Retry-After'));
        const waitTime = retryAfter > 0 ? retryAfter * 1000 : Math.pow(2, i) * 1000;

        console.log(`Rate limited. Waiting ${waitTime}ms...`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      return response;
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      // Wait before retrying with exponential backoff
      await new Promise(resolve => setTimeout(resolve, Math.pow(2, i) * 1000));
    }
  }
  throw new Error('Rate limit retries exhausted');
}

// Usage
const response = await makeRequestWithRetry(
  'https://api.bitrace.io/api/v1/tracker/kya/entities?address=0x123&network=eth',
  {
    headers: {
      'X-Access-Key': process.env.BITRACE_API_KEY
    }
  }
);
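A common refinement on plain exponential backoff is adding jitter, so that many clients rate-limited at the same moment don't all retry in lockstep. A sketch of the "full jitter" variant (function name and defaults are illustrative):

```javascript
// Full-jitter backoff: pick a random delay up to an exponentially growing cap
function backoffDelay(attempt, baseMs = 1000, maxMs = 30_000) {
  const cap = Math.min(maxMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * cap);
}

// e.g. wait backoffDelay(i) milliseconds instead of Math.pow(2, i) * 1000
```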

2. Use Batching Efficiently

Batch endpoints consume multiple requests' worth of quota:
  • 1 batch request with 10 addresses = 10 API calls
  • Plan your batching strategy based on your rate limit
Recommended approach:
// Instead of multiple individual requests
for (const address of addresses) {
  await checkEntity(address); // 1 call per address
}

// Use batch endpoints
await checkEntitiesInBatch(addresses); // 1 call total, billed as N requests
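If your address list is large, splitting it into fixed-size chunks lets you align each batch with your remaining quota. A small helper sketch (`chunk` is an illustrative utility, not a Bitrace API):

```javascript
// Illustrative helper: split a list into fixed-size chunks for batch requests
function chunk(items, size) {
  const out = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

chunk(['a', 'b', 'c', 'd', 'e'], 2); // [['a','b'], ['c','d'], ['e']]
```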

3. Cache Responses

Implement caching to reduce redundant API calls:
const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300 }); // Cache for 5 minutes

async function getEntityWithCache(address, network) {
  const cacheKey = `${address}-${network}`;

  // Check cache first
  const cached = cache.get(cacheKey);
  if (cached) {
    console.log('Cache hit!');
    return cached;
  }

  // Make API request
  const result = await getEntity(address, network);

  // Store in cache
  cache.set(cacheKey, result);

  return result;
}
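If you'd rather avoid the node-cache dependency, the same idea works with a plain Map and timestamps. A dependency-free sketch (class name and API are illustrative):

```javascript
// Illustrative TTL cache built on a plain Map; entries expire after ttlMs
class TtlCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }

  get(key, now = Date.now()) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now - entry.at > this.ttlMs) {
      this.store.delete(key); // expired: evict and miss
      return undefined;
    }
    return entry.value;
  }

  set(key, value, now = Date.now()) {
    this.store.set(key, { value, at: now });
  }
}

const cache = new TtlCache(300_000); // 5 minutes, matching the example above
```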

Rate Limit by Endpoint

Different endpoints may have different rate limits:
| Endpoint Type          | Rate Limit                      |
|------------------------|---------------------------------|
| Single address queries | Standard rate                   |
| Batch queries          | Billed by number of addresses   |
| Transaction screening  | Standard rate                   |
| Custom risk scores     | Higher rate (async processing)  |

Increasing Your Limits

Enterprise Options

For high-volume needs, contact us for:
  • Custom rate limits
  • SLA guarantees
  • Dedicated support
  • Priority processing

Optimization Tips

Before upgrading, try these optimizations:
  1. Implement caching - Reduce redundant calls
  2. Use batching - More efficient for multiple queries
  3. Optimize frequency - Don’t poll more than necessary
  4. Use webhooks - For real-time updates (if available)

Troubleshooting

Issue: Frequently Hitting Rate Limits

Solution: Implement exponential backoff and caching

Issue: 429 Errors Persist

Solution: Check that you’re respecting the Retry-After header

Issue: Rate Limit Too Low

Solution: Upgrade your plan or optimize your usage

Issue: Batch Requests Consuming Too Much Quota

Solution: Batch requests are billed per address. Consider batching only when necessary.

See Also