To ensure system stability and fair resource distribution, ZenCrawl applies Concurrency Limits based on your subscription plan.
Unlike traditional APIs that limit requests per minute (RPM), we limit requests in parallel. This is perfect for web scraping, where response times vary based on the target website's speed.
## The Core Concept: Concurrency
Think of Concurrency as the number of "lanes" on a highway.
- Credits determine how far you can drive (Total Volume).
- Concurrency determines how many cars you can drive at once (Speed/Throughput).
Example: If your plan allows 10 Concurrent Requests, you can have 10 separate scraping jobs in flight at the same moment. As soon as one finishes, a slot opens up for the next one.
## Plan Limits
The following limits apply to the Universal Scraper API (Fetch, Extract, Target).
| Plan | Max Concurrent Requests | Concurrent Browsers* | Best For |
|---|---|---|---|
| Free | 1 | 1 | Testing connectivity. |
| Starter | 10 | 2 | Sequential or small-batch scraping. |
| Growth | 50 | 10 | Parallel scraping for business workflows. |
| Scale | 200 | 50 | High-velocity data extraction. |
| Enterprise | Custom | Custom | Massive scale & dedicated clusters. |
* Concurrent Browsers: Applies specifically when using Headless Browser mode or JS Rendering. These consume significantly more resources, so they have a tighter concurrency limit within your total allowance.
## What happens if I hit the limit?
If you exceed your concurrency limit (e.g., trying to start an 11th request when your limit is 10), the API will immediately return a `429 Too Many Requests` error.
## Managing Throughput
Since our limits are based on concurrency, your effective speed (RPM) depends on how fast the target websites respond.
Formula:

```
Throughput (RPM) = (60 / Average Request Time in seconds) * Concurrency Limit
```
Scenario: Scraping a fast site (2s latency)
- Plan: Starter (10 Concurrency)
- Calculation:
  (60 / 2) * 10 = 300 Requests / Minute
Scenario: Scraping a slow site (10s latency)
- Plan: Starter (10 Concurrency)
- Calculation:
  (60 / 10) * 10 = 60 Requests / Minute
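The two scenarios above can be reproduced with a small helper (illustrative only, not part of any ZenCrawl SDK):

```javascript
// Estimated requests per minute, given the average request latency
// (in seconds) and your plan's concurrency limit.
function estimateRpm(avgRequestSeconds, concurrencyLimit) {
  return (60 / avgRequestSeconds) * concurrencyLimit;
}

estimateRpm(2, 10);  // fast site, Starter plan -> 300 RPM
estimateRpm(10, 10); // slow site, Starter plan -> 60 RPM
```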
## Best Practices &amp; Handling 429s
### 1. Control Your Concurrency
Do not simply flood the API. Use a queue system to control how many requests you send at once.
### Node.js Example (p-limit)

```js
import pLimit from 'p-limit';

// If you are on the Starter Plan, set this to 10.
const limit = pLimit(10);

const urls = [...]; // List of 1000 URLs

// Wrap each request so at most 10 run in parallel.
const tasks = urls.map(url => {
  return limit(() => fetch('https://api.zencrawl.com/v1/fetch', { ... }));
});

await Promise.all(tasks);
```
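If a request does slip through and come back `429`, retry it with exponential backoff rather than resubmitting immediately. A minimal sketch (the helper name, delay schedule, and attempt count are our own illustrative choices, not ZenCrawl requirements):

```javascript
// Retry a request on 429 with exponential backoff.
// `doRequest` is any async function returning an object with a numeric
// `status` field (a real fetch Response works).
async function withBackoff(doRequest, maxAttempts = 5, baseDelayMs = 1000) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await doRequest();
    if (res.status !== 429) return res;
    // Wait baseDelayMs, then 2x, 4x, ... before the next attempt.
    await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt));
  }
  throw new Error(`Still rate-limited after ${maxAttempts} attempts`);
}
```

Combine this with the p-limit queue above so that retries still count against your concurrency budget instead of stacking on top of it.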