Scraping modern websites isn't easy—dynamic content, anti-bot measures, and scaling challenges can stop you cold. SuperScraper API cuts through these obstacles by offering a REST-based scraping solution that works seamlessly with ScrapingBee, ScrapingAnt, and ScraperAPI interfaces. Whether you're extracting product data, monitoring competitors, or collecting research material, this tool handles the heavy lifting while you focus on results.
SuperScraper API is built on Apify's Actor platform and runs in an experimental Standby mode—meaning it's always ready to respond via HTTP REST API without traditional startup delays. You send a URL, and it returns fully rendered HTML, complete with options for screenshots, custom JavaScript execution, and anti-blocking measures.
Core capabilities include:
- Dynamic content rendering using headless browsers for JavaScript-heavy sites
- Anti-blocking protection through datacenter and residential proxy options, plus browser fingerprinting evasion
- Automatic scaling to handle anything from single requests to large batch operations
- Screenshot capture of full pages, viewports, or specific elements
- Custom extraction rules to pull structured data without writing scrapers from scratch
You'll need an Apify API token from your account integrations page. Apify offers a free tier to test things out.
Authenticate by passing your token in the Authorization header:
```bash
curl -X GET \
  'https://super-scraper-api.apify.actor/?url=https://apify.com/store&wait_for=.ActorStoreItem-title&screenshot=true&json_response=true' \
  --header 'Authorization: Bearer <YOUR_APIFY_API_TOKEN>'
```
Alternatively, append the token as a query parameter for quick browser testing:
```bash
curl -X GET 'https://super-scraper-api.apify.actor/?url=https://apify.com/store&wait_for=.ActorStoreItem-title&json_response=true&token=<YOUR_APIFY_API_TOKEN>'
```
Costs depend on computing resources, network usage, and storage consumed during scraping. Variables like target site complexity, proxy type (datacenter vs. residential), and API parameters all influence final pricing.
The best approach? Run a small test batch to see real-world costs. For example, a free-tier account running 30 sequential requests plus 50 batched requests will give you a baseline. Higher subscription tiers reduce per-request costs significantly.
When you're dealing with strict anti-scraping measures or need reliable data extraction at scale, investing in a solution that just works saves time and headaches. If you're looking for a proven alternative with straightforward pricing and robust proxy infrastructure, check out 👉 ScraperAPI for hassle-free web scraping that handles blocks and CAPTCHAs automatically. It's designed to remove the technical complexity so you can focus on using the data.
SuperScraper API maintains compatibility with ScrapingBee, ScrapingAnt, and ScraperAPI parameter sets. Pick whichever syntax you're familiar with—they all work.
The most popular options, following ScrapingBee's naming, include:
- `url` (required): Target webpage URL
- `render_js`: Use a headless browser for dynamic content (true/false, default true)
- `json_response`: Return detailed JSON with metadata (true/false, default false)
- `screenshot`: Capture a viewport image (true/false, default false)
- `screenshot_full_page`: Capture the entire page (true/false, default false)
- `screenshot_selector`: Capture a specific element by CSS selector
- `wait`: Wait time in milliseconds after page load
- `wait_for`: CSS selector to wait for before proceeding
- `wait_browser`: Browser event to wait for (`load`, `domcontentloaded`, `networkidle`)
- `block_resources`: Block images and CSS (true/false, default true)
- `premium_proxy`: Use residential proxies (true/false, default false)
- `country_code`: Target a specific country by two-letter ISO code (requires `premium_proxy=true` for non-US codes)
- `cookies`: Custom cookies in `name1=value1;name2=value2` format
- `js_scenario`: Execute custom JavaScript instructions after page load
- `extract_rules`: Stringified JSON with data extraction rules
- `timeout`: Maximum response time in milliseconds (default 140,000)
- `device`: Emulate `desktop` or `mobile` (default `desktop`)
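These options combine naturally in a single request URL. The sketch below only builds the query string locally — `buildRequestUrl` is a hypothetical helper, not part of the API, and the cookie and country values are illustrative:

```javascript
// Sketch: assembling a request URL with ScrapingBee-style parameters.
// buildRequestUrl is a local helper, not part of the API itself.
const BASE_URL = 'https://super-scraper-api.apify.actor/';

function buildRequestUrl(targetUrl, options = {}) {
    // URLSearchParams percent-encodes the target URL and parameter values
    const params = new URLSearchParams({ url: targetUrl, ...options });
    return `${BASE_URL}?${params.toString()}`;
}

const requestUrl = buildRequestUrl('https://apify.com/store', {
    render_js: 'true',
    wait_for: '.ActorStoreItem-title',
    premium_proxy: 'true',
    country_code: 'de',
    cookies: 'session=abc123;locale=en',
});
console.log(requestUrl);
```

Send the resulting URL with your `Authorization: Bearer` header as shown earlier.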
ScrapingAnt-style parameters offer similar functionality with slightly different naming:
- `browser`: Enable JavaScript rendering (equivalent to `render_js`)
- `proxy_type`: Choose `datacenter` or `residential` proxies
- `wait_for_selector`: CSS selector to wait for (equivalent to `wait_for`)
- `block_resource`: Block specific resource types (`image`, `media`, `stylesheet`, etc.)
- `proxy_country`: Target country by two-letter ISO code
- `js_snippet`: Base64-encoded JavaScript to execute on the page
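Because `js_snippet` must be Base64-encoded, the encoding step is worth sketching. The snippet content and target URL below are made-up examples:

```javascript
// Sketch: preparing ScrapingAnt-style parameters, with the JavaScript
// snippet Base64-encoded as js_snippet expects. Values are illustrative.
const snippet = "document.querySelector('#load-more').click();";

const params = {
    url: 'https://example.com',
    browser: 'true',                 // equivalent to render_js
    proxy_type: 'residential',
    proxy_country: 'us',
    wait_for_selector: '#load-more',
    js_snippet: Buffer.from(snippet, 'utf8').toString('base64'),
};

console.log(params.js_snippet);
```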
ScraperAPI's parameter set is minimal but effective:
- `render`: Enable JavaScript rendering (equivalent to `render_js`)
- `wait_for_selector`: CSS selector to wait for
- `premium`: Use residential proxies
- `ultra_premium`: Same as `premium`
- `keep_headers`: Forward all request headers to the target (the `Authorization` header is stripped first)
- `device_type`: Emulate `desktop` or `mobile`
- `binary_target`: Treat the target as a file download (true/false, works only with `render=false`)
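To make the equivalences concrete, here is the same logical request expressed in each of the three dialects — a sketch in which the target URL and selector are placeholders:

```javascript
// Sketch: one logical request written in each compatible parameter dialect.
const target = 'https://example.com';
const selector = '#content';

const scrapingBeeStyle = { url: target, render_js: 'true', wait_for: selector, premium_proxy: 'true' };
const scrapingAntStyle = { url: target, browser: 'true', wait_for_selector: selector, proxy_type: 'residential' };
const scraperApiStyle  = { url: target, render: 'true', wait_for_selector: selector, premium: 'true' };

console.log(scrapingBeeStyle, scrapingAntStyle, scraperApiStyle);
```

Whichever dialect you pick, keep it consistent within a request rather than mixing naming schemes.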
The `extract_rules` parameter lets you define what data to pull using CSS selectors and output formats. No need to parse HTML manually.
Shortened syntax:
```json
{
    "title": "h1",
    "link": "a@href"
}
```
Full syntax with options:
```json
{
    "allLinks": {
        "selector": "a",
        "type": "list",
        "output": {
            "text": "a",
            "url": "a@href"
        }
    }
}
```
Real example extracting blog links:
```javascript
import axios from 'axios';

const extractRules = {
    title: 'h1',
    allLinks: {
        selector: 'a',
        type: 'list',
        output: {
            title: 'a',
            link: 'a@href',
        },
    },
};

const resp = await axios.get('https://super-scraper-api.apify.actor/', {
    params: {
        url: 'https://blog.apify.com/',
        extract_rules: JSON.stringify(extractRules),
    },
    headers: {
        Authorization: 'Bearer <YOUR_APIFY_API_TOKEN>',
    },
});
console.log(resp.data);
```
Output:
```json
{
    "title": "Apify Blog",
    "allLinks": [
        {
            "title": "Data for generative AI & LLM",
            "link": "https://apify.com/data-for-generative-ai"
        },
        {
            "title": "Product matching AI",
            "link": "https://apify.com/product-matching-ai"
        }
    ]
}
```
Use the `js_scenario` parameter to automate interactions like clicking buttons, filling forms, or scrolling. Instructions execute sequentially after page load.
Example—clicking a button:
```json
{
    "instructions": [
        { "wait_for": "#cookie-banner" },
        { "click": "#accept-cookies" },
        { "wait": 2000 },
        { "scroll_y": 500 }
    ]
}
```
Set `json_response=true` to receive an execution report, including results of `evaluate` instructions in the `evaluate_results` field.
Supported instructions:
- `wait`: Pause for the given milliseconds (`{"wait": 3000}`)
- `wait_for`: Wait for a CSS selector (`{"wait_for": "#element"}`)
- `click`: Click an element (`{"click": "#button"}`)
- `wait_for_and_click`: Combined wait and click (`{"wait_for_and_click": "#button"}`)
- `scroll_x` / `scroll_y`: Scroll by pixels (`{"scroll_y": 1000}`)
- `fill`: Fill an input field (`{"fill": ["#search", "query text"]}`)
- `evaluate`: Run custom JavaScript (`{"evaluate": "document.title"}`)
By default, a failed instruction halts execution. Set `"strict": false` in the scenario to continue past errors.
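Putting this together, a scenario with `strict: false` can be built and stringified before sending. This is a sketch — the selectors and target URL are made up, and only the payload construction is shown, not the HTTP request itself:

```javascript
// Sketch: building a js_scenario payload that tolerates failed instructions.
// Selectors and the target URL are illustrative.
const jsScenario = {
    strict: false, // continue past instructions that fail
    instructions: [
        { wait_for: '#cookie-banner' },
        { click: '#accept-cookies' },
        { scroll_y: 1000 },
        { evaluate: 'document.title' },
    ],
};

// The scenario travels as a stringified query parameter alongside the others.
const params = {
    url: 'https://example.com',
    json_response: 'true', // so evaluate_results appears in the response
    js_scenario: JSON.stringify(jsScenario),
};

console.log(params.js_scenario);
```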
SuperScraper API delivers flexibility by supporting multiple API standards and powerful customization options. Whether you're scraping static pages or navigating complex JavaScript-heavy sites, it handles the infrastructure so you can focus on extracting the data you need. For teams prioritizing simplicity and reliability over setup complexity, exploring purpose-built solutions like 👉 ScraperAPI ensures you're spending time on analysis, not debugging proxy rotations.