If you've ever tried to scrape data at scale or automate browser tasks, you know the drill. Your IP gets flagged, requests get blocked, and suddenly you're stuck refreshing the same error page. That's where proxy services step in, and if you're working with PrivateProxy.me, knowing how to actually implement it across different environments can save you hours of trial and error.
This guide walks through practical proxy integration examples across multiple programming languages and frameworks. Whether you're building a data pipeline in Python, automating tests with Selenium, or running headless browsers with Puppeteer, you'll find code snippets that actually hold up in production environments.
Before diving into code examples, understand that PrivateProxy supports two authentication methods: credential-based (username and password) and IP-based authorization. The credential approach works anywhere but requires passing authentication details with each request. IP authorization is cleaner—once you whitelist your server's IP in the dashboard, requests authenticate automatically without credentials.
For most development scenarios, especially when working with rotating residential proxies or backconnect configurations, credential authentication offers more flexibility. You can switch between different proxy endpoints without constantly updating IP whitelists.
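For a quick sense of what the two modes look like in code, here's a minimal Python sketch using the requests library; the test endpoint, proxy address, and credentials are the sample values used throughout this guide, and the IP-authorized variant assumes your machine's IP is already whitelisted in the dashboard:

```python
import requests

TEST_URL = "http://api.privateproxy.me:10738"  # returns the IP your request arrives from

# Credential-based: the username and password travel with every request
credential_proxies = {
    "http": "http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432",
    "https": "http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432",
}
print(requests.get(TEST_URL, proxies=credential_proxies, timeout=30).text)

# IP-authorized: no credentials in the URL, but the calling machine's IP
# must be whitelisted in the PrivateProxy dashboard first
ip_auth_proxies = {
    "http": "http://2.57.20.194:5432",
    "https": "http://2.57.20.194:5432",
}
print(requests.get(TEST_URL, proxies=ip_auth_proxies, timeout=30).text)
```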
PHP developers often reach for cURL when making HTTP requests through proxies. The implementation is straightforward—you initialize a cURL handle, set the target URL, specify the proxy server details, and pass authentication credentials through CURLOPT_PROXYUSERPWD. Here's a working example:
```php
<?php
$ch = curl_init();

// Target: PrivateProxy's test endpoint, which returns the IP your request arrives from
curl_setopt($ch, CURLOPT_URL, "http://api.privateproxy.me:10738");

// Proxy address and credential-based authentication
curl_setopt($ch, CURLOPT_PROXY, "2.57.20.194:5432");
curl_setopt($ch, CURLOPT_PROXYUSERPWD, "pvtyproxies:ajd89akjdAdk");

curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);

$curl_scraped_page = curl_exec($ch);
curl_close($ch);
echo $curl_scraped_page;
```
This pattern works reliably for scraping tasks where you need to rotate IPs or maintain anonymity. The test endpoint returns your current proxy IP, confirming the connection routes through PrivateProxy's infrastructure.
When you're building scrapers that need to handle multiple concurrent requests, 👉 choosing the right proxy configuration for distributed scraping operations becomes critical for maintaining both speed and reliability.
JavaScript developers working with Node.js face a slightly more complex setup, especially when dealing with HTTPS requests that require tunnel connections. The native http module handles proxy authentication through the CONNECT method, establishing a tunnel before forwarding the actual request.
The key difference between HTTP and HTTPS proxy requests lies in how the connection is established. For HTTPS, you first create a CONNECT tunnel to the proxy, then pipe the actual HTTPS request through that authenticated socket. This two-step process ensures end-to-end encryption while still routing through the proxy server.
```javascript
const http = require('http');
const https = require('https');

function getWithProxy(url, proxy) {
  const parsedUrl = new URL(url);
  const proxy_ip = proxy['ip'];
  const proxy_port = proxy['port'];
  const proxy_auth = 'Basic ' + Buffer.from(proxy['login'] + ':' + proxy['password']).toString('base64');

  // CONNECT needs an explicit target host:port, so fall back to the protocol's default port
  let host = parsedUrl.hostname;
  if (parsedUrl.port !== '') {
    host += ':' + parsedUrl.port;
  } else {
    host += parsedUrl.protocol === 'http:' ? ':80' : ':443';
  }

  return new Promise((resolve, reject) => {
    // Step 1: open an authenticated CONNECT tunnel to the proxy
    http.request({
      port: proxy_port,
      host: proxy_ip,
      method: 'CONNECT',
      path: host,
      headers: {
        'Host': host,
        'Proxy-Authorization': proxy_auth
      }
    }).on('connect', (res, socket, head) => {
      if (res.statusCode !== 200) {
        reject(new Error(`Proxy connection failed: ${res.statusCode}`));
        return;
      }

      // Step 2: send the real request through the tunneled socket
      const real_opts = parsedUrl.protocol === 'http:'
        ? { createConnection: () => socket }
        : { socket: socket };
      const t = parsedUrl.protocol === 'http:' ? http : https;

      const real_req = t.request(url, real_opts, (res) => {
        res.setEncoding('utf-8');
        let rawData = [];
        res.on('data', (chunk) => rawData.push(chunk));
        res.on('end', () => resolve(rawData.join('')));
      });
      real_req.on('error', (e) => reject(e));
      real_req.end();
    }).on('error', (e) => reject(e)).end();
  });
}

// Example call:
// getWithProxy('http://api.privateproxy.me:10738', {
//   ip: '2.57.20.194', port: 5432, login: 'pvtyproxies', password: 'ajd89akjdAdk'
// }).then(console.log).catch(console.error);
```
Selenium automation presents unique challenges because you're controlling actual browsers rather than making direct HTTP requests. Firefox and Chrome handle proxy configuration differently, and credential authentication requires different approaches for each browser.
Firefox through Selenium supports credential-based proxies via the selenium-wire library. This wrapper intercepts network traffic and injects proxy authentication headers transparently:
```python
from seleniumwire import webdriver
from selenium.webdriver.firefox.options import Options

# Route all browser traffic through the authenticated proxy
wire_options = {
    'proxy': {
        'http': 'http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432',
        'https': 'http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432',
        'no_proxy': 'localhost,127.0.0.1'
    }
}

options = Options()
options.headless = True  # newer Selenium releases drop this attribute; use options.add_argument("-headless") there

browser = webdriver.Firefox(
    options=options,
    seleniumwire_options=wire_options
)
```
For IP-authorized connections, Firefox's native profile preferences work without additional dependencies. You set proxy preferences directly on the Firefox profile, keeping the setup cleaner for production deployments.
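A minimal sketch of that native-preference setup, assuming your server's IP is already whitelisted so no credentials are needed (the proxy address and test endpoint are the same sample values used above):

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
# Manual proxy configuration; no credentials because the connecting IP
# is already authorized in the PrivateProxy dashboard
options.set_preference("network.proxy.type", 1)
options.set_preference("network.proxy.http", "2.57.20.194")
options.set_preference("network.proxy.http_port", 5432)
options.set_preference("network.proxy.ssl", "2.57.20.194")
options.set_preference("network.proxy.ssl_port", 5432)

browser = webdriver.Firefox(options=options)
browser.get("http://api.privateproxy.me:10738")
print(browser.page_source)
browser.quit()
```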
Chrome takes a different route—it doesn't natively support proxy authentication in automated mode. The workaround involves generating a temporary Chrome extension that handles authentication programmatically. While this adds complexity, it's the only reliable method for credential-based proxies with headless Chrome.
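One common way to implement that workaround, sketched below rather than taken from PrivateProxy's docs, is to write a small Manifest V2 helper extension to disk, zip it, and load it with add_extension. Note that Chrome is phasing out Manifest V2, and the host, port, and credentials below are the same sample values used earlier:

```python
import zipfile
from selenium import webdriver

PROXY_HOST, PROXY_PORT = "2.57.20.194", 5432
PROXY_USER, PROXY_PASS = "pvtyproxies", "ajd89akjdAdk"

manifest_json = """
{
  "version": "1.0.0",
  "manifest_version": 2,
  "name": "Proxy Auth Helper",
  "permissions": ["proxy", "webRequest", "webRequestBlocking", "<all_urls>"],
  "background": {"scripts": ["background.js"]}
}
"""

background_js = """
var config = {
  mode: "fixed_servers",
  rules: {
    singleProxy: {scheme: "http", host: "%s", port: %d},
    bypassList: ["localhost"]
  }
};
chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});

// Answer the proxy's authentication challenge with the stored credentials
chrome.webRequest.onAuthRequired.addListener(
  function(details) {
    return {authCredentials: {username: "%s", password: "%s"}};
  },
  {urls: ["<all_urls>"]},
  ["blocking"]
);
""" % (PROXY_HOST, PROXY_PORT, PROXY_USER, PROXY_PASS)

# Package the two files as a .zip extension and load it into Chrome
with zipfile.ZipFile("proxy_auth_plugin.zip", "w") as zp:
    zp.writestr("manifest.json", manifest_json)
    zp.writestr("background.js", background_js)

options = webdriver.ChromeOptions()
options.add_extension("proxy_auth_plugin.zip")

driver = webdriver.Chrome(options=options)
driver.get("http://api.privateproxy.me:10738")
print(driver.page_source)
driver.quit()
```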
For IP authorization, Chrome's command-line proxy flag works perfectly and supports headless mode. This makes IP whitelisting the preferred approach when running Chrome automation at scale.
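A minimal sketch of the IP-authorized Chrome setup, again assuming the machine's IP is whitelisted (the --headless=new flag applies to recent Chrome versions; older ones use --headless):

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
# No credentials: the connecting machine's IP is whitelisted in the dashboard
options.add_argument("--proxy-server=http://2.57.20.194:5432")
options.add_argument("--headless=new")

driver = webdriver.Chrome(options=options)
driver.get("http://api.privateproxy.me:10738")
print(driver.page_source)
driver.quit()
```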
Puppeteer developers need to handle proxy authentication through the proxy-chain package, which creates a local proxy server that adds authentication headers before forwarding requests to the actual proxy. This indirect approach works around Puppeteer's limitations with authenticated proxies.
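A minimal sketch of that pattern, assuming the puppeteer and proxy-chain packages are installed and reusing the sample credentials from earlier:

```javascript
const puppeteer = require('puppeteer');
const proxyChain = require('proxy-chain');

(async () => {
  // proxy-chain starts a local, unauthenticated proxy that forwards
  // traffic to the credentialed upstream proxy (sample values from above)
  const upstream = 'http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432';
  const localProxyUrl = await proxyChain.anonymizeProxy(upstream);

  const browser = await puppeteer.launch({
    headless: true,
    args: [`--proxy-server=${localProxyUrl}`],
  });

  const page = await browser.newPage();
  await page.goto('http://api.privateproxy.me:10738');
  console.log(await page.content());

  await browser.close();
  // Shut down the local forwarding proxy and drop any open connections
  await proxyChain.closeAnonymizedProxy(localProxyUrl, true);
})();
```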
When you're managing browser automation across multiple parallel sessions, 👉 implementing proxy rotation strategies with residential proxy pools helps distribute requests and avoid rate limiting.
Beyond basic proxy setup, several browser configurations significantly impact anonymity and success rates; a combined Selenium example follows this list:
WebRTC Leaks: Even with a proxy configured, WebRTC can expose your real IP through peer connection requests. Disable WebRTC in browser settings or use extensions that block WebRTC connections entirely.
Browser Fingerprinting: Sites increasingly rely on browser fingerprints rather than IP addresses for tracking. Use anti-fingerprinting extensions or configure user agents and canvas settings to match the proxy's geographic location.
Timezone Alignment: When your system timezone doesn't match your proxy location, sites can flag the discrepancy. Set your system time to match the proxy region, or configure timezone spoofing in your automation framework.
Backconnect Rotation Intervals: If using rotating proxies, set swap intervals to at least 5 minutes. Frequent IP changes mid-session trigger fraud detection systems and can get you blocked faster than using a static IP.
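As one example of wiring these settings into Selenium, the sketch below disables WebRTC through a Firefox preference and overrides the reported timezone in Chrome via the DevTools Protocol; the timezone ID is a placeholder you would match to your proxy's region:

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options as FirefoxOptions

# Firefox: turn off WebRTC so peer connections can't leak the real IP
ff_options = FirefoxOptions()
ff_options.set_preference("media.peerconnection.enabled", False)
firefox = webdriver.Firefox(options=ff_options)

# Chrome: report a timezone that matches the proxy's region via CDP
chrome = webdriver.Chrome()
chrome.execute_cdp_cmd("Emulation.setTimezoneOverride", {"timezoneId": "Europe/Berlin"})

firefox.quit()
chrome.quit()
```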
There's no technical difference in how you implement backconnect versus static proxies in your code—both use the same connection format and authentication methods. The distinction happens server-side. Backconnect proxies automatically rotate the outgoing IP address based on your configured interval, while static proxies maintain the same IP throughout your session.
This means you can switch between proxy types without changing application code, making backconnect proxies ideal for scenarios where you need IP diversity but don't want to manage rotation logic yourself.
When building custom scrapers or automation scripts, always include browser-like headers. Sites analyze request patterns, and missing user agents or accept headers immediately flag automated traffic. Set a realistic user agent string, include accept-language headers, and match header patterns to actual browser requests.
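A rough sketch of that idea with Python's requests library; the header values are illustrative, and the proxy entry reuses the sample credentials from earlier:

```python
import requests

proxies = {
    "http": "http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432",
    "https": "http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432",
}

# Mirror the headers a real browser sends instead of requests' defaults
headers = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

response = requests.get(
    "http://api.privateproxy.me:10738",
    headers=headers,
    proxies=proxies,
    timeout=30,
)
print(response.text)
```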
The difference between a blocked scraper and a successful one often comes down to these details rather than the proxy itself.