If you've ever wondered how to actually use proxy servers in your code, you're in the right place. This guide walks you through the most practical ways to integrate PrivateProxy.me into your projects, whether you're working with PHP, Python, Node.js, or building browser automation tools.
Before diving into code examples, let's clear up something important: there are two ways to authenticate with proxy servers. You can either use credentials (username and password) or whitelist your IP address. If you've already added your IP to the authorized list, you don't need to specify credentials in your code—the proxy will recognize your requests automatically. Both methods work equally well, so pick whichever fits your workflow better.
One quick note: if you're writing custom software to access websites through proxies, always include standard HTTP headers like User-Agent. This makes your requests look like they're coming from a real browser, which helps avoid detection and blocking.
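As a quick sketch of that advice, here is a minimal standard-library Python example that attaches browser-like headers to a proxied request. The header values are illustrative (any current browser's strings work), and the proxy address and credentials are the sample values used throughout this guide:

```python
# Illustrative browser-like headers; the exact User-Agent string is an
# example, not a requirement.
import urllib.request

BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

def fetch_via_proxy(url, proxy="http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432"):
    """Fetch url through the proxy, sending browser-like headers."""
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    )
    req = urllib.request.Request(url, headers=BROWSER_HEADERS)
    return opener.open(req, timeout=30).read()
```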
👉 Get reliable proxy servers with flexible authentication options
PHP's cURL library makes proxy integration straightforward. Here's a working example that connects through a proxy and returns your current IP address:
```php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://api.privateproxy.me:10738");
curl_setopt($ch, CURLOPT_PROXY, "2.57.20.194:5432");
curl_setopt($ch, CURLOPT_PROXYUSERPWD, "pvtyproxies:ajd89akjdAdk");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);
$curl_scraped_page = curl_exec($ch);
curl_close($ch);
echo $curl_scraped_page;
```
The key here is CURLOPT_PROXY for the server address and CURLOPT_PROXYUSERPWD for authentication. Once configured, all your requests route through the proxy automatically.
Node.js requires a bit more setup since you need to establish a CONNECT tunnel for HTTPS requests. This example shows how to handle both HTTP and HTTPS through a proxy:
```javascript
const http = require('http');
const https = require('https');

function getWithProxy(url, proxy) {
  const parsedUrl = new URL(url);
  const proxy_ip = proxy['ip'];
  const proxy_port = proxy['port'];
  const proxy_auth = 'Basic ' + Buffer.from(proxy['login'] + ':' + proxy['password']).toString('base64');

  let host = parsedUrl.hostname;
  if (parsedUrl.port !== '') {
    host += ':' + parsedUrl.port;
  } else {
    host += parsedUrl.protocol === 'http:' ? ':80' : ':443';
  }

  return new Promise((resolve, reject) => {
    http.request({
      port: proxy_port,
      host: proxy_ip,
      method: 'CONNECT',
      path: host,
      headers: {
        'Host': host,
        'Proxy-Authorization': proxy_auth
      }
    }).on('connect', (res, socket, head) => {
      if (res.statusCode !== 200) {
        reject(new Error(`Proxy returned: ${res.statusCode} ${res.statusMessage}`));
        return;
      }
      // Over the established tunnel, speak plain HTTP or wrap it in TLS
      const protocol = parsedUrl.protocol === 'http:' ? http : https;
      const real_opts = parsedUrl.protocol === 'http:'
        ? { createConnection: () => socket }
        : { socket: socket };
      const real_req = protocol.request(url, real_opts, (res) => {
        res.setEncoding('utf-8');
        let rawData = [];
        res.on('data', (chunk) => rawData.push(chunk));
        res.on('end', () => resolve(rawData.join('')));
      });
      real_req.on('error', (e) => reject(e));
      real_req.end();
    }).end();
  });
}
```
Python developers often use the requests library, but you can also work with proxies at a lower level using urllib or http.client if needed.
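With `requests`, a proxy is just a mapping from URL scheme to proxy URL. A minimal sketch, using the sample credentials from this guide (the `current_ip` helper assumes httpbin.org's `/ip` endpoint, which echoes the caller's visible IP):

```python
# Sketch of proxy use with the requests library; the credentials below are
# this guide's sample values, not real ones.
import requests

PROXY = "http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432"

# requests takes a mapping of URL scheme -> proxy URL.
proxies = {
    "http": PROXY,
    "https": PROXY,
}

def current_ip():
    """Return the IP address the target site sees, routed through the proxy."""
    resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
    resp.raise_for_status()
    return resp.json()["origin"]
```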
Go's net/http package has built-in proxy support through the Transport configuration:
```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"time"
)

func main() {
	proxy, _ := url.Parse("http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432")
	proxyFunc := http.ProxyURL(proxy)
	tr := &http.Transport{
		MaxIdleConns:       10,
		IdleConnTimeout:    30 * time.Second,
		DisableCompression: true,
		Proxy:              proxyFunc,
	}
	client := &http.Client{Transport: tr}
	resp, err := client.Get("http://api.privateproxy.me:10738")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	bytes, _ := ioutil.ReadAll(resp.Body)
	fmt.Print(string(bytes))
}
```
When using proxies with browsers, there's a critical security issue you need to address: WebRTC leaks. Even when routing through a proxy, WebRTC can expose your real IP address—sometimes even your local network IP. Here's what you should do:
- Disable WebRTC in your browser before using proxies
- Install tracker-blocking extensions like Ghostery to prevent browser fingerprinting
- Set your system time to match the proxy server's timezone when possible
- For rotating (backconnect) proxies, use exchange intervals of 5 minutes or longer to avoid overly frequent IP changes
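For the first point, a small Python sketch of the Firefox preferences commonly used to switch WebRTC off; the preference names are assumptions taken from Firefox's `about:config`, and the helper works with any Selenium options or profile object exposing `set_preference`:

```python
# Firefox preferences that disable WebRTC; pref names are assumptions
# based on Firefox's about:config settings.
WEBRTC_OFF_PREFS = {
    "media.peerconnection.enabled": False,  # disables WebRTC entirely
    "media.navigator.enabled": False,       # blocks media-device enumeration
}

def disable_webrtc(options):
    """Apply the prefs to a Selenium Firefox Options (or profile) object
    that exposes set_preference(name, value)."""
    for name, value in WEBRTC_OFF_PREFS.items():
        options.set_preference(name, value)
```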
👉 Access premium proxies perfect for browser automation
Selenium with Firefox (geckodriver) offers two authentication approaches. With credential authentication, you'll need selenium-wire:
```python
from seleniumwire import webdriver
from selenium.webdriver.firefox.options import Options

wire_options = {
    'proxy': {
        'http': 'http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432',
        'https': 'http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432',
        'no_proxy': 'localhost,127.0.0.1'
    }
}

options = Options()
options.add_argument("-headless")  # the options.headless setter was removed in Selenium 4.10+

browser = webdriver.Firefox(
    options=options,
    seleniumwire_options=wire_options
)
browser.get("http://api.privateproxy.me:10738")
```
For IP authentication, the setup is simpler since you don't need selenium-wire:
```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

proxy_ip = '2.57.20.194'
proxy_port = 5432

# Selenium 4 removed the firefox_profile argument; set the same
# preferences on an Options object instead.
options = Options()
options.set_preference("network.proxy.type", 1)  # 1 = manual proxy configuration
options.set_preference("network.proxy.http", proxy_ip)
options.set_preference("network.proxy.http_port", proxy_port)
options.set_preference("network.proxy.ssl", proxy_ip)
options.set_preference("network.proxy.ssl_port", proxy_port)

browser = webdriver.Firefox(options=options)
```
Chrome takes a different approach to credential authentication: you need to generate a temporary extension that handles the proxy login for you:
```python
import zipfile
import hashlib

def generate_extension(proxy, credentials):
    ip, port = proxy.split(':')
    login, password = credentials.split(':')
    manifest_json = """{
        "version": "1.0.0",
        "manifest_version": 2,
        "name": "Chrome Proxy",
        "permissions": ["proxy", "tabs", "storage", "<all_urls>", "webRequest", "webRequestBlocking"],
        "background": {"scripts": ["background.js"]},
        "minimum_chrome_version": "22.0.0"
    }"""
    background_js = """
    var config = {
        mode: "fixed_servers",
        rules: {
            singleProxy: {
                scheme: "http",
                host: "%s",
                port: parseInt(%s)
            }
        }
    };
    chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});
    function callbackFn(details) {
        return {
            authCredentials: {username: "%s", password: "%s"}
        };
    }
    chrome.webRequest.onAuthRequired.addListener(callbackFn, {urls: ["<all_urls>"]}, ['blocking']);
    """ % (ip, port, login, password)
    # Name the archive after a hash of the proxy + credentials so each
    # combination gets its own extension file.
    sha1 = hashlib.sha1()
    sha1.update(("%s:%s" % (proxy, credentials)).encode('utf-8'))
    filename = sha1.hexdigest() + ".zip"
    with zipfile.ZipFile(filename, 'w') as zp:
        zp.writestr("manifest.json", manifest_json)
        zp.writestr("background.js", background_js)
    return filename
```
Keep in mind that Chrome with credential authentication has limitations—you can't run it in headless mode, and you need to generate a separate extension for each proxy-credential combination.
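Wiring the generated extension into a Selenium session might look like the sketch below. `add_extension` is ChromeOptions' standard way to load a packed (.zip/.crx) extension; the `extension_path` argument is the filename returned by `generate_extension()` above, and selenium is assumed to be installed:

```python
def chrome_with_proxy_extension(extension_path):
    """Launch Chrome with the packed proxy-auth extension loaded.

    extension_path: the .zip filename returned by generate_extension().
    """
    from selenium import webdriver  # local import; assumes selenium is installed

    options = webdriver.ChromeOptions()
    options.add_extension(extension_path)  # load the packed (.zip) extension
    return webdriver.Chrome(options=options)
```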
Puppeteer works smoothly with proxies when combined with the proxy-chain package:
```javascript
const puppeteer = require('puppeteer');
const proxyChain = require('proxy-chain');

(async () => {
  const oldProxyUrl = 'http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432';
  // proxy-chain re-exposes the authenticated proxy on a local, credential-free URL
  const newProxyUrl = await proxyChain.anonymizeProxy(oldProxyUrl);
  const browser = await puppeteer.launch({
    args: [`--proxy-server=${newProxyUrl}`]
  });
  const page = await browser.newPage();
  await page.goto('https://httpbin.org/ip');
  const element = await page.$('pre');
  const text = await page.evaluate(element => element.textContent, element);
  console.log(text);
  await browser.close();
  await proxyChain.closeAnonymizedProxy(newProxyUrl, true);
})();
```
The proxy-chain library handles the credential conversion, making the integration much cleaner than manual setup.
A common question is whether backconnect (rotating) proxies work differently from regular proxies. The answer is no—from a technical standpoint, you use them exactly the same way. The only difference is what the target website sees: with backconnect proxies, the IP address changes automatically based on your rotation settings, while regular proxies maintain the same IP throughout the session.
This makes backconnect proxies ideal for web scraping and data collection where you need to avoid rate limiting or IP-based blocking, without changing any code or configuration on your end.
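To make that concrete, here is a sketch (the endpoint and credentials are this guide's sample values; substitute your real backconnect gateway) that samples the visible IP several times through one endpoint. The code is identical to static-proxy use; only the observed IPs differ between rotations:

```python
import time

import requests

# Sample backconnect endpoint -- substitute your real gateway address.
BACKCONNECT = "http://pvtyproxies:ajd89akjdAdk@2.57.20.194:5432"
PROXIES = {"http": BACKCONNECT, "https": BACKCONNECT}

def sample_visible_ips(n, delay_seconds=300):
    """Fetch the IP the target sees n times. With rotation enabled the
    values change between samples; with a static proxy they stay the same."""
    seen = []
    for _ in range(n):
        resp = requests.get("https://httpbin.org/ip", proxies=PROXIES, timeout=30)
        seen.append(resp.json()["origin"])
        time.sleep(delay_seconds)  # respect the 5-minute rotation interval
    return seen
```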