Use Rotating Proxy in Scraping Scripts

With our rotating proxy, your script needs only one gateway proxy to do its scraping jobs. Each request is redirected to a new IP.

We recently released two new services: rotating open proxy (unstable public proxies) and rotating premium proxy (stable premium proxies). They make it easier for scripts to switch IPs while doing scraping tasks.

Regular Proxy

When using regular proxies, your script needs to do the following in order to use different IPs to scrape web pages.

  • Get a proxy list from your proxy provider by API (example).
  • Use a proxy from the list to scrape web pages.
  • Change to another proxy to avoid that IP being blocked.
  • After a while (e.g., one hour), get a new proxy list (Step 1).
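The steps above can be sketched in Python. This is a minimal sketch, not our real API: the list endpoint, its one-proxy-per-line response format, and the one-hour refresh window are assumptions for illustration.

```python
import time
from urllib.request import urlopen

REFRESH_SECONDS = 3600  # Step 4: fetch a fresh list after an hour (assumed window)

def fetch_proxy_list(api_url):
    # Step 1: assume the provider API returns one "host:port" per line
    return urlopen(api_url, timeout=20).read().decode().split()

class ProxyRotator:
    """Steps 2-3: hand out a different proxy on every request."""

    def __init__(self, proxies, now=None):
        self.proxies = list(proxies)
        self.calls = 0
        self.fetched_at = time.time() if now is None else now

    def next_proxy(self):
        # Cycle through the list so no single IP is reused back-to-back
        proxy = self.proxies[self.calls % len(self.proxies)]
        self.calls += 1
        return proxy

    def needs_refresh(self, now=None):
        # True once the list is older than the refresh window
        now = time.time() if now is None else now
        return now - self.fetched_at > REFRESH_SECONDS
```

Each scrape would call `next_proxy()`; once `needs_refresh()` turns true, the script repeats Step 1 with `fetch_proxy_list()`. The rotating proxy service below makes all of this bookkeeping unnecessary.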

Rotating Proxy

When using our rotating proxy service, your script needs only one proxy to do the jobs. It doesn’t need to download a proxy list and change proxies. We rotate the IPs for you.

Use one gateway proxy to access thousands of IPs

Use the Proxy

Our rotating proxy supports both HTTP(S) and Socks5. If you use IP authentication, no username/password is required.
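The gateway address can be written as a proxy URL for either protocol. Here is a minimal sketch of the two authentication modes; the host `gate.example.com` and port 2000 are placeholders, not our real gateway:

```python
def gateway_url(host, port, scheme="http", user=None, password=None):
    # With IP authentication the credentials are omitted entirely
    auth = f"{user}:{password}@" if user else ""
    return f"{scheme}://{auth}{host}:{port}"

# HTTP(S) proxy with username/password authentication
http_proxy = gateway_url("gate.example.com", 2000, user="username", password="password")

# SOCKS5 proxy relying on IP authentication (no credentials)
socks_proxy = gateway_url("gate.example.com", 2000, scheme="socks5")
```

The same two URL shapes appear throughout the sample scripts below.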

Open Fast Rotating Option
Rotating Proxy Authentication

Sample Scripts

Here are some sample scripts showing how to use our rotating proxy as an HTTP(S) proxy with username/password authentication.

In the code, a demo proxy host is used. You should use your real proxy host or IP in your script.

The test URL returns its visitor's IP. You should see a new IP every time you use our rotating proxy to access it.
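You can sanity-check the rotation with only the standard library. A minimal sketch, assuming the test URL returns a bare IP string; `TEST_URL` and `PROXY_URL` stand in for the elided values:

```python
from urllib.request import ProxyHandler, build_opener

def fetch_ip(test_url, proxy_url):
    # Route one request through the gateway and read back the IP it reports
    opener = build_opener(ProxyHandler({"http": proxy_url, "https": proxy_url}))
    return opener.open(test_url, timeout=20).read().decode().strip()

def rotation_ratio(ips):
    # Fraction of distinct IPs among the samples; near 1.0 means rotation works
    return len(set(ips)) / len(ips)

# Usage (network required):
# ips = [fetch_ip(TEST_URL, PROXY_URL) for _ in range(5)]
# print(rotation_ratio(ips))
```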

# Change the URL to your target website
# (the demo proxy value was elided; the format is http://username:password@host:port,
#  and PROXY_HOST / TARGET_URL below are placeholders)
curl --proxy http://username:password@PROXY_HOST:2000 TARGET_URL

# Sample output
import requests

# The demo proxy values were elided; the format is "http://username:password@host:port"
proxies = {
    "http": "",
    "https": ""
}

# Pretend to be Firefox
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0',
    'Accept-Language': 'en-US,en;q=0.5'
}

# Change the URL to your target website
url = ""
try:
    r = requests.get(url, proxies=proxies, headers=headers, timeout=20)
    print(r.text)
except Exception as e:
    print(e)
# Scrapy - a scraping framework for Python
# 1. Enable HttpProxyMiddleware in your settings.py
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 1
}

# 2. Pass proxy to request via request.meta
# Change the URL to your target website
request = Request(url="")
# The proxy value was elided; the format is "http://username:password@host:port"
request.meta['proxy'] = ""
yield request
// use the request module
var request = require('request');

// Change the URL to your target website
var url = '';
// The proxy value was elided; the format is 'http://username:password@host:port'
var proxy = '';

request({
    url: url,
    proxy: proxy
}, function (error, response, body) {
    if (error) {
        console.error(error);
    } else {
        console.log(body);
    }
});
// Change the URL to your target website
$url = '';
// The demo proxy host was elided; use your real gateway host or IP
$proxy_ip = '';
$proxy_port = '2000';
$userpass = 'username:password';

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_PROXYPORT, $proxy_port);
curl_setopt($ch, CURLOPT_PROXYTYPE, CURLPROXY_HTTP);
curl_setopt($ch, CURLOPT_PROXY, $proxy_ip);
curl_setopt($ch, CURLOPT_PROXYUSERPWD, $userpass);
$data = curl_exec($ch);
curl_close($ch);

echo $data;
import org.apache.http.HttpHost;
import org.apache.http.client.fluent.*;

public class Example {
    public static void main(String[] args) throws Exception {
        // The demo proxy host was elided; use your real gateway
        HttpHost proxy = new HttpHost("", 2000);

        // Change the URL to your target website
        String url = "";
        String res = Executor.newInstance()
            .auth(proxy, "username", "password")
            .execute(Request.Get(url).viaProxy(proxy))
            .returnContent().asString();
        System.out.println(res);
    }
}
using System;
using System.Net;

class Example
{
    static void Main()
    {
        var client = new WebClient();
        // The demo proxy host was elided; use your real gateway
        client.Proxy = new WebProxy("");
        client.Proxy.Credentials =
          new NetworkCredential("username", "password");

        // Change the URL to your target website
        string url = "";
        Console.WriteLine(client.DownloadString(url));
    }
}
require 'uri'
require 'net/http'

# Change the URL to your target website
uri = URI.parse('')
# The demo proxy host was elided; use your real gateway
proxy = Net::HTTP::Proxy('', 2000, 'user', 'pass')
req = Net::HTTP::Get.new(uri)

result = proxy.start(uri.host, uri.port) do |http|
  http.request(req)
end

puts result.body
Imports System.Net

Module Example
    Sub Main()
        Dim Client As New WebClient
        ' The demo proxy host was elided; use your real gateway
        Client.Proxy = New WebProxy("")
        Client.Proxy.Credentials = _
          New NetworkCredential("username", "password")

        ' Change the URL to your target website
        Dim Url As String = ""
        Console.WriteLine(Client.DownloadString(Url))
    End Sub
End Module
# The proxy value was elided; the format is http://username:password@host:port.
# Replace VIDEO_URL with the video you want to download.
youtube-dl --proxy '' 'VIDEO_URL'
'use strict';
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    // You need to whitelist your IP before using it
    // The proxy value was elided; the format is '--proxy-server=host:port'
    args: [ '' ]
  });
  const page = await browser.newPage();
  // Change the URL to your target website
  await page.goto('');
  await browser.close();
})();
If you use Selenium, here are some sample scripts showing how to use our rotating proxy with it.
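A minimal Selenium sketch in Python, assuming Chrome and an IP-whitelisted gateway (so no username/password is needed); `gate.example.com:2000` is a placeholder, and the `selenium` package plus a matching chromedriver must be installed:

```python
def proxy_argument(host, port):
    # Chrome flag for an unauthenticated (IP-whitelisted) gateway
    return f"--proxy-server=http://{host}:{port}"

def fetch_page(url, host, port):
    # Imported lazily so the helper above stays dependency-free
    from selenium import webdriver
    options = webdriver.ChromeOptions()
    options.add_argument(proxy_argument(host, port))
    driver = webdriver.Chrome(options=options)
    try:
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()

# Usage (browser required):
# html = fetch_page("http://example.com", "gate.example.com", 2000)
```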