Python Learning: Using Proxy IPs with Selenium Crawlers


Today I want to share how to keep your IP from being banned while scraping data. There are two methods below; study them carefully and you won't have to worry about IP bans anymore.

Method 1:

Slow down the request rate. Using sleep from the time module, have the program pause for 1 s after each request; this greatly reduces the chance of the IP being banned.
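A minimal sketch of this throttling, assuming the requests library and a placeholder URL list (not part of the original article):

    import time
    import requests

    urls = ["http://httpbin.org/ip"] * 3  # placeholder targets for illustration

    for url in urls:
        resp = requests.get(url, timeout=10)
        print(resp.status_code)
        time.sleep(1)  # pause 1 s between requests to lower the ban risk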

Method 2:

To keep efficiency up, use proxy IPs instead. The IPs here come from 亿牛云's dynamic forwarding proxy service; an example of the proxy configuration follows.

Selenium

    from selenium import webdriver
    import string
    import zipfile

    # Proxy server
    proxyHost = "t.16yun.cn"
    proxyPort = "31111"

    # Proxy tunnel credentials
    proxyUser = "username"
    proxyPass = "password"

    def create_proxy_auth_extension(proxy_host, proxy_port,
                                    proxy_username, proxy_password,
                                    scheme='http', plugin_path=None):
        # Build a Chrome extension (as a zip) that sets the proxy and
        # answers the proxy's authentication challenge automatically.
        if plugin_path is None:
            plugin_path = r'C:/{}_{}@t.16yun.zip'.format(proxy_username, proxy_password)

        manifest_json = """
        {
            "version": "1.0.0",
            "manifest_version": 2,
            "name": "16YUN Proxy",
            "permissions": [
                "proxy",
                "tabs",
                "unlimitedStorage",
                "storage",
                "<all_urls>",
                "webRequest",
                "webRequestBlocking"
            ],
            "background": {
                "scripts": ["background.js"]
            },
            "minimum_chrome_version": "22.0.0"
        }
        """

        background_js = string.Template(
            """
            var config = {
                mode: "fixed_servers",
                rules: {
                    singleProxy: {
                        scheme: "${scheme}",
                        host: "${host}",
                        port: parseInt(${port})
                    },
                    bypassList: ["foobar.com"]
                }
            };

            chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});

            function callbackFn(details) {
                return {
                    authCredentials: {
                        username: "${username}",
                        password: "${password}"
                    }
                };
            }

            chrome.webRequest.onAuthRequired.addListener(
                callbackFn,
                {urls: ["<all_urls>"]},
                ['blocking']
            );
            """
        ).substitute(
            host=proxy_host,
            port=proxy_port,
            username=proxy_username,
            password=proxy_password,
            scheme=scheme,
        )

        # Package the manifest and background script into a zip Chrome can load
        with zipfile.ZipFile(plugin_path, 'w') as zp:
            zp.writestr("manifest.json", manifest_json)
            zp.writestr("background.js", background_js)

        return plugin_path

    proxy_auth_plugin_path = create_proxy_auth_extension(
        proxy_host=proxyHost,
        proxy_port=proxyPort,
        proxy_username=proxyUser,
        proxy_password=proxyPass)

    option = webdriver.ChromeOptions()
    option.add_argument("--start-maximized")

    # If you hit a chrome-extensions error:
    # option.add_argument("--disable-extensions")

    option.add_extension(proxy_auth_plugin_path)

    driver = webdriver.Chrome(options=option)  # "chrome_options" is deprecated; Selenium 4 uses "options"

    driver.get("http://httpbin.org/ip")
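To confirm the proxy is in effect, you can print the page httpbin returns; the body should show the proxy's exit IP rather than your own. A small follow-up sketch, not part of the original code:

    print(driver.page_source)  # httpbin echoes the requesting IP
    driver.quit()              # release the browser when done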

That's all for today's Python-learning share. The code above can be used as-is, but the proxy credentials in it have most likely expired, so you may need to contact the proxy provider to activate the service. I hope you bookmark this post and remember to take notes; the palest ink beats the best memory.

