
Python Scraping: Crawling Dynamic Pages with Scrapy and Splash

This is an original article; reposting is welcome with attribution.
Original link: https://pengshiyu.blog.csdn.net/article/details/81625830

Dependencies:

pip install scrapy-splash
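Note that scrapy-splash only talks to an already running Splash instance; it does not start one for you. Assuming Docker is available, the official image can be launched on the default port used in the settings below (a setup sketch, not part of the original post):

```shell
# Run the official Splash image, exposing its HTTP API on port 8050
docker run -p 8050:8050 scrapinghub/splash
```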

Configure settings.py:

# Splash server address
SPLASH_URL = 'http://localhost:8050'

# Support cache_args (optional)
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

# Downloader middleware settings
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810
}

# Splash-aware deduplication filter
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'

# Splash-aware HTTP cache storage
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

Code example

Simply replace the original Request with SplashRequest:

# -*- coding: utf-8 -*-
import scrapy
from scrapy import cmdline
from scrapy_splash import SplashRequest


class ToscrapeJsSpider(scrapy.Spider):
    name = "toscrape_js"
    allowed_domains = ["toscrape.com"]
    start_urls = (
        'http://quotes.toscrape.com/js/',
    )

    def start_requests(self):
        for url in self.start_urls:
            # timeout must exceed wait, or Splash aborts the render before
            # the page finishes; "images": 0 (note the plural) tells Splash
            # not to load images
            yield SplashRequest(url, args={"timeout": 10, "images": 0, "wait": 5})

    def parse(self, response):
        quotes = response.css(".quote .text::text").extract()
        for quote in quotes:
            print(quote)


if __name__ == '__main__':
    cmdline.execute("scrapy crawl toscrape_js".split())

SplashRequest parameters:
url: the page to render
headers: same as Request
cookies: same as Request
args: dict of parameters passed to Splash; wait is important, since the JavaScript needs enough time to execute
cache_args: list of args keys that Splash should cache
endpoint: the Splash endpoint service, defaults to render.html
splash_url: the Splash server address, defaults to SPLASH_URL from the settings file
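To see how these args reach Splash, it helps to know that SplashRequest ultimately sends the target URL together with the args dict to the chosen endpoint (render.html by default). A minimal sketch of the equivalent GET request URL, built by hand with the standard library (parameter names follow the Splash HTTP API, matching what the spider above passes):

```python
from urllib.parse import urlencode

# The settings above point SPLASH_URL at a local Splash instance.
SPLASH_URL = "http://localhost:8050"

# The same args the spider passes to SplashRequest, plus the target url.
args = {"url": "http://quotes.toscrape.com/js/", "wait": 5, "images": 0}

# render.html is the default endpoint; this URL is the hand-built
# equivalent of the request scrapy-splash issues on the spider's behalf.
render_url = f"{SPLASH_URL}/render.html?{urlencode(args)}"
print(render_url)
```

Opening this URL in a browser (with Splash running) returns the fully rendered HTML, which is a convenient way to debug wait values before wiring them into the spider.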

Setting a proxy:

args={
    'proxy': 'http://proxy_ip:proxy_port'
}
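The proxy entry is just one more key in the same args dict, so it combines freely with wait and the other render arguments (proxy_ip/proxy_port are placeholders, as in the snippet above):

```python
# Merge the proxy setting into the args the spider already uses;
# proxy_ip/proxy_port are placeholders for a real proxy address.
base_args = {"timeout": 10, "images": 0, "wait": 5}
args = {**base_args, "proxy": "http://proxy_ip:proxy_port"}
print(args)
```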

Reference:
using proxy with scrapy-splash

Published: 2018-08-13 10:44:46