Finding and downloading PDF files from a website with Scrapy

The spider logic doesn't seem right.

I had a quick look at your website, and it seems there are several types of pages:

  1. http://www.pwc.com/us/en/tax-services/publications/research-and-insights.html, the initial listing page
  2. Pages for specific articles, e.g. http://www.pwc.com/us/en/tax-services/publications/insights/australia-introduces-new-foreign-resident-cgt-withholding-regime.html, which can be reached from page #1
  3. Actual PDF locations, e.g. http://www.pwc.com/us/en/state-local-tax/newsletters/salt-insights/assets/pwc-wotc-precertification-period-extended-to-june-29.pdf, which can be reached from page #2

Thus the correct logic is: fetch the #1 page first, follow it to the #2 pages, and from those download the #3 PDFs.
However, your spider tries to extract links to #3 pages directly from the #1 page.

I have updated your code, and here's something that actually works:

import scrapy

from scrapy.http import Request

class pwc_tax(scrapy.Spider):
    name = "pwc_tax"

    allowed_domains = ["www.pwc.com"]
    start_urls = ["http://www.pwc.com/us/en/tax-services/publications/research-and-insights.html"]

    def parse(self, response):
        # Page #1: follow each article link in the results list.
        for href in response.css('div#all_results h3 a::attr(href)').extract():
            yield Request(
                url=response.urljoin(href),
                callback=self.parse_article
            )

    def parse_article(self, response):
        # Page #2: follow each download link that ends in ".pdf".
        for href in response.css('div.download_wrapper a[href$=".pdf"]::attr(href)').extract():
            yield Request(
                url=response.urljoin(href),
                callback=self.save_pdf
            )

    def save_pdf(self, response):
        # Page #3: the response body is the PDF itself, so write it to disk
        # under its original file name.
        path = response.url.split('/')[-1]
        self.logger.info('Saving PDF %s', path)
        with open(path, 'wb') as f:
            f.write(response.body)
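
If you just want to try the spider without setting up a full Scrapy project, one option is to run it programmatically with Scrapy's CrawlerProcess. The snippet below is only a minimal sketch and assumes the pwc_tax class above lives in the same file; the PDFs are written to whatever directory the script is started from.

from scrapy.crawler import CrawlerProcess

if __name__ == "__main__":
    # Assumes the pwc_tax spider class defined above is in this same module.
    process = CrawlerProcess()
    process.crawl(pwc_tax)
    process.start()  # blocks until the crawl has finished

As a side note, for larger crawls Scrapy's built-in FilesPipeline is usually the more idiomatic way to download files, since it handles deduplication and storage paths for you; the manual save_pdf callback above is fine for a small one-off script.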