How to add a new column to Scrapy's CSV output?

Updated: 2023-11-30 18:52:22

You can get the ID in start_requests and attach it to the request with meta={'id': id_}; later, in parse, you can read it back with response.meta['id'].

This way you will have the correct ID in parse.

I use the string data instead of a file to create a working example.

#!/usr/bin/env python3

import scrapy

data = '''https://www.ceneo.pl/48523541, 1362
https://www.ceneo.pl/46374217, 2457'''

class QuotesSpider(scrapy.Spider):

    name = "quotes"

    def start_requests(self):
        #f = open('urls.csv', 'r')
        f = data.split('\n')

        for row in f:
            url, id_ = row.split(',')
            url = url.strip()
            id_ = id_.strip()
            #print(url, id_)

            # use meta to assign value
            yield scrapy.Request(url=url, callback=self.parse, meta={'id': id_})

    def parse(self, response):
        # use meta to receive value
        id_ = response.meta['id']

        all_prices = response.xpath('(//td[@class="cell-price"]/a/span/span/span[@class="value"]/text())[position() <= 10]').extract()
        all_sellers = response.xpath('(//tr/td/div/ul/li/a[@class="js_product-offer-link"]/text())[position() <= 10]').extract()

        all_sellers = [item.replace('Opinie o ', '') for item in all_sellers]

        for price, seller in zip(all_prices, all_sellers):
            yield {'urlid': id_, 'price': price.strip(), 'seller': seller.strip()}

# --- it runs without project and saves in `output.csv` ---

from scrapy.crawler import CrawlerProcess

c = CrawlerProcess({
    'USER_AGENT': 'Mozilla/5.0',
    'FEED_FORMAT': 'csv',
    'FEED_URI': 'output.csv',
})
c.crawl(QuotesSpider)
c.start()
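If you switch back to reading a real urls.csv (the commented-out open('urls.csv') line above), the standard csv module handles the comma splitting and the stray space after each comma for you. A minimal sketch, using io.StringIO with the same two example rows in place of the actual file:

```python
import csv
import io

# Stand-in for open('urls.csv') -- same two rows as the `data` string above
csv_text = 'https://www.ceneo.pl/48523541, 1362\nhttps://www.ceneo.pl/46374217, 2457\n'

# skipinitialspace=True drops the space after each comma,
# so no manual .strip() is needed on the ID column
rows = list(csv.reader(io.StringIO(csv_text), skipinitialspace=True))

print(rows)
# [['https://www.ceneo.pl/48523541', '1362'], ['https://www.ceneo.pl/46374217', '2457']]
```

In start_requests you would iterate csv.reader(open('urls.csv', newline='')) the same way and unpack each row into url and id_.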

BTW: there is a built-in function id(), so I use the variable name id_ instead of id.