I'm just getting started with Scrapy and am using it to scrape websites. I have more than 9,000 URLs to scrape. I've already tried it and it works, except that I want the results in the JSON file grouped by URL: if I scraped 10 items from url1, I want those items together in one JSON object with url1, and likewise for url2, and so on. Something like this:

```json
[
  {
    "url1": "www.reddit.com/page1",
    "results1": [
      {"name": "blabla", "link": "blabla"},
      {"name": "blabla", "link": "blabla"},
      {"name": "blabla", "link": "blabla"}
    ]
  },
  {
    "url2": "www.reddit.com/page2",
    "results2": [
      {"name": "blabla", "link": "blabla"},
      {"name": "blabla", "link": "blabla"},
      {"name": "blabla", "link": "blabla"}
    ]
  }
]
```

Is it possible to do this, or is it better to scrape the whole site first and then sort the output afterwards?

My current code:

```python
import scrapy


class glenmarchSpider(scrapy.Spider):
    name = "glenmarch"

    def start_requests(self):
        start_urls = reversed([
            'https://www.glenmarch.com/cars/results?make=&model=&auction_house_id=&auction_location=&year_start=1913&year_end=1916&low_price=&high_price=&auction_id=&fromDate=&toDate=&keywords=AC+10+HP&show_unsold_cars=0&show_unsold_cars=1?limit=9999',
            'https://www.glenmarch.com/cars/results?make=&model=&auction_house_id=&auction_location=&year_start=1918&year_end=1928&low_price=&high_price=&auction_id=&fromDate=&toDate=&keywords=AC+12+HP&show_unsold_cars=0&show_unsold_cars=1?limit=9999'
        ])
        for url in start_urls:
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        for caritem in response.css("div.car-item-border"):
            yield {
                "model": caritem.css("div.make::text").get(),
                "price": caritem.css("div.price::text").get(),
                "auction": caritem.css("div.auctionHouse::text").get(),
                "date": caritem.css("div.date::text").get(),
                "auction_url": caritem.css("div.view-auction a::attr(href)").get(),
                "img": caritem.css("img.img-responsive::attr(src)").get()
            }
```
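For illustration, here is one minimal sketch of the grouping idea (not from the original question): tag every yielded item with the page it came from using Scrapy's standard `response.url`, then group the flat feed output per URL in a small post-processing step after the crawl finishes.

```python
def parse(self, response):
    for caritem in response.css("div.car-item-border"):
        yield {
            "url": response.url,  # source page, kept so items can be grouped later
            "model": caritem.css("div.make::text").get(),
            "price": caritem.css("div.price::text").get(),
            # ... remaining fields as in the original spider ...
        }
```

Then, assuming the crawl was exported to a fresh `items.json` (e.g. `scrapy crawl glenmarch -o items.json`; the file names here are assumptions for the sketch), a short script could regroup it:

```python
# Post-processing sketch: turn the flat items.json feed into one
# object per source URL, roughly matching the desired structure above.
import json
from collections import defaultdict

with open("items.json") as f:
    items = json.load(f)  # Scrapy's JSON feed export is a flat array of items

grouped = defaultdict(list)
for item in items:
    url = item.pop("url")  # remove the grouping key from each item
    grouped[url].append(item)

output = [{"url": url, "results": results} for url, results in grouped.items()]

with open("grouped.json", "w") as f:
    json.dump(output, f, indent=2)
```

This keeps the spider simple and defers the grouping to after the crawl, which sidesteps the problem of items from different URLs arriving interleaved.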