A Python Crawler for Dangdang.com
The end goal is to crawl book information from Dangdang (each book's title, URL, and comment count) and store it in a database. First create the database: a database named dd containing a table books with the fields title, link, and comment.
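The article assumes the database already exists. A minimal one-time setup sketch using pymysql, assuming a local MySQL server with the root/123456 credentials used later in the pipeline (column sizes are illustrative guesses):

# -*- coding: utf-8 -*-
# One-time setup: create the dd database and books table.
# Assumptions: local MySQL, user 'root', password '123456',
# matching the pipeline below.
import pymysql

conn = pymysql.connect(host='localhost', port=3306, user='root',
                       passwd='123456', charset='utf8mb4')
cursor = conn.cursor()
cursor.execute("CREATE DATABASE IF NOT EXISTS dd DEFAULT CHARACTER SET utf8mb4")
cursor.execute("""
    CREATE TABLE IF NOT EXISTS dd.books (
        title   VARCHAR(255),
        link    VARCHAR(255),
        comment VARCHAR(64)
    )
""")
conn.commit()
conn.close()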
1. Create the project: scrapy startproject dangdang
2. Enter the project folder and generate the spider file:
scrapy genspider -t basic dd dangdang.com
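After these two commands, the project layout from the standard Scrapy template should look roughly like this (exact files vary slightly by Scrapy version):

dangdang/
    scrapy.cfg
    dangdang/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            dd.py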
3. Open the project in PyCharm.
Edit items.py:
# -*- coding: utf-8 -*-
# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html
import scrapy


class DangdangItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()
    comment = scrapy.Field()

Edit dd.py:
# -*- coding: utf-8 -*-
import scrapy
from dangdang.items import DangdangItem
from scrapy.http import Request


class DdSpider(scrapy.Spider):
    name = 'dd'
    allowed_domains = ['dangdang.com']
    # Start from the first category page so the XPaths below have
    # something to match (the bare homepage has no book list)
    start_urls = ['http://category.dangdang.com/pg1-cp01.54.06.00.00.00.html']

    def parse(self, response):
        item = DangdangItem()
        item['title'] = response.xpath('//a[@class="pic"]/@title').extract()
        item['link'] = response.xpath('//a[@class="pic"]/@href').extract()
        item['comment'] = response.xpath('//a[@class="search_comment_num"]/text()').extract()
        yield item
        # Loop to crawl the remaining pages of the same category
        for i in range(2, 101):
            url = 'http://category.dangdang.com/pg' + str(i) + '-cp01.54.06.00.00.00.html'
            yield Request(url, callback=self.parse)
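Before wiring up the database, it can help to sanity-check the XPath expressions interactively; a quick sketch using Scrapy's shell against the first category page:

scrapy shell "http://category.dangdang.com/pg1-cp01.54.06.00.00.00.html"
>>> response.xpath('//a[@class="pic"]/@title').extract()[:3]    # a few book titles
>>> response.xpath('//a[@class="search_comment_num"]/text()').extract()[:3]    # comment counts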
Enable the pipeline in settings.py:

ITEM_PIPELINES = {
    'dangdang.pipelines.DangdangPipeline': 300,
}
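Depending on your Scrapy version, the generated settings.py may also set ROBOTSTXT_OBEY = True, which blocks this crawl before it starts; if requests are being filtered, disable it (an assumption about the project template, not part of the original write-up):

# settings.py
# Only needed if your Scrapy template enables robots.txt checking by default
ROBOTSTXT_OBEY = False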
Edit pipelines.py to write the data into the database:
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html
import pymysql


class DangdangPipeline(object):
    def process_item(self, item, spider):
        conn = pymysql.connect(host='localhost', port=3306, user='root',
                               passwd='123456', db='dd', charset='utf8mb4')
        cursor = conn.cursor()
        for i in range(0, len(item['title'])):
            title = item['title'][i]
            link = item['link'][i]
            comment = item['comment'][i]
            # Parameterized query avoids quoting bugs and SQL injection
            sql = "insert into books(title,link,comment) values(%s,%s,%s)"
            cursor.execute(sql, (title, link, comment))
        conn.commit()
        conn.close()
        return item
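With the pipeline in place, run the crawl from the project root with scrapy crawl dd. A minimal sketch to spot-check what landed in MySQL, reusing the pipeline's connection parameters:

# -*- coding: utf-8 -*-
import pymysql

# Count and peek at the rows the pipeline inserted
conn = pymysql.connect(host='localhost', port=3306, user='root',
                       passwd='123456', db='dd', charset='utf8mb4')
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM books")
print("rows in books:", cursor.fetchone()[0])
cursor.execute("SELECT title, comment FROM books LIMIT 5")
for title, comment in cursor.fetchall():
    print(title, comment)
conn.close()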