Learning Scrapy: A Case Study of Crawling the Zhilian Zhaopin Job Site
- Installing Scrapy
- Download
- Install
- Prepare
- Analysis
- Code
- Results
Installing Scrapy
Installing Scrapy directly with pip fails with an error while building Twisted, so we install the dependencies by hand.
Download
Before installing Scrapy, manually download the Twisted and pyOpenSSL packages:
Twisted:https://www.lfd.uci.edu/~gohlke/pythonlibs/
pyOpenSSL:https://pypi.org/simple/pyopenssl/
Note: choose the package that matches your Python version.
Install
pip install <the downloaded filename>
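For example, on 64-bit Windows with Python 3.8 the commands might look like the following (the wheel filenames are illustrative; substitute the names of the files you actually downloaded):

pip install Twisted-20.3.0-cp38-cp38-win_amd64.whl
pip install pyOpenSSL-19.1.0-py2.py3-none-any.whl
pip install scrapy

Once Twisted and pyOpenSSL are in place, the final pip install scrapy no longer fails.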
Prepare
Create the project (the name Recruitment matches the import paths used in the code below):
scrapy startproject Recruitment
Create the spider:
scrapy genspider zhaopin zhaopin.com
Analysis
To crawl a site, we first need to analyze its page source.
After analyzing it, I found that we can simply crawl each city's listing pages one by one: all of the information we want about each job posting appears directly on the listing page.
We use XPath to extract the relevant tags.
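As a quick illustration of the pattern used throughout the spider below, here is a minimal, self-contained sketch. The HTML fragment is a toy stand-in, but the class names match the selectors in the real code:

from scrapy.selector import Selector

# toy fragment modeled on the job-card markup the spider targets
html = '<div class="joblist-box__item clearfix"><span class="iteminfo__line1__jobname__name">Python工程師</span></div>'
sel = Selector(text=html)
print(sel.xpath("//span[@class='iteminfo__line1__jobname__name']/text()").extract_first())
# -> Python工程師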
Zhilian Zhaopin requires you to be logged in before it serves results, so I log in with cookies: sign in once in the browser, copy the cookie string, and then use the copied cookies to authenticate every request.
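The spider below converts the pasted cookie string into a dict with a one-line comprehension; here it is in isolation (the cookie string is a made-up placeholder):

# placeholder string; paste your real cookie header value instead
raw = "sid=abc123; userid=42; token=xyz"
cookies = {pair.split("=")[0]: pair.split("=")[1] for pair in raw.split("; ")}
print(cookies)  # {'sid': 'abc123', 'userid': '42', 'token': 'xyz'}

Note that cookie values can themselves contain '='; pair.split("=", 1) is a slightly more robust variant.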
Code
The spider file, zhaopin.py:
import scrapy


class ZhaopinSpider(scrapy.Spider):
    name = 'zhaopin'
    allowed_domains = ['zhaopin.com']

    def start_requests(self):
        # paste the cookie string copied from your browser between the quotes
        cookies = "paste-your-cookie-string-here"
        self.cookies = {i.split("=")[0]: i.split("=")[1] for i in cookies.split("; ")}
        self.headers = {
            'User-Agent': "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
        }
        yield scrapy.Request('https://www.zhaopin.com/',
                             callback=self.parse,
                             cookies=self.cookies,
                             headers=self.headers)

    def parse(self, response):
        # the cookies from the first request are re-sent automatically
        start_city = 480
        end_city = 950
        print("Start crawling")
        for i in range(start_city, end_city):
            print("City ID:", i)
            url_city = "https://sou.zhaopin.com/?jl={0}".format(i)
            yield scrapy.Request(url=url_city,
                                 callback=self.parse_page,
                                 cookies=self.cookies,
                                 headers=self.headers)

    def parse_page(self, response):
        page = 1
        # text of the "next page" button when it carries the disabled class,
        # i.e. there is only a single page of results
        next_page = response.xpath("//div[@class='pagination clearfix']//div[@class='pagination__pages']//button[@class='btn soupager__btn soupager__btn--disable']/text()").extract_first()
        if next_page == "下一頁":
            print("Crawling (single page):")
            # dont_filter is needed because this URL was already requested once
            yield scrapy.Request(url=response.request.url,
                                 callback=self.parse_zp,
                                 cookies=self.cookies,
                                 headers=self.headers,
                                 dont_filter=True)
        elif response.xpath("//div[@class='sou-main']//div[@class='sou-main__center clearfix']//div[@class='positionList-hook']//div[@class='page-empty__tips']//span/text()").extract_first() is not None:
            # the "no results" tip is present for this city
            print("No results:", response.request.url)
            return
        else:
            print("Crawling (multiple pages):")
            # request pages 1 through 38; the loop variable is unused, `page` does the counting
            for i in range(2, 40, 1):
                url_page = response.request.url + "&p={0}".format(page)
                page += 1
                yield scrapy.Request(url=url_page,
                                     callback=self.parse_zp,
                                     cookies=self.cookies,
                                     headers=self.headers)

    def parse_zp(self, response):
        print("URL:", response.request.url)
        list_body = response.xpath("//div[@class='joblist-box__item clearfix']")
        for body in list_body:
            item = {}
            # job title
            item['title'] = body.xpath(".//div[@class='iteminfo__line iteminfo__line1']//div[@class='iteminfo__line1__jobname']//span[@class='iteminfo__line1__jobname__name']/text()").extract_first()
            list_li = body.xpath(".//div[@class='iteminfo__line iteminfo__line2']//ul//li")
            # education
            item['Education'] = list_li[2].xpath("./text()").extract_first()
            # job location
            item['job_location'] = list_li[0].xpath("./text()").extract_first()
            # required experience
            item['job_time'] = list_li[1].xpath("./text()").extract_first()
            # salary
            money = body.xpath(".//div[@class='iteminfo__line iteminfo__line2']//div[@class='iteminfo__line2__jobdesc']//p/text()").extract_first()
            item['money'] = money.split()
            # welfare / requirement tags
            info = body.xpath(".//div[@class='iteminfo__line iteminfo__line3']//div[@class='iteminfo__line3__welfare']//div")
            info_list = []
            for i in info:
                info_list.append(i.xpath("./text()").extract_first())
            item['job_info'] = " ".join(info_list)
            # company name (note the leading dot: stay inside this job card)
            item['Company_name'] = body.xpath(".//div[@class='iteminfo__line iteminfo__line1']//div[@class='iteminfo__line1__compname']//span[@class='iteminfo__line1__compname__name']/text()").extract_first()
            company = body.xpath(".//div[@class='iteminfo__line iteminfo__line2']//div[@class='iteminfo__line2__compdesc']//span")
            # company size
            item['company_number'] = company[1].xpath("./text()").extract()
            # company type
            item['company_type'] = company[0].xpath("./text()").extract()
            yield item
Next, write the following code in items.py:

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class RecruitmentItem(scrapy.Item):
    # job title
    title = scrapy.Field()
    # salary
    money = scrapy.Field()
    # education
    Education = scrapy.Field()
    # job description (unused)
    # job_desc = scrapy.Field()
    # required experience
    job_time = scrapy.Field()
    # welfare / requirement tags
    job_info = scrapy.Field()
    # job location
    job_location = scrapy.Field()
    # company name
    Company_name = scrapy.Field()
    # company type
    company_type = scrapy.Field()
    # company size
    company_number = scrapy.Field()
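Note that the spider above yields plain dicts, which Scrapy accepts as items. To actually route results through RecruitmentItem, you could instantiate it in parse_zp instead; a sketch (the import path assumes the project is named Recruitment, matching the settings below):

from Recruitment.items import RecruitmentItem

# inside parse_zp(), replace `item = {}` with:
item = RecruitmentItem()
# the field assignments (item['title'] = ..., etc.) stay exactly the same; as a
# bonus, RecruitmentItem raises KeyError on any field name not declared in items.py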
The pipelines.py code:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter


class RecruitmentPipeline:
    def open_spider(self, spider):
        # write the header row once when the spider starts
        # (columns: job title, education, location, experience, salary,
        #  tags, company name, company size, company type)
        with open("智聯招聘.txt", "w", encoding="utf-8") as f:
            f.write("工作名\t學歷\t工作地點\t工作時間\t工資\t工作需要\t公司名\t企業人數\t企業類型\n")

    def process_item(self, item, spider):
        print(item)
        # append one tab-separated row per item; `or ""` guards against
        # None values returned by extract_first()
        with open("智聯招聘.txt", "a+", encoding="utf-8") as f:
            for i in dict(item).items():
                f.write("".join(i[1] or "") + "\t")
            f.write("\n")
        return item
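Reopening the output file for every item works, but it costs one open/close per job posting. A common alternative, sketched below under the same file layout, is to hold the file open between open_spider and close_spider:

class RecruitmentFilePipeline:
    def open_spider(self, spider):
        self.f = open("智聯招聘.txt", "w", encoding="utf-8")
        self.f.write("工作名\t學歷\t工作地點\t工作時間\t工資\t工作需要\t公司名\t企業人數\t企業類型\n")

    def process_item(self, item, spider):
        # join list-valued fields; "".join leaves plain strings unchanged
        self.f.write("\t".join("".join(v or "") for v in dict(item).values()) + "\n")
        return item

    def close_spider(self, spider):
        self.f.close()

If you use this variant, register it in ITEM_PIPELINES in place of RecruitmentPipeline.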
settings.py
A few options need to be changed in the configuration file:
# register the item pipeline
ITEM_PIPELINES = {
    'Recruitment.pipelines.RecruitmentPipeline': 300,
}
# allow the cookies we pass explicitly on each Request
COOKIES_ENABLED = True
# do not obey robots.txt rules
ROBOTSTXT_OBEY = False
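With the settings in place, run the spider from the project root; the results are written to 智聯招聘.txt:

scrapy crawl zhaopin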
Results
Closing remarks: these are my notes on crawling Zhilian Zhaopin with Scrapy.