Request/Response (Study Notes 03)
Request
Partial source code of Request:
```python
# Partial code
class Request(object_ref):

    def __init__(self, url, callback=None, method='GET', headers=None, body=None,
                 cookies=None, meta=None, encoding='utf-8', priority=0,
                 dont_filter=False, errback=None):

        self._encoding = encoding  # this one has to be set first
        self.method = str(method).upper()
        self._set_url(url)
        self._set_body(body)
        assert isinstance(priority, int), "Request priority not an integer: %r" % priority
        self.priority = priority

        assert callback or not errback, "Cannot use errback without a callback"
        self.callback = callback
        self.errback = errback

        self.cookies = cookies or {}
        self.headers = Headers(headers or {}, encoding=encoding)
        self.dont_filter = dont_filter

        self._meta = dict(meta) if meta else None

    @property
    def meta(self):
        if self._meta is None:
            self._meta = {}
        return self._meta
```

The most commonly used parameters are:
- url: the URL to request, which will be processed in the next step.
- callback: the function that will handle the Response returned by this request.
- method: usually does not need to be set; defaults to GET. Can be "GET", "POST", "PUT", etc., and must be an uppercase string.
- headers: the headers sent with the request. Usually not needed. Typical contents:

```
# Anyone who has written a crawler will recognize these
Host: media.readthedocs.org
User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:33.0) Gecko/20100101 Firefox/33.0
Accept: text/css,*/*;q=0.1
Accept-Language: zh-cn,zh;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://scrapy-chs.readthedocs.org/zh_CN/0.24/
Cookie: _ga=GA1.2.1612165614.1415584110;
Connection: keep-alive
If-Modified-Since: Mon, 25 Aug 2014 21:59:35 GMT
Cache-Control: max-age=0
```

- meta: commonly used to pass data between requests; a dict.

```python
request_with_cookies = Request(
    url="http://www.example.com",
    cookies={'currency': 'USD', 'country': 'UY'},
    meta={'dont_merge_cookies': True})
```

- encoding: the default 'utf-8' is fine.
- dont_filter: marks the request as not to be filtered by the scheduler. Use it when you want to send the same request multiple times and the duplicates filter should be ignored. Defaults to False.
- errback: the function to call on error.
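As a quick illustration, here is a minimal sketch of how these parameters are typically combined in a spider; the URLs and the meta key are placeholders:

```python
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://www.example.com"]

    def parse(self, response):
        # pass data to the next callback through meta
        yield scrapy.Request(
            url="http://www.example.com/detail",  # placeholder URL
            callback=self.parse_detail,
            headers={"User-Agent": "Mozilla/5.0"},
            meta={"from_url": response.url})

    def parse_detail(self, response):
        # read back the data attached by the previous request
        self.log("came from %s" % response.meta["from_url"])
```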
Response

```python
# Partial code
class Response(object_ref):

    def __init__(self, url, status=200, headers=None, body='', flags=None, request=None):
        self.headers = Headers(headers or {})
        self.status = int(status)
        self._set_body(body)
        self._set_url(url)
        self.request = request
        self.flags = [] if flags is None else list(flags)

    @property
    def meta(self):
        try:
            return self.request.meta
        except AttributeError:
            raise AttributeError("Response.meta not available, this response "
                                 "is not tied to any request")
```

Most of the parameters are similar to those of Request:
- status: the response status code
- _set_body(body): sets the response body
- _set_url(url): sets the response URL
- self.request = request: the Request object that produced this response
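A small sketch of a spider callback inspecting these attributes; note that response.meta simply proxies response.request.meta, as the source above shows:

```python
def parse(self, response):
    # inside a spider: inspect the attributes listed above
    self.log("status: %d" % response.status)
    self.log("url: %s" % response.url)
    self.log("request that produced this response: %r" % response.request)
    self.log("meta carried over from the request: %r" % response.meta)
```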
Sending POST Requests

- You can send a POST request with the `yield scrapy.FormRequest(url, formdata, callback)` method.
- If you want the spider to send a POST request as soon as it starts, override the Spider class's start_requests(self) method and stop using the URLs in start_urls, as shown in the sketch after this list.
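A minimal sketch of both points together; the URL and form fields here are placeholders:

```python
import scrapy

class PostSpider(scrapy.Spider):
    name = "post_example"

    def start_requests(self):
        # sent at startup instead of the default GET requests from start_urls
        yield scrapy.FormRequest(
            url="http://www.example.com/post",  # placeholder URL
            formdata={"key": "value"},          # form fields, sent URL-encoded
            callback=self.parse_post)

    def parse_post(self, response):
        self.log("POST response from %s" % response.url)
```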
Simulating Login
Use the FormRequest.from_response() method to simulate a user login.
Websites usually pre-populate certain form fields (such as session-related data or the authentication token on a login page) through `<input type="hidden">` elements.
When scraping with Scrapy, if you want to pre-populate or override form fields such as the username and password, you can use the FormRequest.from_response() method.
Here is an example spider that uses this method:
```python
import scrapy
from scrapy import log  # needed for log.ERROR below

class LoginSpider(scrapy.Spider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'john', 'password': 'secret'},
            callback=self.after_login)

    def after_login(self, response):
        # check that the login succeeded before going on
        if "authentication failed" in response.body:
            self.log("Login failed", level=log.ERROR)
            return
        # continue scraping with the authenticated session...
```
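If the page contains more than one form, from_response picks the first one by default. Here is a hedged sketch of targeting a specific form; the form name is illustrative:

```python
return scrapy.FormRequest.from_response(
    response,
    formname='login_form',  # illustrative name; formnumber can be used instead
    formdata={'username': 'john', 'password': 'secret'},
    callback=self.after_login)
```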
Zhihu spider case study

zhihuSpider.py spider code:
```python
#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy import Request, FormRequest
from zhihu.items import ZhihuItem

class ZhihuSpider(CrawlSpider):
    name = "zhihu"
    allowed_domains = ["www.zhihu.com"]
    start_urls = ["http://www.zhihu.com"]
    rules = (
        Rule(LinkExtractor(allow=('/question/\d+#.*?', )), callback='parse_page', follow=True),
        Rule(LinkExtractor(allow=('/question/\d+', )), callback='parse_page', follow=True),
    )
    headers = {
        "Accept": "*/*",
        "Accept-Language": "en-US,en;q=0.8,zh-TW;q=0.6,zh;q=0.4",
        "Connection": "keep-alive",
        "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2125.111 Safari/537.36",
        "Referer": "http://www.zhihu.com/"
    }

    # Override the spider's default method to send a custom first request;
    # when it succeeds, the callback is invoked
    def start_requests(self):
        return [Request("https://www.zhihu.com/login",
                        meta={'cookiejar': 1},
                        callback=self.post_login)]

    def post_login(self, response):
        print 'Preparing login'
        # Extract the _xsrf token from the returned page; it must be
        # submitted with the form for the login to succeed
        xsrf = response.xpath('//input[@name="_xsrf"]/@value').extract()[0]
        print xsrf
        # FormRequest.from_response is a Scrapy helper for posting forms;
        # after a successful login, the after_login callback is invoked
        return [FormRequest.from_response(response,
                        meta={'cookiejar': response.meta['cookiejar']},
                        headers=self.headers,  # note the custom headers here
                        formdata={
                            '_xsrf': xsrf,
                            'email': '123456@qq.com',
                            'password': '123456'
                        },
                        callback=self.after_login,
                        dont_filter=True)]

    def after_login(self, response):
        for url in self.start_urls:
            yield self.make_requests_from_url(url)

    def parse_page(self, response):
        problem = Selector(response)
        item = ZhihuItem()
        item['url'] = response.url
        item['name'] = problem.xpath('//span[@class="name"]/text()').extract()
        print item['name']
        item['title'] = problem.xpath('//h2[@class="zm-item-title zm-editable-content"]/text()').extract()
        item['description'] = problem.xpath('//div[@class="zm-editable-content"]/text()').extract()
        item['answer'] = problem.xpath('//div[@class=" zm-editable-content clearfix"]/text()').extract()
        return item
```

Item class setup
```python
from scrapy.item import Item, Field

class ZhihuItem(Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    url = Field()          # URL of the scraped question
    title = Field()        # title of the question
    description = Field()  # description of the question
    answer = Field()       # answers to the question
    name = Field()         # the user's name
```

settings.py: setting the crawl delay
```python
BOT_NAME = 'zhihu'

SPIDER_MODULES = ['zhihu.spiders']
NEWSPIDER_MODULE = 'zhihu.spiders'

DOWNLOAD_DELAY = 0.25  # set the download delay to 250 ms
```