Scraping Doutula meme images with Scrapy
This Scrapy spider for Doutula meme images is adapted from someone else's blog post; I only changed a few things. Recommended link: http://www.cnblogs.com/jiaoyu121/p/6992587.html
First, create the project: scrapy startproject doutu
In the Scrapy framework, you first declare what you want to scrape, in items.py:
import scrapy

class DoutuItem(scrapy.Item):
    # define the fields for your item here
    img_url = scrapy.Field()
    name = scrapy.Field()
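A scrapy.Item behaves like a dict that only accepts the fields declared on the class. If Scrapy is not at hand, the behavior can be sketched with a minimal stand-in (the Item class below is a simplified illustration, not Scrapy's actual implementation):

```python
class Item(dict):
    """Minimal stand-in for scrapy.Item: a dict restricted to declared fields."""
    fields = ()

    def __setitem__(self, key, value):
        # reject any key that was not declared as a field
        if key not in self.fields:
            raise KeyError('{} is not a declared field'.format(key))
        super().__setitem__(key, value)


class DoutuItem(Item):
    fields = ('img_url', 'name')


item = DoutuItem()
item['img_url'] = 'http://example.com/a.jpg'
item['name'] = 'funny'
```

Assigning to an undeclared key, e.g. item['other'] = 1, raises KeyError, which is how Scrapy catches typos in field names early.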
With that done, write the spider itself, doutula.py:
# -*- coding: utf-8 -*-
import os

import requests
import scrapy

from doutu.items import DoutuItem


class DoutulaSpider(scrapy.Spider):
    name = "doutula"
    # the Doutula site
    allowed_domains = ["www.doutula.com"]
    # loop over all 846 list pages
    start_urls = ['https://www.doutula.com/photo/list/?page={}'.format(i)
                  for i in range(1, 847)]

    def parse(self, response):
        for i, content in enumerate(
                response.xpath('//*[@id="pic-detail"]/div/div[1]/div[2]/a')):
            item = DoutuItem()
            # xpath for the image link
            item['img_url'] = 'http:' + content.xpath('//img/@data-original').extract()[i]
            # xpath for the image name
            item['name'] = content.xpath('//p/text()').extract()[i]
            try:
                if not os.path.exists('doutu'):
                    os.makedirs('doutu')
                r = requests.get(item['img_url'])
                # the last four characters of the URL are the extension,
                # so each file format keeps its own suffix
                filename = 'doutu\\{}'.format(item['name']) + item['img_url'][-4:]
                with open(filename, 'wb') as fo:
                    fo.write(r.content)
            except Exception:
                print('Error')
            yield item

Here extract() returns a plain Python list of everything the expression matched, and indexing with [i] picks out the entry for the current loop iteration; without the index, the entire list of tag contents would end up in a single item value. (The original post incremented i by hand at the top of the loop, which skipped the first image and ran past the end of the list on the last one; enumerate avoids both problems.) item['img_url'][-4:] takes the last four characters of the image URL, so files in different formats keep their own suffixes.
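The [-4:] slice assumes every extension is exactly three letters, so a .jpeg URL would produce a mangled name. A slightly more robust variant uses os.path.splitext (a sketch using only the standard library; build_filename is a hypothetical helper, not part of the original spider):

```python
import os

def build_filename(name, img_url, folder='doutu'):
    # splitext splits '.../b.jpeg' into ('.../b', '.jpeg'); take the real
    # extension instead of blindly slicing the last four characters
    ext = os.path.splitext(img_url)[1] or '.jpg'
    return os.path.join(folder, name + ext)

print(build_filename('haha', 'http://img.doutula.com/a/b.jpeg'))
```

os.path.join also spares you from hard-coding the Windows '\\' separator, so the same code runs on Linux.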
Next, edit the settings.py configuration file:
BOT_NAME = 'doutu'
SPIDER_MODULES = ['doutu.spiders']
NEWSPIDER_MODULE = 'doutu.spiders'
DOWNLOADER_MIDDLEWARES = {
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
'doutu.middlewares.RotateUserAgentMiddleware': 400,
}
ROBOTSTXT_OBEY = False
CONCURRENT_REQUESTS = 16
DOWNLOAD_DELAY = 0.2
COOKIES_ENABLED = False
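With DOWNLOAD_DELAY = 0.2, Scrapy waits roughly 0.2 s between requests to the same site, so the 846 list pages alone take at least about three minutes. A rough lower bound (ignoring the image downloads done via requests):

```python
pages = 846   # number of list pages in start_urls
delay = 0.2   # DOWNLOAD_DELAY in seconds
print('at least {:.0f} s for the list pages'.format(pages * delay))
```

Raising CONCURRENT_REQUESTS does not shorten this, since the per-domain delay is the bottleneck here.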
Now configure middlewares.py. Together with the settings above, it picks a random User-Agent for each download, which gives some protection against bans. Add the following to the existing code; you can put more entries into user_agent_list:
import random

from scrapy.downloadermiddlewares.useragent import UserAgentMiddleware


class RotateUserAgentMiddleware(UserAgentMiddleware):
    def __init__(self, user_agent=''):
        self.user_agent = user_agent

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        if ua:
            print(ua)
            request.headers.setdefault('User-Agent', ua)

    user_agent_list = [
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
        "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
        "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
        "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    ]
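The key detail in the middleware is headers.setdefault: it only fills in a User-Agent when the request does not already carry one. The effect can be demonstrated without Scrapy (FakeRequest is a hypothetical stand-in for scrapy.Request with a plain dict of headers):

```python
import random

USER_AGENTS = ['UA-one', 'UA-two', 'UA-three']

class FakeRequest:
    # minimal stand-in for scrapy.Request: just a headers dict
    def __init__(self, headers=None):
        self.headers = headers or {}

def process_request(request):
    ua = random.choice(USER_AGENTS)
    # setdefault never overwrites a User-Agent that is already set
    request.headers.setdefault('User-Agent', ua)
    return request

fresh = process_request(FakeRequest())
preset = process_request(FakeRequest({'User-Agent': 'fixed-UA'}))
print(fresh.headers['User-Agent'], preset.headers['User-Agent'])
```

A fresh request gets one of the rotated agents, while a request that arrived with an explicit User-Agent keeps it unchanged.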
Finally, a small script to launch the project:
from scrapy.cmdline import execute
execute(['scrapy','crawl','doutula'])
Now for the moment of truth: run the script and the images land in the doutu folder. (The original post showed a screenshot of the results here.)
This post is largely excerpted and copied, but I worked through it myself and have reworked most of it into a form I understand. Reposted from: https://www.cnblogs.com/lianghongrui/p/7061447.html
總結
以上是生活随笔為你收集整理的scrapy爬取斗图表情的全部內容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: AspectJ详解
- 下一篇: Linux PostgreSQL离线下载