Python Crawler Practice with Scrapy, Part 1: Getting Started
- Preface
- I. Choosing a Crawler Framework: Scrapy
- II. Installing Scrapy
- 1. Dependencies
- 2. Installation
- 3. Verification
- III. The First Scrapy Crawler Project
- 1. Creating a New Project with the Framework
- 2. The First Spider
- 3. Open the Gates, Release the Spider
- References
Preface
A friend recently asked me to help build a crawler. I hadn't really done much of that before, so it was a good chance to fill in the gap. This article covers the pitfalls I stepped in along the way.
I. Choosing a Crawler Framework: Scrapy
I had always heard that Python is the best fit for writing crawlers, so naturally I picked Scrapy to practice with.
Scrapy is the framework almost anyone looking to get started will run into. It is an application framework for crawling websites and extracting structured data, used mainly for data mining, information processing, or historical archiving.
II. Installing Scrapy
I write my Python in Visual Studio Code, with Python 3.8 installed; I won't go over that setup here.
1. Dependencies
Scrapy is written in pure Python and depends on the following libraries:
- lxml
- parsel
- w3lib
- twisted
- cryptography and pyOpenSSL
上述庫(kù)均可以直接用pip install安裝
2. Installation

```bash
pip install scrapy
```

3. Verification
打開(kāi)bash,進(jìn)入工作區(qū)
```
$ scrapy
Scrapy 2.5.0 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  bench         Run quick benchmark test
  commands
  fetch         Fetch a URL using the Scrapy downloader
  genspider     Generate new spider using pre-defined templates
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

  [ more ]      More commands available when run from project directory

Use "scrapy <command> -h" to see more info about a command
```

And with that, the installation is done.
這一步執(zhí)行的時(shí)候出錯(cuò),VS Code的老毛病了,關(guān)掉VS Code重啟一下就好了
III. The First Scrapy Crawler Project
1. Creating a New Project with the Framework
```bash
scrapy startproject quote
```

After creating it, you can inspect the layout with the tree command:
```
E:\PythonWork\Scrapy\quote>tree /f
Folder PATH listing
Volume serial number is 78FD-091E
E:.
│  scrapy.cfg         # crawler configuration file
│
└─quote               # the project's Python module; your code is imported from here
    │  items.py       # item definitions
    │  middlewares.py # middleware; things like IP proxy settings can be handled here
    │  pipelines.py   # pipeline processing
    │  settings.py    # project settings
    │  __init__.py
    │
    └─spiders         # folder holding the spiders
          __init__.py
```

At the beginner stage, the spiders folder is basically the only one we need to care about.
2. The First Spider
In Scrapy, a spider is a class that defines the crawling behavior and the pages to crawl. It must subclass Spider and define the initial requests, i.e. the links to fetch, and it defines how to process the content of the downloaded pages and extract data from them.
我們?cè)趒uote\spiders文件夾下創(chuàng)建quote_spider.py文件,文件內(nèi)容如下:
The parse() method is usually where the response is handled: it extracts the scraped data into dicts, or finds the next URL to crawl and issues a Request for it to continue the crawl.
Note: if you get an encoding error here, deleting the Chinese comments from the file fixes it.
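To make that second role of parse() concrete, here is a sketch, again borrowed from the official tutorial, of a variant that yields dicts and follows the pagination link; the CSS selectors assume the markup of quotes.toscrape.com:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # start_urls is shorthand for a start_requests() that yields these URLs
    start_urls = ['http://quotes.toscrape.com/page/1/']

    def parse(self, response):
        # Extract each quote on the page into a plain dict
        for quote in response.css('div.quote'):
            yield {
                'text': quote.css('span.text::text').get(),
                'author': quote.css('small.author::text').get(),
            }
        # Follow the "next page" link, if present, and parse it the same way
        next_page = response.css('li.next a::attr(href)').get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```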
3. 開(kāi)門(mén)、放蟲(chóng)
```bash
scrapy crawl quotes
```

This command runs the spider named quotes, the name we defined in the file we just added.
```
2021-05-20 16:11:29 [scrapy.utils.log] INFO: Scrapy 2.5.0 started (bot: quote)
2021-05-20 16:11:29 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 20.3.0, Python 3.8.10 (tags/v3.8.10:3d8993a, May  3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1k  25 Mar 2021), cryptography 3.4.7, Platform Windows-7-6.1.7601-SP1
2021-05-20 16:11:29 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-05-20 16:11:29 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'quote',
 'NEWSPIDER_MODULE': 'quote.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['quote.spiders']}
2021-05-20 16:11:29 [scrapy.extensions.telnet] INFO: Telnet Password: cf9b8b15e70bb2c3
2021-05-20 16:11:29 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2021-05-20 16:11:30 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-05-20 16:11:30 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-05-20 16:11:30 [scrapy.middleware] INFO: Enabled item pipelines: []
2021-05-20 16:11:30 [scrapy.core.engine] INFO: Spider opened
2021-05-20 16:11:30 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-05-20 16:11:30 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2021-05-20 16:11:31 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://quotes.toscrape.com/robots.txt> (referer: None)
2021-05-20 16:11:31 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://quotes.toscrape.com/page/1/> (referer: None)
2021-05-20 16:11:31 [quotes] DEBUG: Saved file quotes-1.html
2021-05-20 16:11:31 [scrapy.core.scraper] DEBUG: Scraped from <200 http://quotes.toscrape.com/page/1/>
```

Output like this means the spider is working properly.
Looking in the working directory, quotes-1.html and quotes-2.html have been generated.
References
- https://docs.scrapy.org/en/latest/intro/tutorial.html