Python: Distributed Crawling in Two Forms with scrapy-redis (scrapy + redis)
2022-07-29
After several months away, I decided to put together a holiday-travel analysis and visualization over the National Day break.
1. Use Python to crawl data from travel websites and store it in a database.
2. Present the data with an analysis tool such as ECharts, FineReport, or Superset.
Environment:
Win10
Python: 3.7
Scrapy: 1.5.1
IDE: PyCharm
The Scrapy documentation covers installation, though installing on Windows did involve plenty of pitfalls at the time.
The basic usage is to cd into your workspace and create a Scrapy project from the command line; the generated project contains Scrapy's configuration, module skeleton, and so on.
For example:
cd E:\PythonWorkspace
E:\PythonWorkspace>scrapy startproject project_name
This creates a project_name directory with the following layout:
project_name/
    scrapy.cfg
    project_name/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            ...
These files are:
scrapy.cfg: the project's configuration file
project_name/: the project's Python module; your code goes here
project_name/items.py: the project's item definitions
project_name/pipelines.py: the project's item pipelines
project_name/settings.py: the project's settings
project_name/spiders/: the directory holding spider code
(Copied from the docs; a hypothetical item/pipeline sketch follows below.)
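To make the roles of items.py and pipelines.py concrete, here is a minimal sketch for the travel-data use case. The class names and fields (TravelItem, title, price) are hypothetical, not from the actual project:

# items.py - declare the fields a scraped record carries
import scrapy

class TravelItem(scrapy.Item):
    title = scrapy.Field()  # hypothetical: page or destination title
    price = scrapy.Field()  # hypothetical: ticket or package price

# pipelines.py - every item yielded by a spider passes through process_item
class PrintPipeline(object):
    def process_item(self, item, spider):
        print(dict(item))  # stand-in for the eventual database write
        return item

A pipeline only runs if it is also enabled in settings.py, e.g. ITEM_PIPELINES = {'project_name.pipelines.PrintPipeline': 300}.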
To warm up, write a small Scrapy project first. You can follow the official tutorial, although its example URL is blocked from inside mainland China.
The code below fetches the title of the Qunar homepage.
Spider file (this spider's name is test):
import scrapy

class test(scrapy.spiders.Spider):
    name = "test"
    # allowed_domains = ["qunar"]
    start_urls = ["https://qunar.com/"]

    def parse(self, response):
        # grab the first <title> node and print its markup
        title = response.xpath('/html/head/title').extract_first()
        print(title)
Unsurprisingly, it errors out...
Pitfall 1:
Running the spider file directly does nothing. The Scrapy project never executes, yet the run "finishes" normally:
Process finished with exit code 0
A Scrapy project must be launched from the command line, e.g. from the project root: scrapy crawl spider_name
Alternatively, to run it inside PyCharm, open the project's __init__.py and add:
from scrapy import cmdline
cmdline.execute("scrapy crawl spider_name".split())
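Another option (not used in this post) is a small launcher script that runs the spider in-process via Scrapy's CrawlerProcess; a minimal sketch, assuming the script sits in the project root so that get_project_settings() can locate scrapy.cfg:

# run.py - launch a spider without shelling out to the scrapy CLI
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # loads the project's settings.py
process.crawl("test")  # the spider's name attribute
process.start()  # blocks until the crawl finishes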
Then this error appears:
def write(self, data, async=False):
^
SyntaxError: invalid syntax
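This SyntaxError comes from library code (typically Twisted's conch/manhole.py) that uses async as a parameter name: async became a reserved keyword in Python 3.7. The cleaner fix is usually to upgrade Twisted to a release with full Python 3.7 support instead of patching library source:

pip install --upgrade Twisted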
Alternatively, per advice found online, rename every async in the offending file to some other identifier (e.g. shark); the SyntaxError then disappears, and the next error shows up:
import win32api
ModuleNotFoundError: No module named 'win32api'
The fix is to install the win32api module on Windows. Run the following from the command line (here via the Douban mirror):
C:\Users\yinyunqi>pip install -i https://pypi.douban.com/simple pypiwin32
Looking in indexes: https://pypi.douban.com/simple
Collecting pypiwin32
Downloading https://pypi.doubanio.com/packages/d0/1b/2f292bbd742e369a100c91faa0483172cd91a1a422a6692055ac920946c5/pypiwin32-223-py3-none-any.whl
Collecting pywin32>=223 (from pypiwin32)
Downloading https://pypi.doubanio.com/packages/a3/8a/eada1e7990202cd27e58eca2a278c344fef190759bbdc8f8f0eb6abeca9c/pywin32-224-cp37-cp37m-win_amd64.whl (9.0MB)
100% |████████████████████████████████| 9.1MB 260kB/s
Installing collected packages: pywin32, pypiwin32
Successfully installed pypiwin32-223 pywin32-224
Rerunning now prints the following:
E:\Python\Python37\python.exe E:/PythonWorkspace/NationalAna/NationalAna/__init__.py
2018-09-29 14:07:49 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: NationalAna)
2018-09-29 14:07:49 [scrapy.utils.log] INFO: Versions: lxml 4.2.4.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.7.0b2 (v3.7.0b2:b0ef5c979b, Feb 28 2018, 02:24:20) [MSC v.1912 64 bit (AMD64)], pyOpenSSL 18.0.0 (OpenSSL 1.1.0h 27 Mar 2018), cryptography 2.3, Platform Windows-10-10.0.17134-SP0
2018-09-29 14:07:49 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'NationalAna', 'NEWSPIDER_MODULE': 'NationalAna.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['NationalAna.spiders']}
2018-09-29 14:07:49 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2018-09-29 14:07:50 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-09-29 14:07:50 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-09-29 14:07:50 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-09-29 14:07:50 [scrapy.core.engine] INFO: Spider opened
2018-09-29 14:07:50 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-09-29 14:07:50 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-09-29 14:07:50 [scrapy.core.engine] DEBUG: Crawled (200)
2018-09-29 14:07:50 [scrapy.core.engine] DEBUG: Crawled (200)
2018-09-29 14:07:50 [scrapy.core.engine] INFO: Closing spider (finished)
2018-09-29 14:07:50 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 498,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 23167,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 9, 29, 6, 7, 50, 801314),
'log_count/DEBUG': 3,
'log_count/INFO': 7,
'response_received_count': 2,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2018, 9, 29, 6, 7, 50, 318350)}
2018-09-29 14:07:50 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0
Good: this confirms Scrapy is usable, and the actual task can now move ahead.