Research and Implementation of Web Crawler Performance Improvement and Function Expansion
[Abstract]: With the rapid development of the Internet, the World Wide Web has become the carrier of a vast amount of information, and extracting and using that information effectively has become a major challenge. The web crawler emerged to meet this demand: it is a program or script that automatically fetches Web pages according to certain rules. This paper first reviews the history of web crawlers and their fields of application. An analysis of mainstream crawlers shows that today's crawlers mainly serve search engines and prepare data resources for topic-oriented user queries; although their crawling architecture is highly extensible, the traditional crawler's close coupling to search engines has gradually weakened its flexibility and functional characteristics. The paper then discusses several metrics for evaluating crawler performance, and presents optimization strategies for small and medium-sized crawlers from two aspects: performance improvement and function expansion. For performance improvement, optimization schemes are proposed for the individual functional modules: first, Gzip/deflate compressed transfer is chosen to cut network transmission time by reducing the amount of data transferred; second, downloads are issued as asynchronous requests to raise bandwidth and CPU utilization; third, crawling proceeds breadth-first, with a Bloom filter used for large-scale URL de-duplication; fourth, carefully designed regular expressions extract the links on each page; fifth, crawled URLs are strictly normalized to reduce the errors that malformed URLs introduce into the crawl; sixth, an optimized thread pool manages the worker threads efficiently. For function expansion, the paper distinguishes this crawler from traditional crawlers in three respects: first, static page performance analysis that gives websites advice on performance improvement; second, use as an automated testing tool that executes test cases against specified pages; third, customizable focused data extraction that captures data in the format the user specifies. To verify the above optimization strategies, a crawler was developed in C# with Visual Studio 2008 on the .NET platform, which is particularly well suited to lightweight crawlers. The program runs in command-line mode and is highly configurable through configuration files.
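As a minimal sketch of the first two performance measures (Gzip/deflate compressed transfer and asynchronous downloading), the C# fragment below combines HttpWebRequest's built-in decompression with the Begin/End asynchronous pattern available on the .NET platform the thesis targets; the class name and callback structure are illustrative assumptions, not the thesis's actual implementation.

```csharp
using System;
using System.IO;
using System.Net;

class AsyncGzipDownloader
{
    // Issue a non-blocking, compression-enabled page download.
    public static void Download(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        // Advertise gzip/deflate support and let the runtime decompress the body,
        // so fewer bytes cross the network without changing the parsing code.
        request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;
        // Asynchronous request: the calling thread is free to issue further
        // downloads while this one waits on the network.
        request.BeginGetResponse(OnResponse, request);
    }

    private static void OnResponse(IAsyncResult ar)
    {
        var request = (HttpWebRequest)ar.AsyncState;
        using (var response = (HttpWebResponse)request.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string html = reader.ReadToEnd();   // already decompressed
            Console.WriteLine("{0}: {1} characters", request.RequestUri, html.Length);
        }
    }
}
```

Setting AutomaticDecompression makes the request both send the Accept-Encoding header and transparently inflate the compressed response, which is the effect the abstract attributes to compressed-transfer coding.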
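For the third measure, breadth-first crawling with a Bloom filter for URL de-duplication, a compact sketch follows; the bit-array size, the two string hashes and the TestAndAdd interface are illustrative choices rather than the parameters used in the thesis.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Minimal Bloom filter for large-scale URL de-duplication: k probes into a
// fixed bit array; false positives are possible, false negatives are not.
class UrlBloomFilter
{
    private readonly BitArray bits;
    private readonly int hashCount;

    public UrlBloomFilter(int bitCount, int hashCount)
    {
        bits = new BitArray(bitCount);
        this.hashCount = hashCount;
    }

    // Derive k probe positions from two independent string hashes.
    private IEnumerable<int> Probes(string url)
    {
        int h1 = url.GetHashCode();
        int h2 = 0;
        foreach (char c in url) h2 = h2 * 31 + c;
        for (int i = 0; i < hashCount; i++)
            yield return (int)((uint)(h1 + i * h2) % (uint)bits.Length);
    }

    // Returns true if the URL has (probably) been seen before; otherwise records it.
    public bool TestAndAdd(string url)
    {
        bool seen = true;
        foreach (int p in Probes(url))
        {
            if (!bits[p]) seen = false;
            bits[p] = true;
        }
        return seen;
    }
}
```

A breadth-first frontier would consult the filter before enqueuing, e.g. `if (!filter.TestAndAdd(url)) frontier.Enqueue(url);` on a `Queue<string>`, accepting a small false-positive rate (a few URLs skipped) in exchange for memory use far below that of an exact set of all visited URLs.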
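The fourth and fifth measures, regex-based link extraction and strict URL normalization, can be sketched as below; the href pattern and the scheme filtering are simplified assumptions for static HTML pages and do not reproduce the expressions used in the thesis.

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class LinkExtractor
{
    // Pragmatic pattern for href="..." / href='...' in static HTML;
    // fragment-only links (#...) are excluded outright.
    private static readonly Regex HrefPattern = new Regex(
        @"href\s*=\s*[""']([^""'#]+)[""']",
        RegexOptions.IgnoreCase | RegexOptions.Compiled);

    // Extract links and normalize them against the page's own URI so that
    // relative paths and equivalent spellings do not create duplicate
    // entries in the crawl frontier.
    public static IEnumerable<string> ExtractLinks(string html, Uri pageUri)
    {
        foreach (Match m in HrefPattern.Matches(html))
        {
            Uri absolute;
            if (Uri.TryCreate(pageUri, m.Groups[1].Value, out absolute)
                && (absolute.Scheme == Uri.UriSchemeHttp || absolute.Scheme == Uri.UriSchemeHttps))
            {
                yield return absolute.AbsoluteUri;   // canonical absolute form
            }
        }
    }
}
```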
【Degree-granting institution】: Jilin University
【Degree level】: Master's
【Year of degree conferral】: 2012
【Classification number】: TP391.3