Abstract
Information is vital at all times and in all places. With the rapid development of the World Wide Web, the volume of information has grown explosively, at an exponential rate. When traditional information processing is extended to the Internet, information scattered across many websites usually has to be downloaded locally for further processing. For large-scale data collection, however, this traditional approach is clearly unsuitable: when searching the web manually, the sheer volume of information makes it difficult to discriminate and select what is relevant. To acquire large amounts of the desired information more efficiently and accurately, a web crawler is an excellent tool. With custom rules, relevant information can be mined from designated websites and then filtered to yield more precise results.
The web crawler presented here is written in the Python scripting language. A graphical interface built with the Tkinter library makes it easy to operate: each button click triggers the corresponding function. No SQL or NoSQL database is used for storage; instead, the crawl results are saved directly to an xlsx file, which simplifies reading the data back and visualizing it. For data analysis, the xlsx file is read with the pandas library, and the resulting data is plotted with the matplotlib library as scatter plots or bar charts for easy inspection.
Key words: web crawler; Python; data mining; data analysis