Date: 2021-05-22
This article shares a worked example of the Python scraping library BeautifulSoup; the details are as follows.
BeautifulSoup
Use BeautifulSoup to scrape some information about the movies currently showing on Douban.
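Before the full script, here is a minimal, self-contained sketch of the BeautifulSoup calls it relies on (`find_all` plus the `attrs` dictionary), run against hypothetical inline markup that mirrors the `data-*` attributes on Douban's "now playing" list items. The `html.parser` backend is used here instead of `lxml` to avoid the extra dependency; the markup and values are made up for illustration.

```python
from bs4 import BeautifulSoup

# Hypothetical markup imitating Douban's "now playing" list items.
html = '''
<ul>
  <li class="list-item" data-category="nowplaying" data-title="Example Movie"
      data-score="8.1" data-director="Some Director" data-actors="A / B"></li>
  <li class="list-item" data-category="upcoming" data-title="Other Movie"
      data-score="0" data-director="X" data-actors="Y"></li>
</ul>
'''

# 'html.parser' is the stdlib backend; the article's script uses 'lxml'.
soup = BeautifulSoup(html, 'html.parser')

movies = []
for item in soup.find_all('li', class_='list-item'):
    # Each attribute of the tag is available through the attrs dict.
    if item.attrs['data-category'] == 'nowplaying':
        movies.append({
            'title': item.attrs['data-title'],
            'score': item.attrs['data-score'],
        })

print(movies)  # only the 'nowplaying' item survives the filter
```

The same filter-by-attribute pattern is what the full script below applies to the live page.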
```python
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-24 16:18:01
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-24 17:25:33
import urllib2
import json

from bs4 import BeautifulSoup


def nowplaying_movies(url):
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    request = urllib2.Request(url=url, headers=headers)
    response = urllib2.urlopen(request)
    soup_packetpage = BeautifulSoup(response, 'lxml')
    items = soup_packetpage.findAll("li", class_="list-item")
    # items = soup_packetpage.findAll("li", {"class": "list-item"})  # equivalent form

    movies = []
    for item in items:
        if item.attrs['data-category'] == 'nowplaying':
            movie = {}
            movie['title'] = item.attrs['data-title']
            movie['score'] = item.attrs['data-score']
            movie['director'] = item.attrs['data-director']
            movie['actors'] = item.attrs['data-actors']
            movies.append(movie)
            print('%(title)s|%(score)s|%(director)s|%(actors)s' % movie)
    return movies


if __name__ == '__main__':
    url = 'https://movie.douban.com/nowplaying/beijing/'
    movies = nowplaying_movies(url)
    print('%s' % json.dumps(movies, sort_keys=True, indent=4, separators=(',', ': ')))
```
HTMLParser
Implementing the same functionality with HTMLParser.
There are some basic tutorials available for HTMLParser.
Note that the HtmlParser that has not been updated since 2006, and which many people recommend replacing with jsoup, is the Java library of that name (jsoup is likewise a Java library); it is unrelated to Python's built-in HTMLParser module used below.
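The technique is to subclass `HTMLParser` and override `handle_starttag`, which receives each tag's attributes as a list of `(name, value)` pairs. Here is a minimal Python 3 sketch of that pattern (in Python 3 the module is `html.parser` rather than Python 2's `HTMLParser`), run on hypothetical inline markup rather than the live page:

```python
from html.parser import HTMLParser


class MovieParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.movies = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs; a dict is easier to query.
        attrs = dict(attrs)
        if tag == 'li' and attrs.get('data-category') == 'nowplaying':
            self.movies.append({'title': attrs.get('data-title'),
                                'score': attrs.get('data-score')})


# Hypothetical markup imitating Douban's list items.
html = ('<li class="list-item" data-category="nowplaying"'
        ' data-title="Example Movie" data-score="8.1"></li>'
        '<li class="list-item" data-category="upcoming"'
        ' data-title="Other" data-score="0"></li>')

parser = MovieParser()
parser.feed(html)
print(parser.movies)  # only the 'nowplaying' item is collected
```

The full Python 2 script below does the same thing, with a small `_attr` helper playing the role of the `dict` lookup.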
```python
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-24 15:57:54
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-24 17:03:27
from HTMLParser import HTMLParser
import urllib2
import json


class MovieParser(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.movies = []

    def handle_starttag(self, tag, attrs):
        def _attr(attrlist, attrname):
            for attr in attrlist:
                if attr[0] == attrname:
                    return attr[1]
            return None
        if tag == 'li' and _attr(attrs, 'data-title') and _attr(attrs, 'data-category') == 'nowplaying':
            movie = {}
            movie['title'] = _attr(attrs, 'data-title')
            movie['score'] = _attr(attrs, 'data-score')
            movie['director'] = _attr(attrs, 'data-director')
            movie['actors'] = _attr(attrs, 'data-actors')
            self.movies.append(movie)
            print('%(title)s|%(score)s|%(director)s|%(actors)s' % movie)


def nowplaying_movies(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'}
    req = urllib2.Request(url, headers=headers)
    s = urllib2.urlopen(req)
    parser = MovieParser()
    parser.feed(s.read())
    s.close()
    return parser.movies


if __name__ == '__main__':
    url = 'https://movie.douban.com/nowplaying/beijing/'
    movies = nowplaying_movies(url)
    print('%s' % json.dumps(movies, sort_keys=True, indent=4, separators=(',', ': ')))
```
That is the entire content of this article. I hope it is helpful to your studies.