Date: 2021-05-22
The examples below use the Python scraping library BeautifulSoup to traverse a document tree and operate on its tags. Everything here is foundational material.
First, build the soup object that all of the following examples operate on:

```python
from bs4 import BeautifulSoup

html_doc = """<html><head><title>The Dormouse's story</title></head>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>"""

soup = BeautifulSoup(html_doc, 'lxml')
```

I. Child nodes
A Tag may contain multiple strings or other Tags; all of these are its child nodes. BeautifulSoup provides many attributes for operating on and traversing children.
1. Getting a Tag by name
```python
print(soup.head)   # <head><title>The Dormouse's story</title></head>
print(soup.title)  # <title>The Dormouse's story</title>
```

Access by name only returns the first matching Tag; to get every Tag of a given kind, use the find_all method.
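A related detail, shown as a minimal sketch (it assumes bs4 is installed and uses the stdlib html.parser instead of lxml; the tiny document is invented for illustration): attribute-style access on a tag name that is absent from the document returns None rather than raising, so it pays to check before chaining further lookups.

```python
from bs4 import BeautifulSoup

doc = "<html><body><p>one</p><p>two</p></body></html>"
soup = BeautifulSoup(doc, "html.parser")

print(soup.p)              # attribute access: only the first <p>one</p>
print(soup.find_all("p"))  # find_all: every <p> in the document
print(soup.table)          # a missing tag comes back as None, not an error
```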
```python
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
```

2. contents: returns a Tag's children as a list
```python
head_tag = soup.head
head_tag.contents
# [<title>The Dormouse's story</title>]

title_tag = head_tag.contents[0]
title_tag
# <title>The Dormouse's story</title>
title_tag.contents
# ["The Dormouse's story"]
```

3. children: an iterator for looping over a Tag's direct children
```python
for child in title_tag.children:
    print(child)
# The Dormouse's story
```

4. descendants: contents and children both return only direct children, while descendants recurses over all of a tag's descendants
```python
for child in head_tag.children:
    print(child)
# <title>The Dormouse's story</title>

for child in head_tag.descendants:
    print(child)
# <title>The Dormouse's story</title>
# The Dormouse's story
```

5. string: if a tag has exactly one NavigableString child, .string returns that child
```python
title_tag.string
# "The Dormouse's story"
```

If a tag's only child is another tag, .string returns the NavigableString of that single child.
```python
head_tag.string
# "The Dormouse's story"
```

If a tag has multiple children, it is ambiguous which child .string should refer to, so it returns None.
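When .string comes back as None because a tag has several children, get_text() is the usual alternative for collecting all of the tag's text at once (a minimal sketch; it assumes bs4 is installed and uses the stdlib html.parser):

```python
from bs4 import BeautifulSoup

mixed = BeautifulSoup("<p>Hello <b>world</b>!</p>", "html.parser")

print(mixed.p.string)      # None: the <p> has three children
print(mixed.p.get_text())  # Hello world!
```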
```python
print(soup.html.string)
# None
```

6. strings and stripped_strings
If a tag contains multiple strings, .strings iterates over all of them.
```python
for string in soup.strings:
    print(string)
# The Dormouse's story
# The Dormouse's story
# Once upon a time there were three little sisters; and their names were
# Elsie
# ,
# Lacie
#  and
# Tillie
# ;
# and they lived at the bottom of a well.
# ...
# (plus blank lines produced by the newlines in the markup)
```

The output contains many stray spaces and blank lines; use stripped_strings to filter that whitespace out.
```python
for string in soup.stripped_strings:
    print(string)
# The Dormouse's story
# The Dormouse's story
# Once upon a time there were three little sisters; and their names were
# Elsie
# ,
# Lacie
# and
# Tillie
# ;
# and they lived at the bottom of a well.
# ...
```

II. Parent nodes
1. parent: get an element's parent node
```python
title_tag = soup.title
title_tag.parent
# <head><title>The Dormouse's story</title></head>
```

Strings have parents too:
```python
title_tag.string.parent
# <title>The Dormouse's story</title>
```

2. parents: recursively get all ancestor nodes
```python
link = soup.a
for parent in link.parents:
    if parent is None:
        print(parent)
    else:
        print(parent.name)
# p
# body
# html
# [document]
```

III. Sibling nodes
```python
sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></b></a>", 'lxml')
print(sibling_soup.prettify())
# <html>
#  <body>
#   <a>
#    <b>
#     text1
#    </b>
#    <c>
#     text2
#    </c>
#   </a>
#  </body>
# </html>
```

1. next_sibling and previous_sibling
```python
sibling_soup.b.next_sibling
# <c>text2</c>
sibling_soup.c.previous_sibling
# <b>text1</b>
```

In a real document, next_sibling and previous_sibling are usually strings or whitespace rather than tags.
```python
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

soup.a.next_sibling
# ',\n'   (the first <a>'s next_sibling is the string between the tags)
soup.a.next_sibling.next_sibling
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
```

2. next_siblings and previous_siblings
```python
for sibling in soup.a.next_siblings:
    print(repr(sibling))
# ',\n'
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
# ' and\n'
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
# ';\nand they lived at the bottom of a well.'

for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
# ' and\n'
# <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
# ',\n'
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
# 'Once upon a time there were three little sisters; and their names were\n'
```

IV. Going backward and forward
1. next_element and previous_element
These point to the next or previous parsed object (a string or a tag), i.e. the next or previous node in a depth-first, pre-order traversal of the document.
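To make that ordering concrete, here is a small pure-Python sketch (no bs4 required; the Node class and walk function are illustrative inventions, not part of BeautifulSoup) that yields a nested tree's nodes in the same document order that next_element follows:

```python
# A toy tree: each node has a name and a list of children (strings or Nodes).
class Node:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def walk(node):
    """Yield node names and strings in document (pre-order, depth-first) order."""
    yield node.name
    for child in node.children:
        if isinstance(child, Node):
            yield from walk(child)
        else:
            yield child  # a bare string, like a NavigableString

# <a><b>text1</b><c>text2</c></a> modeled as a toy tree
tree = Node("a", [Node("b", ["text1"]), Node("c", ["text2"])])
print(list(walk(tree)))  # ['a', 'b', 'text1', 'c', 'text2']
```

Each parent is visited before its children, which is exactly why the next_element of a tag is usually its own first child.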
```python
last_a_tag = soup.find("a", id="link3")
print(last_a_tag.next_sibling)
# ;
# and they lived at the bottom of a well.
print(last_a_tag.next_element)
# Tillie
last_a_tag.previous_element
# ' and\n'
```

2. next_elements and previous_elements
With .next_elements and .previous_elements you can iterate forward or backward through the document's parsed content, as if you were watching the document being parsed.
```python
for element in last_a_tag.next_elements:
    print(repr(element))
# 'Tillie'
# ';\nand they lived at the bottom of a well.'
# '\n'
# <p class="story">...</p>
# '...'
# '\n'
```
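To close, the navigation attributes above can be exercised together in one short, self-contained run (it assumes bs4 is installed and uses the stdlib html.parser builder rather than lxml; the tiny document here is invented for illustration):

```python
from bs4 import BeautifulSoup

doc = ('<html><head><title>Story</title></head><body>'
       '<p class="story">Once upon a time there were '
       '<a id="link1">Elsie</a> and <a id="link2">Lacie</a>.</p>'
       '</body></html>')
soup = BeautifulSoup(doc, "html.parser")

# children vs descendants: direct children only, versus every nested node and string
print([child.name for child in soup.body.children])    # ['p']
print(len(list(soup.body.descendants)))                # 8

# sibling and parent navigation from the first link
first_link = soup.find(id="link1")
print(repr(first_link.next_sibling))                   # ' and '
print([parent.name for parent in first_link.parents])  # ['p', 'body', 'html', '[document]']
```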