Overview
When using the lxml library to parse web pages, you have to write and test an XPath expression for each piece of data you want, which quickly becomes tedious. To address this, Python also offers Beautiful Soup for extracting nodes from HTML or XML documents. Beautiful Soup is more convenient to use and is popular among developers.
Short name
BeautifulSoup is commonly referred to as bs4, the name of the package it is imported from.
What is BeautifulSoup
Like lxml, BeautifulSoup is an HTML parser; its main job is also parsing documents and extracting data.
Pros and cons
- Con: less efficient than lxml
- Pro: a user-friendly, convenient API
Installation and object creation
Installation
pip install bs4
(the bs4 package on PyPI simply pulls in beautifulsoup4, which you can also install directly)
Import
from bs4 import BeautifulSoup
Creating the object
From a server response:
soup = BeautifulSoup(response.text, 'lxml')
From a local file:
soup = BeautifulSoup(open('1.html', 'r', encoding='utf-8'), 'lxml')
Note: open() falls back to the platform's default encoding (gbk on Chinese Windows), so specify the encoding explicitly.
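The first argument to BeautifulSoup can be any HTML string or file object, so the same call works for a downloaded page (e.g. `response.text` from requests) or an inline snippet. A minimal runnable sketch; it uses the stdlib 'html.parser' backend so it runs even without lxml installed (the article's examples use 'lxml'), and the HTML string here is invented for illustration:

```python
from bs4 import BeautifulSoup

# Any HTML string works as the first argument; with requests you would
# pass response.text here instead of this hard-coded snippet.
doc = '<html><body><p id="p1">hello</p></body></html>'

# 'html.parser' is Python's built-in backend; swap in 'lxml' if it is installed
soup = BeautifulSoup(doc, 'html.parser')
print(soup.p)         # first <p> tag
print(soup.p.string)  # its text content
```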
Locating nodes
Finding a node by tag name
soup.a
Note: this only returns the first <a> tag.
soup.a.name
soup.a.attrs
Practice HTML document (all of the following examples use this file):
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>title</title>
</head>
<body>
    <div>
        <ul>
            <li id="l1">张三</li>
            <li id="l2">李四</li>
            <li>王五</li>
            <a href="" id="" class="a1">人生苦短 我爱python</a>
            <span>嘿嘿嘿</span>
        </ul>
    </div>
    <a href="https://www.baidu.com" title="a2">百度</a>
    <div id="d1">
        <span>
            哈哈哈
        </span>
    </div>
    <p id="p1" class="p1">呵呵呵</p>
</body>
</html>
Code:
from bs4 import BeautifulSoup

# Create the object
html = BeautifulSoup(open('bs4的基本使用.html', 'r', encoding='utf-8'), 'lxml')

# Get the first <a> tag object
print(html.a)
# Result: <a class="a1" href="" id="">人生苦短 我爱python</a>

# Get the tag name of <a>
print(html.a.name)
# Result: a

# Get the attributes of <a> as a dictionary
print(html.a.attrs)
# Result: {'href': '', 'id': '', 'class': ['a1']}
Functions
find (returns a single object)
Syntax:
find('a')                  # returns only the first <a> tag
find('a', title='value')   # match by title attribute
find('a', class_='value')  # match by class attribute
Code:
from bs4 import BeautifulSoup

# Create the object
html = BeautifulSoup(open('bs4的基本使用.html', 'r', encoding='utf-8'), 'lxml')

# Get the first <a> tag
res = html.find('a')
print(res)
# Result: <a class="a1" href="" id="">人生苦短 我爱python</a>

# Get the <a> tag whose title is "a2"
res = html.find('a', title='a2')
print(res)
# Result: <a href="https://www.baidu.com" title="a2">百度</a>

# Get the <a> tag whose class is "a1"; because class is a Python keyword,
# a trailing underscore is required
res = html.find('a', class_='a1')
print(res)
# Result: <a class="a1" href="" id="">人生苦短 我爱python</a>
find_all (returns a list)
Syntax:
find_all('a')              # find all <a> tags
find_all(['a', 'span'])    # find all <a> and <span> tags
find_all('a', limit=2)     # find only the first two <a> tags
Code:
from bs4 import BeautifulSoup

# Create the object
html = BeautifulSoup(open('bs4的基本使用.html', 'r', encoding='utf-8'), 'lxml')

# Get all <a> tags
res = html.find_all('a')
print(res)
# Result: [<a class="a1" href="" id="">人生苦短 我爱python</a>, <a href="https://www.baidu.com" title="a2">百度</a>]

# Get all <a> and <span> tags
res = html.find_all(['a', 'span'])
print(res)
# Result: [<a class="a1" href="" id="">人生苦短 我爱python</a>, <span>嘿嘿嘿</span>, <a href="https://www.baidu.com" title="a2">百度</a>, <span>
# 哈哈哈
# </span>]
# Note: the output preserves the formatting of the source document

# Method 1: use the limit parameter to get only the first two <li> tags
res = html.find_all('li', limit=2)
# Method 2: slice the full result list
# res = html.find_all('li')[:2]
print(res)
# Result: [<li id="l1">张三</li>, <li id="l2">李四</li>]
select (returns node objects matching a CSS selector) [recommended]
Syntax:
By tag name: select('p')
By class: select('.page')
By id: select('#page')
Attribute selectors
- select('li[attr]'), e.g. select('li[class]')
- select('li[attr=value]'), e.g. select('li[class="page"]')
Hierarchy selectors
- descendant (div span): select('div span')
- direct child (div > span): select('div>span')
Other
Select several kinds of tag at once: select('li,span')
Code:
from bs4 import BeautifulSoup

# Create the object
html = BeautifulSoup(open('bs4的基本使用.html', 'r', encoding='utf-8'), 'lxml')

"""Basic selectors"""
# Get all <li> tags
res = html.select('li')
print(res)
# Result: [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>]

# Get tags with class "a1"
res = html.select('.a1')
print(res)
# Result: [<a class="a1" href="" id="">人生苦短 我爱python</a>]

# Get the tag with id "l1"
res = html.select('#l1')
print(res)
# Result: [<li id="l1">张三</li>]

# Get all <li> and <span> tags
res = html.select('li,span')
print(res)
# Result: [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>, <span>嘿嘿嘿</span>, <span>
# 哈哈哈
# </span>]
# Note: again, the output preserves the formatting of the source document

"""Attribute selectors"""
# Get tags that have a title attribute
res = html.select('[title]')
print(res)
# Result: [<a href="https://www.baidu.com" title="a2">百度</a>]

# Get <li> tags that have an id attribute
res = html.select('li[id]')
print(res)
# Result: [<li id="l1">张三</li>, <li id="l2">李四</li>]

# Get tags whose id is "l1"
res = html.select('[id="l1"]')
print(res)
# Result: [<li id="l1">张三</li>]

# Get <li> tags whose id is "l2"
res = html.select('li[id="l2"]')
print(res)
# Result: [<li id="l2">李四</li>]

"""Hierarchy selectors"""
# Get all <li> tags inside the <div>
res = html.select('div li')      # Method 1: descendant selector
res = html.select('div ul li')   # Method 2
res = html.select('div>ul>li')   # Method 3: child selector
print(res)
# Result: [<li id="l1">张三</li>, <li id="l2">李四</li>, <li>王五</li>]
Getting node information
Getting a node's text (relevant when tags are nested inside tags):
obj.string (restricted by nesting: returns None unless the tag has exactly one text child)
obj.get_text() [recommended] (no nesting restriction: concatenates all descendant text)
Node attributes
tag.name returns the tag name
tag.attrs returns the attributes as a dictionary
Getting a single attribute value
- obj.attrs.get('title') [most common]
- obj.get('title')
- obj['title']
Code:
from bs4 import BeautifulSoup

# Create the object
html = BeautifulSoup(open('bs4的基本使用.html', 'r', encoding='utf-8'), 'lxml')

"""Getting text"""
# Get the text of the <span> inside the div with id "d1";
# .string is nesting-restricted, so we must navigate down to the <span> itself
res = html.select('div[id="d1"]>span')[0]
print(res.string)
# Result: 哈哈哈

# .get_text() has no nesting restriction; calling it on the <div> works directly
res = html.select('div[id="d1"]')[0]
print(res.get_text())
# Result: 哈哈哈

"""Getting attribute values"""
# Get the href of the 百度 link
res = html.select('a[title="a2"]')[0]
print(res.attrs.get('href'))  # Method 1
print(res.get('href'))        # Method 2
print(res['href'])            # Method 3
# Result: https://www.baidu.com
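The difference between .string and .get_text() shows up as soon as a tag has more than one child: .string returns None, while .get_text() concatenates all descendant text. A small self-contained sketch (the HTML string here is invented for illustration):

```python
from bs4 import BeautifulSoup

doc = '<div><span>哈哈哈</span><p>呵呵呵</p></div>'
soup = BeautifulSoup(doc, 'html.parser')

div = soup.div
# The div has two children, so .string cannot decide which text to return
print(div.string)      # None
# .get_text() joins the text of every descendant
print(div.get_text())  # 哈哈哈呵呵呵

# On a tag with a single text child, both behave the same
print(div.span.string)      # 哈哈哈
print(div.span.get_text())  # 哈哈哈
```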
That concludes this article on parsing web page data efficiently with Python BeautifulSoup; hopefully it serves as a useful reference.