Python BeautifulSoup: A Hands-On Quick Start to Scraping Column Titles and URLs



By 小旺不正经 · 2021-10-20

This article is a hands-on introduction to Python's BeautifulSoup library: it walks through the basics with code examples and finishes by scraping the titles and URLs of a CSDN blog column (专栏). Feedback and corrections are welcome.

Getting Started with BeautifulSoup

Installation

pip install beautifulsoup4
# If the install above fails, use the Tsinghua mirror instead
pip install beautifulsoup4 -i http://pypi.tuna.tsinghua.edu.cn/simple

You can also run these commands from PyCharm's built-in terminal.

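To verify the installation, you can parse a one-line snippet (a minimal sanity check; any short HTML string works):

```python
from bs4 import BeautifulSoup

# Parse a tiny HTML fragment with the built-in parser
soup = BeautifulSoup('<p>hello</p>', 'html.parser')
print(soup.p.text)  # hello
```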

Parsing Tags

from bs4 import BeautifulSoup
import requests
url='https://blog.csdn.net/weixin_42403632/category_11076268.html'
headers={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}
html=requests.get(url,headers=headers).text
s=BeautifulSoup(html,'html.parser')
title =s.select('h2')
for i in title:
    print(i.text)

Line 1: imports the BeautifulSoup class from bs4.

Line 2: imports requests.

Lines 3-5: define the target URL and a User-Agent header, then download the page's HTML.

Line 6: builds a BeautifulSoup object from the HTML; 'html.parser' selects Python's built-in HTML parser.

Line 7: selects every <h2> tag on the page.

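The same pattern works on any HTML string, not only a live page. A minimal offline sketch (the HTML here is made up for illustration):

```python
from bs4 import BeautifulSoup

html = '<h2>First post</h2><h2>Second post</h2>'
s = BeautifulSoup(html, 'html.parser')
for h2 in s.select('h2'):  # select() returns a list of matching tags
    print(h2.text)
```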

Parsing by Attribute

BeautifulSoup can select page elements by specific attributes.
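Beyond class and id (covered below), select() accepts full CSS attribute selectors. A small sketch with made-up HTML:

```python
from bs4 import BeautifulSoup

html = '<a href="https://example.com">ext</a><a name="anchor">no href</a>'
s = BeautifulSoup(html, 'html.parser')
# 'a[href]' matches only <a> tags that actually carry an href attribute
links = s.select('a[href]')
print([a['href'] for a in links])
```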

Selecting by class

from bs4 import BeautifulSoup
import requests
url='https://blog.csdn.net/weixin_42403632/category_11076268.html'
headers={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}
html=requests.get(url,headers=headers).text
s=BeautifulSoup(html,'html.parser')
title =s.select('.column_article_title')
for i in title:
    print(i.text)

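The class selector above is equivalent to find_all() with the class_ keyword (note the trailing underscore, since class is a Python keyword). A sketch with made-up HTML:

```python
from bs4 import BeautifulSoup

html = '<p class="column_article_title">Title A</p><p class="other">B</p>'
s = BeautifulSoup(html, 'html.parser')
# Same result as s.select('.column_article_title')
titles = s.find_all(class_='column_article_title')
print([t.text for t in titles])
```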

Selecting by id

from bs4 import BeautifulSoup
html='''<div class="crop-img-before">
         <img src="" alt="" id="cropImg">
      </div>
        <div id='title'>
        测试成功
        </div>
      <div class="crop-zoom">
         <a href="javascript:;" rel="external nofollow" class="bt-reduce">-</a><a href="javascript:;" rel="external nofollow" class="bt-add">+</a>
      </div>
      <div class="crop-img-after">
         <div  class="final-img"></div>
      </div>'''
s=BeautifulSoup(html,'html.parser')
title =s.select('#title')
for i in title:
    print(i.text)

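Since an id should be unique in a document, select_one() is a convenient alternative: it returns the first match directly (or None) instead of a list:

```python
from bs4 import BeautifulSoup

html = "<div id='title'>测试成功</div>"
s = BeautifulSoup(html, 'html.parser')
tag = s.select_one('#title')  # a single tag, not a list
print(tag.text)
```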

Multi-Level Filtering

from bs4 import BeautifulSoup
html='''<div class="crop-img-before">
         <img src="" alt="" id="cropImg">
      </div>
        <div id='title'>
        456456465
        <h1>测试成功</h1>
        </div>
      <div class="crop-zoom">
         <a href="javascript:;" rel="external nofollow" class="bt-reduce">-</a><a href="javascript:;" rel="external nofollow" class="bt-add">+</a>
      </div>
      <div class="crop-img-after">
         <div  class="final-img"></div>
      </div>'''
s=BeautifulSoup(html,'html.parser')
title =s.select('#title')
for i in title:
    print(i.text)
title =s.select('#title h1')
for i in title:
    print(i.text)
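'#title h1' is a descendant selector, matching an <h1> nested at any depth. When the tag is a direct child, the CSS child combinator '>' also works:

```python
from bs4 import BeautifulSoup

html = "<div id='title'>456<h1>测试成功</h1></div>"
s = BeautifulSoup(html, 'html.parser')
# '#title > h1' matches only direct children of the #title element
print(s.select('#title > h1')[0].text)
```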

Extracting URLs from <a> Tags

title =s.select('a')
for i in title:
    print(i['href'])

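Note that indexing with i['href'] raises a KeyError for any <a> that lacks an href attribute. On messy pages, tag.get() is safer because it returns None instead (a sketch with made-up HTML):

```python
from bs4 import BeautifulSoup

html = '<a href="/page1">one</a><a name="x">no link</a>'
s = BeautifulSoup(html, 'html.parser')
for a in s.select('a'):
    href = a.get('href')  # None when the attribute is missing
    if href:
        print(href)
```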

Hands-On: Scraping a Blog Column's Titles and URLs


from bs4 import BeautifulSoup
import requests
import re
url='https://blog.csdn.net/weixin_42403632/category_11298953.html'
headers={'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:93.0) Gecko/20100101 Firefox/93.0'}
html=requests.get(url,headers=headers).text
s=BeautifulSoup(html,'html.parser')
title = s.select('.column_article_list li a')
for i in title:
    # each link's text contains '原创' followed by the title on its own line;
    # capture the line after 原创 and strip its leading whitespace
    print(re.findall('原创.*?\n(.*?)\n', i.text)[0].lstrip())
    print(i['href'])

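The regex above depends on CSDN's exact markup (the literal '原创' marker and its surrounding newlines). A more forgiving alternative is to split the link text into non-empty lines and take the last one. An offline sketch using simplified, made-up markup that mimics one column-list entry:

```python
from bs4 import BeautifulSoup

# Simplified, hypothetical stand-in for one entry of the column list markup
html = '''<ul class="column_article_list">
  <li><a href="https://blog.csdn.net/example/article/1">
      原创
      BeautifulSoup 入门
  </a></li>
</ul>'''
s = BeautifulSoup(html, 'html.parser')
for a in s.select('.column_article_list li a'):
    # keep non-empty stripped lines; here the last one is the article title
    lines = [ln.strip() for ln in a.get_text().splitlines() if ln.strip()]
    print(lines[-1])
    print(a['href'])
```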
