
Beautifulsoup: the difference between .find() and .select()

Updated: 2023-12-04 11:23:10

Summary of the comments:

  • select finds multiple instances and returns a list, find finds the first, so they don't do the same thing. select_one would be the equivalent to find.
  • I almost always use css selectors when chaining tags or using tag.classname; if looking for a single element without a class I use find. Essentially it comes down to the use case and personal preference.
  • As far as flexibility goes I think you know the answer: soup.select("div[id=foo] > div > div > div[class=fee] > span > span > a") would look pretty ugly using multiple chained find/find_all calls.
  • The only issue with the css selectors in bs4 is the very limited support: nth-of-type is the only pseudo-class implemented, and chaining attributes like a[href][src] is also not supported, as are many other parts of css selectors. But things like a[href*=..], a[href^=], a[href$=] etc. are, I think, much nicer than find("a", href=re.compile(....)), but again that is personal preference.
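The points above can be sketched in a few lines. The markup here is invented for illustration, and the stdlib "html.parser" is used so only beautifulsoup4 itself is required:

```python
# A minimal sketch of the points above: select() vs find(), select_one(),
# and attribute selectors. The markup is made up for illustration.
import re

from bs4 import BeautifulSoup

html = """
<div>
  <a href="https://example.com/page.html" class="link">first</a>
  <a href="/other" class="link">second</a>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# select() returns a list of every match; find() returns only the first.
print(len(soup.select("a.link")))          # 2
print(soup.find("a", class_="link").text)  # first

# select_one() is the single-result equivalent of find().
assert soup.select_one("a.link").text == soup.find("a", class_="link").text

# a[href^=...] matches hrefs by prefix; the find() equivalent needs a regex.
assert soup.select('a[href^="https"]')[0].text == "first"
assert soup.find("a", href=re.compile(r"^https")).text == "first"
```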


For performance we can run some tests. I modified the code from an answer here, running it on 800+ html files taken from here; it is not exhaustive but should give a clue to the readability of some of the options and the performance:

The modified functions are:

from bs4 import BeautifulSoup
from glob import iglob


def parse_find(soup):
    author = soup.find("h4", class_="h12 talk-link__speaker").text
    title = soup.find("h4", class_="h9 m5").text
    date = soup.find("span", class_="meta__val").text.strip()
    soup.find("footer", class_="footer").find_previous("data", {
        "class": "talk-transcript__para__time"}).text.split(":")
    soup.find_all("span", class_="talk-transcript__fragment")



def parse_select(soup):
    author = soup.select_one("h4.h12.talk-link__speaker").text
    title = soup.select_one("h4.h9.m5").text
    date = soup.select_one("span.meta__val").text.strip()
    soup.select_one("footer.footer").find_previous("data", {
        "class": "talk-transcript__para__time"}).text
    soup.select("span.talk-transcript__fragment")


def test(patt, func):
    for html in iglob(patt):
        with open(html) as f:
            func(BeautifulSoup(f, "lxml"))

Now the timings:

In [7]: from testing import test, parse_find, parse_select

In [8]: timeit test("./talks/*.html",parse_find)
1 loops, best of 3: 51.9 s per loop

In [9]: timeit test("./talks/*.html",parse_select)
1 loops, best of 3: 32.7 s per loop


Like I said, not exhaustive, but I think we can safely say the css selectors are definitely more efficient.
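The same comparison can be sketched in a self-contained way with the stdlib timeit module. The markup below is invented, and "html.parser" is used in place of lxml, so the absolute numbers will differ from the IPython timings above:

```python
# A self-contained timing sketch (invented markup, html.parser instead of
# lxml), comparing find() against the equivalent select_one() call.
import timeit

from bs4 import BeautifulSoup

html = "<div>" + "".join(
    f'<span class="meta__val">item {i}</span>' for i in range(1000)
) + "</div>"
soup = BeautifulSoup(html, "html.parser")

t_find = timeit.timeit(lambda: soup.find("span", class_="meta__val"), number=2000)
t_select = timeit.timeit(lambda: soup.select_one("span.meta__val"), number=2000)

print(f"find:       {t_find:.3f}s")
print(f"select_one: {t_select:.3f}s")
```

Both calls return the same first element; which one wins will depend on the parser, the document shape, and the bs4/soupsieve versions installed.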