
Is there a generator version of `string.split()` in Python?

Updated: 2023-11-28 23:08:16

It is highly probable that re.finditer uses fairly minimal memory overhead.

import re

def split_iter(string):
    # Lazily yield word-like tokens instead of building a full list.
    return (x.group(0) for x in re.finditer(r"[A-Za-z']+", string))

Demo:

>>> list( split_iter("A programmer's RegEx test.") )
['A', "programmer's", 'RegEx', 'test']
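Note that the pattern `[A-Za-z']+` extracts word-like runs rather than splitting on whitespace, so punctuation is dropped. A variant closer to the default behavior of `str.split()` (split on runs of whitespace) can be sketched with the pattern `\S+`; the function name here is just an illustrative choice:

```python
import re

def split_iter_ws(string):
    # Lazily yield maximal runs of non-whitespace characters,
    # mirroring str.split() with no arguments.
    return (m.group(0) for m in re.finditer(r"\S+", string))

tokens = list(split_iter_ws("A programmer's RegEx test."))
print(tokens)  # ['A', "programmer's", 'RegEx', 'test.']
```

Unlike the `[A-Za-z']+` version, this keeps the trailing period attached to `'test.'`, exactly as `"A programmer's RegEx test.".split()` would.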


edit: I have just confirmed that this takes constant memory in python 3.2.1, assuming my testing methodology was correct. I created a string of very large size (1GB or so), then iterated through the iterable with a for loop (NOT a list comprehension, which would have generated extra memory). This did not result in a noticeable growth of memory (that is, if there was a growth in memory, it was far far less than the 1GB string).
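The check described above can be reproduced at a smaller scale with the standard-library `tracemalloc` module; this is a minimal sketch, not the original author's test harness, and it uses a few-megabyte string instead of 1GB:

```python
import re
import tracemalloc

def split_iter(string):
    return (x.group(0) for x in re.finditer(r"[A-Za-z']+", string))

# A moderately large input (~5 MB; the original test used ~1GB).
big = "word " * 1_000_000

tracemalloc.start()
count = 0
for token in split_iter(big):  # a for loop, NOT a list comprehension
    count += 1
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(count)  # 1000000
# Peak allocations during iteration stay far below the input size,
# since each match object and token is freed before the next is produced.
print(peak < len(big))
```

Because `tracemalloc.start()` is called after `big` is built, the reported peak covers only the allocations made while iterating, which is exactly the quantity the constant-memory claim is about.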