Reading Large Files in Python

For small text files we usually reach for the .read(), .readline(), and .readlines() methods, but once a file grows to 2 GB, 5 GB, or more, these methods will exhaust memory, since they pull the whole file (or a whole list of its lines) into RAM at once.

As a rule of thumb for ordinary files: if the file is small, a single read() is the most convenient; if you cannot be sure of the file's size, calling read(size) repeatedly is safer; for configuration files, readlines() is the handiest.
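To make the three calls concrete, here is a minimal sketch; the file name and contents are hypothetical, created just for illustration:

```python
import os
import tempfile

# A tiny sample file (hypothetical name and contents, for illustration only).
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w") as f:
    f.write("line1\nline2\nline3\n")

# read(): the whole file as one string
with open(path) as f:
    whole = f.read()

# read(size): at most `size` characters per call
with open(path) as f:
    first4 = f.read(4)

# readlines(): a list of lines, trailing newlines included
with open(path) as f:
    lines = f.readlines()

print(len(whole), first4, len(lines))
```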

Methods for reading large files:

1. Read in Chunks

Split the large file into small chunks and read it piece by piece:

def read_in_chunks(file_path, chunk_size=1024 * 1024):
    """
    Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1 MB; pass your own chunk_size if needed.
    """
    # Open in binary mode and let `with` close the file when the
    # generator is exhausted (the original left the file open).
    with open(file_path, 'rb') as file_object:
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:
                break
            yield chunk_data

if __name__ == "__main__":
    file_path = './path/filename'
    for chunk in read_in_chunks(file_path):
        process(chunk)  # <do something with chunk>
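As a usage sketch of the chunked pattern, here is a stand-in for process(): hashing a file chunk by chunk, so memory use stays flat no matter how big the file is. The path and contents are hypothetical, for illustration only:

```python
import hashlib
import os
import tempfile

def read_in_chunks(file_path, chunk_size=1024 * 1024):
    """Yield successive chunks of the file, opened in binary mode."""
    with open(file_path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Hypothetical sample file, for illustration only.
path = os.path.join(tempfile.mkdtemp(), "big.bin")
data = b"x" * (3 * 1024 * 1024 + 5)  # a bit over 3 MB
with open(path, "wb") as f:
    f.write(data)

# Only one chunk (at most 1 MB) is held in memory at any time.
md5 = hashlib.md5()
total = 0
for chunk in read_in_chunks(path):
    md5.update(chunk)
    total += len(chunk)

print(total, md5.hexdigest() == hashlib.md5(data).hexdigest())
```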

2. Using with open()

The with statement handles opening and closing the file, including when an exception is raised inside the block. `for line in f` treats the file object f as an iterator, which uses buffered I/O and memory management automatically, so you do not need to worry about large files.

#If the file is line based
with open(...) as f:
    for line in f:
        process(line) # <do something with line>
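A concrete stand-in for process() on a line-based file, such as counting error lines in a log; the file name and contents are hypothetical, for illustration only:

```python
import os
import tempfile

# Hypothetical log file, for illustration only.
path = os.path.join(tempfile.mkdtemp(), "app.log")
with open(path, "w") as f:
    f.write("INFO start\nERROR disk full\nINFO done\n")

# The file object yields one buffered line at a time,
# so memory use stays flat even for very large files.
errors = 0
with open(path) as f:
    for line in f:
        if line.startswith("ERROR"):
            errors += 1

print(errors)
```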

3. Using fileinput

import fileinput

for line in fileinput.input(['sum.log']):
    print(line, end='')  # each line keeps its trailing newline
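fileinput also chains several files into a single lazy line iterator, which is handy for processing a set of logs in one pass. A sketch, with two hypothetical files created just for illustration:

```python
import fileinput
import os
import tempfile

# Two hypothetical log files, for illustration only.
directory = tempfile.mkdtemp()
paths = []
for name, text in [("a.log", "one\ntwo\n"), ("b.log", "three\n")]:
    p = os.path.join(directory, name)
    with open(p, "w") as f:
        f.write(text)
    paths.append(p)

# fileinput reads the files lazily, one line at a time;
# filename()/filelineno() report where the current line came from.
seen = []
for line in fileinput.input(paths):
    seen.append((os.path.basename(fileinput.filename()),
                 fileinput.filelineno(),
                 line.strip()))

print(seen)
```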

References:
http://www.zhidaow.com/post/python-read-big-file
https://www.cnblogs.com/wulaa/p/7852592.html

# Read the whole file at once (only safe for small files):
f = open(filename, 'r')
f.read()

#1: read in fixed-size blocks
while True:
    block = f.read(1024)
    if not block:
        break


#2: read line by line with readline()
while True:
    line = f.readline()
    if not line:
        break

#3: readlines() builds a list of all lines in memory -- not suitable for large files
for line in f.readlines():
    pass


#4: iterate over the file object directly (recommended)
with open(filename,'r') as file:
    for line in file:
        pass

#5: fetch a specific line (here the second); note linecache caches the whole file
import linecache
txt = linecache.getline(filename, 2)
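A runnable sketch of the linecache approach; the file name and contents are hypothetical. Because getline() reads and caches the entire file, it is convenient for random access to one line but is not itself a large-file technique:

```python
import linecache
import os
import tempfile

# Hypothetical file, for illustration only.
path = os.path.join(tempfile.mkdtemp(), "notes.txt")
with open(path, "w") as f:
    f.write("alpha\nbeta\ngamma\n")

# getline() returns the requested line, trailing newline included.
second = linecache.getline(path, 2)
print(repr(second))
```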
Original author: meetliuxin
Original article: https://www.jianshu.com/p/af9f485fcd06