【Python】Removing duplicate rows from a CSV file based on one column

import pandas as pd
import csv

# Read every row of the CSV into a list of lists.
rows = []
with open('Result.csv', 'r', newline='') as f:
    reader = csv.reader(f)
    for row in reader:
        rows.append(row)

# Build a DataFrame; with no header given, the columns are labelled
# with integers 0, 1, 2, ..., so subset=3 means "the 4th column".
df = pd.DataFrame(rows)

# Keep only the first row for each distinct value in column 3.
df.drop_duplicates(subset=3, inplace=True)

# Write the result without the index and the integer column labels,
# so the output file has the same shape as the input.
df.to_csv('afterdel.csv', index=False, header=False)

Read the file into a list first, convert it to a DataFrame, drop the duplicates, and write the result out.
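The intermediate list is not actually needed: pandas can read the CSV directly with `read_csv`. A minimal sketch of that shortcut (the sample input rows here are made up for illustration; the filenames and column index 3 follow the example above):

```python
import pandas as pd

# Create a small sample input file (stand-in for Result.csv above;
# the 4th column holds the values we deduplicate on).
with open('Result.csv', 'w') as f:
    f.write('a,1,x,dup\nb,2,y,dup\nc,3,z,uniq\n')

# header=None: do not treat the first row as a header, so columns
# are labelled 0, 1, 2, ... just like the csv.reader version.
df = pd.read_csv('Result.csv', header=None)

# Keep only the first row for each distinct value in column 3.
df = df.drop_duplicates(subset=3)

df.to_csv('afterdel.csv', index=False, header=False)
```

This drops the second row (a duplicate in column 3) and writes the remaining two rows to `afterdel.csv`.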

subset : column label or sequence of labels, optional
    Only consider certain columns for identifying duplicates; by default use all of the columns.
keep : {'first', 'last', False}, default 'first'
    - first : Drop duplicates except for the first occurrence.
    - last : Drop duplicates except for the last occurrence.
    - False : Drop all duplicates.
inplace : boolean, default False
    Whether to drop duplicates in place or to return a copy.
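The effect of the three `keep` options can be seen on a tiny DataFrame (illustrative values, not from the original post):

```python
import pandas as pd

# Two rows share the key 'a'; one row has the unique key 'b'.
df = pd.DataFrame({'k': ['a', 'a', 'b'], 'v': [1, 2, 3]})

# keep='first' (the default): the first 'a' row survives.
print(df.drop_duplicates(subset='k').v.tolist())               # [1, 3]

# keep='last': the last 'a' row survives.
print(df.drop_duplicates(subset='k', keep='last').v.tolist())  # [2, 3]

# keep=False: every duplicated 'a' row is dropped entirely.
print(df.drop_duplicates(subset='k', keep=False).v.tolist())   # [3]
```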

    Original author: 我是一只妖精
    Original article: https://blog.csdn.net/aaaaassssd/article/details/100012915
    This article is reposted from the web purely to share knowledge; if it infringes any rights, please contact the blogger for removal.