In a Pandas DataFrame, you can remove duplicate rows based on multiple columns using the drop_duplicates() method. Here's how you can do it:
import pandas as pd
# Sample DataFrame
data = {
'A': [1, 2, 3, 2, 1],
'B': ['apple', 'banana', 'cherry', 'banana', 'apple'],
'C': [10, 20, 30, 20, 10]
}
df = pd.DataFrame(data)
# Remove duplicates based on columns A and B
df = df.drop_duplicates(subset=['A', 'B'])
# Display the resulting DataFrame
print(df)
In this example, we have a DataFrame with three columns, and we want to remove duplicates based on columns 'A' and 'B'. The subset parameter takes a list of column names ('A' and 'B') to specify which columns should be considered when checking for duplicates. The resulting DataFrame will have duplicate rows removed based on the specified columns.
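Running this snippet, rows 3 and 4 are dropped because they match rows 1 and 0 on columns 'A' and 'B', so print(df) shows:
   A       B   C
0  1   apple  10
1  2  banana  20
2  3  cherry  30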
You can also use the keep parameter to control which duplicate values to keep. By default, it's set to 'first', which keeps the first occurrence and removes subsequent duplicates. You can set it to 'last' to keep the last occurrence and remove earlier duplicates, or to False (the boolean, not the string 'False') to remove all duplicates. For example:
# Remove duplicates based on columns A and B, keeping the last occurrence
df = df.drop_duplicates(subset=['A', 'B'], keep='last')
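Note that df was already deduplicated above, so to see the difference you would apply this to the original DataFrame. On the original sample data, keep='last' retains the later occurrences, and the result is:
   A       B   C
2  3  cherry  30
3  2  banana  20
4  1   apple  10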
This code will keep the last occurrence of each duplicated row based on columns 'A' and 'B'. Adjust the subset and keep parameters according to your specific requirements.
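For completeness, here is a minimal sketch of the keep=False variant (df_unique is just an illustrative name). On the original sample data, only the 'cherry' row survives, since it is the only row without a duplicate on 'A' and 'B':
# Remove every row that is duplicated on A and B (keep no occurrence)
df_unique = df.drop_duplicates(subset=['A', 'B'], keep=False)
print(df_unique)
#    A       B   C
# 2  3  cherry  30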