Removing duplicates from rows based on specific columns in an RDD/Spark DataFrame

Asked by SmithBurgess in Spark, May 6, 2021
Let's say I have a rather large dataset in the following form:
data = sc.parallelize([('Foo', 41, 'US', 3),
                       ('Foo', 39, 'UK', 1),
                       ('Bar', 57, 'CA', 2),
                       ('Bar', 72, 'CA', 2),
                       ('Baz', 22, 'US', 6),
                       ('Baz', 36, 'US', 6)])

What I would like to do is remove duplicate rows based on the values of the first, third and fourth columns only.

Removing entirely duplicate rows is straightforward:

data = data.distinct()

but that only removes rows that are identical across every column.

But how do I remove rows that are duplicates based on columns 1, 3 and 4 only, i.e. drop one of these:

('Baz',22,'US',6)

('Baz',36,'US',6)

In pandas, this can be done by passing the column names to .drop_duplicates(). How can I achieve the same in Spark/PySpark?

Answered by Felica Laplaca

Use PySpark's dropDuplicates() method. The example below shows how it works; apply the same approach to your problem:

>>> from pyspark.sql import Row
>>> df = sc.parallelize([
...     Row(name='Amit', id=5, marks=80),
...     Row(name='Amit', id=5, marks=80),
...     Row(name='Amit', id=10, marks=80)]).toDF()
>>> df.dropDuplicates().show()
+---+-----+----+
| id|marks|name|
+---+-----+----+
|  5|   80|Amit|
| 10|   80|Amit|
+---+-----+----+
>>> df.dropDuplicates(['name', 'marks']).show()
+---+-----+----+
| id|marks|name|
+---+-----+----+
|  5|   80|Amit|
+---+-----+----+
