
Remove duplicates in a file

I have the following file.txt:

    Plummet
    Cherist the day
    --
    The Transatlatins   <----------- duplicate
    Mysteriosa          <----------- duplicate
    --
    Angel City;Lara McAllen
    Love me right
    --
    The Transatlatins
    Mysteriosa

How can I delete the duplicates without changing the order or the line spacing? I have tried `sort`, but it changes the order, and `uniq` does not remove the duplicates. Expected result:

    Plummet
    Cherist the day
    --
    Angel City;Lara McAllen
    Love me right
    --
    The Transatlatins
    Mysteriosa

**Assuming** that the file is intended to be in the format


    field1\n
    field2\n
    --\n
    field1\n
    field2\n
    --\n



i.e. the last lines in the sample file should read

    Mysteriosa
    --


then this should do the trick, provided there is a trailing newline `\n` after the last entry:


    sed '$!N;$!N;s/\
    /:/g' file | nl -s"|" | sort -t '|' -k2 | awk -F"|" '!_[$2]++' | sort -n | sed -e 's/.*|//' -e 's/:/\
    /g'
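A minimal end-to-end sketch of the same pipeline, stage by stage. It assumes GNU sed (so the literal newlines in the `s///` commands can be written as `\n`), and builds a hypothetical `songs.txt` from the question's sample with the trailing `--` added:

```shell
# Build a sample file in the assumed three-line record format
# (field1, field2, "--"), containing one duplicate record.
cat > songs.txt <<'EOF'
Plummet
Cherist the day
--
The Transatlatins
Mysteriosa
--
Angel City;Lara McAllen
Love me right
--
The Transatlatins
Mysteriosa
--
EOF

sed '$!N;$!N;s/\n/:/g' songs.txt |  # join each 3-line record into one ':'-separated line
  nl -s'|' |                        # prefix every record with its original position
  sort -t'|' -k2 |                  # bring identical records together
  awk -F'|' '!_[$2]++' |            # keep only the first copy of each record
  sort -n |                         # restore the original order by position number
  sed -e 's/.*|//' -e 's/:/\n/g'    # drop the numbers and split records back into lines
```

Since `!_[$2]++` in awk prints a record only the first time its content is seen, it is the first occurrence of each duplicate that survives, in its original position.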
