When you build a Huffman tree, you always merge the two least likely symbols first. So if you use the symbol frequencies in the corpus as estimates of their probabilities, you could start by merging a and b into ab, since both have probability 1/5.
Then you will have a three-symbol alphabet $\{ab, c, d\}$, where c has probability 1/5 and the other two have probability 2/5.
In the next step the algorithm picks c, since it has the lowest probability in the set, but because ab and d have the same probability, you can choose to merge c with either of them. If you merge c with d you get the left tree, and if you merge c with ab you get the right tree.
So the tree does not have to be unique: whenever the algorithm encounters several nodes with the same probability, there are several possible trees.
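To see that both choices are equally good, we can compute the expected codeword length of each tree. Here is a small Python sketch (the nested-tuple tree encoding and helper names are just illustrative, not part of the original algorithm description): it hard-codes the two trees from the example above, reads off the codewords, and checks that both give the same average length.

```python
# Frequencies from the example: a, b, c each occur 1 time out of 5, d occurs 2.
freqs = {"a": 1, "b": 1, "c": 1, "d": 2}
total = sum(freqs.values())

# The two possible trees, as nested (left, right) tuples:
left_tree = (("a", "b"), ("c", "d"))    # c merged with d
right_tree = ((("a", "b"), "c"), "d")   # c merged with ab

def codes(tree, prefix=""):
    """Read each symbol's codeword off a tree: 0 = go left, 1 = go right."""
    if isinstance(tree, str):
        return {tree: prefix}
    out = {}
    out.update(codes(tree[0], prefix + "0"))
    out.update(codes(tree[1], prefix + "1"))
    return out

def expected_length(tree):
    """Average codeword length, weighted by symbol probability."""
    return sum(freqs[s] * len(c) for s, c in codes(tree).items()) / total

print(codes(left_tree))   # {'a': '00', 'b': '01', 'c': '10', 'd': '11'}
print(codes(right_tree))  # {'a': '000', 'b': '001', 'c': '01', 'd': '1'}
print(expected_length(left_tree), expected_length(right_tree))  # 2.0 2.0
```

The codewords differ (the right tree even has different lengths per symbol), yet both trees achieve an expected length of 2 bits per symbol, which is exactly why Huffman coding can break such ties arbitrarily without losing optimality.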