Hadoop supports fault tolerance (node failure, corrupted data transfers, etc.), which MPI does not.
Most Hadoop deployments I have seen have also used MPI, since the two technologies specialise in different tasks, and the choice comes down to the calculations and data manipulation you are performing.
If you have resilient, high-performance hardware, MPI can be used; however, where there is a risk of failure or latency, Hadoop helps reduce some of the effects.
That said, Hadoop is aimed at non-iterative algorithms where nodes require little data exchange to proceed (non-iterative and independent), whereas MPI is aimed at iterative algorithms where nodes require data exchange to proceed (iterative and dependent).
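To make that distinction concrete, here is a toy sketch in plain Python (not real Hadoop or MPI code; the data and functions are made up for illustration). The first part mimics the MapReduce style: each "node" processes its chunk independently and results are combined once at the end, which is why a failed chunk can simply be re-run. The second part mimics the MPI style with a Jacobi-like relaxation, where every iteration needs the neighbours' latest values before any node can proceed.

```python
# --- MapReduce style: non-iterative, independent ---
# Each "node" maps its own chunk with no communication between nodes;
# results are combined in a single reduce step at the end.
def map_chunk(chunk):
    return [(word, 1) for word in chunk.split()]

def reduce_counts(pairs):
    counts = {}
    for word, n in pairs:
        counts[word] = counts.get(word, 0) + n
    return counts

chunks = ["the cat sat", "the dog sat"]      # one chunk per "node"
mapped = [pair for c in chunks for pair in map_chunk(c)]
word_counts = reduce_counts(mapped)          # single final combine

# --- MPI style: iterative, dependent ---
# Jacobi-like relaxation: on every step, each point needs its
# neighbours' latest values, so all "nodes" must exchange data
# before the next iteration can start.
values = [0.0, 0.0, 0.0, 0.0, 100.0]         # fixed values at both ends
for _ in range(50):                          # iterate until it settles
    values = [values[0]] + [
        (values[i - 1] + values[i + 1]) / 2  # depends on neighbours
        for i in range(1, len(values) - 1)
    ] + [values[-1]]

print(word_counts)
print([round(v, 1) for v in values])
```

In the first pattern a lost node costs one re-mapped chunk; in the second, every node is blocked each iteration until the exchange completes, which is why MPI assumes reliable, low-latency hardware.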
I found a good explanation of the two frameworks (which explains them better than I can):