Question - What happens if you try to run a Hadoop job with an output directory that is already present?
Answer -
It will throw a FileAlreadyExistsException stating that the output directory already exists, and the job will fail before any map or reduce tasks run. Hadoop refuses to overwrite existing output as a safeguard against accidentally destroying results.
Before running a MapReduce job, make sure the output directory does not already exist in HDFS.
The directory can be deleted from the shell before running the job:
hadoop fs -rm -r /path/to/your/output/
Or the Java API:
FileSystem.get(conf).delete(outputDir, true);
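For context, here is a minimal driver sketch that deletes the output directory before submitting the job, using the org.apache.hadoop.mapreduce API. The class name and paths are placeholders for illustration, not part of the original answer:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JobRunner {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path outputDir = new Path("/path/to/your/output/"); // placeholder path

        // Delete the output directory if it already exists, so the job
        // does not fail with FileAlreadyExistsException at submission.
        FileSystem fs = FileSystem.get(conf);
        if (fs.exists(outputDir)) {
            fs.delete(outputDir, true); // true = delete recursively
        }

        Job job = Job.getInstance(conf, "example job");
        // ... set jar, mapper, reducer, input path, etc. ...
        FileOutputFormat.setOutputPath(job, outputDir);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that deleting and recreating the output directory this way discards any previous results, so in production it is common to write each run to a fresh, timestamped output path instead.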