Hadoop and its filesystem HDFS are open-source software (part of the Apache project) for distributed processing of Big Data.

It allows you to set up a cluster of computers and store your data reliably. It has a master-slave architecture: the master, called the "namenode", keeps track of where and how the data is stored in large blocks (typically 64 MB or 128 MB), while the slave "datanodes" hold the blocks themselves. Blocks are replicated across nodes for redundancy, to prevent loss of data.
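To make the block idea concrete, here is a minimal Python sketch (not actual Hadoop code) of how a file is divided into fixed-size blocks, which is the kind of bookkeeping the namenode maintains. The function name and the 128 MB default are illustrative assumptions.

```python
# Illustrative sketch, not Hadoop code: splitting a file into blocks
# of a configured size, as the namenode tracks them.
def split_into_blocks(file_size_bytes, block_size_bytes=128 * 1024 * 1024):
    """Return (offset, length) pairs for each block of the file."""
    blocks = []
    offset = 0
    while offset < file_size_bytes:
        # The final block may be smaller than the configured block size.
        length = min(block_size_bytes, file_size_bytes - offset)
        blocks.append((offset, length))
        offset += length
    return blocks

# A 300 MB file with 128 MB blocks: two full blocks plus one 44 MB block.
blocks = split_into_blocks(300 * 1024 * 1024)
print(len(blocks))  # → 3
```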

In order to access the data, Hadoop uses MapReduce jobs: a programming model for reading and processing the data in parallel in order to extract information.
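The MapReduce model can be sketched in plain Python with the classic word-count example: a map phase emits (key, value) pairs, and a reduce phase groups them by key and aggregates. This is an illustration of the model only, not the Hadoop API.

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit (word, 1) for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle + reduce: group the pairs by key and sum the counts.
    grouped = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

lines = ["big data is big", "data is stored in blocks"]
counts = reduce_phase(map_phase(lines))
print(counts["big"], counts["data"], counts["is"])  # → 2 2 2
```

In a real Hadoop job, the map and reduce functions run on many nodes in parallel, close to where the data blocks are stored, and the framework handles the shuffle between them.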

So, basically, it is a way to store and read your data at scale.
