Hadoop Distributed File System

Definition & Meaning

HDFS meaning


What is the Hadoop Distributed File System (HDFS)?

What does HDFS stand for?

The Hadoop Distributed File System (HDFS) is a distributed file system that runs on standard or low-end hardware. Developed as part of Apache Hadoop, HDFS works like an ordinary distributed file system but provides better data throughput and access through the MapReduce algorithm, high fault tolerance, and native support for large data sets.

What Does Hadoop Distributed File System Mean?

HDFS stores large amounts of data spread across multiple machines, typically in hundreds or thousands of simultaneously connected nodes, and provides data reliability by replicating each data instance as three separate copies – two in one group and one in another. These copies can be replaced in the event of failure.
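
The replication factor can be adjusted per cluster or even per file. The following is a minimal sketch using the Hadoop Java client, assuming a reachable cluster whose configuration is on the classpath; the path /data/example.txt and the replication values shown are hypothetical, not part of the HDFS definition itself.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask for three copies of anything this client writes (three is the usual HDFS default).
        conf.setInt("dfs.replication", 3);
        FileSystem fs = FileSystem.get(conf);

        // Request a different replication factor for one existing file (hypothetical path).
        Path file = new Path("/data/example.txt");
        boolean accepted = fs.setReplication(file, (short) 2);
        System.out.println("Replication change accepted by the NameNode: " + accepted);
        fs.close();
    }
}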

The HDFS architecture consists of clusters, each of which is accessed through a single NameNode software tool installed on a separate machine that monitors and manages that cluster’s file system and user access mechanism. The other machines each run one instance of DataNode to manage cluster storage.
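
In practice, a client contacts the NameNode for file metadata and is then directed to the DataNodes that hold the file’s blocks. The sketch below illustrates this with the Hadoop Java client; the NameNode address hdfs://namenode-host:8020 and the file path are assumptions for the example only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.net.URI;
import java.util.Arrays;

public class BlockLocationSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the cluster's NameNode (address is hypothetical).
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020"), conf);

        // The NameNode answers the metadata query; the blocks themselves live on DataNodes.
        Path file = new Path("/data/example.txt");
        FileStatus status = fs.getFileStatus(file);
        for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.println("Block at offset " + block.getOffset()
                    + " stored on DataNodes: " + Arrays.toString(block.getHosts()));
        }
        fs.close();
    }
}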

Because HDFS is written in Java, it has native support for Java application programming interfaces (APIs) for application integration and accessibility. It can also be accessed through standard web browsers.
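
As a small illustration of that Java API, the sketch below writes and reads a file through org.apache.hadoop.fs.FileSystem. It assumes the cluster’s core-site.xml and hdfs-site.xml are on the classpath; the path /tmp/hdfs-demo.txt is a placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class HdfsReadWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/hdfs-demo.txt"); // hypothetical path

        // Write a small text file to HDFS (overwrite if it already exists).
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("Hello from HDFS".getBytes(StandardCharsets.UTF_8));
        }

        // Read the same file back and print its first line.
        try (FSDataInputStream in = fs.open(file);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(in, StandardCharsets.UTF_8))) {
            System.out.println(reader.readLine());
        }
        fs.close();
    }
}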
