Apache Hadoop

Original author(s): Doug Cutting, Mike Cafarella
Developer(s): Apache Software Foundation
Initial release: April 1, 2006[1]
Stable releases:
  • 2.10.x: 2.10.2 / May 31, 2022[2]
  • 3.2.x: 3.2.4 / July 22, 2022[2]
  • 3.3.x: 3.3.6 / June 23, 2023[2]
Repository: Hadoop Repository
Written in: Java
Operating system: Cross-platform
Type: Distributed file system
License: Apache License 2.0
Website: hadoop.apache.org

Apache Hadoop (/həˈduːp/) is a collection of open-source software utilities that facilitates using a network of many computers to solve problems involving massive amounts of data and computation. It provides a software framework for distributed storage and processing of big data using the MapReduce programming model. Hadoop was originally designed for computer clusters built from commodity hardware, which is still the common use.[3] It has since also found use on clusters of higher-end hardware.[4][5] All the modules in Hadoop are designed with a fundamental assumption that hardware failures are common occurrences and should be automatically handled by the framework.[6]

The core of Apache Hadoop consists of a storage part, known as the Hadoop Distributed File System (HDFS), and a processing part, which is the MapReduce programming model. Hadoop splits files into large blocks and distributes them across nodes in a cluster. It then transfers packaged code to the nodes so that they can process the data in parallel. This approach takes advantage of data locality,[7] where nodes manipulate the data they hold locally. This allows the dataset to be processed faster and more efficiently than in a more conventional supercomputer architecture that relies on a parallel file system, where computation and data are distributed via high-speed networking.[8][9]
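For illustration, the canonical word-count job shows how the MapReduce model is expressed with Hadoop's Java API: a mapper emits (word, 1) pairs from each input split, and a reducer sums the counts for each word. This is a minimal sketch; the input and output paths are illustrative command-line arguments, typically HDFS directories.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Map step: runs on the nodes holding the input blocks and emits (word, 1) pairs.
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reduce step: receives all counts for a given word and sums them.
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      // Driver: packages the job and submits it to the cluster.
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }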

The base Apache Hadoop framework is composed of the following modules:

  • Hadoop Common – contains libraries and utilities needed by other Hadoop modules;
  • Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster (see the usage sketch after this list);
  • Hadoop YARN – (introduced in 2012) a platform responsible for managing computing resources in clusters and using them to schedule users' applications;[10][11]
  • Hadoop MapReduce – an implementation of the MapReduce programming model for large-scale data processing;
  • Hadoop Ozone – (introduced in 2020) an object store for Hadoop.
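As a sketch of how applications interact with the HDFS module above, the following uses Hadoop's Java FileSystem API to write and then read back a small file. The NameNode address and file path are placeholders, not part of any real deployment; in practice the filesystem URI is usually supplied by core-site.xml rather than set in code.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode address; real clusters configure this in core-site.xml.
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");

        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/user/example/hello.txt");

        // Write a small file; HDFS transparently splits larger files into blocks
        // and replicates them across DataNodes.
        try (FSDataOutputStream out = fs.create(path, true)) {
          out.write("Hello, HDFS\n".getBytes(StandardCharsets.UTF_8));
        }

        // Read the file back.
        try (BufferedReader in = new BufferedReader(
            new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
          System.out.println(in.readLine());
        }
      }
    }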

The term Hadoop is often used to refer not only to the base modules and sub-modules but also to the ecosystem,[12] the collection of additional software packages that can be installed on top of or alongside Hadoop, such as Apache Pig, Apache Hive, Apache HBase, Apache Phoenix, Apache Spark, Apache ZooKeeper, Apache Impala, Apache Flume, Apache Sqoop, Apache Oozie, and Apache Storm.[13]

Apache Hadoop's MapReduce and HDFS components were inspired by Google papers on MapReduce and Google File System.[14]

The Hadoop framework itself is mostly written in the Java programming language, with some native code in C and command-line utilities written as shell scripts. Though MapReduce Java code is common, any programming language can be used with Hadoop Streaming to implement the map and reduce parts of the user's program.[15] Other projects in the Hadoop ecosystem expose richer user interfaces.
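Hadoop Streaming treats the mapper and reducer as external programs that read records from standard input and write tab-separated key/value pairs to standard output; such programs are supplied through the streaming jar's -mapper and -reducer options. The following illustrative mapper, written in Java only to keep the examples in one language, shows that stdin/stdout contract; any language could be used the same way, and the framework sorts the emitted pairs by key before feeding them to the reducer process.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    // Illustrative Hadoop Streaming mapper: reads input lines from stdin and
    // emits "word<TAB>1" pairs on stdout.
    public class StreamingWordMapper {
      public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(
            new InputStreamReader(System.in, StandardCharsets.UTF_8));
        String line;
        while ((line = in.readLine()) != null) {
          for (String token : line.trim().split("\\s+")) {
            if (!token.isEmpty()) {
              System.out.println(token + "\t1");
            }
          }
        }
      }
    }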

  1. ^ "Hadoop Releases". apache.org. Apache Software Foundation. Retrieved 28 April 2019.
  2. ^ a b c "Apache Hadoop". Retrieved 27 September 2022.
  3. ^ Judge, Peter (22 October 2012). "Doug Cutting: Big Data Is No Bubble". silicon.co.uk. Retrieved 11 March 2018.
  4. ^ Woodie, Alex (12 May 2014). "Why Hadoop on IBM Power". datanami.com. Datanami. Retrieved 11 March 2018.
  5. ^ Hemsoth, Nicole (15 October 2014). "Cray Launches Hadoop into HPC Airspace". hpcwire.com. Retrieved 11 March 2018.
  6. ^ "Welcome to Apache Hadoop!". hadoop.apache.org. Retrieved 25 August 2016.
  7. ^ "What is the Hadoop Distributed File System (HDFS)?". ibm.com. IBM. Retrieved 12 April 2021.
  8. ^ Malak, Michael (19 September 2014). "Data Locality: HPC vs. Hadoop vs. Spark". datascienceassn.org. Data Science Association. Retrieved 30 October 2014.
  9. ^ Wang, Yandong; Goldstone, Robin; Yu, Weikuan; Wang, Teng (October 2014). "Characterization and Optimization of Memory-Resident MapReduce on HPC Systems". 2014 IEEE 28th International Parallel and Distributed Processing Symposium. IEEE. pp. 799–808. doi:10.1109/IPDPS.2014.87. ISBN 978-1-4799-3800-1. S2CID 11157612.
  10. ^ "Resource (Apache Hadoop Main 2.5.1 API)". apache.org. Apache Software Foundation. 12 September 2014. Archived from the original on 6 October 2014. Retrieved 30 September 2014.
  11. ^ Murthy, Arun (15 August 2012). "Apache Hadoop YARN – Concepts and Applications". hortonworks.com. Hortonworks. Retrieved 30 September 2014.
  12. ^ "Continuuity Raises $10 Million Series A Round to Ignite Big Data Application Development Within the Hadoop Ecosystem". finance.yahoo.com. Marketwired. 14 November 2012. Retrieved 30 October 2014.
  13. ^ "Hadoop-related projects at". Hadoop.apache.org. Retrieved 17 October 2013.
  14. ^ Data Science and Big Data Analytics: Discovering, Analyzing, Visualizing and Presenting Data. John Wiley & Sons. 19 December 2014. p. 300. ISBN 9781118876220. Retrieved 29 January 2015.
  15. ^ "[nlpatumd] Adventures with Hadoop and Perl". Mail-archive.com. 2 May 2010. Retrieved 5 April 2013.
