Installing CDH 5 Hadoop YARN on a Single Linux Node in Pseudo-distributed Mode

For development purposes, Apache Hadoop and CDH 5 components can be deployed on a single Linux node in pseudo-distributed mode. In pseudo-distributed mode, Hadoop processing is distributed over all of the cores/processors of a single machine. Hadoop writes all files to the Hadoop Distributed File System (HDFS), and all services and daemons communicate over local TCP sockets for inter-process communication.


  • Supported operating systems: Red Hat EL, CentOS, Oracle Linux, Ubuntu, Debian, SLES (64-bit)
  • Supported JDK versions: JDK 1.7.0_25 or later
  • Supported Internet protocol: CDH requires IPv4. IPv6 is not supported.
  • SSH configuration: passwordless SSH to localhost should be configured
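The JDK prerequisite above can be checked with a small script. A minimal sketch, assuming GNU `sort -V` is available; the version strings below are hardcoded examples, and on a real host you would parse them out of `java -version` output:

```shell
#!/bin/sh
# Sketch: check that a JDK version string meets the CDH 5 minimum (1.7.0_25).

version_ok() {
    # Compare two dotted/underscored version strings numerically,
    # e.g. 1.7.0_45 >= 1.7.0_25. Underscores are normalized to dots.
    have=$(echo "$1" | tr '_' '.')
    need=$(echo "$2" | tr '_' '.')
    # sort -V orders versions numerically; the check passes when
    # "need" sorts first, i.e. "need" is not greater than "have".
    first=$(printf '%s\n%s\n' "$have" "$need" | sort -V | head -n 1)
    [ "$first" = "$need" ]
}

# Example versions, not read from the system:
if version_ok "1.7.0_45" "1.7.0_25"; then
    echo "JDK version OK"
else
    echo "JDK too old for CDH 5" >&2
fi
```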


Download the CDH tarball from Cloudera
$ wget
$ cd /opt
$ tar -xzf ~/hadoop-2.3.0-cdh5.0.1.tar.gz
$ cd hadoop-2.3.0-cdh5.0.1

Edit config files

  • core-site.xml
  • hdfs-site.xml
  • mapred-site.xml
  • yarn-site.xml


$ git clone
$ cp -R hadoop-install/etc/hadoop/* $HADOOP_HOME/etc/hadoop/
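If you would rather write the four files by hand than copy them from a repo, a minimal pseudo-distributed configuration might look like the sketch below. The `CONF_DIR` demo path, the 8020 NameNode port, and a replication factor of 1 are typical single-node defaults, not requirements; adjust them to your setup:

```shell
#!/bin/sh
# Sketch: generate minimal pseudo-distributed config files.
# CONF_DIR defaults to a demo directory; on a real node point it
# at $HADOOP_HOME/etc/hadoop.
CONF_DIR=${CONF_DIR:-./hadoop-conf-demo}
mkdir -p "$CONF_DIR"

# HDFS as the default filesystem, on the local host.
cat > "$CONF_DIR/core-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>
EOF

# Single node, so no block replication.
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

# Run MapReduce jobs on YARN.
cat > "$CONF_DIR/mapred-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
EOF

# Shuffle service for MapReduce on the NodeManager.
cat > "$CONF_DIR/yarn-site.xml" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
EOF

echo "wrote configs to $CONF_DIR"
```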


Create directories and users, and set JAVA_HOME for all users

$ git clone
$ cd hadoop-install/users-and-dirs

Set Java for all users
$ ./

OPTIONAL: create users (only if you want to use separate users per daemon)
$ ./ # no need to create multiple users on a single node

Create required directories
$ ./  # edit the HDFS_USER, YARN_USER, and MAPRED_USER variables in this file to point to the same user

Edit ~/.bashrc (optionally also the profile files of the hdfs and yarn users)

export HADOOP_HOME=/opt/hadoop-2.3.0-cdh5.0.1
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
Refresh the bash profile
$ source ~/.bashrc  # or start a new shell with: bash
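Appending the exports can be made idempotent so that repeated runs do not duplicate lines in the profile. A sketch; the `add_line` helper is hypothetical, and `PROFILE` points at a demo file here rather than `~/.bashrc`:

```shell
#!/bin/sh
# Sketch: append Hadoop environment variables to a profile file only if
# they are not already there. Point PROFILE at ~/.bashrc on a real node.
PROFILE=${PROFILE:-./bashrc-demo}
touch "$PROFILE"

add_line() {
    # Append $1 to $PROFILE unless an identical line already exists
    # (-F: fixed string, -x: match the whole line).
    grep -qxF "$1" "$PROFILE" || echo "$1" >> "$PROFILE"
}

add_line 'export HADOOP_HOME=/opt/hadoop-2.3.0-cdh5.0.1'
add_line 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop'
add_line 'export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop'
```

Running the script twice leaves exactly one copy of each export line.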


Create HDFS dirs

Note: run these commands after HDFS has been formatted and the services have been started (see the next two sections); the NameNode must be up for `hdfs dfs` commands to work.

Create the history directory and set its permissions and owner. If you are using a single user instead of separate daemon users, drop the `sudo -u hdfs` prefix from each command.

$ sudo -u hdfs hdfs dfs -mkdir -p /user/log/history
$ sudo -u hdfs hdfs dfs -chmod -R 1777 /user/log/history
$ sudo -u hdfs hdfs dfs -chown mapred:hadoop /user/log/history

Create the /tmp directory and set its permissions:

$ sudo -u hdfs hadoop fs -mkdir /tmp
$ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
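The directory setup above can be collected into one script. This sketch adds a hypothetical `DRY_RUN` switch so the commands can be previewed without a running NameNode; set `DRY_RUN=0` (and keep or drop the `sudo -u hdfs` prefix as appropriate) on a live node:

```shell
#!/bin/sh
# Sketch: the HDFS directory setup as one script. With DRY_RUN=1 (the
# default here) it only prints the commands it would run; with DRY_RUN=0
# it executes them via `sudo -u hdfs` (the separate-user setup).
DRY_RUN=${DRY_RUN:-1}
SUDO="sudo -u hdfs"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        # Intentional word splitting of $SUDO into "sudo -u hdfs".
        $SUDO "$@"
    fi
}

run hdfs dfs -mkdir -p /user/log/history
run hdfs dfs -chmod -R 1777 /user/log/history
run hdfs dfs -chown mapred:hadoop /user/log/history
run hadoop fs -mkdir /tmp
run hadoop fs -chmod -R 1777 /tmp
```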


Format HDFS

If you have created a separate user for each daemon:

$ sudo -u hdfs bin/hdfs namenode -format

If you are using a single user:

$ bin/hdfs namenode -format


Start HDFS and YARN services

If you have created a separate user for each daemon:

$ sudo -u hdfs sbin/start-dfs.sh
$ sudo -u yarn sbin/start-yarn.sh

If you are using a single user:

$ sbin/start-dfs.sh
$ sbin/start-yarn.sh


  • $HADOOP_HOME/bin/hadoop: for basic Hadoop operations
  • $HADOOP_HOME/bin/yarn: for YARN-related operations
  • $HADOOP_HOME/bin/mapred: for MapReduce-related operations
  • $HADOOP_HOME/bin/hdfs: for HDFS-related operations

Daemon utilities:

  • $HADOOP_HOME/sbin/start-dfs.sh
  • $HADOOP_HOME/sbin/start-yarn.sh
  • $HADOOP_HOME/sbin/stop-dfs.sh and stop-yarn.sh
  • $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver


Check the installation using jps (unrelated JVM processes running on the machine will also be listed):

$ jps
20803 NameNode
22056 JobHistoryServer
22124 WebAppProxyServer
7926 Main
21817 NodeManager
21560 ResourceManager
8018 RemoteMavenServer
21373 SecondaryNameNode
21049 DataNode
25651 ElasticSearch
28730 Jps

If these services are not up, check the logs in the $HADOOP_HOME/logs directory to identify the issue.
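The jps check can be automated by comparing the output against the daemons expected in pseudo-distributed mode. A sketch; `check_daemons` is a hypothetical helper, and `SAMPLE` is a trimmed version of the output above, so on a real node you would pipe `jps` into it instead:

```shell
#!/bin/sh
# Sketch: report which expected Hadoop daemons are missing from jps output.
# On a real node: jps | check_daemons
EXPECTED="NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer"

check_daemons() {
    # Reads jps output on stdin; prints each expected daemon not found.
    out=$(cat)
    for d in $EXPECTED; do
        echo "$out" | grep -qw "$d" || echo "missing: $d"
    done
}

# Trimmed example output (JobHistoryServer deliberately absent):
SAMPLE="20803 NameNode
21817 NodeManager
21560 ResourceManager
21373 SecondaryNameNode
21049 DataNode"

echo "$SAMPLE" | check_daemons  # prints: missing: JobHistoryServer
```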

Web interfaces

In pseudo-distributed mode all web UIs are on localhost, at the Hadoop 2.x default ports:

  • NameNode: http://localhost:50070
  • ResourceManager: http://localhost:8088
  • JobHistoryServer: http://localhost:19888

Which daemon runs where (in pseudo-distributed mode, all of them run on the same machine):

Master Node:
 - NameNode
 - ResourceManager
 - JobHistoryServer

Slave Node:
 - NodeManager
 - DataNode
 - WebAppProxyServer

