hadoop datanode fails to start - Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured

Updated: 2023-11-17 08:54:22

My question:


I am trying to set up a Hadoop cluster with one namenode and two datanodes (slave1 and slave2), so I downloaded the zip file from Apache Hadoop and unzipped it on the namenode and on one of the datanodes (slave1).

So I did all the configuration (including formatting the namenode) on the master/slave1 and successfully set up slave1 with the master, which means that I am able to submit a job and see the datanode instance in the admin UI.

So I zipped the whole Hadoop installation on slave1, unzipped it on slave2, and changed some property values such as the tmp directory and environment variables such as JAVA_HOME. I didn't touch the master URL (fs.defaultFS) in core-site.xml.
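
For reference, the relevant part of core-site.xml in a setup like this usually looks something like the snippet below; the hostname master and port 9000 are placeholders rather than values taken from the question.

    <!-- core-site.xml: when dfs.namenode.rpc-address is not set explicitly,
         the datanode derives the namenode address from fs.defaultFS -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>  <!-- placeholder host/port -->
      </property>
    </configuration>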

When I try to start the datanode on slave2, I get this error:

java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured

It is weird that I didn't specify these properties on slave1 either and can start the datanode on slave1 without any problem, yet slave2 throws this error even though all the configurations are the same.
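
A quick way to see which configuration each node is actually picking up (a sketch; hdfs getconf is available in Hadoop 2.x) is to run the following on both slave1 and slave2 and compare the output:

    # Which config directory is the hdfs command reading?
    echo $HADOOP_CONF_DIR

    # What does the loaded configuration resolve fs.defaultFS to?
    # If this prints file:/// (the built-in default), the edited core-site.xml
    # is not being read, which would produce exactly the error above.
    hdfs getconf -confKey fs.defaultFS

    # Which namenodes does Hadoop think exist?
    hdfs getconf -namenodes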

I found these links related to this problem, but none of them worked in my environment.

  1. javaioioexception-incorrect
  2. dfs-namenode-servicerpc-address-or-dfs-namenode-rpc-address-is-not-configured
  3. incorrect-configuration-namenode-address-dfs-namenode-rpc-address-is-not-config

I am using Hadoop 2.4.1 and JDK 1.7 on CentOS.

It would be very helpful if someone who has had this problem and already figured it out can share some information.

Thanks.

These steps solved the problem for me:

  1. export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
  2. echo $HADOOP_CONF_DIR
  3. hdfs namenode -format
  4. hdfs getconf -namenodes
  5. ./start-dfs.sh

Then, Hadoop starts properly.
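
Put together as a shell session, the fix looks roughly like this (a sketch assuming a standard Hadoop 2.x layout with start-dfs.sh under $HADOOP_HOME/sbin; note that reformatting the namenode erases existing HDFS metadata, so it is only appropriate on a fresh cluster):

    # Point Hadoop at the directory that holds core-site.xml and hdfs-site.xml
    # (no spaces around '=' in shell)
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    echo $HADOOP_CONF_DIR

    # Re-initialize the namenode metadata (destroys existing HDFS data)
    hdfs namenode -format

    # Confirm that a namenode address can now be resolved from the configuration
    hdfs getconf -namenodes

    # Start the HDFS daemons (namenode, datanodes, secondary namenode)
    $HADOOP_HOME/sbin/start-dfs.sh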