Q: I'm working through it now and obtained the node and got in through VNC. When I ran hadoop dfsadmin -report, I got the error shown in the attached screenshot. So, I'm not even getting to the 0 node problem. Any idea what is going on?
A: It seems your ~/.hadoop2/conf/core-site.xml is corrupted.
Please check whether it is the same as the following:
hr4757@login1:~$ cat ~/.hadoop2/conf/core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://c201-124:9000</value>
  </property>
</configuration>
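Before replacing anything, you can check whether your copy is at least well-formed XML (a quick sketch; xmllint is commonly available on login nodes, but verify with which xmllint):

$ xmllint --noout ~/.hadoop2/conf/core-site.xml

No output means the file parses cleanly; a corrupted file will print the line where parsing fails.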
If it does not match, use the conf tarball attached below: upload the attached file to {your home}/.hadoop2.
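If you downloaded the tarball to your local machine first, scp can copy it up (a sketch; substitute your own username and the cluster's full login hostname for the placeholders):

local$ scp hadoop2conf.tar.gz {your username}@{login hostname}:.hadoop2/

Then move the old conf directory aside and unpack the fresh one: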
hr4757@login1:~/.hadoop2$ mv conf conf_bak
hr4757@login1:~/.hadoop2$ tar -xvf hadoop2conf.tar.gz
conf/
conf/slaves.default
conf/capacity-scheduler.xml
conf/configuration.xsl
conf/mapred-site.xml
conf/ssl-client.xml.example
conf/log4j.properties
conf/hdfs-site.xml
conf/mapred-site.xml.default
conf/hadoop-env.sh
conf/slaves
conf/ssl-server.xml.example
conf/hadoop-env.sh~
conf/core-site.xml.default
conf/masters
conf/core-site.xml
conf/hadoop-metrics.properties
conf/hdfs-site.xml.default
conf/hadoop-policy.xml
hr4757@login1:~/.hadoop2$ ls conf
capacity-scheduler.xml hadoop-metrics.properties mapred-site.xml.default
configuration.xsl hadoop-policy.xml masters
core-site.xml hdfs-site.xml slaves
core-site.xml.default hdfs-site.xml.default slaves.default
hadoop-env.sh log4j.properties ssl-client.xml.example
hadoop-env.sh~ mapred-site.xml ssl-server.xml.example
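With the new conf directory in place, re-run the command from the question to verify the fix (note, as an assumption worth checking: the hostname in fs.default.name must name the node actually allocated to your session; c201-124 was the one in the sample above):

$ hadoop dfsadmin -report

If the same error appears, the problem lies somewhere other than core-site.xml.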