Hadoop 2 Pseudo-Distributed Installation

0 Note: Hadoop 2 does not accept underscores (_) in hostnames; if a hostname with an underscore appears in a Hadoop 2 configuration file, it will raise an error. Rename the host to something like h2single511-115 instead.

1 Hadoop 2 download address:
 http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.5.2/
 
2 Prepare the environment:

a) Set the IP address (192.168.1.114 in this guide)
b) Stop the firewall
c) Disable the firewall from starting on boot
d) Set the hostname (h2single114 in this guide)
e) Bind the IP to the hostname
f) Set up passwordless SSH login (if ~/.ssh already contains data, clear it first and regenerate)

g) Install JDK 1.7 and set the environment variables
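The steps above can be sketched as commands (a sketch only, assuming CentOS 6, the root user, and this guide's example IP and hostname):

```shell
# b) + c) stop the firewall now and keep it off after reboot (CentOS 6 style)
service iptables stop
chkconfig iptables off

# d) set the hostname (no underscores -- see the note at the top)
hostname h2single114

# e) bind the IP to the hostname
echo "192.168.1.114 h2single114" >> /etc/hosts

# f) passwordless SSH: clear any old keys first, then regenerate
rm -rf ~/.ssh
mkdir -p ~/.ssh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh h2single114 date   # should log in without a password prompt
```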

For the details of step 2, see these companion posts:

"Permission denied when installing JDK 1.7 on 32-bit CentOS"

"Hadoop 1.1.2 pseudo-distributed installation (single machine)"

3 Installation:

Unpack hadoop-2.5.2.tar.gz and rename the directory to hadoop2.

The installation has two parts: install HDFS, then install Yarn, both in pseudo-distributed mode.

3.1 Install HDFS

a) Edit the configuration file etc/hadoop/hadoop-env.sh:
[root@h2single114 hadoop]# pwd
/usr/local/hadoop2/etc/hadoop
[root@h2single114 hadoop]# vi hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.7

b) Edit the configuration file etc/hadoop/core-site.xml:
<configuration>
    <property>
        <!-- Default filesystem URI; clients use this to reach the NameNode -->
        <name>fs.defaultFS</name>
        <value>hdfs://h2single114:9000</value>
    </property>
    <property>
        <!-- Base directory for HDFS data and metadata -->
        <name>hadoop.tmp.dir</name>
        <value>/opt/tmp</value>
    </property>
    <property>
        <!-- Keep deleted files in the trash for 1440 minutes (24 hours) -->
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>


c) Edit the configuration file etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <!-- One copy of each block is enough on a single machine -->
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>


d) Format the filesystem:
[root@h2single114 bin]# pwd
/usr/local/hadoop2/bin
[root@h2single114 bin]# hdfs namenode -format

e) Start the HDFS cluster:
[root@h2single114 sbin]# pwd
/usr/local/hadoop2/sbin
[root@h2single114 sbin]# start-dfs.sh

f) Open in a web browser:
http://h2single114:50070/
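If that page loads, HDFS is up. A quick command-line check is also possible (a sketch; the daemon names are the ones start-dfs.sh launches, and the log path assumes the install directory used above):

```shell
# Verify the three HDFS daemons are running
for daemon in NameNode DataNode SecondaryNameNode; do
    if jps | grep -q "$daemon"; then
        echo "$daemon is running"
    else
        echo "$daemon is NOT running -- check the logs under /usr/local/hadoop2/logs"
    fi
done
```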


3.1.1 HDFS shell commands:

Practice:
  Create directories:
  $ bin/hdfs dfs -mkdir /user
  $ bin/hdfs dfs -mkdir /user/root
  Copy a file:
  $ bin/hdfs dfs -put /etc/profile /user/input    # upload the local file /etc/profile into /user/input on HDFS
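A few more everyday commands in the same pattern (the paths here are illustrative, not part of the guide's setup):

```shell
bin/hdfs dfs -ls /user                                # list a directory
bin/hdfs dfs -cat /user/input/*                       # print file contents
bin/hdfs dfs -get /user/input/profile /tmp/profile    # download to the local filesystem
bin/hdfs dfs -rm -r /user/input                       # delete recursively (goes to trash, per fs.trash.interval)
```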

  
Stop the cluster:
[root@h2single511-115 sbin]# pwd
/usr/local/hadoop2.4/sbin
[root@h2single511-115 sbin]# stop-dfs.sh

List the available shell commands:
[root@h2single511-115 sbin]# hdfs  dfs 
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
		.........

3.2 Install Yarn

Copy the template file and rename it:
[root@h2single511-115 hadoop]# cp mapred-site.xml.template mapred-site.xml

Edit the configuration file etc/hadoop/mapred-site.xml:
<configuration>
    <property>
        <!-- Run MapReduce jobs on Yarn -->
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>512</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>512</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop3:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop3:10021</value>
    </property>
</configuration>
Edit the configuration file etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <!-- The node the Yarn ResourceManager runs on -->
        <description>The hostname of the RM.</description>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop3</value>
    </property>
    <property>
        <!-- Yarn can run many kinds of computation; this enables the
             shuffle service that MapReduce jobs need -->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Start the Yarn cluster:
[root@h2single511-115 sbin]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop2.4/logs/yarn-root-resourcemanager-h2single511-115.out
localhost: starting nodemanager, logging to /usr/local/hadoop2.4/logs/yarn-root-nodemanager-h2single511-115.out
If startup fails, check the .out log files listed above.

Start the jobhistory server:
# mr-jobhistory-daemon.sh start historyserver


Check that startup succeeded:
[root@h2single511-115 sbin]# jps
13613 Jps
13222 ResourceManager
13312 NodeManager


Open in a web browser:

ResourceManager - http://hadoop3-115:8088/
NameNode - http://hadoop3-115:50070/
jobhistory - http://hadoop3:10021/jobhistory    (the web port is set to 10021 in the configuration above)


Run an example:
[root@h2single511-115 mapreduce]# pwd
/usr/local/hadoop2.4/share/hadoop/mapreduce
[root@h2single511-115 mapreduce]# hadoop jar hadoop-mapreduce-examples-2.4.0.jar wordcount /user/input /user/output



View the results:
[root@h2single511-115 bin]# hdfs dfs -cat /user/output/*
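With default settings, a MapReduce job leaves an empty _SUCCESS marker plus one part-r-NNNNN file per reducer in the output directory, so the result can also be inspected like this (a sketch):

```shell
bin/hdfs dfs -ls /user/output
# expect: _SUCCESS        (job-completed marker)
#         part-r-00000    (one "word<TAB>count" line per word)
bin/hdfs dfs -cat /user/output/part-r-00000
```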


Stop the Yarn cluster:
  $ sbin/stop-yarn.sh

If you do not want to start the SecondaryNameNode, you can write a shell script that starts the daemons individually, as follows:

1 Under /usr/local/hadoop2.4, create a script start-hadoop2.sh with this content:

#!/bin/sh
hadoop_home=/usr/local/hadoop2.4
$hadoop_home/sbin/hadoop-daemon.sh start namenode
$hadoop_home/sbin/hadoop-daemon.sh start datanode
$hadoop_home/sbin/yarn-daemon.sh start resourcemanager
$hadoop_home/sbin/yarn-daemon.sh start nodemanager
$hadoop_home/sbin/mr-jobhistory-daemon.sh  start historyserver


2 Run it directly to start everything: [root@h2single511-115 hadoop2.4]# sh start-hadoop2.sh
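A matching stop script (stop-hadoop2.sh is a hypothetical name, not from the original setup) can shut the same daemons down in reverse order:

```shell
#!/bin/sh
# Stop the daemons started by start-hadoop2.sh, in reverse order
hadoop_home=/usr/local/hadoop2.4
$hadoop_home/sbin/mr-jobhistory-daemon.sh stop historyserver
$hadoop_home/sbin/yarn-daemon.sh stop nodemanager
$hadoop_home/sbin/yarn-daemon.sh stop resourcemanager
$hadoop_home/sbin/hadoop-daemon.sh stop datanode
$hadoop_home/sbin/hadoop-daemon.sh stop namenode
```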
