This post records the steps for building a Hadoop 2.7 cluster on 64-bit CentOS 7. These notes are for reference only!
Hostname | IP address | Role | Hadoop user
---|---|---|---
hadoop-master | 192.168.30.60 | NameNode, ResourceManager, SecondaryNameNode | hadoop
hadoop-slave01 | 192.168.30.61 | DataNode, NodeManager | hadoop
hadoop-slave02 | 192.168.30.62 | DataNode, NodeManager | hadoop
hadoop-slave03 | 192.168.30.63 | DataNode, NodeManager | hadoop
$ systemctl stop firewalld
$ systemctl disable firewalld
$ setenforce 0
$ sed -i 's/enforcing/disabled/' /etc/sysconfig/selinux
Note: the commands above must be run as root.
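To double-check that the firewall and SELinux are really off (getenforce reports Permissive right after setenforce 0, and Disabled after a reboot):

$ systemctl is-enabled firewalld   # expect: disabled
$ getenforce                       # expect: Permissive (or Disabled after reboot)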
$ vi /etc/hosts
########## Hadoop host ##########
192.168.30.60 hadoop-master
192.168.30.61 hadoop-slave01
192.168.30.62 hadoop-slave02
192.168.30.63 hadoop-slave03
Note: the above requires root. Verify by pinging each hostname; each should resolve to the corresponding IP.
First create the hadoop user, then configure passwordless SSH among the four hosts as that user. The steps are the same on every host; hadoop-master is used as the example below.
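Creating the user itself is not shown above; a minimal sketch, run as root on each of the four hosts (the password choice is up to you):

$ useradd hadoop
$ passwd hadoop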
Generate the private and public keys
$ ssh-keygen -t rsa
Copy the public key to every host (a password is required)
$ ssh-copy-id hadoop@hadoop-master
$ ssh-copy-id hadoop@hadoop-slave01
$ ssh-copy-id hadoop@hadoop-slave02
$ ssh-copy-id hadoop@hadoop-slave03
Note: run the above as the hadoop user. Verify that the hadoop user can SSH to every other host without being prompted for a password.
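To confirm all four hosts at once, the following loop should print each hostname without any password prompt:

$ for h in hadoop-master hadoop-slave01 hadoop-slave02 hadoop-slave03; do ssh hadoop@$h hostname; done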
Note: perform the following steps as the hadoop user
$ cd /home/hadoop
$ curl -o jdk-8u151-linux-x64.tar.gz http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz?AuthParam=1516091623_fa4174d4b1eed73f36aa38230498cd48
Install Java as the hadoop user:
$ mkdir -p /home/hadoop/app/java
$ tar -zxf jdk-8u151-linux-x64.tar.gz
$ mv jdk1.8.0_151 /home/hadoop/app/java/jdk1.8
$ vi /home/hadoop/.bash_profile
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Load the environment variables
$ source /home/hadoop/.bash_profile
Note: verify with java -version, which should print the Java version information.
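For JDK 8u151 the output should look roughly like this (build strings may differ):

java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)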
Download, install, and configure Hadoop as the hadoop user:
$ curl -O https://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz
$ mkdir -p /home/hadoop/app/hadoop/{tmp,hdfs/{data,name}}
$ tar -zxf hadoop-2.7.5.tar.gz -C /home/hadoop/app/hadoop
Hadoop's configuration files are all XML; edit them as the hadoop user.
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/app/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/app/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/app/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
mapred-site.xml must first be copied from the template and then modified:
$ cp /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-site.xml.template /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-site.xml
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-master:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/history/done</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/history/done_intermediate</value>
    </property>
</configuration>
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop-master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop-master:8088</value>
    </property>
</configuration>
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/slaves
hadoop-slave01
hadoop-slave02
hadoop-slave03
Set the JAVA_HOME environment variable in hadoop-env.sh as follows:
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
Set the JAVA_HOME environment variable in yarn-env.sh as follows:
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/yarn-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
Set the JAVA_HOME environment variable in mapred-env.sh as follows:
$ vi /home/hadoop/app/hadoop/hadoop-2.7.5/etc/hadoop/mapred-env.sh
export JAVA_HOME=/home/hadoop/app/java/jdk1.8
Copy the configured Hadoop directory to the slave nodes:

$ scp -r /home/hadoop/app/hadoop hadoop@hadoop-slave01:/home/hadoop/app/
$ scp -r /home/hadoop/app/hadoop hadoop@hadoop-slave02:/home/hadoop/app/
$ scp -r /home/hadoop/app/hadoop hadoop@hadoop-slave03:/home/hadoop/app/
On every host, edit the .bash_profile file in the hadoop user's home directory and append the following at the end:
$ vi /home/hadoop/.bash_profile
### Hadoop PATH
export HADOOP_HOME=/home/hadoop/app/hadoop/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the environment variables:
$ source /home/hadoop/.bash_profile
Note: this sets Hadoop's user-level environment variables. For a system-wide setting, add a new script under the /etc/profile.d/ directory instead.
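A minimal sketch of the system-wide alternative (the file name hadoop.sh is an arbitrary choice):

$ sudo tee /etc/profile.d/hadoop.sh <<'EOF'
export HADOOP_HOME=/home/hadoop/app/hadoop/hadoop-2.7.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF

Either way, hadoop version should now run from any directory.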
Format the HDFS filesystem on the Hadoop master node, then start the cluster:
$ hdfs namenode -format
$ start-all.sh
Note: because jobhistory is configured in mapred-site.xml, the history server must be started as well:
$ mr-jobhistory-daemon.sh start historyserver
Stop the cluster with:

$ stop-all.sh
Note: the daemons can also be started one at a time, in the order NameNode --> DataNodes --> YARN --> NodeManagers --> history server, as sketched below.
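One possible sequence using the per-daemon scripts in $HADOOP_HOME/sbin (run each command on the host indicated in the comment):

$ hadoop-daemon.sh start namenode             # on hadoop-master
$ hadoop-daemon.sh start secondarynamenode    # on hadoop-master
$ hadoop-daemon.sh start datanode             # on each slave
$ yarn-daemon.sh start resourcemanager        # on hadoop-master
$ yarn-daemon.sh start nodemanager            # on each slave
$ mr-jobhistory-daemon.sh start historyserver # on hadoop-master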
Master processes:
$ jps
3124 NameNode
3285 SecondaryNameNode
3451 ResourceManager
4254 Jps
Slave processes:
$ jps
3207 Jps
2409 NodeManager
2332 DataNode
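Beyond jps, cluster health can also be checked with dfsadmin (the report should list three live DataNodes), or through the web UIs: the NameNode at http://hadoop-master:50070 (the 2.x default port) and the ResourceManager at http://hadoop-master:8088 (as configured in yarn-site.xml above):

$ hdfs dfsadmin -report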
Run the bundled pi example to verify that MapReduce jobs can execute:

$ hadoop jar /home/hadoop/app/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 5 10
The result returned is: Estimated value of Pi is 3.28000000000000000000
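The estimate is coarse because only 5 maps with 10 samples each were used; increasing both brings the result closer to pi, for example:

$ hadoop jar /home/hadoop/app/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 16 1000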
$ hadoop fs -mkdir -p /user/hadoop/input
$ hadoop fs -mkdir -p /user/hadoop/output
Upload a local test file (here, The_Man_of_Property) to the input directory:

$ hadoop fs -put The_Man_of_Property /user/hadoop/input
Run the test:
$ hadoop jar /home/hadoop/app/hadoop/hadoop-2.7.5/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar wordcount /user/hadoop/input /user/hadoop/output/wordcounttest
$ hadoop fs -ls /user/hadoop/output/wordcounttest
Found 2 items
-rw-r--r--   3 hadoop supergroup          0 2018-01-17 14:32 /user/hadoop/output/wordcounttest/_SUCCESS
-rw-r--r--   3 hadoop supergroup     181530 2018-01-17 14:32 /user/hadoop/output/wordcounttest/part-r-00000
$ hadoop fs -get /user/hadoop/output/wordcounttest/part-r-00000 ./
$ cat part-r-00000 | sort -k2 -nr | head
the 5144
of 3407
to 2782
and 2573
a 2543
he 2139
his 1912
was 1702
in 1694
had 1526
Reposted from: https://blog.51cto.com/balich/2062052