Building a Hadoop Platform with Docker

Notes from the Introduction to Big Data course, Oct 13
Course content: install an Ubuntu virtual machine and build a Hadoop platform with Docker

P.S. This post is a set of course notes, which means some of the content may be wrong or unclear and the ordering may be a bit chaotic — please read critically and bear with it (these are my own notes; if that really bothers you, feel free to skip them).

Note: this article involves a lot of command-line operations, which may be difficult if you are not used to them; it may help to learn some Linux basics first and then read it carefully.

Resources

https://polar-bear.eu.org/files/基于Docker构建Hadoop平台.pdf

0. Introduction

We use Docker to build a Hadoop technology platform, which involves installing Docker, Java, Scala, Hadoop, HBase and Spark.

The cluster consists of 5 machines, with hostnames h01, h02, h03, h04 and h05. h01 is the master; the others are slaves.

Virtual machine configuration: 1 CPU core (2 threads), 8 GB of RAM and a 30 GB disk are recommended. With the original 4 GB of RAM, HBase and Spark did not run properly.

  • JDK 1.8
  • Scala 2.11.12
  • Hadoop 3.3.3
  • HBase 3.0.0-alpha-4
  • Spark 3.3.3

1. Docker

1.1 Installing Docker on Ubuntu 22.04

On Ubuntu, every Docker command must be prefixed with sudo (not needed if you are already root); without sudo the Docker commands will simply fail. Install Docker from an account with administrator privileges.

mike@ubuntu2204:~$ wget -qO- https://get.docker.com/ | sh

After the installation finishes, start the Docker service with sudo.

mike@ubuntu2204:~$ sudo service docker start

List all running containers. Since Docker has just been installed and nothing is running yet, the output looks like this:

mike@ubuntu2204:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
mike@ubuntu2204:~$

1.2 Using Docker

Docker networks now provide built-in DNS resolution, so we can create a dedicated virtual network for the Hadoop cluster with the command below. Host, bridge or macvlan modes are all possible; here we use bridge mode, which lets the 5 containers reach each other as well as the host and the gateway, and gives them Internet access for downloading packages online.

mike@ubuntu2204:~$ sudo docker network create --driver=bridge hadoop

The command above creates a virtual bridge network named hadoop, which provides automatic DNS resolution inside it. Use the following command to list the networks in Docker; the newly created hadoop bridge network should appear.

mike@ubuntu2204:~$ sudo docker network ls
[sudo] password for mike:
NETWORK ID NAME DRIVER SCOPE
3948edc3e8f3 bridge bridge local
337965dd9b1e hadoop bridge local
cb8f2c453adc host host local
fff4bd1c15ee mynet macvlan local
30e1132ad754 none null local
mike@ubuntu2204:~$
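
To double-check the network before attaching containers to it, you can inspect it (an optional step, not part of the original notes); the subnet and gateway values are assigned by Docker and will differ on your machine:

# optional: verify the subnet/gateway Docker assigned to the hadoop network
mike@ubuntu2204:~$ sudo docker network inspect hadoop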


Pull the ubuntu 22.04 image.

(Known issue): after switching the container's apt sources to the Aliyun mirror on ubuntu 22.04 inside Docker, the JDK could not be installed, and I have not found a fix yet (at least I haven't).

So treat the 22.04 source-switching part as a reference only; a working procedure for 16.04 is given later in this article.

mike@ubuntu2204:~$ sudo docker pull ubuntu:22.04

List the downloaded images:

mike@ubuntu2204:~$ sudo docker images
[sudo] password for mike:
REPOSITORY TAG IMAGE ID CREATED SIZE
newuhadoop latest fe08b5527281 3 days ago 2.11GB
ubuntu 22.04 27941809078c 6 weeks ago 77.8MB
mike@ubuntu2204:~$

Start a container from the image. Notice that the shell is now the container's shell; the string after @ in the prompt is the container's short ID.

mike@ubuntu2204:~$ sudo docker run -it ubuntu:22.04 /bin/bash
root@27941809078c:/#

Typing exit quits the container. It is usually better to press Ctrl + P followed by Ctrl + Q, which detaches from the container while leaving it running in the background.

mike@ubuntu2204:~$

List all containers on this machine:

mike@ubuntu2204:~$ sudo docker ps -a
[sudo] password for mike:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
NAMES
8016da5278ae newuhadoop "/bin/bash" 3 days ago Up 2 days
h05
409c7e8aa2e9 newuhadoop "/bin/bash" 3 days ago Up 2 days
h04
0d8af236e1e7 newuhadoop "/bin/bash" 3 days ago Up 2 days
h03
72d62b7d4874 newuhadoop "/bin/bash" 3 days ago Up 2 days
h02
d4d3ca3bbb61 newuhadoop "/bin/bash" 3 days ago Up 2 days 0.0.0.0:8088-
>8088/tcp, :::8088->8088/tcp, 0.0.0.0:9870->9870/tcp, :::9870->9870/tcp h01
mike@ubuntu2204:~$

Here you can see the containers created earlier, running in the background. Because this write-up was produced after the fact, only the 5 Hadoop containers were kept to save memory; the original base container had already been deleted.

Start a container that is in the exited state; the last argument is the container ID.

mike@ubuntu2204:~$ sudo docker start 27941809078c

Attach to a container:

mike@ubuntu2204:~$ sudo docker attach 27941809078c

Stop a container:

mike@ubuntu2204:~$ sudo docker stop 27941809078c

2. Installing the cluster

The main task is to set up a JDK 1.8 environment (Spark needs Scala, Scala needs JDK 1.8) plus Hadoop, and use that to build the base image.

2.1 Installing Java and Scala

Enter the Ubuntu container created earlier.

First, change the apt sources.

2.1.1 Changing the apt sources

Back up the original sources list:

root@27941809078c:/# cp /etc/apt/sources.list /etc/apt/sources_init.list
root@27941809078c:/#

Delete the old sources file (vim is not installed yet, so we cannot simply edit it in place):

root@27941809078c:/# rm /etc/apt/sources.list

Copy the following command and press Enter to switch to the Aliyun ubuntu 22.04 mirror in one go (we are already root, so the prompt is #):

bash -c "cat << EOF > /etc/apt/sources.list && apt update
deb http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
EOF"

Then run apt update and apt upgrade: update refreshes the package lists, upgrade upgrades the installed packages.

root@27941809078c:/# apt update
root@27941809078c:/# apt upgrade

2.1.2 Installing Java and Scala

Install JDK 1.8 directly with:

root@27941809078c:/# apt install openjdk-8-jdk

Check the installation:

root@27941809078c:/# java -version
openjdk version "1.8.0_312"
OpenJDK Runtime Environment (build 1.8.0_312-8u312-b07-0ubuntu1-b07)
OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)
root@27941809078c:/#

Next, install Scala:

root@27941809078c:/# apt install scala

Check the installation:

root@27941809078c:/# scala
Welcome to Scala 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_312).
Type in expressions for evaluation. Or try :help.
scala>

Type :quit to exit the Scala REPL.

Some common operations

  • docker

Pull the ubuntu 22.04 image: sudo docker pull ubuntu:22.04

List downloaded images: sudo docker images

Start a container from an image: sudo docker run -it <image name> /bin/bash

Typing exit quits the container; pressing Ctrl + P then Ctrl + Q detaches from it while leaving it running in the background.

List all containers on this machine: sudo docker ps -a

Start an exited container (the last argument is the container ID): sudo docker start 27941809078c

Attach to a container: sudo docker attach 27941809078c

Stop a container: sudo docker stop 27941809078c

  • vim

Vim is a powerful text editor. Some commonly used operations:

  1. Return to normal (command) mode: press Esc.
  2. Insert mode: press i in normal mode, then start typing.
  3. Delete a character: press x in normal mode (or use Backspace/Delete while in insert mode).
  4. Save and quit: type :wq in normal mode and press Enter.
  5. Save without quitting: type :w and press Enter.
  6. Open a file: type :e <filename> and press Enter.
  7. Search for text: type /<text to find> and press Enter.
  8. Replace text: type :%s/<old text>/<new text>/g and press Enter.
  9. Jump to a line: type :<line number> and press Enter.
  10. Copy (yank) a selection: select text in visual mode and press y, or use :'<,'>y.
  11. Paste: press p in normal mode.
  12. Undo: press u in normal mode.
  13. Redo: press Ctrl-r in normal mode.
  14. Show help: type :help <command> and press Enter.
  • nano

Nano is a terminal-based text editor. Some commonly used operations:

  1. Open a file: run nano <filename> in the terminal.
  2. Edit text: there is no separate insert mode — just start typing.
  3. Save: press Ctrl-O, then Enter to confirm the file name.
  4. Quit: press Ctrl-X (you will be asked whether to save unsaved changes).
  5. Undo: press Alt-U.
  6. Redo: press Alt-E.
  7. Move the cursor: use the arrow keys.
  8. Search: press Ctrl-W, type the text to find and press Enter.
  9. Search and replace: press Ctrl-\, then enter the text to find and its replacement.

echo " … " > /etc/apt/sources.list

While typing a multi-line quoted string with echo, the shell shows a > continuation prompt on each new line:

root@6936446824fb:/# echo "
>
>
>
> "

Input continues until the closing " character; ending the command with " > /etc/apt/… (the target file path) writes the string to that file.

Installing the cluster on 16.04

No fix has been found for the 22.04 problem yet, but you can switch to 16.04 instead. Below is the 16.04 procedure, starting from changing the apt sources.

Environment: Ubuntu running inside a VMware virtual machine.

As before, the main task is to set up JDK 1.8 (Spark needs Scala, Scala needs JDK 1.8) plus Hadoop, and use that to build the base image.

Installing Java and Scala

Enter the Ubuntu container created earlier.

First, change the apt sources.

Changing the apt sources

Back up the sources:

cp /etc/apt/sources.list /etc/apt/sources_init.list

Delete the old sources file (vim is not installed yet):

rm /etc/apt/sources.list

Switch the sources:

echo "deb http://mirrors.aliyun.com/ubuntu/ xenial main  
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main

deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main

deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates universe

deb http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security universe" > /etc/apt/sources.list

Refresh the package lists and upgrade the installed packages:

apt update
apt upgrade

Install the JDK:

apt install openjdk-8-jdk

Check the Java installation:

java -version

Install Scala:

apt install scala

Check the Scala installation:

scala

Type :quit to exit.


Installing Hadoop

  • Do all the configuration in the current container
  • Export the container as an image
  • Create five containers from that image and give each a hostname
  • Enter the h01 container and start Hadoop

Install vim, which we will use to edit files:

apt install vim

Install net-tools:

apt install net-tools

Installing SSH

Install SSH and configure passwordless login. Since all of the later containers are started from the same image — like five locks and keys cast from the same mould, each key opens every lock — it is enough to configure passwordless SSH login to itself in the current container.

Install the SSH server:

apt-get install openssh-server

Install the SSH client:

apt-get install openssh-client

Go to the current user's home directory:

root@6936446824fb:/# cd ~
root@6936446824fb:~#

Generate the key pair. No typing is needed — just press Enter through the prompts. The generated keys are stored in the .ssh folder under the current user's home directory.

Files and folders whose names start with . are not shown by a plain ls; use ls -al to see them.

ssh-keygen -t rsa -P ""

Append the public key to the authorized_keys file:

cat .ssh/id_rsa.pub >> .ssh/authorized_keys

Start the SSH service:

root@6936446824fb:~# service ssh start
* Starting OpenBSD Secure Shell server sshd [ OK ]
root@6936446824fb:~#

Log in to yourself without a password:

ssh 127.0.0.1

The first connection asks you to confirm the host key; type yes:

root@6936446824fb:~# ssh 127.0.0.1
The authenticity of host '127.0.0.1 (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:TNYlo/eEdFBPLAkGC9o6tGiOufO4S2zVfHRjD0xpW8Y.
Are you sure you want to continue connecting (yes/no)?

After that the login banner appears:

Warning: Permanently added '127.0.0.1' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04.7 LTS (GNU/Linux 6.2.0-36-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Edit the .bashrc file so that the SSH service starts automatically whenever a shell starts.

Open .bashrc with vim:

vim ~/.bashrc

See the vim notes earlier in this article for how to use the editor.

Press i to enter insert mode (the bottom-left corner of the terminal shows -- INSERT --), move the cursor to the end of the file, and add this line:

service ssh start

The last few lines should now look like this:

# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
#if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
# . /etc/bash_completion
#fi
service ssh start


Press Esc to leave insert mode.

Then type a colon (:) — it appears at the bottom-left of the terminal.

Then type the three characters wq!, which is a combined command:

  • w means write (save)
  • q means quit
  • ! means force

Press Enter to leave vim.

Passwordless SSH login is now fully configured.

Installing Hadoop

Download the Hadoop release:

wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-3.3.3/hadoop-3.3.3.tar.gz

This mirror may refuse the connection (Connection refused); in that case use the Aliyun mirror instead:

wget https://mirrors.aliyun.com/apache/hadoop/common/hadoop-3.3.3/hadoop-3.3.3.tar.gz

Extract it into /usr/local and rename the folder:

tar -zxvf hadoop-3.3.3.tar.gz -C /usr/local/
cd /usr/local/
mv hadoop-3.3.3 hadoop

Next, edit /etc/profile and add the environment variables below to it.

First run update-alternatives --config java to find out where Java is installed. For example, my output was /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java, which means the JDK path is /usr/lib/jvm/java-8-openjdk-amd64 — that is what JAVA_HOME should be set to.
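
If you prefer not to read the path off by eye, the sketch below (an optional helper, not part of the original steps) derives the JDK directory from the java binary that the alternatives system points at:

# resolve the real path of the java binary and strip the trailing /jre/bin/java or /bin/java
readlink -f "$(which java)" | sed 's:/jre/bin/java$::; s:/bin/java$::'
# prints e.g. /usr/lib/jvm/java-8-openjdk-amd64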

Open /etc/profile with vim:

vim /etc/profile

Append the following:

#java
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
#hadoop
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HDFS_DATANODE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_NAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

Save the file.

Make the environment variables take effect:

source /etc/profile
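
As a quick sanity check (optional, not part of the original steps), confirm that the variables resolved the way you expect:

echo $JAVA_HOME    # should print /usr/lib/jvm/java-8-openjdk-amd64
hadoop version     # should print Hadoop 3.3.3 plus build information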

The tedious part: editing the configuration files

Six important configuration files need to be modified in the directory /usr/local/hadoop/etc/hadoop.

cd /usr/local/hadoop/etc/hadoop

Edit hadoop-env.sh and add the following at the end of the file:
vim hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

For the XML files below, only the part between <configuration> and </configuration> should be changed — do not replace the whole file.

Change core-site.xml to:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://h01:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop3/hadoop/tmp</value>
</property>
</configuration>

Change hdfs-site.xml to:

<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop3/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop3/hadoop/hdfs/data</value>
</property>
</configuration>

Change mapred-site.xml to:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>
/usr/local/hadoop/etc/hadoop,
/usr/local/hadoop/share/hadoop/common/*,
/usr/local/hadoop/share/hadoop/common/lib/*,
/usr/local/hadoop/share/hadoop/hdfs/*,
/usr/local/hadoop/share/hadoop/hdfs/lib/*,
/usr/local/hadoop/share/hadoop/mapreduce/*,
/usr/local/hadoop/share/hadoop/mapreduce/lib/*,
/usr/local/hadoop/share/hadoop/yarn/*,
/usr/local/hadoop/share/hadoop/yarn/lib/*
</value>
</property>
</configuration>

Change yarn-site.xml to:

<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>h01</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

Change workers to:

h01
h02
h03
h04
h05

Starting the cluster in Docker

First commit the current container as an image.

The command below commits the changes in container 6936446824fb as a new image named newuhadoop, with the commit message "haddop" and the author set to "hadoop":

sudo docker commit -m "haddop" -a "hadoop" 6936446824fb newuhadoop

Then list the images:

sudo docker images

The output looks like this:

zyrf@zyrf-virtual-machine:~$ sudo docker commit -m "haddop" -a "hadoop" 6936446824fb newuhadoop
sha256:3666704d18c7cdf9adc6102df91db62222c236bf12f5bad01b3dff45e2e21729
zyrf@zyrf-virtual-machine:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
newuhadoop latest 3666704d18c7 About a minute ago 2.62GB
ubuntu 22.04 e4c58958181a 3 weeks ago 77.8MB
ubuntu <none> 3565a89d9e81 5 weeks ago 77.8MB
ubuntu 16.04 b6f507652425 2 years ago 135MB

Open 5 terminals and run the following commands, one per terminal.

The first command starts h01, which will be the master node, so it publishes ports to make the web UIs reachable from the host.

The --network hadoop option attaches the container to the virtual bridge network named hadoop, which provides automatic DNS resolution between the containers.

sudo docker run -it --network hadoop -h "h01" --name "h01" -p 9870:9870 -p 8088:8088 newuhadoop /bin/bash

If the command succeeds, you end up at a root shell inside h01.

The remaining four commands are almost identical; run each one in its own terminal:

sudo docker run -it --network hadoop -h "h02" --name "h02" newuhadoop /bin/bash


sudo docker run -it --network hadoop -h "h03" --name "h03" newuhadoop /bin/bash


sudo docker run -it --network hadoop -h "h04" --name "h04" newuhadoop /bin/bash


sudo docker run -it --network hadoop -h "h05" --name "h05" newuhadoop /bin/bash

If you accidentally exit a container, re-running the command above will fail because the container name is already taken. The following operations may help.

Prefix the commands with sudo if you are not root.

Start a container:
sudo docker start h01

Enter a container:
docker exec -it <container ID or name> /bin/bash

Stop a container:
docker stop h01

Delete a container:
docker rm h01

In the end, sudo docker ps should show all five containers running.

Running out of disk space

After the steps above, the disk space originally allocated to the virtual machine (20 GB by default) may no longer be enough.

VMware: Settings > Hard Disk > Expand > increase the size

Ubuntu: Disks > select the disk in use > settings > Resize
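
If you prefer the command line to the Disks GUI, a rough sketch looks like the following (the device and partition names are assumptions — check yours with lsblk first, and note that growpart comes from the cloud-guest-utils package):

lsblk                        # find the disk and the partition holding /
sudo growpart /dev/sda 3     # grow partition 3 into the newly added space
sudo resize2fs /dev/sda3     # grow the ext4 filesystem to fill the partition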

Starting the Hadoop cluster

Back to the main topic.

Next, start the Hadoop cluster from inside the h01 container.

First format the NameNode — without this step HDFS will not come up:

root@h01:/# cd /usr/local/hadoop/bin
root@h01:/usr/local/hadoop/bin#
root@h01:/usr/local/hadoop/bin# ./hadoop namenode -format

Go to Hadoop's sbin directory:

root@h01:/# cd /usr/local/hadoop/sbin/
root@h01:/usr/local/hadoop/sbin#

Start everything:

root@h01:/usr/local/hadoop/sbin# ./start-all.sh 
root@h01:/usr/local/hadoop/sbin# ./start-all.sh 
Starting namenodes on [h01]
h01: Warning: Permanently added 'h01,172.18.0.2' (ECDSA) to the list of known hosts.
Starting datanodes
h02: Warning: Permanently added 'h02,172.18.0.3' (ECDSA) to the list of known hosts.
h05: Warning: Permanently added 'h05,172.18.0.6' (ECDSA) to the list of known hosts.
h03: Warning: Permanently added 'h03,172.18.0.4' (ECDSA) to the list of known hosts.
h04: Warning: Permanently added 'h04,172.18.0.5' (ECDSA) to the list of known hosts.
h04: WARNING: /usr/local/hadoop/logs does not exist. Creating.
h02: WARNING: /usr/local/hadoop/logs does not exist. Creating.
h03: WARNING: /usr/local/hadoop/logs does not exist. Creating.
h05: WARNING: /usr/local/hadoop/logs does not exist. Creating.
Starting secondary namenodes [h01]
Starting resourcemanager
Starting nodemanagers
root@h01:/usr/local/hadoop/sbin#

Now open ports 8088 and 9870 on the host in a browser to see the YARN and HDFS monitoring pages.
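
Before reaching for the browser, you can also confirm from the shell that the daemons are up (an optional check of my own, not part of the original steps):

jps                                    # inside h01: NameNode, SecondaryNameNode, ResourceManager, DataNode, NodeManager
curl -s http://localhost:9870 | head   # on the host: the NameNode web UI should answer on the published port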

If you accidentally exit, stop the cluster before starting it again:

cd /usr/local/hadoop/sbin
./stop-all.sh

Use ./hadoop dfsadmin -report to check the status of the distributed file system.

root@h01:/usr/local/hadoop/bin# cd /usr/local/hadoop/sbin/
root@h01:/usr/local/hadoop/sbin# ./start-all.sh
Starting namenodes on [h01]
h01: Warning: Permanently added 'h01,172.18.0.2' (ECDSA) to the list of known hosts.
Starting datanodes
h02: Warning: Permanently added 'h02,172.18.0.3' (ECDSA) to the list of known hosts.
h05: Warning: Permanently added 'h05,172.18.0.6' (ECDSA) to the list of known hosts.
h03: Warning: Permanently added 'h03,172.18.0.4' (ECDSA) to the list of known hosts.
h04: Warning: Permanently added 'h04,172.18.0.5' (ECDSA) to the list of known hosts.
h04: WARNING: /usr/local/hadoop/logs does not exist. Creating.
h02: WARNING: /usr/local/hadoop/logs does not exist. Creating.
h03: WARNING: /usr/local/hadoop/logs does not exist. Creating.
h05: WARNING: /usr/local/hadoop/logs does not exist. Creating.
Starting secondary namenodes [h01]
Starting resourcemanager
Starting nodemanagers
root@h01:/usr/local/hadoop/sbin# ./hadoop dfsadmin -report
bash: ./hadoop: No such file or directory
root@h01:/usr/local/hadoop/sbin# cd /usr/local/hadoop/bin/
root@h01:/usr/local/hadoop/bin# ./hadoop dfsadmin -report
WARNING: Use of this script to execute dfsadmin is deprecated.
WARNING: Attempting to execute replacement "hdfs dfsadmin" instead.

Configured Capacity: 154968965120 (144.33 GB)
Present Capacity: 52879798272 (49.25 GB)
DFS Remaining: 52879675392 (49.25 GB)
DFS Used: 122880 (120 KB)
DFS Used%: 0.00%
Replicated Blocks:
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Low redundancy blocks with highest priority to recover: 0
Pending deletion blocks: 0
Erasure Coded Block Groups:
Low redundancy block groups: 0
Block groups with corrupt internal blocks: 0
Missing block groups: 0
Low redundancy blocks with highest priority to recover: 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (5):

Name: 172.18.0.2:9866 (h01)
Hostname: h01
Decommission Status : Normal
Configured Capacity: 30993793024 (28.87 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 18925076480 (17.63 GB)
DFS Remaining: 10575912960 (9.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 34.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 0
Last contact: Thu Nov 02 12:44:51 UTC 2023
Last Block Report: Thu Nov 02 12:40:39 UTC 2023
Num of Blocks: 0


Name: 172.18.0.3:9866 (h02.hadoop)
Hostname: h02
Decommission Status : Normal
Configured Capacity: 30993793024 (28.87 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 18925039616 (17.63 GB)
DFS Remaining: 10575949824 (9.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 34.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 0
Last contact: Thu Nov 02 12:44:48 UTC 2023
Last Block Report: Thu Nov 02 12:40:39 UTC 2023
Num of Blocks: 0


Name: 172.18.0.4:9866 (h03.hadoop)
Hostname: h03
Decommission Status : Normal
Configured Capacity: 30993793024 (28.87 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 18925076480 (17.63 GB)
DFS Remaining: 10575912960 (9.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 34.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 0
Last contact: Thu Nov 02 12:44:51 UTC 2023
Last Block Report: Thu Nov 02 12:40:39 UTC 2023
Num of Blocks: 0


Name: 172.18.0.5:9866 (h04.hadoop)
Hostname: h04
Decommission Status : Normal
Configured Capacity: 30993793024 (28.87 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 18925039616 (17.63 GB)
DFS Remaining: 10575949824 (9.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 34.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 0
Last contact: Thu Nov 02 12:44:48 UTC 2023
Last Block Report: Thu Nov 02 12:40:39 UTC 2023
Num of Blocks: 0


Name: 172.18.0.6:9866 (h05.hadoop)
Hostname: h05
Decommission Status : Normal
Configured Capacity: 30993793024 (28.87 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 18925076480 (17.63 GB)
DFS Remaining: 10575912960 (9.85 GB)
DFS Used%: 0.00%
DFS Remaining%: 34.12%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 0
Last contact: Thu Nov 02 12:44:51 UTC 2023
Last Block Report: Thu Nov 02 12:40:39 UTC 2023
Num of Blocks: 0


root@h01:/usr/local/hadoop/bin#

The Hadoop cluster is now built.

Running the built-in WordCount example

We will use the Hadoop license file as the text to count.

Run:

root@h01:/usr/local/hadoop# cat LICENSE.txt > file1.txt
root@h01:/usr/local/hadoop# ls

Result:

root@h01:/usr/local/hadoop# cat LICENSE.txt > file1.txt
root@h01:/usr/local/hadoop# ls
LICENSE-binary NOTICE-binary README.txt etc include libexec logs share
LICENSE.txt NOTICE.txt bin file1.txt lib licenses-binary sbin
root@h01:/usr/local/hadoop#

Create an input folder in HDFS:

cd /usr/local/hadoop/bin
./hadoop fs -mkdir /input

Upload file1.txt to HDFS:

./hadoop fs -put ../file1.txt /input

List the contents of the input folder in HDFS:

./hadoop fs -ls /input

Result:

root@h01:/usr/local/hadoop# cd /usr/local/hadoop/bin
root@h01:/usr/local/hadoop/bin# ./hadoop fs -mkdir /input
root@h01:/usr/local/hadoop/bin# ./hadoop fs -put ../file1.txt /input
root@h01:/usr/local/hadoop/bin# ./hadoop fs -ls /input
Found 1 items
-rw-r--r-- 2 root supergroup 15217 2023-11-02 13:30 /input/file1.txt
root@h01:/usr/local/hadoop/bin#

Run the wordcount example program:

./hadoop jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.3.jar wordcount /input /output
The output:
root@h01:/usr/local/hadoop/bin# ./hadoop jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.3.jar wordcount /input /output
2023-11-02 13:32:18,936 INFO client.DefaultNoHARMFailoverProxyProvider: Connecting to ResourceManager at h01/172.18.0.2:8032
2023-11-02 13:32:19,203 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1698931074728_0001
2023-11-02 13:32:19,870 INFO input.FileInputFormat: Total input files to process : 1
2023-11-02 13:32:20,082 INFO mapreduce.JobSubmitter: number of splits:1
2023-11-02 13:32:20,177 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1698931074728_0001
2023-11-02 13:32:20,178 INFO mapreduce.JobSubmitter: Executing with tokens: []
2023-11-02 13:32:20,325 INFO conf.Configuration: resource-types.xml not found
2023-11-02 13:32:20,326 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2023-11-02 13:32:20,728 INFO impl.YarnClientImpl: Submitted application application_1698931074728_0001
2023-11-02 13:32:20,758 INFO mapreduce.Job: The url to track the job: http://h01:8088/proxy/application_1698931074728_0001/
2023-11-02 13:32:20,759 INFO mapreduce.Job: Running job: job_1698931074728_0001
2023-11-02 13:32:26,837 INFO mapreduce.Job: Job job_1698931074728_0001 running in uber mode : false
2023-11-02 13:32:26,838 INFO mapreduce.Job: map 0% reduce 0%
2023-11-02 13:32:30,902 INFO mapreduce.Job: map 100% reduce 0%
2023-11-02 13:32:35,949 INFO mapreduce.Job: map 100% reduce 100%
2023-11-02 13:32:35,961 INFO mapreduce.Job: Job job_1698931074728_0001 completed successfully
2023-11-02 13:32:36,047 INFO mapreduce.Job: Counters: 54
File System Counters
FILE: Number of bytes read=12507
FILE: Number of bytes written=577461
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=15313
HDFS: Number of bytes written=9894
HDFS: Number of read operations=8
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
HDFS: Number of bytes read erasure-coded=0
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=2391
Total time spent by all reduces in occupied slots (ms)=1928
Total time spent by all map tasks (ms)=2391
Total time spent by all reduce tasks (ms)=1928
Total vcore-milliseconds taken by all map tasks=2391
Total vcore-milliseconds taken by all reduce tasks=1928
Total megabyte-milliseconds taken by all map tasks=2448384
Total megabyte-milliseconds taken by all reduce tasks=1974272
Map-Reduce Framework
Map input records=270
Map output records=1672
Map output bytes=20756
Map output materialized bytes=12507
Input split bytes=96
Combine input records=1672
Combine output records=657
Reduce input groups=657
Reduce shuffle bytes=12507
Reduce input records=657
Reduce output records=657
Spilled Records=1314
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=76
CPU time spent (ms)=1010
Physical memory (bytes) snapshot=557735936
Virtual memory (bytes) snapshot=5230510080
Total committed heap usage (bytes)=484442112
Peak Map Physical memory (bytes)=327278592
Peak Map Virtual memory (bytes)=2610937856
Peak Reduce Physical memory (bytes)=230457344
Peak Reduce Virtual memory (bytes)=2619572224
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=15217
File Output Format Counters
Bytes Written=9894
root@h01:/usr/local/hadoop/bin#

List the contents of the /output folder in HDFS:

./hadoop fs -ls /output
The result:

root@h01:/usr/local/hadoop/bin# ./hadoop fs -ls /output
Found 2 items
-rw-r--r-- 2 root supergroup 0 2023-11-02 13:32 /output/_SUCCESS
-rw-r--r-- 2 root supergroup 9894 2023-11-02 13:32 /output/part-r-00000
root@h01:/usr/local/hadoop/bin#

View the contents of the part-r-00000 file:

./hadoop fs -cat /output/part-r-00000
root@h01:/usr/local/hadoop/bin# ./hadoop fs -ls /output
Found 2 items
-rw-r--r-- 2 root supergroup 0 2023-11-02 13:32 /output/_SUCCESS
-rw-r--r-- 2 root supergroup 9894 2023-11-02 13:32 /output/part-r-00000
root@h01:/usr/local/hadoop/bin# ./hadoop fs -cat /output/part-r-00000
"AS 2
"Contribution" 1
"Contributor" 1
"Derivative 1
"Legal 1
"License" 1
"License"); 1
"Licensor" 1
"NOTICE" 1
"Not 1
"Object" 1
"Source" 1
"Work" 1
"You" 1
"Your") 1
"[]" 1
"control" 1
"printed 1
"submitted" 1
(50%) 1
(Don't 1
(a) 1
(an 1
(and 1
(b) 1
(c) 1
(d) 1
(except 1
(hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/asio-1.10.2) 1
(hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/rapidxml-1.13) 1
(hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/tr2) 1
(hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/uriparser2) 1
(i) 1
(ii) 1
(iii) 1
(including 3
(or 3
(such 1
(the 1
----------- 1
------------ 2
------------- 2
-------------------------------------- 1
-------------------------------------------------------------------------------- 1
1 1
1. 1
1.0 1
2-Clause 1
2. 1
2.0 2
2.0, 1
2004 1
3-Clause 1
3. 1
4. 1
5. 1
6. 1
7. 1
8. 1
9 1
9. 1
A 1
AND 3
ANY 2
APPENDIX: 1
Accepting 1
Additional 1
Apache 5
Appendix 1
BASIS, 2
BSD 2
Boost 1
CONDITIONS 4
Contribution 3
Contribution(s) 3
Contribution." 1
Contributions) 1
Contributions. 2
Contributor 8
Contributor, 1
Copyright 2
DISTRIBUTION 1
Definitions. 1
Derivative 17
Disclaimer 1
Domain 1
END 1
Entity 3
Entity" 1
FITNESS 1
FOR 2
For 3
Foundation 1
Grant 2
How 1
However, 1
IS" 2
If 2
In 1
January 1
KIND, 2
Legal 3
Liability. 2
License 11
License, 7
License. 11
License; 1
Licensed 1
Licensor 8
Licensor, 1
Limitation 1
MERCHANTABILITY, 1
MIT 1
NON-INFRINGEMENT, 1
NOTICE 5
Notwithstanding 1
OF 3
OR 2
Object 4
PARTICULAR 1
PURPOSE. 1
Patent 1
Public 1
REPRODUCTION, 1
Redistribution. 1
Sections 1
See 2
Software 2
Source 8
Subject 2
Submission 1
TERMS 2
TITLE, 1
The 2
This 3
To 1
Trademarks. 1
USE, 1
Unless 3
Version 3
WARRANTIES 2
WITHOUT 2
Warranty 1
Warranty. 1
We 1
While 1
Work 20
Work, 4
Work. 1
Works 12
Works" 1
Works, 2
Works; 3
You 23
Your 8
[name 1
[yyyy] 1
a 20
above, 1
acceptance 1
accepting 2
act 1
acting 1
acts) 1
add 2
addendum 1
additional 4
additions 1
advised 1
against 1
against, 1
agree 1
agreed 3
agreement 1
all 3
alleging 1
alone 1
along 1
alongside 1
also 1
an 6
and 42
and/or 1
annotations, 1
any 28
appear. 1
applicable 3
applies 1
apply 2
appropriate 1
appropriateness 1
archives. 1
are 6
arising 1
as 15
asio-1.10.2 1
asserted 1
associated 1
assume 1
at 2
attach 1
attached 1
attribution 4
authorized 2
authorship, 2
authorship. 1
available 1
based 1
be 5
been 2
behalf 5
below). 1
beneficial 1
bind 1
boilerplate 1
brackets 1
brackets!) 1
bundles 1
but 5
by 20
by, 3
cannot 1
carry 1
cause 2
changed 1
character 1
charge 1
choose 1
claims 2
class 1
code 1
code, 2
combination 1
comment 1
commercial 1
common 1
communication 3
compiled 1
compliance 1
complies 1
components 2
computer 1
conditions 7
conditions. 1
conditions: 1
configuration 1
consequential 1
consistent 1
conspicuously 1
constitutes 1
construed 1
contained 1
content 1
contents 1
contract 1
contract, 1
contributory 1
control 2
control, 1
controlled 1
conversions 1
copies 1
copy 3
copyright 10
copyright, 1
counterclaim 1
cross-claim 1
customary 1
damages 3
damages, 1
damages. 1
date 1
defend, 1
defined 1
definition, 2
deliberate 1
derived 1
describing 1
description 1
designated 1
determining 1
different 1
direct 2
direct, 1
direction 1
discussing 1
display 1
display, 1
distribute 3
distribute, 2
distributed 3
distribution 3
distribution, 1
do 3
document. 1
documentation 1
documentation, 2
does 1
each 4
easier 1
editorial 1
either 2
elaborations, 1
electronic 1
electronic, 1
enclosed 2
entities 1
entity 3
entity, 1
entity. 2
even 1
event 1
example 1
except 2
excluding 3
executed 1
exercise 1
exercising 1
explicitly 1
express 2
failure 1
fee 1
fields 1
fifty 1
file 6
file, 1
file. 1
filed. 1
files 1
files. 1
files; 1
following 3
for 19
for, 1
form 8
form, 4
form. 1
format. 1
from 3
from) 1
from, 1
generated 2
give 1
goodwill, 1
governing 1
grant 1
granted 2
granting 1
grants 2
grossly 1
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/bloom/* 1
hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc 1
hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h 1
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/util/bulk_crc32_x86.c 1
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/fuse-dfs/util/tree.h 1
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/gmock-1.7.0/*/*.{cc|h} 1
hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/third_party/protobuf/protobuf/cpp_helpers.h 1
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java 1
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/TimeoutFuture.java 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/bootstrap-3.4.1 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/d3-v4.1.1.min.js 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.css 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dataTables.bootstrap.js 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dust-full-2.0.0.min.js 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/dust-helpers-1.1.1.min.js 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery-3.5.1.min.js 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/jquery.dataTables.min.js 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/json-bignum.js 1
hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/static/moment.min.js 1
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/lz4/lz4.{c|h} 1
hadoop-tools/hadoop-sls/src/main/html/css/bootstrap-responsive.min.css 1
hadoop-tools/hadoop-sls/src/main/html/css/bootstrap.min.css 1
hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/bootstrap.min.js 1
hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/d3.v3.js 1
hadoop-tools/hadoop-sls/src/main/html/js/thirdparty/jquery.js 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/node_modules/.bin/r.js 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/dt-1.10.18/* 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jquery 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/compat/{fstatat|openat|unlinkat}.h 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/cJSON.[ch] 1
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/resources/TERMINAL 1
harmless 1
has 2
have 2
hereby 2
herein 1
hold 1
http://www.apache.org/licenses/ 1
http://www.apache.org/licenses/LICENSE-2.0 1
identification 1
identifying 1
if 4
implied, 1
implied. 1
import, 1
improving 1
in 23
inability 1
incidental, 1
include 3
included 2
includes 1
including 5
including, 1
inclusion 2
incorporated 2
incurred 1
indemnify, 1
indemnity, 1
indicated 1
indirect, 2
individual 3
information. 1
informational 1
infringed 1
infringement, 1
institute 1
intentionally 2
interfaces 1
irrevocable 2
is 8
issue 1
its 3
language 1
law 3
lawsuit) 1
least 1
legal 1
liability 2
liability. 1
liable 1
licensable 1
license 5
licenses 1
licenses. 3
licenses/ 1
limitation, 1
limitations 1
limited 4
link 1
lists, 1
litigation 2
loss 1
losses), 1
made 1
made, 1
mailing 1
make, 1
making 1
malfunction, 1
managed 1
management 1
marked 1
marks, 1
may 9
mean 10
means 2
mechanical 1
media 1
medium, 1
meet 1
merely 1
modifications 3
modifications, 3
modified 1
modify 2
modifying 1
more 1
must 4
name 1
name) 1
names 1
names, 1
necessarily 1
negligence), 1
negligent 1
no 2
no-charge, 2
non-exclusive, 2
normally 1
not 11
nothing 1
notice 2
notice, 1
notices 8
object 1
obligations 1
obligations, 1
obtain 1
of 62
of, 3
offer 1
offer, 1
on 11
one 1
only 4
open 1
or 62
or, 1
origin 1
original 2
other 8
otherwise 3
otherwise, 3
out 1
outstanding 1
own 4
owner 4
owner. 1
owner] 1
ownership 2
page" 1
part 4
patent 5
patent, 1
percent 1
perform, 1
permission 1
permissions 3
perpetual, 2
pertain 2
places: 1
possibility 1
power, 1
preferred 1
prepare 1
product 2
prominent 1
provide 1
provided 5
provides 2
publicly 2
purpose 2
purposes 4
rapidxml-1.13 1
readable 1
reason 1
reasonable 1
received 1
recipients 1
recommend 1
redistributing 2
regarding 1
remain 1
replaced 1
represent, 1
representatives, 1
reproduce 1
reproduce, 1
reproducing 1
reproduction, 3
required 4
responsibility, 1
responsible 1
result 1
resulting 1
retain, 1
revisions, 1
rights 1
risks 1
royalty-free, 2
same 1
section 1
section) 1
sell, 2
sent 1
separable 1
separate 1
service 1
shall 15
shares, 1
should 1
software 2
sole 1
solely 1
source 3
source, 1
special, 1
specific 1
state 1
stated 2
statement 1
stating 1
stoppage, 1
sublicense, 1
submit 1
submitted 2
submitted. 1
subsequently 1
such 17
summarizes 1
supersede 1
support, 1
syntax 1
systems 1
systems, 1
terminate 1
terms 7
text 5
that 22
the 97
their 3
then 2
theory, 1
thereof 1
thereof, 2
thereof. 1
these 1
third-party 3
this 16
those 4
through 1
to 39
tort 1
tr2 1
tracking 1
trade 1
trademark, 1
trademarks, 1
transfer 1
transformation 1
translation 1
types. 1
under 10
union 1
unless 1
uriparser2 1
use 5
use, 4
using 1
various 1
verbal, 1
version 1
warranties 1
warranty 1
warranty, 1
was 1
where 1
wherever 1
whether 4
which 2
whole, 2
whom 1
with 11
within 8
without 3
work 5
work, 2
work. 1
works 1
worldwide, 2
writing 1
writing, 3
written 1
you 2
your 4

That wraps up the Hadoop part.

Installing HBase

Install HBase on top of the Hadoop cluster.

Download HBase 3.0.0-alpha-4:

cd /

wget https://downloads.apache.org/hbase/3.0.0-alpha-4/hbase-3.0.0-alpha-4-bin.tar.gz

Extract it into /usr/local:

tar -zxvf hbase-3.0.0-alpha-4-bin.tar.gz -C /usr/local/

Edit /etc/profile and append the HBase environment variables below:

vim /etc/profile

export HBASE_HOME=/usr/local/hbase-3.0.0-alpha-4
export PATH=$PATH:$HBASE_HOME/bin

Make the environment variables take effect:

root@h01:/usr/local# source /etc/profile

Use ssh h02 (and so on) to enter each of the other four containers and make the same change; see the sketch below for a shortcut.

Disconnect the SSH session with:

exit

In other words, those two environment-variable lines have to be appended to /etc/profile in every container.
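
A rough shortcut (my own, not from the original course material) that appends the two lines on all four workers from h01 over SSH:

for h in h02 h03 h04 h05; do
  ssh root@$h 'cat >> /etc/profile' <<'EOF'
export HBASE_HOME=/usr/local/hbase-3.0.0-alpha-4
export PATH=$PATH:$HBASE_HOME/bin
EOF
done

Remember to run source /etc/profile (or reconnect) on each node before using the hbase command there.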

Adjust the configuration in the directory /usr/local/hbase-3.0.0-alpha-4/conf.

Edit hbase-env.sh and append:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HBASE_MANAGES_ZK=true

Change hbase-site.xml to:

<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://h01:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master</name>
<value>h01:60000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>h01,h02,h03,h04,h05</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hadoop/zoodata</value>
</property>
</configuration>

Change the regionservers file to:

h01
h02
h03
h04
h05

Use scp to copy the configured HBase to the other 4 containers:

root@h01:~# scp -r /usr/local/hbase-3.0.0-alpha-4 root@h02:/usr/local/
root@h01:~# scp -r /usr/local/hbase-3.0.0-alpha-4 root@h03:/usr/local/
root@h01:~# scp -r /usr/local/hbase-3.0.0-alpha-4 root@h04:/usr/local/
root@h01:~# scp -r /usr/local/hbase-3.0.0-alpha-4 root@h05:/usr/local/

Start HBase:

cd /usr/local/hbase-3.0.0-alpha-4/bin
./start-hbase.sh

root@h01:/usr/local/hbase-3.0.0-alpha-4/conf# cd /usr/local/hbase-3.0.0-alpha-4/bin
root@h01:/usr/local/hbase-3.0.0-alpha-4/bin# ./start-hbase.sh
h01: running zookeeper, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-zookeeper-h01.out
h05: running zookeeper, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-zookeeper-h05.out
h04: running zookeeper, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-zookeeper-h04.out
h02: running zookeeper, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-zookeeper-h02.out
h03: running zookeeper, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-zookeeper-h03.out
running master, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase--master-h01.out
h02: running regionserver, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-regionserver-h02.out
h01: running regionserver, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-regionserver-h01.out
h05: running regionserver, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-regionserver-h05.out
h04: running regionserver, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-regionserver-h04.out
h03: running regionserver, logging to /usr/local/hbase-3.0.0-alpha-4/bin/../logs/hbase-root-regionserver-h03.out
root@h01:/usr/local/hbase-3.0.0-alpha-4/bin#
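
As an optional sanity check (my own habit, not in the original notes), jps on h01 should now show the HBase daemons alongside the Hadoop ones:

jps    # expect HMaster, HRegionServer and HQuorumPeer in addition to the HDFS/YARN processes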

Open the HBase shell:

./hbase shell

root@h01:/usr/local/hbase-3.0.0-alpha-4/bin# ./hbase shell
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/book.html#shell
Version 3.0.0-alpha-4, re44cc02c75ecae7ece845f04722eb16b7528393f, Sat May 27 16:02:34 UTC 2023
Took 0.0014 seconds
hbase:001:0>

Testing HBase

Create a table named member:

create 'member','id','address','info'

hbase:001:0> create 'member','id','address','info'
Created table member
Took 3.0668 seconds
=> Hbase::Table - member
hbase:002:0>

Insert some data and inspect the table:

put 'member', 'debugo','id','11'

hbase:002:0>  put 'member', 'debugo','id','11'
Took 0.3332 seconds
hbase:003:0>
put 'member', 'debugo','info:age','27'
hbase:003:0>  put 'member', 'debugo','info:age','27'
Took 0.0249 seconds
hbase:004:0>

count 'member'
hbase:004:0>  count 'member'
1 row(s)
Took 0.0975 seconds
=> 1
hbase:005:0>
scan 'member'
hbase:005:0> scan 'member'
ROW COLUMN+CELL
debugo column=id:, timestamp=2023-11-03T01:05:55.576, value=11
debugo column=info:age, timestamp=2023-11-03T01:28:38.763, value=
27
1 row(s)
Took 0.0724 seconds
hbase:006:0>
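
A couple of extra HBase shell commands that may be useful while poking around (standard commands, shown here as a quick reference rather than part of the original walkthrough):

get 'member', 'debugo'     # fetch a single row
disable 'member'
drop 'member'              # clean up the test table when you are done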

Installing Spark

Install Spark on top of Hadoop.

Download Spark 3.3.3 (back in container h01):

wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-3.3.3/spark-3.3.3-bin-hadoop3.tgz

Extract it into /usr/local:

tar -zxvf spark-3.3.3-bin-hadoop3.tgz -C /usr/local/

Rename the folder:

cd /usr/local/
mv spark-3.3.3-bin-hadoop3 spark-3.3.3

Edit /etc/profile again and append the Spark environment variables below:

export SPARK_HOME=/usr/local/spark-3.3.3
export PATH=$PATH:$SPARK_HOME/bin

Make the environment variables take effect:

source /etc/profile

As with HBase, use ssh h02 (and so on) to enter each of the other four containers and make the same change — the two environment-variable lines have to be appended to /etc/profile in every container.

Adjust the configuration in the directory /usr/local/spark-3.3.3/conf.

Rename the template:

root@h01:/usr/local/spark-3.3.3/conf# mv spark-env.sh.template spark-env.sh

Edit spark-env.sh and append:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/share/scala

export SPARK_MASTER_HOST=h01
export SPARK_MASTER_IP=h01
export SPARK_WORKER_MEMORY=4g

Rename another file.

Note: in Spark 3.x this directory no longer contains slaves.template — the worker list file was renamed from slaves to workers:

root@h01:/usr/local/spark-3.3.3/conf# ls
fairscheduler.xml.template metrics.properties.template spark-env.sh
log4j2.properties.template spark-defaults.conf.template workers.template

So instead of creating a slaves file, rename and edit workers:

root@h01:/usr/local/spark-3.3.3/conf# mv workers.template workers

Change workers as follows:

h01
h02
h03
h04
h05

Use scp to copy the configured Spark to the other 4 containers:

root@h01:/usr/local# scp -r /usr/local/spark-3.3.3 root@h02:/usr/local/
root@h01:/usr/local# scp -r /usr/local/spark-3.3.3 root@h03:/usr/local/
root@h01:/usr/local# scp -r /usr/local/spark-3.3.3 root@h04:/usr/local/
root@h01:/usr/local# scp -r /usr/local/spark-3.3.3 root@h05:/usr/local/

Start Spark:

root@h01:/usr/local/spark-3.3.3/sbin# ./start-all.sh

Result:

root@h01:/usr/local/spark-3.3.3/conf# cd /usr/local/spark-3.3.3/sbin
root@h01:/usr/local/spark-3.3.3/sbin# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark-3.3.3/logs/spark--org.apache.spark.deploy.master.Master-1-h01.out
h01: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.3.3/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-h01.out
h03: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.3.3/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-h03.out
h05: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.3.3/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-h05.out
h04: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.3.3/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-h04.out
h02: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark-3.3.3/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-h02.out
root@h01:/usr/local/spark-3.3.3/sbin#
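
To confirm the cluster actually accepts jobs, you can submit the bundled SparkPi example against the standalone master (a quick smoke test of my own; the examples jar path below is the usual location in a Spark 3.3.3 / Scala 2.12 distribution — adjust it if yours differs):

/usr/local/spark-3.3.3/bin/spark-submit \
  --master spark://h01:7077 \
  --class org.apache.spark.examples.SparkPi \
  /usr/local/spark-3.3.3/examples/jars/spark-examples_2.12-3.3.3.jar 100
# the driver output should end with a line like "Pi is roughly 3.14..."

The Spark master web UI listens on port 8080 of h01; it was not published to the host in the docker run command above, so reach it from inside the hadoop network or add -p 8080:8080 when creating h01.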

The HDFS re-format problem

  • Re-formatting means that all data in the cluster will be deleted; think about backing up or migrating data before formatting.

  • On the master (the namenode), first delete the contents of Hadoop's temporary storage directory tmp, the namenode's persistent metadata directory dfs/name, and the Hadoop log directory logs (note: delete the contents of those directories, not the directories themselves).

  • Do the same on all data nodes (the datanodes): delete the contents of the temporary directory tmp, the dfs directories and the log directory. A sketch of these clean-up commands is given at the end of this section.

  • Format a new distributed file system:

root@h01:/usr/local/hadoop/bin# ./hadoop namenode -format

Notes:

  • Hadoop's temporary storage directory tmp is the hadoop.tmp.dir property in core-site.xml (default /tmp/hadoop-${user.name}). If hadoop.tmp.dir is not configured, formatting creates a directory under /tmp; for example, if Hadoop is installed and configured under the cloud user, the temporary directory ends up at /tmp/hadoop-cloud.
  • The namenode metadata directory is the dfs.namenode.name.dir property in hdfs-site.xml (default ${hadoop.tmp.dir}/dfs/name); if it is not configured, Hadoop likewise creates it during formatting. Crucially, before formatting you must clear dfs/name on all child nodes (the DataNodes), otherwise their daemons will fail to start when Hadoop comes up. The reason is that every format of the namenode generates a new clusterID and namespaceID in the VERSION file under dfs/name/current; if a child node's dfs/name/current still exists, formatting does not recreate it, so the child's clusterID and namespaceID no longer match the namenode's, and Hadoop fails to start.
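
The sketch below ties those notes to the directories configured earlier in this guide (/home/hadoop3/hadoop/... and /usr/local/hadoop/logs); it is a rough outline rather than a script from the course, so double-check the paths against your own config before running it on every node:

# run on h01 and on each of h02..h05 before re-formatting
rm -rf /home/hadoop3/hadoop/tmp/*          # hadoop.tmp.dir
rm -rf /home/hadoop3/hadoop/hdfs/name/*    # namenode metadata directory
rm -rf /home/hadoop3/hadoop/hdfs/data/*    # datanode data directory
rm -rf /usr/local/hadoop/logs/*            # log directory

# then, on h01 only:
/usr/local/hadoop/bin/hadoop namenode -format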


Final words

It took 4 days, but the whole Hadoop installation is finally done. There were some issues caused by version differences, plus a handful of errors along the way — when searching for answers it really pays to consult several sources. This deployment was also a good way to get comfortable with the Ubuntu command line, going from no background at all to gradually understanding what each step does.

Read the comments under tutorials: the people who write them sometimes cannot see their own blind spots, and chances are someone in the comment section hit the same problem you did.

When something breaks, try to think it through yourself first — the process is painful but worthwhile, and finishing is extremely satisfying.

Done, time to slack off~

I started drafting this on Oct 13 with the Ubuntu VM installation, began the Docker deployment on Oct 30, and then spent 4 straight days building Hadoop (there is surprisingly little material online to crib from), averaging about 6 hours a day. Time to go stretch my legs~