LieBrother

When your talent cannot yet support your ambition, settle down and study; when your ability cannot yet carry your goals, calm down and gain experience.



Configuring HBase in Standalone Mode

Posted on 2016-04-02   |   Category: HBase

1. Set the Java path and PID directory in hbase-env.sh

export JAVA_HOME=/csh/link/jdk
export HBASE_PID_DIR=/csh/hadoop/hbase/pids

2. Edit hbase-site.xml

<property>
<name>hbase.rootdir</name>
<value>file:///csh/hadoop/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/csh/hadoop/zookeeper</value>
</property>
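
Note that these <property> elements must sit inside the <configuration> root element of the file; a minimal sketch of the complete hbase-site.xml (values as above):

<configuration>
<property>
<name>hbase.rootdir</name>
<value>file:///csh/hadoop/hbase</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/csh/hadoop/zookeeper</value>
</property>
</configuration>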

3. Start HBase

bin/start-hbase.sh

4. Connect to HBase

bin/hbase shell
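
To sanity-check the standalone instance, a quick smoke test from the shell (the table name and column family here are arbitrary examples):

create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
disable 'test'
drop 'test'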

Letting Go

Posted on 2016-03-31   |   Category: Life

Let go of gains and losses
Let go of inferiority
Let go of negativity
Let go of laziness
Let go of anxiety
Let go of troubles
Let go of complaints
Let go of fear
Let go of hesitation

Configuring HA on Hadoop 2.x

Posted on 2016-03-27   |   Category: Hadoop

Node role reference table

Host  | NameNode | DataNode | Zookeeper | ZKFC | JournalNode | ResourceManager | NodeManager
node1 | 1        |          | 1         | 1    |             | 1               |
node2 | 1        | 1        | 1         | 1    | 1           |                 | 1
node3 |          | 1        | 1         |      | 1           |                 | 1
node4 |          | 1        |           |      | 1           |                 | 1

Configuration files:
core-site.xml

<property>
<name>hadoop.tmp.dir</name>
<value>/csh/hadoop/hadoop2.7.2/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/csh/hadoop/hadoop2.7.2/journal</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>node1:2181,node2:2181,node3:2181</value>
</property>

hdfs-site.xml

<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>node1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>node2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>node1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>node2:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node2:8485;node3:8485;node4:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_dsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
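
The sshfence method needs passwordless SSH between the two NameNode hosts using the key configured above; a sketch of the key setup, assuming root and the node1/node2 hostnames from the table:

# Run on node1 and on node2
ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa
ssh-copy-id -i /root/.ssh/id_dsa.pub root@node1
ssh-copy-id -i /root/.ssh/id_dsa.pub root@node2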

mapred-site.xml

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

yarn-site.xml

<property>
<name>yarn.resourcemanager.hostname</name>
<value>node1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

masters

node2

slaves

node2
node3
node4
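
Every node needs the same configuration files; one way to push them out (a sketch, assuming the install path that appears in the logs later in this post):

for host in node2 node3 node4; do
  scp /csh/software/hadoop-2.7.2/etc/hadoop/* root@$host:/csh/software/hadoop-2.7.2/etc/hadoop/
done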

Startup

For ZooKeeper installation, see: ZooKeeper Installation and Cluster Setup

1. Start ZooKeeper (run the following on node1, node2, and node3, from the zookeeper/bin directory)

./zkServer.sh start

Check that it started successfully with:

./zkServer.sh status

On success you should see output like:

ZooKeeper JMX enabled by default
Using config: /csh/software/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower //one node will report leader; the other two report follower

2. Start the JournalNodes (run the following on node1; hadoop-daemons.sh launches them on the hosts listed in slaves)

./hadoop-daemons.sh start journalnode

Run jps on node2, node3, and node4 to check that the JournalNode started; on success you should see:

2601 JournalNode

3. Format ZKFC to create the HA znode in ZooKeeper (run on node1)

hdfs zkfc -formatZK

After formatting succeeds, verify in ZooKeeper:

./zkCli.sh -server node1:2181
[zk: node1:2181(CONNECTED) 0] ls /hadoop-ha
[mycluster]

4. Format HDFS (run on node1)

hadoop namenode -format

5. Start the NameNodes. First start the active node (run on node1):

[root@node1 sbin]# hadoop-daemon.sh start namenode

On node2, sync the NameNode metadata and start the standby NameNode:

#Sync the NameNode metadata to node2
hdfs namenode -bootstrapStandby
#Start the NameNode on node2 as standby
hadoop-daemon.sh start namenode

6. Start the DataNodes (run on node1; again, the daemons start on the hosts listed in slaves)

./hadoop-daemons.sh start datanode

7. Start YARN (run on the ResourceManager host; here that is node1):

./start-yarn.sh

8. Start ZKFC (run on both node1 and node2)

hadoop-daemon.sh start zkfc
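
As a quick sanity check, hdfs haadmin reports each NameNode's HA state (nn1 and nn2 as defined in hdfs-site.xml); one should report active and the other standby:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2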

Process status on each node (jps)

//node1
17827 QuorumPeerMain
18179 NameNode
25431 Jps
19195 ResourceManager
19985 DFSZKFailoverController

//node2
9088 QuorumPeerMain
13250 Jps
9171 JournalNode
10360 NodeManager
10985 DFSZKFailoverController
9310 NameNode
9950 DataNode

//node3
7108 NodeManager
7926 Jps
6952 DataNode
6699 JournalNode
6622 QuorumPeerMain

//node4
6337 JournalNode
6755 NodeManager
7574 Jps
6603 DataNode
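
A simple failover test, as a sketch: kill the active NameNode and confirm the standby takes over (the PID comes from the node1 jps listing above):

# On node1, the active NameNode
kill -9 18179
# Then verify that nn2 became active
hdfs haadmin -getServiceState nn2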

ZooKeeper Installation and Cluster Setup

Posted on 2016-03-27   |   Category: ZooKeeper

This post sets up a three-node ZooKeeper cluster on the hosts node1, node2, and node3.
Download the tarball from the official site, or via the link below:
zookeeper-3.4.8.tar.gz

1. Extract the tarball

tar -xvf zookeeper-3.4.8.tar.gz

2. Edit the configuration. In the conf directory there is a zoo_sample.cfg file; copy it to zoo.cfg:

cp zoo_sample.cfg zoo.cfg

Then edit zoo.cfg:

tickTime=2000
initLimit=10
syncLimit=5
#ZooKeeper data directory
dataDir=/csh/hadoop/zookeeper/data
#ZooKeeper transaction log directory
dataLogDir=/csh/hadoop/zookeeper/datalog
clientPort=2181
#Cluster members (server.X=host:peerPort:electionPort)
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

3. Create the dataDir and dataLogDir directories

mkdir -p /csh/hadoop/zookeeper/data
mkdir -p /csh/hadoop/zookeeper/datalog

4. Following the server.X entries in zoo.cfg, create a myid file in dataDir on each host containing the matching number

#Run on node1
echo "1" > /csh/hadoop/zookeeper/data/myid
#Run on node2
echo "2" > /csh/hadoop/zookeeper/data/myid
#Run on node3
echo "3" > /csh/hadoop/zookeeper/data/myid

5. Start ZooKeeper on all three hosts:

#From the zookeeper/bin directory
./zkServer.sh start

6. Verify the cluster

#node1
[root@node1 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /csh/software/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower

#node2
[root@node2 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /csh/software/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader

#node3
[root@node3 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /csh/software/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: follower
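
As a further check, you can connect to the whole ensemble with the CLI, or query a server's stat four-letter command (assuming nc is installed):

./zkCli.sh -server node1:2181,node2:2181,node3:2181
echo stat | nc node1 2181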

Configuring HDFS and MapReduce on Hadoop 2.x to Run WordCount

Posted on 2016-03-25   |   Category: Hadoop

Host  | HDFS                         | MapReduce
node1 | NameNode                     | ResourceManager
node2 | SecondaryNameNode & DataNode | NodeManager
node3 | DataNode                     | NodeManager
node4 | DataNode                     | NodeManager

1. Configure hadoop-env.sh

export JAVA_HOME=/csh/link/jdk

2. Configure core-site.xml

<property>
<name>fs.defaultFS</name>
<value>hdfs://node1:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/csh/hadoop/hadoop2.7.2/tmp</value>
</property>

3. Configure hdfs-site.xml

<property>
<name>dfs.namenode.http-address</name>
<value>node1:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node2:50090</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/csh/hadoop/hadoop2.7.2/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/csh/hadoop/hadoop2.7.2/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>

4. Configure mapred-site.xml

<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>

5. Configure yarn-site.xml

<property>
<name>yarn.resourcemanager.hostname</name>
<value>node1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

6. Configure masters

node2

7. Configure slaves

node2
node3
node4

8. Start Hadoop

bin/hadoop namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh
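
If startup succeeded, jps on each host should match the role table above; a sketch of what to expect (PIDs omitted):

# node1
jps    # NameNode, ResourceManager
# node2
jps    # SecondaryNameNode, DataNode, NodeManager
# node3 and node4
jps    # DataNode, NodeManager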

9. Run the WordCount program

# Create the file wc.txt
echo "I love Java I love Hadoop I love BigData Good Good Study, Day Day Up" > wc.txt
# Create the target directory in HDFS
hdfs dfs -mkdir -p /input/wordcount/
# Upload wc.txt to HDFS
hdfs dfs -put wc.txt /input/wordcount
# Run the WordCount program
hadoop jar /csh/software/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input/wordcount/ /output/wordcount/

10. Results

[root@node1 sbin]# hadoop jar /csh/software/hadoop-2.7.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input/wordcount/ /output/wordcount/
16/03/24 19:26:48 INFO client.RMProxy: Connecting to ResourceManager at node1/192.161.11:8032
16/03/24 19:26:56 INFO input.FileInputFormat: Total input paths to process : 1
16/03/24 19:26:56 INFO mapreduce.JobSubmitter: number of splits:1
16/03/24 19:26:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1458872237175_0001
16/03/24 19:26:59 INFO impl.YarnClientImpl: Submitted application application_1458872237175_0001
16/03/24 19:27:00 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1458872237175_0001/
16/03/24 19:27:00 INFO mapreduce.Job: Running job: job_1458872237175_0001
16/03/24 19:28:13 INFO mapreduce.Job: Job job_1458872237175_0001 running in uber mode : false
16/03/24 19:28:13 INFO mapreduce.Job: map 0% reduce 0%
16/03/24 19:30:07 INFO mapreduce.Job: map 100% reduce 0%
16/03/24 19:31:13 INFO mapreduce.Job: map 100% reduce 33%
16/03/24 19:31:16 INFO mapreduce.Job: map 100% reduce 100%
16/03/24 19:31:23 INFO mapreduce.Job: Job job_1458872237175_0001 completed successfully
16/03/24 19:31:24 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=106
        FILE: Number of bytes written=235387
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=174
        HDFS: Number of bytes written=64
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=116501
        Total time spent by all reduces in occupied slots (ms)=53945
        Total time spent by all map tasks (ms)=116501
        Total time spent by all reduce tasks (ms)=53945
        Total vcore-milliseconds taken by all map tasks=116501
        Total vcore-milliseconds taken by all reduce tasks=53945
        Total megabyte-milliseconds taken by all map tasks=119297024
        Total megabyte-milliseconds taken by all reduce tasks=55239680
    Map-Reduce Framework
        Map input records=4
        Map output records=15
        Map output bytes=129
        Map output materialized bytes=106
        Input split bytes=105
        Combine input records=15
        Combine output records=9
        Reduce input groups=9
        Reduce shuffle bytes=106
        Reduce input records=9
        Reduce output records=9
        Spilled Records=18
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=1468
        CPU time spent (ms)=6780
        Physical memory (bytes) snapshot=230531072
        Virtual memory (bytes) snapshot=4152713216
        Total committed heap usage (bytes)=134795264
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=69
    File Output Format Counters
        Bytes Written=64
[root@node1 sbin]# hdfs dfs -cat /output/wordcount/*
BigData 1
Day 2
Good 2
Hadoop 1
I 3
Java 1
Study, 1
Up 1
love 3
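
Note that MapReduce refuses to write into an existing output directory, so remove it before rerunning the job:

hdfs dfs -rm -r /output/wordcount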