SpringCloud Base Framework Setup – 11: docker + hadoop + flume File Monitoring

This is one of a series of articles on building a SpringCloud base framework; the series covers integrating Shiro, MySQL master-slave replication, Seata, Activiti, Drools, common Hadoop big-data components, keepalived + nginx HTTPS configuration, and more.

1. Install and configure docker + hadoop + spark (see the earlier posts in this series for reference)
2. Install Flume 1.9.0: extract the archive with tar -zxvf ….bin.tar.gz
Directory layout under /usr/local/flume:
flume/      the Flume installation directory
workspace/  holds the agent configuration files
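
A minimal sketch of this step, assuming the standard Apache archive name apache-flume-1.9.0-bin.tar.gz and the archive.apache.org mirror (both assumed, not given in the original):

   cd /usr/local/flume
   # assumed download URL; use whatever mirror you prefer
   wget https://archive.apache.org/dist/flume/1.9.0/apache-flume-1.9.0-bin.tar.gz
   tar -zxvf apache-flume-1.9.0-bin.tar.gz
   mv apache-flume-1.9.0-bin flume   # installation directory
   mkdir -p workspace                # agent configuration files live here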
3. Run the agent:
./flume-ng agent --conf ../conf/ --name a3 --conf-file ../../workspace/flume-dir-hdfs.conf
To run it in the background:
nohup ./flume-ng agent --conf ../conf/ --name a3 --conf-file ../../workspace/flume-dir-hdfs.conf -Dflume.root.logger=INFO,console &
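
To confirm the background agent actually came up (a quick sanity check, not part of the original steps):

   ps -ef | grep flume-ng   # the agent JVM should be listed
   tail -f nohup.out        # console output is captured here by nohup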

4. Configuration file: flume-dir-hdfs.conf

a3.sources = r3
a3.sinks = k3
a3.channels = c3

# Source: a spooling-directory source watches this directory for new files;
# files that have been ingested are renamed with the .COMPLETED suffix
a3.sources.r3.type = spooldir
a3.sources.r3.spoolDir = /root/bigdatas/softs/flume
a3.sources.r3.fileSuffix = .COMPLETED
a3.sources.r3.fileHeader = true

# Sink: write events to HDFS, one directory per day and hour
a3.sinks.k3.type = hdfs
a3.sinks.k3.hdfs.path = hdfs://master:9000/flume/upload/%Y%m%d/%H

# prefix for uploaded files
a3.sinks.k3.hdfs.filePrefix = upload-
# round the timestamp used in the path down to the hour
a3.sinks.k3.hdfs.round = true
a3.sinks.k3.hdfs.roundValue = 1
a3.sinks.k3.hdfs.roundUnit = hour
a3.sinks.k3.hdfs.useLocalTimeStamp = true
a3.sinks.k3.hdfs.batchSize = 100
# write plain text instead of a SequenceFile
a3.sinks.k3.hdfs.fileType = DataStream
# roll a new file every 600 s or at about 128 MB; rollCount = 0 disables count-based rolling
a3.sinks.k3.hdfs.rollInterval = 600
a3.sinks.k3.hdfs.rollSize = 134217700
a3.sinks.k3.hdfs.rollCount = 0

# avoid premature file rolls when HDFS reports blocks as under-replicated
a3.sinks.k3.hdfs.minBlockReplicas = 1

# Channel: in-memory buffer between source and sink
a3.channels.c3.type = memory
a3.channels.c3.capacity = 1000
a3.channels.c3.transactionCapacity = 100

# bind the source and the sink to the channel
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3
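
With the agent running, a quick end-to-end test (the file name test.log is just an example):

   echo "hello flume" > /root/bigdatas/softs/flume/test.log
   # within a few seconds the file should be renamed to test.log.COMPLETED
   # and its contents should land in the current hour's directory:
   hdfs dfs -ls -R /flume/upload/$(date +%Y%m%d)/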

5. Monitored directory
/root/bigdatas/softs/flume (inside the container; it is mapped from the host directory /usr/local/docker/bigdatas/softs)
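
The mapping comes from the container's startup command, roughly like this (the container name and image are assumptions; only the -v mapping is taken from the post):

   docker run -d --name hadoop-master \
     -v /usr/local/docker/bigdatas/softs:/root/bigdatas/softs \
     my-hadoop-image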
6. Check the result via http://master:50070 (Utilities > Browse the file system). If the files uploaded to HDFS are not visible there, something is wrong;
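
In that case, check from the command line and look at the agent's console output (captured in nohup.out when started as above); a ClassNotFoundException or NoClassDefFoundError there usually points to the missing jars from step 8:

   hdfs dfs -ls -R /flume/upload
   grep -iE "exception|error" nohup.out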
7. Commands to start and restart the services; see:
https://www.cnblogs.com/wenq001/p/10196201.html
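
For reference, the standard Hadoop 2.x scripts in $HADOOP_HOME/sbin (listed from general Hadoop knowledge, not taken from the linked page):

   stop-dfs.sh && start-dfs.sh     # restart HDFS (NameNode + DataNodes)
   stop-yarn.sh && start-yarn.sh   # restart YARN, if it is in use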
8. Jars that need to be copied [hadoop -> flume/lib]:
From $HADOOP_HOME/share/hadoop/hdfs and $HADOOP_HOME/share/hadoop/common: all jars except the test jars;
From $HADOOP_HOME/share/hadoop/tools/lib/:
cp commons-io-2.4.jar /usr/local/flume/flume/lib/
cp htrace-core-3.1.0-incubating.jar /usr/local/flume/flume/lib/
cp commons-configuration-1.6.jar /usr/local/flume/flume/lib/
cp hadoop-auth-2.7.6.jar /usr/local/flume/flume/lib/
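
A sketch of the bulk copy for the hdfs and common jars, assuming HADOOP_HOME is set (the test-jar exclusion pattern is an assumption about the naming):

   FLUME_LIB=/usr/local/flume/flume/lib
   for d in hdfs common; do
     find $HADOOP_HOME/share/hadoop/$d -maxdepth 2 -name "*.jar" \
       ! -name "*test*" -exec cp {} $FLUME_LIB/ \;
   done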
9. If the Hadoop DataNode does not start successfully and keeps restarting in a loop
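
As a first diagnostic step in that situation (a general suggestion; the fix depends on what the log shows), read the DataNode log:

   tail -n 100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log
   # a frequent cause is a clusterID mismatch between NameNode and DataNode
   # after the NameNode has been reformatted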
