#### I. Environment Setup
Hadoop cluster 2.7.1
Flume 1.7.0
Spark cluster: spark-2.0.1-bin-hadoop2.7.tgz
For how to set up the environment, see my earlier posts; I won't repeat it here.
Three machines: master, slave1, slave2
#### II. Starting the Cluster Environment
1. Start the Hadoop cluster:

```
start-all.sh
```

2. Start the Spark cluster:

```
start-master.sh
start-slaves.sh
```
#### III. Configuring Flume
Edit conf/flume-conf.properties to configure the source, channel, and sink. The sink is the key part: Spark Streaming offers two kinds of receivers, a push-based receiver and a pull-based receiver, and the Spark Streaming read code has to match that choice. Here I use the push-based receiver.
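For orientation, the difference between the two receivers is only one call on the Spark side. A minimal sketch, assuming the same master:9999 endpoint used later in this post (the pull-based variant also needs Spark's dedicated SparkSink installed on the Flume agent, which this post does not cover):

```
# -*- coding: UTF-8 -*-
# Sketch: push-based vs. pull-based Flume receivers in Spark Streaming.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

sc = SparkContext("local[2]", "FlumeReceiverDemo")
ssc = StreamingContext(sc, 2)

# Push-based: Flume's avro sink pushes events to a receiver that Spark starts on master:9999.
pushed = FlumeUtils.createStream(ssc, "master", 9999)

# Pull-based: Spark polls a Flume agent that runs the dedicated Spark sink
# (org.apache.spark.streaming.flume.sink.SparkSink); not used in this post.
# pulled = FlumeUtils.createPollingStream(ssc, [("master", 9999)])
```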
The sink should then be configured as follows:

```
a1.sinks = k1
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
# host and port of the Spark Streaming receiver this sink pushes to
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 9999
```

The source can be avro, syslog, and so on. For a first test, to make sure everything works, start with avro, matching the sink.

**avro:**

```
a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.31.131
a1.sources.r1.port = 4141
a1.sources.r1.channels = c1
```

**syslog:**

```
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = 192.168.31.131
a1.sources.r1.channels = c1
```

The channel is memory-based in both cases. The complete configurations are shown below.

Source as avro:
```
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.31.131
a1.sources.r1.port = 4141
a1.sources.r1.channels = c1

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 9999

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
# a transaction commits every 100 events
a1.channels.c1.transactionCapacity = 100
```
Source as syslog:

```
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = 192.168.31.131
a1.sources.r1.channels = c1

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 9999

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
# a transaction commits every 100 events
a1.channels.c1.transactionCapacity = 100
```
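As a quick optional sanity check of the syslog source (a rough sketch, independent of Spark, using the host and port from the config above), you can push one line into the syslogtcp port from Python; it does the same job as the `nc` command used in the test section below:

```
# -*- coding: UTF-8 -*-
# Optional check: send one test line to the Flume syslogtcp source.
import socket

sock = socket.create_connection(("192.168.31.131", 5140))
# The syslogtcp source splits events on newlines; a plain line like this is not
# valid syslog format, so Flume may flag it, but the body still reaches the channel.
sock.sendall(b"hello word\n")
sock.close()
```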
#### IV. Writing FlumeWordCount.py

**Write the Spark Streaming code that reads the data collected by Flume and counts word frequencies.** Code (Python implementation):

```
# -*- coding: UTF-8 -*-
# Spark Streaming && Flume
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.flume import FlumeUtils

sc = SparkContext("local[2]", "FlumeWordCount")
# batch interval of 2 seconds
ssc = StreamingContext(sc, 2)

# open a TCP socket on the given address and port
# lines = ssc.socketTextStream("master", 9999)
lines = FlumeUtils.createStream(ssc, "master", 9999)
# each Flume event arrives as a (headers, body) pair; keep only the body
lines1 = lines.map(lambda x: x[1])

# split the strings received in each batch interval
words = lines1.flatMap(lambda line: line.split(" "))
# map each word to a (word, 1) tuple
pairs = words.map(lambda word: (word, 1))
wordcounts = pairs.reduceByKey(lambda x, y: x + y)

# output files: the prefix with the batch time appended automatically
wordcounts.saveAsTextFiles("/tmp/flume")
wordcounts.pprint()

# start the Spark Streaming application
ssc.start()
# wait for the computation to terminate
ssc.awaitTermination()
```

#### V. Running

##### 1. Download the dependency jar

Note: download the jar that matches your version from the official site; for example, Spark 2.0.1 corresponds to spark-streaming-flume-assembly_2.11-2.0.1.jar, the jar used in the spark-submit command below. I put it under /home/cms. (P.S.: I had first put it under flume/lib/, which produced a classpath-not-found error.)

##### 2. Start the Flume agent

```
flume-ng agent --conf ./conf/ -f /home/cms/flume/conf/flume-conf.properties -n a1 -Dflume.root.logger=INFO,console
```

> --conf is Flume's conf directory, -f is the agent configuration file, and -n is the agent name.

The terminal will most likely print errors at this point, because neither the client nor the program on the sink side has been started yet; ignore them for now.

##### 3. If the source is avro: run the test

1) Prepare test data: create a log_test.txt and put some data in it.

2) Run Spark Streaming: open another terminal and run

```
spark-submit --jars spark-streaming-flume-assembly_2.11-2.0.1.jar FlumeWordCount.py 2> error_log.txt
```

3) Send the data: open yet another terminal and send log_test.txt:

```
flume-ng avro-client --conf ./conf/ -H 192.168.31.131 -p 4141 -F /home/cms/log_test.txt
```

4) Watch the output of the terminal running the Spark Streaming program. If it only prints the batch "Time: ..." headers and nothing else, check the Flume configuration. You can also view the results on HDFS (see the read-back sketch at the end of this post).

##### 4. If the source is syslog: run the test

1) Run the Spark Streaming program:

```
spark-submit --jars spark-streaming-flume-assembly_2.11-2.0.1.jar FlumeWordCount.py 2> error_log.txt
```

2) Open another terminal and send data:

```
echo "hello'\t'word" | nc 192.168.31.131 5140
```

3) Watch the output of the terminal running the Spark Streaming program.

#### VI. Next Steps

flume + kafka + spark streaming

Reference: [official documentation](http://spark.apache.org/docs/latest/streaming-flume-integration.html)
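For the "view the results on HDFS" step in section V.3, here is a small optional sketch: saveAsTextFiles("/tmp/flume") writes one directory per batch named /tmp/flume-&lt;batch time in ms&gt;, so a glob over that prefix collects all batches for a quick look.

```
# -*- coding: UTF-8 -*-
# Read back the word counts saved by FlumeWordCount.py with saveAsTextFiles("/tmp/flume").
from pyspark import SparkContext

sc = SparkContext("local[2]", "InspectFlumeCounts")
# one output directory per batch: /tmp/flume-<timestamp>
counts = sc.textFile("/tmp/flume-*")
for line in counts.take(20):
    print(line)
sc.stop()
```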
Author: 玄月府的小妖在debug
Link: https://www.jianshu.com/p/23a906d5e59f