SpringCloud Basic Framework Setup – 10: Integrating Kafka – Environment

This is part of a series on building a SpringCloud base framework; the series covers integrating Shiro, MySQL master-slave replication, Seata, Activiti, Drools, common Hadoop big-data components, keepalived + nginx HTTPS configuration, and more.

Reference: https://blog.csdn.net/m0_61232019/article/details/127683413
Prerequisite: a ZooKeeper cluster must already be configured, see 2023-4-8 Framework Setup – 9: Integrating ZooKeeper – Environment
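
A quick sanity check of that prerequisite (a sketch, assuming the standard ZooKeeper scripts are on each node's PATH): run the status command on every node and confirm one leader and two followers are reported.

zkServer.sh status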

master:
cd /root/bigdatas/softs
tar -xzvf kafka_2.11-2.1.0.tgz -C /usr/local
cd /usr/local/kafka_2.11-2.1.0
mkdir kafka-logs

vi ~/.bashrc

export KAFKA_HOME=/usr/local/kafka_2.11-2.1.0
export PATH=$KAFKA_HOME/bin:$PATH
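
Reload the profile so the new variables take effect in the current shell:

source ~/.bashrc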

vi config/server.properties

# modify the following parameters
broker.id=0 
listeners=PLAINTEXT://master:9092
log.dirs=/usr/local/kafka_2.11-2.1.0/kafka-logs
zookeeper.connect=master:2181,slave1:2181,slave2:2181

Notes:
broker.id: a cluster-wide unique identifier; each node must use a different value (the values can be arbitrary but must not repeat)
listeners: node-specific; on each node set it to that node's own hostname or IP
log.dirs: where Kafka stores message data, i.e. the kafka-logs directory created above
zookeeper.connect: the address of the ZooKeeper cluster (hostname:2181,hostname:2181,hostname:2181, entries separated by commas; 2181 is the default client port)

Copy the Kafka directory to the slave nodes:

cd /usr/local
scp -r kafka_2.11-2.1.0/ slave1:/usr/local/
scp -r kafka_2.11-2.1.0/ slave2:/usr/local/

On master, slave1 and slave2, edit server.properties so that broker.id is different on every node (and listeners points to that node's own hostname), and configure the Kafka environment variables in .bashrc in the same way;
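
As a sketch, the per-node differences in server.properties could look like this (broker.id values of 1 and 2 for slave1 and slave2 are just one possible choice; any distinct values work):

# slave1
broker.id=1
listeners=PLAINTEXT://slave1:9092
# slave2
broker.id=2
listeners=PLAINTEXT://slave2:9092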
==============
Start Kafka on master, slave1 and slave2 in turn:

cd /usr/local/kafka_2.11-2.1.0/bin
./kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.1.0/config/server.properties
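
If a broker started with -daemon exits right away, the reason is usually recorded in the broker's own log file, by default under the installation's logs/ directory, for example:

tail -n 50 /usr/local/kafka_2.11-2.1.0/logs/server.log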

To check that it worked:
run jps and confirm a Kafka process is listed;
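
Another check (assuming the default behaviour where each broker registers itself under /brokers/ids in ZooKeeper): list the registered broker ids from zkCli and confirm all three appear.

bin/zkCli.sh
ls /brokers/ids
# expect something like [0, 1, 2]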
==============
Problem encountered:
kafka failed; error=’Cannot allocate memory’ (errno=12)
https://blog.csdn.net/m0_70556273/article/details/127706153

1. Exception when starting the Kafka broker
nohup /mnt/sata1/kafka_2.11-0.10.0.1/bin/kafka-server-start.sh /mnt/sata1/kafka_2.11-0.10.0.1/config/server.properties &
Exception:
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
2. Fix
Go into the /mnt/sata1/kafka_2.11-0.10.0.1/bin directory and edit kafka-server-start.sh:
find the line export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
and change it to export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
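
An alternative that avoids editing the script (a sketch, assuming the usual guard in kafka-server-start.sh that only applies the 1G default when KAFKA_HEAP_OPTS is unset): export the variable in the shell before starting the broker.

export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
./kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.1.0/config/server.properties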

======= Create a topic, produce messages, consume messages =====================
Reference: https://blog.csdn.net/qq_44713806/article/details/96502017
The commands below only need to be run on one Kafka node; alternatively, consume messages on one node and produce them from another.
1) Create a topic:

[root@master bin]# ./kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 --replication-factor 1 --partitions 1 --topic solvingProblem
Created topic "solvingProblem".
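
--replication-factor 1 means the topic has no redundancy; with three brokers a replicated topic could be created instead, for example (solvingProblem3 is just an illustrative topic name):

./kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 --replication-factor 3 --partitions 3 --topic solvingProblem3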

2) List topics:

./kafka-topics.sh --list --zookeeper master:2181,slave1:2181,slave2:2181

# view the topic description and partitions
./kafka-topics.sh --describe --zookeeper master:2181,slave1:2181,slave2:2181 --topic solvingProblem

3) Consume from the topic (the command prints nothing at first; it waits for messages):

[root@master bin]# ./kafka-console-consumer.sh --bootstrap-server master:9092,slave1:9092,slave2:9092 --topic solvingProblem
# messages received:
dssfsfsfs
dssfsfsfs
dssfsfsfs
ssssss
good

morning
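
By default the console consumer only prints messages produced after it starts; to replay everything already stored in the topic, the --from-beginning flag can be added:

./kafka-console-consumer.sh --bootstrap-server master:9092,slave1:9092,slave2:9092 --topic solvingProblem --from-beginning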

4) Produce messages:

[root@slave1 /]# ./kafka-console-producer.sh --broker-list master:9092,slave1:9092,slave2:9092 --topic solvingProblem
>dssfsfsfs
>[2023-05-09 13:37:23,115] WARN [Producer clientId=console-producer] Got error produce response with correlation id 6 on topic-partition solvingProblem-0, retrying (2 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
[2023-05-09 13:37:23,132] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition solvingProblem-0 due to org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
[2023-05-09 13:37:24,970] WARN [Producer clientId=console-producer] Got error produce response with correlation id 11 on topic-partition solvingProblem-0, retrying (1 attempts left). Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
[2023-05-09 13:37:24,970] WARN [Producer clientId=console-producer] Received invalid metadata error in produce request on partition solvingProblem-0 due to org.apache.kafka.common.errors.NetworkException: The server disconnected before a response was received.. Going to request metadata update now (org.apache.kafka.clients.producer.internals.Sender)
ssssss
>good
> morning
> # messages are typed at this prompt

======= Troubleshooting =====================
Kafka starts successfully but then shuts down on its own
This problem may be caused by stale broker metadata under /brokers in ZooKeeper;
1) Stop Kafka
$KAFKA_HOME/bin/kafka-server-stop.sh
2) Delete the log.dirs directory configured in server.properties;
3) Go into ZooKeeper
bin/zkCli.sh (before this step I had first stopped ZooKeeper with zkServer.sh stop, but that led to a connection-refused error, so ZooKeeper needs to be running)

...
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /

... (other connection output omitted) ...

[admin, brokers, cluster, config, consumers, controller_epoch, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]

Delete the brokers node listed above:

rmr /brokers
# reports: command not found
deleteall /brokers
# https://blog.csdn.net/weixin_49618140/article/details/123637563
# newer ZooKeeper versions removed the rmr command; use deleteall instead.
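
After clearing /brokers (and the deleted data directories), restart the broker on each node with the same command as before:

./kafka-server-start.sh -daemon /usr/local/kafka_2.11-2.1.0/config/server.properties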

Related references: https://blog.csdn.net/yabingshi_tech/article/details/120670096
https://club.coder55.com/article?id=65646
2) Related error: "1 partitions have leader brokers without a matching listener, including [baidd-0]" (org.apache.kafka.
(cause: the Kafka node involved had been shut down);
3) Other notes
How to start and stop Kafka:
run bin/kafka-server-start.sh config/server.properties to start Kafka, and
bin/kafka-server-stop.sh to stop it;
4) With three Kafka nodes, after running kafka-server-start,
jps does not show a Kafka process on every node, yet messages are still received once the console consumer is started (a port-level check, sketched after this list, can confirm whether a broker is actually listening);
5) The cluster reports the error "could not be established. Broker may not be available"
This is logged when the console consumer starts listening and does not affect the data;
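
A simple port-level check (a sketch, assuming the PLAINTEXT listener on port 9092 configured above; ss or netstat must be installed): on each node, confirm the broker is actually listening.

ss -lnt | grep 9092
# or: netstat -tnlp | grep 9092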
