Kafka - Installation and Usage
Kafka uses ZooKeeper to store broker metadata, so ZooKeeper needs to be installed before Kafka.
Installing ZooKeeper
1. Download the installation package and extract it
This guide uses zookeeper-3.4.9.
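A minimal sketch of this step, assuming the tarball is named zookeeper-3.4.9.tar.gz and sits in the current directory:
tar -xzf zookeeper-3.4.9.tar.gz
cd zookeeper-3.4.9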
2. Edit the configuration file
Under the ZooKeeper root directory, create a data directory (this guide uses tmp), and inside it create a file named myid holding this node's ID (any integer works; the value is used again below, and each machine gets its own).
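For example, from the ZooKeeper root directory (the path matches the dataDir used in zoo.cfg below; ID 1 is the value this guide uses):
mkdir tmp
echo 1 > tmp/myid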
Then modify the conf/zoo.cfg file:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/Users/FengZhen/Desktop/Hadoop/zookeeper-3.4.9/tmp
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=localhost:2888:3888
In this configuration, initLimit is the upper bound on the time allowed for followers to establish an initial connection to the leader.
syncLimit is the upper bound on how long followers may remain out of sync with the leader.
Both values are expressed as multiples of tickTime, so with the values above, initLimit is 10 * 2000 ms, i.e. 20 s.
The configuration also lists the address of every server in the ensemble. Server addresses follow the format server.X=hostname:peerPort:leaderPort, where:
X: the server's ID. It must be an integer, but it need not start from 0 or be consecutive; it is the value written in the myid file above.
hostname: the server's hostname or IP address.
peerPort: the TCP port used for communication between nodes.
leaderPort: the TCP port used for leader election.
Clients only need the clientPort to connect to the ensemble, but the ensemble members communicate with one another using all three ports (peerPort, leaderPort, and clientPort); a sketch of a multi-node server list follows.
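As a sketch (the hostnames here are assumptions), the server list for a three-node ensemble would look like this, with each machine's myid file containing its own X:
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888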
A ZooKeeper cluster is called an ensemble. Because ZooKeeper uses a consensus protocol, it is recommended that an ensemble contain an odd number of nodes (3, 5, and so on), since ZooKeeper can only handle external requests while a majority (quorum) of its nodes is available.
A 3-node ensemble (quorum of 2) and a 4-node ensemble (quorum of 3) can each tolerate only one failed node, so 4 nodes offer no more fault tolerance than 3.
Suppose you have a 5-node ensemble and need to make configuration changes that include swapping out nodes: each node must be restarted in turn. If the ensemble cannot tolerate more than one node being down, such maintenance becomes risky.
It is also recommended not to run more than 7 nodes in an ensemble, because the consensus protocol makes the whole ensemble slower as it grows.
3. Start ZooKeeper
In the bin directory: ./zkServer.sh start
Verify that it started successfully:
telnet localhost 2181, then type srvr once connected:
FengZhendeMacBook-Pro:tmp FengZhen$ telnet localhost 2181
Trying ::1...
Connected to localhost.
Escape character is '^]'.
srvr
Zookeeper version: 3.4.9-1757313, built on 08/23/2016 06:50 GMT
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x3161
Mode: standalone
Node count: 167
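Alternatively, ./zkServer.sh status (also in the bin directory) reports the same Mode line (standalone here, or leader/follower in an ensemble) without needing telnet.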
Installing the Kafka broker
1. Download the installation package and extract it
This guide uses kafka_2.11-1.0.0.
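As with ZooKeeper, a minimal sketch assuming the tarball name kafka_2.11-1.0.0.tgz:
tar -xzf kafka_2.11-1.0.0.tgz
cd kafka_2.11-1.0.0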
2. Edit the configuration file
Modify config/server.properties as follows:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
zookeeper.connect is set to localhost:2181; if there are multiple ZooKeeper servers, join the host:port pairs with commas (the file's own comments call for a comma-separated list).
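For example (the hostnames are illustrative), three ZooKeeper servers plus an optional chroot suffix that keeps all Kafka znodes under /kafka:
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka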
3. Start the Kafka server
From the Kafka root directory: bin/kafka-server-start.sh -daemon config/server.properties
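To confirm the broker came up, a couple of quick checks (a sketch; the log path assumes the default logs directory under the Kafka root):
jps | grep Kafka                 # the broker runs under the main class kafka.Kafka
tail -n 20 logs/server.log       # look for a "started (kafka.server.KafkaServer)" line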
4. Create a topic
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test_topic
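To double-check that the topic exists, the same tool can list all topics:
./kafka-topics.sh --zookeeper localhost:2181 --list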
5. View topic details
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic test_topic
6. Publish messages to the test topic
./kafka-console-producer.sh --broker-list localhost:9092 --topic test_topic
FengZhendeMacBook-Pro:bin FengZhen$ ./kafka-console-producer.sh --broker-list localhost:9092 --topic test_topic
>test
>topic
>2020-03-25
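Each line typed after the > prompt is sent as one message. As an optional variation (a sketch using the console producer's parse.key and key.separator properties), keys can be attached by typing key:value lines:
./kafka-console-producer.sh --broker-list localhost:9092 --topic test_topic --property parse.key=true --property key.separator=: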
7. Read messages from the test topic
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test_topic --from-beginning
FengZhendeMacBook-Pro:bin FengZhen$ ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test_topic --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
test
topic
2020-03-25
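As the deprecation warning in the output suggests, the same read can be done through the new consumer by pointing at the broker instead of ZooKeeper:
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test_topic --from-beginning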