  Kafka cluster deployment
     
  Add Date : 2018-11-21      
         
         
         
 

I. About kafka

Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all of the activity-stream data of a consumer-scale website. Such activity (page views, searches, and other user actions) is a key ingredient of many social features on the modern web. Because of its throughput requirements, this data has traditionally been handled by logging and log aggregation. Kafka is a viable solution for systems that, like Hadoop-based log and offline-analysis pipelines, also have real-time processing constraints. Kafka's goal is to unify online and offline message processing through Hadoop's parallel loading mechanism, and to provide real-time consumption across a cluster of machines.

II. Preparations

1. Configure each host's IP address. Give every host a static IP (make sure the hosts can reach each other; to avoid unnecessary network traffic, it is recommended to keep them on the same subnet).

2. Set each machine's hostname. This must be done on every host in the Kafka cluster.

3. Configure host mapping. Edit the hosts file and add an IP-to-hostname entry for every host.

4. Open the required ports. The ports configured later in this document must be open (or the firewall turned off); this requires root privileges.

5. Make sure the ZooKeeper cluster is up and serving. In fact, once the ZooKeeper cluster has been deployed successfully, most of the preparation above is already done.
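As a concrete illustration, with three nodes named node1, node2, and node3 (the names used throughout this article), the hosts file on every machine would contain entries such as the following; the IP addresses here are placeholders for your own static IPs:

```
192.168.1.101 node1
192.168.1.102 node2
192.168.1.103 node3
```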

III. Installing Kafka

1. Download the Kafka installation package: visit Kafka's official website and download the appropriate version. This article uses version kafka_2.9.2-0.8.1.1.

2. Extract the installation package with the following command:

tar -zxvf kafka_2.9.2-0.8.1.1.tgz

3. Modify the configuration file. Only a few simple changes to config/server.properties are needed:

vim config/server.properties

The settings that need to be modified:

broker.id (the id of the current broker within the cluster, starting from 0); port; host.name (the hostname of the current broker); zookeeper.connect (the ZooKeeper cluster connection string); log.dirs (the directory where Kafka stores its logs; it must be created in advance).

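For reference, the modified portion of server.properties on the first node might look like the following sketch; the values (broker id, port, hostnames, and log directory) are assumptions for the three-node cluster used in this article and should be adjusted to your environment:

```properties
broker.id=0
port=9092
host.name=node1
zookeeper.connect=node1:2181,node2:2181,node3:2181
log.dirs=/usr/kafka/kafka-logs
```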

4. Copy the configured Kafka directory to the other nodes:

scp -r kafka node2:/usr/

Note: after copying to each node, do not forget to modify broker.id, host.name, and any other node-specific settings.
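Rather than editing every copy by hand, the node-specific settings can be rewritten with sed. The sketch below patches a local example file (the file name, broker id, and hostname are hypothetical; on a real node you would edit the copied config/server.properties in place):

```shell
# Create an example config as copied from node1 (illustration only).
cat > server.properties.node2 <<'EOF'
broker.id=0
port=9092
host.name=node1
EOF

# Give node2 its own broker.id and host.name; every broker.id in the
# cluster must be unique.
sed -i 's/^broker.id=.*/broker.id=1/' server.properties.node2
sed -i 's/^host.name=.*/host.name=node2/' server.properties.node2
```

The same two substitutions, with different values, would then be run for each remaining node.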

IV. Start and test Kafka

1. Start ZooKeeper first, then start Kafka with the following command. A startup message indicates that the launch succeeded.

./bin/kafka-server-start.sh config/server.properties &

2. Test Kafka. Create a topic, a producer, and a consumer, preferably on different nodes. Messages typed into the producer's console should be observed on the consumer's console.

Create a topic:

./bin/kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --topic test --replication-factor 2 --partitions 3 --create

View topic:

./bin/kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --list

Create producer:

./bin/kafka-console-producer.sh --broker-list node1:9092,node2:9092,node3:9092 --topic test

Create consumer:

./bin/kafka-console-consumer.sh --zookeeper node1:2181,node2:2181,node3:2181 --from-beginning --topic test

Testing:

Enter a message in the producer's console and check whether it appears in the consumer's console.


After the configuration and testing above, Kafka is initially deployed; you can now configure and operate it according to your specific needs. For more on operating and using Kafka, see the official documentation: https://cwiki.apache.org/confluence/display/KAFKA/Index

     
         
         
         