  Using Elasticsearch + Logstash + Kibana to Set Up a Centralized Log Analysis Platform
     
  Add Date : 2017-01-08      
         
         
         
  At last week's Shanghai Gopher Meetup I listened to AstaXie's talk, and as it happens I also needed to build a centralized log analysis platform recently. AstaXie mentioned that he uses the Elasticsearch + Logstash + Kibana combination for log analysis, so after coming back I bought a book, did a lot of googling, and got it configured. Of course this only sets up the basic framework; the three tools have many features I am not yet familiar with. This article is a brief introduction to configuring ELK on CentOS (the company's servers run CentOS, though I personally prefer Ubuntu, ha ha).

What is ELK:

Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack. When analyzing website traffic we generally embed JavaScript snippets from Google / Baidu / CNZZ and the like to collect statistics, but when the site behaves abnormally or comes under attack we need to analyze the raw logs on the server side, such as the Nginx access log. Log-splitting scripts, GoAccess, or AWStats are simple single-node solutions; for distributed clusters or heavier data volumes they quickly fall short, and the appearance of ELK lets us face these new challenges calmly.

Logstash: responsible for log collection, processing, and storage
Elasticsearch: responsible for log retrieval and analysis
Kibana: responsible for log visualization
Official sites:

JDK - http://www.oracle.com/technetwork/java/javase/downloads/index.html
Elasticsearch - https://www.elastic.co/downloads/elasticsearch
Logstash - https://www.elastic.co/downloads/logstash
Kibana - https://www.elastic.co/downloads/kibana
Nginx - https://www.nginx.com/

Server configuration:

Install the Java JDK:

# This is my Linux version
cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)

# Install the JDK via yum
yum install java-1.7.0-openjdk
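Before moving on to Elasticsearch it is worth confirming that a JDK actually ended up on the PATH. A minimal sketch (the version string will differ depending on which JDK yum pulled in):

```shell
# Confirm a JDK is available before installing Elasticsearch;
# print its version, or a hint if it is missing.
if command -v java >/dev/null 2>&1; then
    java -version 2>&1 | head -n 1
else
    echo "java not found - run: yum install java-1.7.0-openjdk"
fi
```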

Elasticsearch installation:

# Download and install
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
yum localinstall elasticsearch-1.7.1.noarch.rpm

# Start the service
service elasticsearch start
service elasticsearch status

# View the Elasticsearch configuration files
rpm -qc elasticsearch

/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf

# Check port usage
netstat -nltp

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp        0      0 127.0.0.1:25     0.0.0.0:*        LISTEN  1817/master
tcp        0      0 0.0.0.0:5601     0.0.0.0:*        LISTEN  27369/node
tcp        0      0 0.0.0.0:80       0.0.0.0:*        LISTEN  31848/nginx: master
tcp        0      0 0.0.0.0:22       0.0.0.0:*        LISTEN  16567/sshd
tcp6       0      0 127.0.0.1:8005   :::*             LISTEN  8263/java
tcp6       0      0 :::5000          :::*             LISTEN  2771/java
tcp6       0      0 :::8009          :::*             LISTEN  8263/java
tcp6       0      0 :::3306          :::*             LISTEN  28839/mysqld
tcp6       0      0 :::80            :::*             LISTEN  31848/nginx: master
tcp6       0      0 :::8080          :::*             LISTEN  8263/java
tcp6       0      0 :::9200          :::*             LISTEN  25808/java
tcp6       0      0 :::9300          :::*             LISTEN  25808/java
tcp6       0      0 :::9301          :::*             LISTEN  2771/java
tcp6       0      0 :::22            :::*             LISTEN  16567/sshd

Seeing port 9200 listening tells us the installation succeeded, and we can test it from the terminal:

# Test access
curl -X GET http://localhost:9200/

Opening the same URL directly in a browser shows the response:

{
  "status" : 200,
  "name" : "Pip the Troll",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.2",
    "build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
    "build_timestamp" : "2015-09-14T09:49:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

This shows the service is running normally.
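If you want to pull individual fields out of that response without extra tooling, plain sed is enough. A small sketch against a saved copy of the response (the file path and the sample values are just for illustration):

```shell
# Parse the status and version number out of a saved Elasticsearch
# response (as returned by `curl http://localhost:9200/`).
cat > /tmp/es_response.json <<'EOF'
{
  "status" : 200,
  "version" : { "number" : "1.7.2" }
}
EOF
status=$(sed -n 's/.*"status" *: *\([0-9]*\).*/\1/p' /tmp/es_response.json)
number=$(sed -n 's/.*"number" *: *"\([^"]*\)".*/\1/p' /tmp/es_response.json)
echo "status=$status version=$number"
```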

Kibana installation:

# Download the tar package
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
# Unpack
tar zxf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/
cd /usr/local/
mv kibana-4.1.1-linux-x64 kibana

# Create kibana Service
vim /etc/rc.d/init.d/kibana

#!/bin/bash
### BEGIN INIT INFO
# Provides:          kibana
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs kibana daemon
# Description: Runs the kibana daemon as a non-root user
### END INIT INFO

# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"

# Location of the Kibana bin directory
KIBANA_BIN=/usr/local/kibana/bin

# PID info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME

# User to run the daemon process as
DAEMON_USER=root
# Logging location
KIBANA_LOG=/var/log/kibana.log

# Begin script
RETVAL=0

if [ `id -u` -ne 0 ]; then
        echo "You need root privileges to run this script"
        exit 1
fi

# Function library
. /etc/init.d/functions

start() {
        echo -n "Starting $DESC: "
        pid=`pidofproc -p $PID_FILE kibana`
        if [ -n "$pid" ]; then
                echo "Already running."
                exit 0
        else
                # Start daemon
                if [ ! -d "$PID_FOLDER" ]; then
                        mkdir $PID_FOLDER
                fi
                daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
                sleep 2
                pidofproc node > $PID_FILE
                RETVAL=$?
                [ $RETVAL -eq 0 ] && success || failure
                echo
                [ $RETVAL = 0 ] && touch $LOCK_FILE
                return $RETVAL
        fi
}

reload() {
    echo "Reload command is not implemented for this service."
    return $RETVAL
}

stop() {
        echo -n "Stopping $DESC: "
        killproc -p $PID_FILE $DAEMON
        RETVAL=$?
        echo
        [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status -p $PID_FILE $DAEMON
        RETVAL=$?
        ;;
  restart)
        stop
        start
        ;;
  reload)
        reload
        ;;
  *)
        # Invalid arguments, print usage.
        echo "Usage: $0 {start|stop|status|restart}" >&2
        exit 2
        ;;
esac

# Make the script executable
chmod +x /etc/rc.d/init.d/kibana

# Start the kibana service
service kibana start
service kibana status

# View port
netstat -nltp

Running netstat -nltp again produces output much like the listing shown earlier, so I won't repeat it here; if port 5601 is listening, Kibana was installed successfully.

Option 1: Generate SSL Certificates:

Generate SSL certificates for server and client authentication:

sudo vi /etc/pki/tls/openssl.cnf

Find the [v3_ca] section in the file, and add this line under it (substituting in the Logstash Server's private IP address):

subjectAltName = IP:logstash_server_private_ip
cd /etc/pki/tls
sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Option 2: FQDN (DNS):

cd /etc/pki/tls
sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
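To confirm the certificate actually carries the name you passed via -subj, inspect it with openssl x509. A throwaway sketch using temporary paths and a hypothetical FQDN (logstash.example.com), not the real cert locations used above:

```shell
# Generate a short-lived self-signed cert the same way as above,
# then verify the CN landed in the subject. The /tmp paths and the
# FQDN are placeholders for illustration only.
openssl req -subj '/CN=logstash.example.com/' -x509 -days 1 -batch -nodes \
    -newkey rsa:2048 -keyout /tmp/lf.key -out /tmp/lf.crt 2>/dev/null
openssl x509 -in /tmp/lf.crt -noout -subject
```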

Logstash installation:

Logstash Forwarder (client):

# Install Logstash Forwarder
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
yum localinstall logstash-forwarder-0.4.0-1.x86_64.rpm

# Locate the logstash-forwarder configuration file
rpm -qc logstash-forwarder
/etc/logstash-forwarder.conf

# Backup configuration file
cp /etc/logstash-forwarder.conf /etc/logstash-forwarder.conf.save

# Edit /etc/logstash-forwarder.conf and adjust it to your environment

vim /etc/logstash-forwarder.conf
{
  "network": {
    "servers": [ "your_server_ip:5000" ],

    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",

    "timeout": 15
  },

  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],

      "fields": { "type": "syslog" }
    }
  ]
}

Logstash Server (server):

# Download the rpm package
wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-1.5.4-1.noarch.rpm
# Install
yum localinstall logstash-1.5.4-1.noarch.rpm
# Create the 01-logstash-initial.conf file
vim /etc/logstash/conf.d/01-logstash-initial.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}


filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
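The syslog grok pattern above is just a regular expression under the hood, so you can sanity-check the same field extraction with sed before loading it into Logstash. A rough sketch on a made-up syslog line (sample values are illustrative only):

```shell
# Emulate the syslog_program capture of the grok filter with sed:
# month, day, time, hostname, then the program name before "[pid]:".
line='Jan  8 12:34:56 web01 sshd[1234]: Accepted password for root'
program=$(echo "$line" | sed -n 's/^[A-Z][a-z]\{2\} \+[0-9]\+ [0-9:]\+ [^ ]\+ \([^ []*\).*/\1/p')
echo "program=$program"
```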

# Start logstash service
service logstash start
service logstash status
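Before restarting after config changes, it is worth validating the pipeline file first. Logstash 1.x shipped a --configtest flag for this; a defensive sketch assuming the rpm's default install path:

```shell
# Validate the Logstash pipeline config if the binary is present
# (paths assume the 1.5.4 rpm layout described above).
LS=/opt/logstash/bin/logstash
CONF=/etc/logstash/conf.d/01-logstash-initial.conf
if [ -x "$LS" ]; then
    "$LS" --configtest -f "$CONF"
else
    echo "logstash not found at $LS - skipping config test"
fi
```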

# Access Kibana and select @timestamp as the Time-field name; if no log data has
# arrived yet the index pattern cannot be created, so finish the Nginx log
# configuration in the next step first
http://localhost:5601/

# Add node and client configurations, and remember to synchronize the certificate
# to each client (it can be copied over SSH)
/etc/pki/tls/certs/logstash-forwarder.crt

Nginx configuration log:

# Modify Client Configuration
vim /etc/logstash-forwarder.conf

{
  "network": {
    "servers": [ "your_server_ip:5000" ],

    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",

    "timeout": 15
  },

  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    }, {
      "paths": [
        "/app/local/nginx/logs/access.log"
      ],
      "fields": { "type": "nginx" }
    }
  ]
}

# Add patterns on the server
mkdir /opt/logstash/patterns
vim /opt/logstash/patterns/nginx

NGUSERNAME [a-zA-Z\.\@\-\+_\%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:remote_addr} - - \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATH:path}(?:%{URIPARAM:param})? HTTP/%{NUMBER:httpversion}" %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}


# Give logstash ownership of the patterns directory
chown -R logstash:logstash /opt/logstash/patterns
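Grok patterns compile down to regular expressions, so you can smoke-test a pattern's overall shape outside Logstash with grep -E. A rough approximation of NGINXACCESS against a made-up access-log line (sample values and the simplified regex are illustrative only):

```shell
# Approximate the NGINXACCESS pattern as an extended regex and
# check it against one sample nginx access-log line.
line='192.168.1.10 - - [08/Jan/2017:12:34:56 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'
regex='^[0-9.]+ - [^ ]+ \[[^]]+\] "[A-Z]+ [^ ]+ HTTP/[0-9.]+" [0-9]+ [0-9]+ "[^"]*" "[^"]*"$'
if echo "$line" | grep -Eq "$regex"; then
    echo "pattern matches"
else
    echo "pattern does not match"
fi
```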

# Modify server configuration
vim /etc/logstash/conf.d/01-logstash-initial.conf

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}


filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
     
         
         
         