At last week's Shanghai Gopher Meetup I listened to astaxie's talk, and recently I also needed to build a centralized log analysis platform. astaxie happened to mention that he uses the Elasticsearch + Logstash + Kibana combination for log analysis. After coming back I bought a book and googled my way through the configuration; of course, this only sets up the basic framework, and there are still many features of these three tools I'm not familiar with. This article is a brief introduction to configuring ELK on CentOS (because the company's servers run CentOS; personally I prefer Ubuntu, haha).
What is ELK:
Elasticsearch + Logstash + Kibana (ELK) is an open-source log management solution. When analyzing website traffic we generally embed JavaScript from Google / Baidu / CNZZ to collect statistics, but when the site behaves abnormally or is under attack we need to analyze logs on the backend, such as the Nginx logs. Log splitting / GoAccess / Awstats are relatively simple single-node solutions; for distributed clusters or medium-to-large data volumes they are out of their depth, and the arrival of ELK lets us face these new challenges calmly.
Logstash: responsible for log collection, processing and storage
Elasticsearch: responsible for log retrieval and analysis
Kibana: responsible for log visualization
Official websites:
JDK - http://www.Oracle.com/technetwork/java/javase/downloads/index.html
Elasticsearch - https://www.elastic.co/downloads/elasticsearch
Logstash - https://www.elastic.co/downloads/logstash
Kibana - https://www.elastic.co/downloads/kibana
Nginx - https://www.nginx.com/
Server configuration:
Install Java JDK:
# Check the Linux version; this is mine
cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
# Install the Java JDK via yum
yum install java-1.7.0-openjdk
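To double-check that the JDK landed on the PATH, print its version (the exact build string on your machine will differ):
# Verify the installed JDK
java -version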
Elasticsearch installation:
# Download and install
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
yum localinstall elasticsearch-1.7.1.noarch.rpm
# Start Services
service elasticsearch start
service elasticsearch status
# View the Elasticsearch configuration files
rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf
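The defaults work out of the box, but a few settings in /etc/elasticsearch/elasticsearch.yml are worth knowing about. A minimal excerpt with illustrative values of my own (not from this installation):
# /etc/elasticsearch/elasticsearch.yml (excerpt; values are examples)
cluster.name: my-elk-cluster    # a unique name keeps stray nodes from auto-joining
node.name: "elk-node-1"         # human-readable node name instead of a random one
network.host: 127.0.0.1         # bind to localhost only if everything runs on one box
http.port: 9200                 # the REST port we check below
Run service elasticsearch restart after changing it.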
# Check port usage
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address      Foreign Address    State      PID/Program name
tcp        0      0 127.0.0.1:25       0.0.0.0:*          LISTEN     1817/master
tcp        0      0 0.0.0.0:5601       0.0.0.0:*          LISTEN     27369/node
tcp        0      0 0.0.0.0:80         0.0.0.0:*          LISTEN     31848/nginx: master
tcp        0      0 0.0.0.0:22         0.0.0.0:*          LISTEN     16567/sshd
tcp6       0      0 127.0.0.1:8005     :::*               LISTEN     8263/java
tcp6       0      0 :::5000            :::*               LISTEN     2771/java
tcp6       0      0 :::8009            :::*               LISTEN     8263/java
tcp6       0      0 :::3306            :::*               LISTEN     28839/mysqld
tcp6       0      0 :::80              :::*               LISTEN     31848/nginx: master
tcp6       0      0 :::8080            :::*               LISTEN     8263/java
tcp6       0      0 :::9200            :::*               LISTEN     25808/java
tcp6       0      0 :::9300            :::*               LISTEN     25808/java
tcp6       0      0 :::9301            :::*               LISTEN     2771/java
tcp6       0      0 :::22              :::*               LISTEN     16567/sshd
If we see port 9200 listening, the installation succeeded, and we can test it from the terminal:
# Test access
curl -X GET http://localhost:9200/
or open the URL directly in a browser; either way we should see:
{
  "status": 200,
  "name": "Pip the Troll",
  "cluster_name": "elasticsearch",
  "version": {
    "number": "1.7.2",
    "build_hash": "e43676b1385b8125d647f593f7202acbd816e8ec",
    "build_timestamp": "2015-09-14T09:49:53Z",
    "build_snapshot": false,
    "lucene_version": "4.10.4"
  },
  "tagline": "You Know, for Search"
}
This shows that Elasticsearch is running normally.
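For a deeper sanity check than the banner, the standard cluster health API can be queried; "yellow" is normal for a single node, since replica shards have nowhere to go:
# Query cluster health
curl -X GET 'http://localhost:9200/_cluster/health?pretty'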
Kibana installation:
# Download the tar package
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
# Unzip
tar zxf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/
cd /usr/local/
mv kibana-4.1.1-linux-x64 kibana
# Create kibana Service
vim /etc/rc.d/init.d/kibana
#!/bin/bash
### BEGIN INIT INFO
# Provides:          kibana
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Runs kibana daemon
# Description:       Runs the kibana daemon as a non-root user
### END INIT INFO
# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"
# Configure location of Kibana bin
KIBANA_BIN=/usr/local/kibana/bin
# PID info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME
# Configure user to run daemon process
DAEMON_USER=root
# Configure logging location
KIBANA_LOG=/var/log/kibana.log
# Begin script
RETVAL=0
if [ `id -u` -ne 0 ]; then
    echo "You need root privileges to run this script"
    exit 1
fi
# Function library
. /etc/init.d/functions
start() {
    echo -n "Starting $DESC: "
    pid=`pidofproc -p $PID_FILE kibana`
    if [ -n "$pid" ]; then
        echo "Already running."
        exit 0
    else
        # Start daemon
        if [ ! -d "$PID_FOLDER" ]; then
            mkdir $PID_FOLDER
        fi
        daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
        sleep 2
        pidofproc node > $PID_FILE
        RETVAL=$?
        [ $RETVAL -eq 0 ] && success || failure
        echo
        [ $RETVAL = 0 ] && touch $LOCK_FILE
        return $RETVAL
    fi
}
reload() {
    echo "Reload command is not implemented for this service."
    return $RETVAL
}
stop() {
    echo -n "Stopping $DESC: "
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p $PID_FILE $DAEMON
        RETVAL=$?
        ;;
    restart)
        stop
        start
        ;;
    reload)
        reload
        ;;
    *)
        # Invalid arguments, print the following message.
        echo "Usage: $0 {start|stop|status|restart}" >&2
        exit 2
        ;;
esac
# Modify launch permissions
chmod +x /etc/rc.d/init.d/kibana
# Start kibana service
service kibana start
service kibana status
# View port
netstat -nltp
I already ran netstat -nltp above, so I won't paste the output again; if you can see port 5601 listening, Kibana was installed successfully.
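If Elasticsearch runs on a different host, point Kibana at it in the config file. A short excerpt of /usr/local/kibana/config/kibana.yml as it looks in Kibana 4.1 (the values here are illustrative):
# /usr/local/kibana/config/kibana.yml (excerpt)
port: 5601                                   # port Kibana listens on
host: "0.0.0.0"                              # bind address
elasticsearch_url: "http://localhost:9200"   # Elasticsearch endpoint Kibana queries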
Option 1: Generate SSL Certificates:
Generate SSL certificates for server and client authentication:
sudo vi /etc/pki/tls/openssl.cnf
Find the [v3_ca] section in the file, and add this line under it (substituting in the Logstash Server's private IP address):
subjectAltName = IP:logstash_server_private_ip
cd /etc/pki/tls
sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
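Before distributing the certificate, you can confirm the IP actually made it into the Subject Alternative Name field with standard openssl tooling:
# Inspect the generated certificate
openssl x509 -in certs/logstash-forwarder.crt -noout -text | grep -A1 'Subject Alternative Name'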
Option 2: FQDN (DNS):
cd /etc/pki/tls
sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Logstash installation:
Logstash Forwarder (client):
# Install Logstash Forwarder
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
yum localinstall logstash-forwarder-0.4.0-1.x86_64.rpm
# See logstash-forwarder configuration file location
rpm -qc logstash-forwarder
/etc/logstash-forwarder.conf
# Backup configuration file
cp /etc/logstash-forwarder.conf /etc/logstash-forwarder.conf.save
# Edit /etc/logstash-forwarder.conf; modify it according to your actual setup
vim /etc/logstash-forwarder.conf
{
  "network": {
    "servers": [ "your_logstash_server_ip:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    }
  ]
}
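After saving, restart the forwarder so it picks up the new settings, and watch its log for TLS or connection errors (the log path below is my assumption about the package default; verify against your init script):
service logstash-forwarder restart
tail -f /var/log/logstash-forwarder/logstash-forwarder.log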
Logstash Server (server):
# Download the rpm package
wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-1.5.4-1.noarch.rpm
# Install
yum localinstall logstash-1.5.4-1.noarch.rpm
# Create a 01-logstash-initial.conf file
vim /etc/logstash/conf.d/01-logstash-initial.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
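Before starting the service, it is worth validating the file; Logstash 1.5 ships a --configtest flag for exactly this:
# Check the config file for syntax errors
/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/01-logstash-initial.conf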
# Start logstash service
service logstash start
service logstash status
# Access Kibana and select @timestamp as the Time-field name; if no log data has arrived yet, the index pattern cannot (and need not) be created until after the Nginx log configuration in the next step
http://localhost:5601/
# When adding nodes, the client configuration is the same; remember to synchronize the certificate to every client (it can be copied over SSH)
/etc/pki/tls/certs/logstash-forwarder.crt
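For example, pushing the certificate to a client over SSH (the user and hostname are placeholders):
scp /etc/pki/tls/certs/logstash-forwarder.crt root@your_client_host:/etc/pki/tls/certs/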
Nginx log configuration:
# Modify Client Configuration
vim /etc/logstash-forwarder.conf
{
  "network": {
    "servers": [ "your_logstash_server_ip:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": { "type": "syslog" }
    }, {
      "paths": [
        "/app/local/nginx/logs/access.log"
      ],
      "fields": { "type": "nginx" }
    }
  ]
}
# Add grok patterns on the server
mkdir /opt/logstash/patterns
vim /opt/logstash/patterns/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:remote_addr} - - \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATH:path}(?:%{URIPARAM:param})? HTTP/%{NUMBER:httpversion}" %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}
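For reference, this pattern targets Nginx's default combined log format; an illustrative access.log line it should parse (made up for this example):
192.168.1.10 - - [12/Oct/2015:14:22:31 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"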
# Give the logstash user ownership of the patterns directory
chown -R logstash:logstash /opt/logstash/patterns
# Modify server configuration
vim /etc/logstash/conf.d/01-logstash-initial.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
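Once both services are restarted, a quick search against Elasticsearch confirms that parsed Nginx events are arriving (standard search API in Elasticsearch 1.x):
# Restart logstash, then look for events tagged type:nginx
service logstash restart
curl 'http://localhost:9200/_search?q=type:nginx&pretty'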