Configure Logstash server
Log in to the Ubuntu instance
ESTEST_INSTANCE_2_DNS=$(aws ec2 describe-instances --instance-ids $ESTEST_INSTANCE_2_ID | jq --raw-output .Reservations[0].Instances[0].PublicDnsName) && echo $ESTEST_INSTANCE_2_DNS
ssh -i $ESTEST_INSTANCE_2_KEYPAIR ubuntu@$ESTEST_INSTANCE_2_DNS
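As a reference for what the jq expression above extracts, here is the same path run against a minimal, hand-written stand-in for the describe-instances response (the JSON and DNS name below are illustrative, not real AWS output):

```shell
# Illustrative only: a minimal stand-in for the describe-instances
# response, queried with the same jq path used above.
cat << 'EOF' > /tmp/describe-instances-sample.json
{"Reservations":[{"Instances":[{"PublicDnsName":"ec2-203-0-113-10.us-west-2.compute.amazonaws.com"}]}]}
EOF
jq --raw-output '.Reservations[0].Instances[0].PublicDnsName' /tmp/describe-instances-sample.json
```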
### (One time setup)
# change prompt color to purple
echo 'export PS1="\[\033[0;35m\] INSTANCE 2 (Logstash server) :[\w] \[\033[0m\]"' \
>> ~/.bash_profile && source ~/.bash_profile
Install Logstash
# Add the Elastic package repository to the apt sources list
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb http://packages.elastic.co/logstash/2.1/debian stable main" | sudo tee -a /etc/apt/sources.list
# install
sudo apt-get -y update
sudo apt-get install -y logstash
Install the Amazon ES Logstash output plugin
The output plugin handles the SigV4 request signing required to interact with the Amazon Elasticsearch domain. It must be installed after Logstash itself, since it uses the Logstash plugin manager.
sudo /opt/logstash/bin/plugin install logstash-output-amazon_es
Generate an SSL certificate and private key
sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private
cd /etc/pki/tls
# !! replace the CN value with your Logstash server's private DNS name
sudo openssl req -subj '/CN=ip-10-231-159-134.us-west-2.compute.internal/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
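To confirm a certificate generated this way has the expected subject and ten-year validity window, inspect it with openssl x509. The sketch below generates a throwaway certificate in /tmp (no sudo, made-up CN) so it can be run anywhere:

```shell
# Sanity check: generate a throwaway cert the same way, then inspect
# its subject and validity dates with openssl x509.
openssl req -subj '/CN=demo.internal/' -x509 -days 3650 -batch -nodes \
  -newkey rsa:2048 -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject -dates
```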
Configure Logstash
Configure beats input
Create a Filebeat input configuration file named /etc/logstash/conf.d/02-filebeat-input.conf.
The beats input will listen on port 5044.
cat << EOF > /tmp/02-filebeat-input.conf
input {
  beats {
    port => 5044
    type => "logs"
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF
sudo cp /tmp/02-filebeat-input.conf /etc/logstash/conf.d/02-filebeat-input.conf
Configure syslog filter
Create a syslog filter configuration file named /etc/logstash/conf.d/10-syslog.conf
cat << EOF > /tmp/10-syslog.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF
sudo cp /tmp/10-syslog.conf /etc/logstash/conf.d/10-syslog.conf
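The date filter needs two match patterns because syslog pads single-digit days of the month with an extra space ("Feb  9") while double-digit days use none ("Feb 19"). GNU date's space-padded %e day specifier reproduces the same formatting:

```shell
# Syslog timestamps pad single-digit days with a space but not
# double-digit days; %e shows the same space padding.
LC_ALL=C date -u -d '2016-02-09 10:15:01' '+%b %e %H:%M:%S'   # -> Feb  9 10:15:01
LC_ALL=C date -u -d '2016-02-19 10:15:01' '+%b %e %H:%M:%S'   # -> Feb 19 10:15:01
```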
Configure the Elasticsearch output
Create a configuration file named /etc/logstash/conf.d/30-elasticsearch-output.conf
that directs output to our Elasticsearch cluster. Note that no AWS credentials (access key or secret key) are needed in the configuration, since temporary credentials are available via the IAM role for EC2.
# !! replace with your Elasticsearch domain DNS
# example: ES_CLUSTER_DNS=search-estest-domain-xnzukkovs6px3wt2zhsy7t2si4.us-west-2.es.amazonaws.com
ES_CLUSTER_DNS={your Elasticsearch domain DNS here}
cat << EOF > /tmp/30-elasticsearch-output.conf
output {
  amazon_es {
    hosts => ["$ES_CLUSTER_DNS"]
    region => "us-west-2"
    index => "production-logs-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
EOF
sudo cp /tmp/30-elasticsearch-output.conf /etc/logstash/conf.d/30-elasticsearch-output.conf
ls -al /etc/logstash/conf.d/
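A note on the heredoc above: because the EOF delimiter is unquoted, the shell expands variables before writing the file, which is how the value of $ES_CLUSTER_DNS ends up baked into the generated config. A quick illustration, using a made-up domain name:

```shell
# Unquoted heredoc delimiters let the shell expand variables inside
# the body before the file is written.
ES_CLUSTER_DNS=search-demo.us-west-2.es.amazonaws.com   # made-up example value
cat << EOF > /tmp/heredoc-demo.txt
hosts => ["$ES_CLUSTER_DNS"]
EOF
cat /tmp/heredoc-demo.txt   # -> hosts => ["search-demo.us-west-2.es.amazonaws.com"]
```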
Restart the Logstash service to pick up the changes
sudo service logstash stop
# if the service can't be stopped for some reason, force-terminate the processes
sudo pkill -9 -u logstash
sudo service logstash start
# configure Logstash to start at boot
sudo update-rc.d logstash defaults 96 9
Log out of the server, and copy the SSL certificate to the local machine
Later on, we will copy this certificate to the servers running the Filebeat agents.
logout
On local machine
scp -i $ESTEST_INSTANCE_2_KEYPAIR ubuntu@$ESTEST_INSTANCE_2_DNS:/etc/pki/tls/certs/logstash-forwarder.crt /tmp/logstash-forwarder.crt
Appendix
View the Logstash debug logs
tail -f /var/log/logstash/logstash.log
Force-kill all Logstash processes
sudo pkill -9 -u logstash
The Logstash binary is at
/opt/logstash/bin/logstash