Table of Contents

Summary

This article describes how to send all logs to a central rsyslog server, where they are stored and then graphed with Kibana. The data flow is as follows:

Client nodes –> central rsyslog –> remote central rsyslog –> Logstash –> Elasticsearch –> Kibana.

Commands

apt install rsyslog
systemctl restart rsyslog
man rsyslogd
man rsyslog.conf
semanage port -a -t syslogd_port_t -p udp 514
semanage port -a -t syslogd_port_t -p tcp 514
ufw allow 514/udp
ufw allow 514/tcp

Files

/etc/rsyslog.conf 
/etc/default/rsyslog

Options

cat /etc/rsyslog.conf 
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
 
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
 
cat /etc/rsyslog.d/40-server.conf 
$template RemoteLogs,"/var/log/rsyslog/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
#& ~   #Uncomment to discard messages after the action above ("stop" is the modern equivalent of "~")
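
With the RemoteLogs template above, incoming messages are split into per-host, per-program files. The resulting layout looks like this (the hostnames and program names shown are hypothetical):

```
/var/log/rsyslog/
├── client1/
│   ├── sshd.log
│   └── cron.log
└── client2/
    └── systemd.log
```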
 
 
cat /etc/rsyslog.d/60-cliente.conf 
*.* @@192.168.0.181:514
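
To check the network path independently of the local rsyslog daemon, util-linux logger can send a test message straight to the central server. A quick sketch (the IP matches the client config above; adjust as needed):

```shell
# Send one UDP test message directly to the central rsyslog server.
# -n = target server, -P = port, -d = use UDP datagrams (-T would use TCP,
# matching the @@ forwarding syntax above).
logger -n 192.168.0.181 -P 514 -d "direct network test from $(hostname)"
```

The message should then show up in the per-host file tree on the server.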

Rsyslog with TLS and LogAnalyzer

MariaDB

apt install mariadb-client mariadb-server

Rsyslog Configuration

Server

apt install rsyslog gnutls-bin rsyslog-gnutls rsyslog-mysql

Create the certificates

mkdir -p /etc/certs/gnutls
cd /etc/certs/gnutls
 
#Create the certificate authority (CA)
certtool --generate-privkey --outfile ca-key.pem
certtool --generate-self-signed --load-privkey ca-key.pem --outfile ca.pem
 
#Create the machine certificate, signed by the CA (the server config below loads it)
certtool --generate-privkey --outfile key.pem
certtool --generate-request --load-privkey key.pem --outfile request.pem
certtool --generate-certificate --load-request request.pem --outfile cert.pem  --load-ca-certificate ca.pem --load-ca-privkey ca-key.pem
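
certtool prompts interactively for the certificate fields. For unattended setups it also accepts a --template file; a minimal sketch for the CA (all field values here are placeholders to adjust):

```
# ca.tmpl - template for the self-signed CA certificate
cn = "rsyslog log CA"
ca
cert_signing_key
expiration_days = 3650
```

Pass it with certtool --generate-self-signed --load-privkey ca-key.pem --template ca.tmpl --outfile ca.pem; a similar template without the ca and cert_signing_key lines works for the machine certificate.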

Edit the file /etc/rsyslog.conf with nano and add this block in the global section:

# make gtls driver the default and set certificate files
global(
DefaultNetstreamDriver="gtls"
DefaultNetstreamDriverCAFile="/etc/certs/gnutls/ca.pem"
DefaultNetstreamDriverCertFile="/etc/certs/gnutls/cert.pem"
DefaultNetstreamDriverKeyFile="/etc/certs/gnutls/key.pem"
)
 
# load TCP listener
module(
load="imtcp"
StreamDriver.Name="gtls"
StreamDriver.Mode="1"
StreamDriver.Authmode="anon"
)
 
# start up listener at port 6514
input(
type="imtcp"
port="6514"
)

Restart rsyslog and check the result

systemctl restart rsyslog
tail /var/log/syslog

Client

apt install rsyslog rsyslog-gnutls

Copy the file /etc/certs/gnutls/ca.pem to the client.

Edit the file /etc/rsyslog.conf with nano and add this block in the global section:

# make gtls driver the default and set certificate files
# certificate files - just CA for a client
global(DefaultNetstreamDriverCAFile="/etc/certs/gnutls/ca.pem")
 
# set up the action for all messages
action(type="omfwd" Target="LogServerIP" protocol="tcp" port="6514"
       StreamDriver="gtls" StreamDriverMode="1" StreamDriverAuthMode="anon")

Restart rsyslog and check the result

systemctl restart rsyslog
tail /var/log/syslog

Then verify that logs are being sent with

logger this is a test

Then look for that line on the log server.

Loganalyzer

apt install certbot python3-certbot-nginx nginx-light php-fpm php-gd php-mysql loganalyzer

Edit the file /etc/loganalyzer/config.php with nano and update the following database values to match those in /etc/rsyslog.d/mysql.conf:

        $CFG['Sources']['Source1']['ID'] = "Source1";
        $CFG['Sources']['Source1']['Name'] = "Mariadb";
        $CFG['Sources']['Source1']['Description'] = "Central database";
        $CFG['Sources']['Source1']['SourceType'] = SOURCE_DB;
        $CFG['Sources']['Source1']['MsgParserList'] = "";
        $CFG['Sources']['Source1']['DBTableType'] = "winsyslog";
        $CFG['Sources']['Source1']['DBType'] = DB_MYSQL;
        $CFG['Sources']['Source1']['DBServer'] = "localhost";
        $CFG['Sources']['Source1']['DBName'] = "Syslog";
        $CFG['Sources']['Source1']['DBUser'] = "rsyslog";
        $CFG['Sources']['Source1']['DBPassword'] = "LaSuperClabe";
        $CFG['Sources']['Source1']['DBTableName'] = "SystemEvents";

Monitoring Logs with ELK (Elasticsearch, Logstash, Kibana)

Installing Java (ELK requirement)

apt-get install default-jre
 
java -version

Configuring the ELK Repository

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
cat /etc/apt/sources.list.d/elastic-7.x.list
apt-get update

Installing ELK

apt-get install elasticsearch logstash kibana
 
systemctl status elasticsearch
systemctl status logstash
systemctl status kibana
 
systemctl enable elasticsearch
systemctl enable logstash
systemctl enable kibana
 
 
systemctl start elasticsearch
systemctl start logstash
systemctl start kibana

Verifying ELK ports

#elasticsearch port
lsof -i -P -n | grep LISTEN | grep 9200
 
#logstash port
lsof -i -P -n | grep LISTEN | grep 9600
 
#Kibana port
lsof -i -P -n | grep LISTEN | grep 5601
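
The same checks can be scripted with bash's built-in /dev/tcp pseudo-device, with no extra tools. A quick sketch, assuming the services listen on localhost:

```shell
# Probe each ELK port and report whether something accepts connections.
# Connection refused (or a 1-second timeout) counts as closed.
for port in 9200 9600 5601; do
  if timeout 1 bash -c "echo > /dev/tcp/127.0.0.1/$port" 2>/dev/null; then
    echo "port $port open"
  else
    echo "port $port closed"
  fi
done
```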

Routing from Logstash to Elasticsearch

Create a pipeline file for Logstash (on Debian/Ubuntu packages these usually live under /etc/logstash/conf.d/) with the following content:

input {
  udp {
    host => "127.0.0.1"
    port => 10514
    codec => "json"
    type => "rsyslog"
  }
}

# The filter pipeline stays empty here; no formatting is done.
filter { }

# Every single log will be forwarded to Elasticsearch. If you are using
# another port, you should specify it here.
output {
  if [type] == "rsyslog" {
    elasticsearch {
      hosts => [ "127.0.0.1:9200" ]
    }
  }
}

Restarting Logstash and verifying the service

systemctl restart logstash
netstat -na | grep 10514

Routing from Rsyslog to Logstash

Add the following to the rsyslog configuration (for example in a file under /etc/rsyslog.d/):

template(name="json-template"
  type="list") {
    constant(value="{")
      constant(value="\"@timestamp\":\"")     property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"@version\":\"1")
      constant(value="\",\"message\":\"")     property(name="msg" format="json")
      constant(value="\",\"sysloghost\":\"")  property(name="hostname")
      constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
      constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
      constant(value="\",\"programname\":\"") property(name="programname")
      constant(value="\",\"procid\":\"")      property(name="procid")
    constant(value="\"}\n")
}
 
# This line sends all lines to defined IP address at port 10514
# using the json-template format.
 
*.*                         @127.0.0.1:10514;json-template
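
For one syslog event, the template above renders a single JSON line. This hypothetical sample (all field values invented) can be piped through a JSON parser to confirm the template produces the format Logstash's json codec expects:

```shell
# Hypothetical output of json-template for one message; python3 -m json.tool
# exits non-zero on invalid JSON, so a clean exit confirms the line parses.
sample='{"@timestamp":"2024-01-01T12:00:00+00:00","@version":"1","message":" test","sysloghost":"client1","severity":"info","facility":"user","programname":"logger","procid":"1234"}'
echo "$sample" | python3 -m json.tool > /dev/null && echo "valid JSON"
```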

Restarting rsyslog and verifying that Logstash receives data

systemctl restart rsyslog
curl -XGET 'http://localhost:9200/logstash-*/_search?q=*&pretty'

Then all that remains is creating the dashboard, panels, and the data connection (index pattern) in Kibana.

Nginx Configuration

Add this nginx configuration to serve LogAnalyzer and Kibana under subfolders:

      location ^~ /loganalyzer/ {
           alias /usr/share/loganalyzer/;
           index index.php;
           location ~ \.php$ {
                include snippets/fastcgi-php.conf;
                fastcgi_pass unix:/run/php/php7.3-fpm.sock;
                fastcgi_param  SCRIPT_FILENAME $request_filename;
           }
 
        }
 
        location ~ /kibana {
            proxy_pass http://localhost:5601;
        }
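
Proxying Kibana under a subpath also requires Kibana itself to know about the prefix, otherwise its asset URLs break. In /etc/kibana/kibana.yml (settings valid for the 7.x series used above):

```
server.basePath: "/kibana"
server.rewriteBasePath: true
```

Restart Kibana afterwards with systemctl restart kibana.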

Graylog

See Graylog

References