Push FlashBlade syslog to ECK via logstash

jboothomas
3 min read · Jan 27, 2021

In this blog I will cover the steps used to configure a Pure Storage FlashBlade to send its syslog output, via Logstash, to an ECK-managed Elasticsearch instance.

I am currently running a 7-worker-node v1.19.3 Kubernetes cluster onto which both Logstash and Elasticsearch are deployed.

Elasticsearch is deployed using the ECK operator, with the addition of a volumeClaimTemplate that uses the Pure Service Orchestrator with our FlashBlade as the backend:

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: syslog-es
spec:
  version: 7.9.3
  nodeSets:
  - name: default
    count: 3
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    podTemplate:
      spec:
        containers:
        - name: elasticsearch
    volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 5Gi
          storageClassName: pure-file
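
If you are following along, a quick way to apply this and confirm the cluster comes up healthy is shown below. The manifest file name is an assumption on my part; the demo-syslog namespace matches the secret command used further down.

kubectl -n demo-syslog apply -f syslog-es.yaml
kubectl -n demo-syslog get elasticsearch syslog-es   # HEALTH should report green once all 3 nodes join
kubectl -n demo-syslog get pvc                       # one pure-file backed claim per Elasticsearch node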

I also deploy Kibana with a DNS ingress rule for access to the interface; in this example I disabled the self-signed certificate for ease of access.

apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: {{ demo_name }}
spec:
  version: 7.9.3
  count: 1
  elasticsearchRef:
    name: syslog-es
  http:
    tls:
      selfSignedCertificate:
        disabled: true
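
Once applied, the Kibana resource should also report green before moving on; a quick check looks like this (the resource name shown depends on the {{ demo_name }} variable above):

kubectl -n demo-syslog get kibana   # HEALTH green means Kibana is up and connected to syslog-es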

I can obtain the elastic user's password to log in to the Kibana interface with:

kubectl -n demo-syslog get secret syslog-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'; echo
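
The same one-liner is also handy for keeping the password in a shell variable for quick checks against the cluster. This is just my own sanity check, not required for the setup; the variable name and the port-forward are assumptions:

ELASTICPASS=$(kubectl -n demo-syslog get secret syslog-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
kubectl -n demo-syslog port-forward service/syslog-es-http 9200 &
curl -k -u "elastic:${ELASTICPASS}" "https://localhost:9200/_cluster/health?pretty"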

Now on to the Logstash configuration. I first create a Kubernetes ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      syslog {
        port => 5514
      }
    }
    output {
      elasticsearch {
        hosts => [ "https://syslog-es-http:9200" ]
        user => "elastic"
        password => '${ELASTICPASS}'
        ssl => true
        ssl_certificate_verification => false
        cacert => "/usr/share/logstash/escerts/tls.crt"
      }
    }
---

The elastic user password will be pulled in from the Kubernetes secret as an environment variable in our Logstash pod, and the cacert file will be mounted from our Elasticsearch CA certificate secret.
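
Both of those objects are created by ECK alongside the cluster, so it is worth confirming they exist before deploying Logstash (the names below are the ones referenced in the manifests):

kubectl -n demo-syslog get secret syslog-es-elastic-user syslog-es-http-ca-internal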

I use the following Deployment and Service for Logstash:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  labels:
    app: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.1.0
        ports:
        - containerPort: 5514
        volumeMounts:
        - name: config-volume
          mountPath: /usr/share/logstash/config
        - name: logstash-pipeline-volume
          mountPath: /usr/share/logstash/pipeline
        - name: es-cert-volume
          mountPath: /usr/share/logstash/escerts
        env:
        - name: ELASTICPASS
          valueFrom:
            secretKeyRef:
              name: syslog-es-elastic-user
              key: elastic
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf
      - name: es-cert-volume
        secret:
          secretName: syslog-es-http-ca-internal
---
kind: Service
apiVersion: v1
metadata:
  name: logstash-service
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5514
    targetPort: 5514
  type: NodePort
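
After applying the above, I can check that the Logstash pod is running and note the NodePort Kubernetes assigned to the Service (the file name below is an assumption):

kubectl -n demo-syslog apply -f logstash.yaml
kubectl -n demo-syslog get pods -l app=logstash
kubectl -n demo-syslog get service logstash-service   # PORT(S) shows 5514:<nodeport>/TCP; that <nodeport> is what the FlashBlade will target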

In production I would use a LoadBalancer Service to assign an IP from a pool of local network addresses instead of the NodePort.
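
For reference, switching the Service over is a one-line patch, assuming a load balancer implementation such as MetalLB is available in the cluster (MetalLB itself is an assumption, not part of this setup):

kubectl -n demo-syslog patch service logstash-service -p '{"spec": {"type": "LoadBalancer"}}'
kubectl -n demo-syslog get service logstash-service   # EXTERNAL-IP should now come from the load balancer pool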

I apply the above YAML files and can now configure my FlashBlade. Under its syslog settings I provide the value used to reach my Logstash deployment: the NodePort on a CNAME pointing to my K8s cluster.
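
Before pointing the FlashBlade at it, the listener can be checked from any Linux host with the util-linux logger utility; the host name and port below are placeholders standing in for my CNAME and the assigned NodePort:

logger --tcp --server logstash.example.lan --port 30514 "flashblade syslog endpoint test"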

To test, I enable and disable the FlashBlade's remote assist setting to generate some log entries, and within Kibana I can see them arrive in Elasticsearch.
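
The same can be confirmed directly against Elasticsearch; with the earlier port-forward still in place, the events should land in the default logstash-* indices (assuming the output's index option was left at its default, as in the ConfigMap above):

curl -k -u "elastic:${ELASTICPASS}" "https://localhost:9200/_cat/indices/logstash-*?v"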

I hope this shows how easy it can be to push syslog events into Elasticsearch via Logstash for hassle-free log collection.
