LAST UPDATED: JULY 20, 2020

Setup Elasticsearch with Authentication (X-Pack Security) Enabled on Kubernetes

    In this article series we will set up the EFK stack, which includes Elasticsearch, Kibana and Fluent Bit, for log collection, aggregation and monitoring. This stack is slowly becoming a standard in the Kubernetes world, at least Elasticsearch and Kibana are.

    You can set up the EFK stack in Kubernetes without any security enabled, which we have already covered in one of our previous posts. Having X-Pack security enabled in Elasticsearch has many benefits, like:

    1. To store data in Elasticsearch and to fetch data from Elasticsearch, basic username-password authentication will be required.

    2. To access the Kibana UI, we will get a login screen where we need to provide credentials, thereby securing the Kibana UI.

    3. Fluent Bit will also require Elasticsearch credentials to store data in Elasticsearch.

    Hence, we can say that enabling X-Pack security provides basic end-to-end security for the EFK setup.
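
    For example, once security is enabled, a request to the Elasticsearch REST API without credentials is rejected with an HTTP 401 error, while the same request with the elastic superuser's credentials succeeds. Here is an illustrative sketch, where the hostname and password are placeholders for your own setup:

    # without credentials: rejected with HTTP 401 (security_exception)
    curl http://elasticsearch-client:9200/_cluster/health

    # with credentials: returns the cluster health JSON
    curl -u elastic:<your-password> http://elasticsearch-client:9200/_cluster/health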

    We will be covering the whole EFK stack setup in a three-part series:

    1. Setup Elasticsearch with X-Pack enabled

    2. Setup Kibana on Kubernetes

    3. Setup Fluent Bit on Kubernetes

    The whole code for this three-part guide can be found in my GitHub repository for EFK.

    If you want to test the EFK stack on a Linux machine first, alongside your existing applications, to see whether logs actually get collected by Fluent Bit and stored in Elasticsearch, you are thinking in the right direction. A quick proof of concept never hurts before moving on to a cloud setup.

    Prerequisites for this Guide:

    I would recommend studying a bit about Kubernetes first: what a namespace is, what a pod is, and other Kubernetes resources like a service, deployment, statefulset, configmap, etc., so that you don't find this tutorial difficult to follow.

    Also, a brief introduction to what Elasticsearch is and to the Elasticsearch architecture will help you.

    Setup Elasticsearch cluster

    In this tutorial we will set up an Elasticsearch cluster with 3 nodes: one master node, one data node, and one client node. We will explicitly configure the role of each cluster node. This is not mandatory, and we could create all 3 nodes from the same YAML files, but separating the roles keeps the setup cleaner.

    In each Elasticsearch cluster node we will set the xpack.security.enabled and xpack.monitoring.collection.enabled properties to true. Once the Elasticsearch cluster is up, we will use the elasticsearch-setup-passwords tool to generate passwords for the Elasticsearch default users, and will create a Kubernetes secret from the superuser password, which we will later use in Kibana and Fluent Bit.

    You need a Kubernetes cluster to run the YAMLs and start the Elasticsearch cluster.
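
    Any cluster will do, for example minikube for local testing or a managed cloud cluster. Before starting, confirm that kubectl can reach it:

    kubectl cluster-info
    kubectl get nodes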

    1. Create a Namespace

    We will start by creating a namespace. I will name the namespace logging; you can change it as per your requirements. Here is the YAML to create a new namespace in Kubernetes:

    kind: Namespace
    apiVersion: v1
    metadata:
      name: logging

    Save the above code in a file named namespace.yaml and run the below kubectl command to apply it.

    kubectl apply -f namespace.yaml


    namespace/logging created

    We will be starting all of the services in this namespace.
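
    You can confirm that the namespace was created by listing it:

    kubectl get namespace logging

    If you don't want to pass -n logging to every command that follows, you can also make it the default namespace of your current context with kubectl config set-context --current --namespace=logging.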

    2. Setup Elasticsearch Master node:

    Let's begin with the setup of the master node for the Elasticsearch cluster, which will control the other nodes of the cluster. First we will create a configmap resource for the master node, which will have all the required properties defined.

    es-master-configmap.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: logging
      name: elasticsearch-master-config
      labels:
        app: elasticsearch
        role: master
    data:
      elasticsearch.yml: |-
        cluster.name: ${CLUSTER_NAME}
        node.name: ${NODE_NAME}
        discovery.seed_hosts: ${NODE_LIST}
        cluster.initial_master_nodes: ${MASTER_NODES}
        network.host: 0.0.0.0
        node:
          master: true
          data: false
          ingest: false
        xpack.security.enabled: true
        xpack.monitoring.collection.enabled: true

    The ${CLUSTER_NAME}-style placeholders in elasticsearch.yml are substituted by Elasticsearch from environment variables, which we will set on the container in the deployment below. Next, we need to define a service for our Elasticsearch master node to configure the network access between pods. We will be using port 9300 for inter-node communication.

    es-master-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      namespace: logging
      name: elasticsearch-master
      labels:
        app: elasticsearch
        role: master
    spec:
      ports:
      - port: 9300
        name: transport
      selector:
        app: elasticsearch
        role: master

    And last but not least, the deployment, in which we will specify the number of replicas, the Docker image and version, initContainers (tasks to be performed before the elasticsearch container is started), environment variables, etc. The init container here raises the vm.max_map_count kernel setting, since Elasticsearch requires it to be at least 262144 to start. We will be using version 7.3.0 for this tutorial, but you can also try this with the latest version.

    es-master-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: logging
      name: elasticsearch-master
      labels:
        app: elasticsearch
        role: master
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: elasticsearch
          role: master
      template:
        metadata:
          labels:
            app: elasticsearch
            role: master
        spec:
          containers:
          - name: elasticsearch-master
            image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
            env:
            - name: CLUSTER_NAME
              value: elasticsearch
            - name: NODE_NAME
              value: elasticsearch-master
            - name: NODE_LIST
              value: elasticsearch-master,elasticsearch-data,elasticsearch-client
            - name: MASTER_NODES
              value: elasticsearch-master
            - name: "ES_JAVA_OPTS"
              value: "-Xms256m -Xmx256m"
            ports:
            - containerPort: 9300
              name: transport
            volumeMounts:
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              readOnly: true
              subPath: elasticsearch.yml
            - name: storage
              # scratch volume mounted at Elasticsearch's default data directory
              mountPath: /usr/share/elasticsearch/data
          volumes:
          - name: config
            configMap:
              name: elasticsearch-master-config
          - name: "storage"
            emptyDir:
              medium: ""
          initContainers:
          - name: increase-vm-max-map
            image: busybox
            command: ["sysctl", "-w", "vm.max_map_count=262144"]
            securityContext:
              privileged: true

    Now let's apply the above YAMLs. Run the following kubectl command to apply:

    kubectl apply  -f es-master-configmap.yaml \
    -f es-master-service.yaml \
    -f es-master-deployment.yaml

    This will configure and start your Elasticsearch master pod. Run the below command to see if the pod starts successfully:

    kubectl get pod -n logging
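
    You can also wait for the rollout to complete; the below command returns once the master pod is ready:

    kubectl rollout status deployment/elasticsearch-master -n logging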

    Now let's move on to the setup of the Elasticsearch data node.

    3. Setup Elasticsearch Data node:

    For the data node too, we will define a configmap, and a service with port 9300 for inter-node communication, but instead of a normal deployment, we will create a statefulset.

    A statefulset in Kubernetes is similar to a deployment, but it gives each of its pods a stable identity and stable storage. We will also define a volume claim template, so that the data pod gets a persistent volume for storing its data.

    es-data-configmap.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: logging
      name: elasticsearch-data-config
      labels:
        app: elasticsearch
        role: data
    data:
      elasticsearch.yml: |-
        cluster.name: ${CLUSTER_NAME}
        node.name: ${NODE_NAME}
        discovery.seed_hosts: ${NODE_LIST}
        cluster.initial_master_nodes: ${MASTER_NODES}
        network.host: 0.0.0.0
        node:
          master: false
          data: true
          ingest: false
        xpack.security.enabled: true
        xpack.monitoring.collection.enabled: true

    As mentioned above, here is the service exposing port 9300 for inter-node communication.

    es-data-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      namespace: logging
      name: elasticsearch-data
      labels:
        app: elasticsearch
        role: data
    spec:
      ports:
      - port: 9300
        name: transport
      selector:
        app: elasticsearch
        role: data

    And finally the statefulset, which will have the information about the Docker image, replicas, environment variables, initContainers, and the volume claim template for persistent storage.

    es-data-statefulset.yaml

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      namespace: logging
      name: elasticsearch-data
      labels:
        app: elasticsearch
        role: data
    spec:
      serviceName: "elasticsearch-data"
      replicas: 1
      selector:
        matchLabels:
          app: elasticsearch
          role: data
      template:
        metadata:
          labels:
            app: elasticsearch
            role: data
        spec:
          securityContext:
            # the official image runs Elasticsearch as uid 1000; fsGroup makes the
            # mounted persistent volume writable by that user
            fsGroup: 1000
          containers:
          - name: elasticsearch-data
            image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
            env:
            - name: CLUSTER_NAME
              value: elasticsearch
            - name: NODE_NAME
              value: elasticsearch-data
            - name: NODE_LIST
              value: elasticsearch-master,elasticsearch-data,elasticsearch-client
            - name: MASTER_NODES
              value: elasticsearch-master
            - name: "ES_JAVA_OPTS"
              value: "-Xms300m -Xmx300m"
            ports:
            - containerPort: 9300
              name: transport
            volumeMounts:
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              readOnly: true
              subPath: elasticsearch.yml
            - name: elasticsearch-data-persistent-storage
              # Elasticsearch keeps its indices under /usr/share/elasticsearch/data by
              # default, so the persistent volume must be mounted at that path
              mountPath: /usr/share/elasticsearch/data
          volumes:
          - name: config
            configMap:
              name: elasticsearch-data-config
          initContainers:
          - name: increase-vm-max-map
            image: busybox
            command: ["sysctl", "-w", "vm.max_map_count=262144"]
            securityContext:
              privileged: true
      volumeClaimTemplates:
      - metadata:
          name: elasticsearch-data-persistent-storage
        spec:
          accessModes: [ "ReadWriteOnce" ]
          # use a storage class that exists in your cluster,
          # e.g. "standard" on minikube/GKE or "gp2" on AWS EKS
          storageClassName: standard
          resources:
            requests:
              storage: 10Gi

    To apply the above configuration YAML files, run the below kubectl command.

    kubectl apply -f es-data-configmap.yaml \
    -f es-data-service.yaml \
    -f es-data-statefulset.yaml

    This will start the pod for the elasticsearch data node in our Kubernetes cluster.
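
    Since the data node uses persistent storage, it is also worth confirming that the persistent volume claim created by the statefulset got bound to a volume:

    kubectl get pvc -n logging

    The STATUS column should show Bound. If it stays Pending, check that the storage class specified in the statefulset exists in your cluster.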

    4. Setup Elasticsearch Client node:

    The Elasticsearch client node is responsible for communicating with the outside world via the REST API. This node will also have a configmap to define the configuration, and a service that exposes port 9300 for inter-node communication and port 9200 for HTTP communication. Then we will have a deployment for the client node.

    es-client-configmap.yaml

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: logging
      name: elasticsearch-client-config
      labels:
        app: elasticsearch
        role: client
    data:
      elasticsearch.yml: |-
        cluster.name: ${CLUSTER_NAME}
        node.name: ${NODE_NAME}
        discovery.seed_hosts: ${NODE_LIST}
        cluster.initial_master_nodes: ${MASTER_NODES}
        network.host: 0.0.0.0
        node:
          master: false
          data: false
          ingest: true
        xpack.security.enabled: true
        xpack.monitoring.collection.enabled: true

    Then we will define the service YAML.

    es-client-service.yaml

    apiVersion: v1
    kind: Service
    metadata:
      namespace: logging
      name: elasticsearch-client
      labels:
        app: elasticsearch
        role: client
    spec:
      ports:
      - port: 9200
        name: client
      - port: 9300
        name: transport
      selector:
        app: elasticsearch
        role: client

    Now the deployment, which will have the information about the Docker image, replicas, environment variables, initContainers, etc.

    es-client-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      namespace: logging
      name: elasticsearch-client
      labels:
        app: elasticsearch
        role: client
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: elasticsearch
          role: client
      template:
        metadata:
          labels:
            app: elasticsearch
            role: client
        spec:
          containers:
          - name: elasticsearch-client
            image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
            env:
            - name: CLUSTER_NAME
              value: elasticsearch
            - name: NODE_NAME
              value: elasticsearch-client
            - name: NODE_LIST
              value: elasticsearch-master,elasticsearch-data,elasticsearch-client
            - name: MASTER_NODES
              value: elasticsearch-master
            - name: "ES_JAVA_OPTS"
              value: "-Xms256m -Xmx256m"
            ports:
            - containerPort: 9200
              name: client
            - containerPort: 9300
              name: transport
            volumeMounts:
            - name: config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              readOnly: true
              subPath: elasticsearch.yml
            - name: storage
              # scratch volume mounted at Elasticsearch's default data directory
              mountPath: /usr/share/elasticsearch/data
          volumes:
          - name: config
            configMap:
              name: elasticsearch-client-config
          - name: "storage"
            emptyDir:
              medium: ""
          initContainers:
          - name: increase-vm-max-map
            image: busybox
            command: ["sysctl", "-w", "vm.max_map_count=262144"]
            securityContext:
              privileged: true

    And then we will use kubectl to apply these YAML files and start the Elasticsearch client node.

    kubectl apply -f es-client-configmap.yaml \
    -f es-client-service.yaml \
    -f es-client-deployment.yaml
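
    You can quickly check that the client service was created and has an endpoint, since this service will be the entry point for all REST traffic to the cluster:

    kubectl get svc -n logging
    kubectl get endpoints elasticsearch-client -n logging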

    5. Verify the Elasticsearch setup

    Now that we have started all 3 Elasticsearch nodes, let's verify that the pods are up and running. Run the following kubectl command:

    kubectl get pod -n logging


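    The output should look roughly like this; the pod name suffixes and ages will differ in your cluster:

    NAME                                    READY   STATUS    RESTARTS   AGE
    elasticsearch-client-xxxxxxxxxx-xxxxx   1/1     Running   0          2m
    elasticsearch-data-0                    1/1     Running   0          4m
    elasticsearch-master-xxxxxxxxxx-xxxxx   1/1     Running   0          6m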

    As we can see in the output above, all 3 pods are up and running.

    6. Generate Password for the Elasticsearch users

    Now we will exec into the elasticsearch-client pod and run the elasticsearch-setup-passwords tool to generate passwords for the Elasticsearch default users. Run the following command in the console to enter the pod and auto-generate the passwords:

    kubectl exec -it $(kubectl get pods -n logging | grep elasticsearch-client | sed -n 1p | awk '{print $1}') -n logging -- bin/elasticsearch-setup-passwords auto -b

    You will see an output like this, with random strings set as the passwords for the different Elasticsearch users.

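    For illustration, the output format looks like this; the password values here are made-up samples and your run will produce different ones:

    Changed password for user apm_system
    PASSWORD apm_system = FjDLmIpYbNvPhU2kqWze

    Changed password for user kibana
    PASSWORD kibana = bH0kKnzFDzCCpIV8Qkyh

    Changed password for user logstash_system
    PASSWORD logstash_system = 2gZjuUOgc9WsxJbCGKLf

    Changed password for user beats_system
    PASSWORD beats_system = Iwqkx3bDDOzC5ZEiRsJS

    Changed password for user remote_monitoring_user
    PASSWORD remote_monitoring_user = xGW7SG75PSkJdDLqYuMA

    Changed password for user elastic
    PASSWORD elastic = zxQlD3k6NHHK22rPIJK1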

    Copy the password for the elastic user and save it somewhere, as this is the username/password that we will be using for logging into the Kibana UI and for creating the Kubernetes secret.

    7. Create a Kubernetes Secret

    Run the following command to create a secret in Kubernetes; we will then be using this password in Kibana and Fluent Bit. Edit the below command, replacing the value after --from-literal password= with the password generated for your elastic user.

    kubectl create secret generic elasticsearch-pw-elastic -n logging --from-literal password=zxQlD3k6NHHK22rPIJK1


    secret/elasticsearch-pw-elastic created
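
    Before moving on, you can verify that the credentials actually work by calling the REST API from inside the client pod (the official Elasticsearch image includes curl). Replace the password with the one generated for your elastic user:

    kubectl exec -it $(kubectl get pods -n logging | grep elasticsearch-client | sed -n 1p | awk '{print $1}') -n logging \
    -- curl -s -u elastic:zxQlD3k6NHHK22rPIJK1 "http://localhost:9200/_cluster/health?pretty"

    This should return the cluster health JSON, while the same request without the -u option should be rejected with a 401 security_exception error, confirming that X-Pack security is active.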

    That's it for starting the Elasticsearch service. We are done with one-third of the work. In the next tutorials we will set up Kibana and the Fluent Bit service.

    Some tips for Troubleshooting

    To check if all the nodes of Elasticsearch started successfully and a stable connection was set up between all the nodes, we can check the logs of the Elasticsearch master node using the kubectl logs command.

    kubectl logs -f <POD_NAME> -n logging

    Replace <POD_NAME> in the above command with the Elasticsearch master node's pod name.

    In the logs you should see the text "Cluster health status changed from [YELLOW] to [GREEN]", or run the below command to look for this text in the logs:

    kubectl logs -f -n logging $(kubectl get pods -n logging | grep elasticsearch-master | sed -n 1p | awk '{print $1}') \
    | grep "Cluster health status changed from \[YELLOW\] to \[GREEN\]"

    If you face any other issue, do share it in the comments and I might be able to help you out.

    Next Tutorial: Setup Kibana as part of EFK stack with X-Pack Enabled in Kubernetes.

