Bind different Persistent Volume for each replica in a Kubernetes Deployment












I am using a PVC with the ReadWriteOnce access mode for a logstash Deployment that runs a stateful application. Each pod in the Deployment tries to bind to the same PersistentVolumeClaim. With replicas > 1 this fails: since the volume only supports ReadWriteOnce, only the first pod binds successfully. How do I specify that each pod should be bound to a separate PV?



I don't want to define 3 separate YAML manifests, one for each logstash replica/instance.



apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
spec:
  replicas: 3
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: "logstash-image"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: logstash-data
      restartPolicy: Always
      volumes:
      - name: logstash-data
        persistentVolumeClaim:
          claimName: logstash-vol


I need a way to mount a different PV into each pod replica.










kubernetes persistent-volumes persistent-volume-claims






asked Jan 1 at 17:45 by Abhishek Jaisingh, edited Jan 1 at 20:09 by David Maze
























1 Answer














You cannot do this properly with a Deployment. Use a StatefulSet with a volumeClaimTemplates section instead. The relevant part of the StatefulSet YAML could look like this:



...
volumeClaimTemplates:
- metadata:
    name: pv-data
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5G


Assuming you have 3 replicas, the pods are created one by one, sequentially, and a PVC is requested as each pod is created.



Each PVC is named <volumeClaimTemplate name>-<pod name>, and the pod name itself ends in the pod's ordinal, so you end up with this list of newly created PVCs:

pv-data-<statefulset_name>-0
pv-data-<statefulset_name>-1
...
pv-data-<statefulset_name>-N


A StatefulSet gives its pods stable, predictable identities (including their names), numbered by ordinal up to the replica count, which is why each Pod matches its own PVC and, in turn, its own PV.
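For completeness, here is a minimal sketch of how the Deployment from the question could be rewritten as a StatefulSet. It assumes a headless Service named "logstash" already exists to govern the StatefulSet, and that the cluster has a StorageClass capable of dynamic provisioning:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash        # assumed headless Service governing the StatefulSet
  replicas: 3
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: "logstash-image"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: logstash-data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: logstash-data      # replaces the shared logstash-vol claim from the Deployment
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5G

With this, each replica gets its own PVC (logstash-data-logstash-0, logstash-data-logstash-1, logstash-data-logstash-2) and, through dynamic provisioning, its own PV.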




Note: this relies on dynamic provisioning. You need a configured persistent-storage provisioner (which may involve configuring Kubernetes control plane components such as the controller-manager), and you should understand the reclaim policy that applies to your data, but that is a separate question.






answered Jan 1 at 22:21 by Konstantin Vustin, edited Jan 1 at 22:27































