Nodemon fs.writeFileSync Crash


I have a queue of data in the AWS SQS service. I am retrieving this data, posting it to a webpage created and hosted via Node.js, and then telling SQS to delete the message. I use nodemon to keep the page updated, such that every time I pull a new event, the app restarts, the page updates, and users logged into the page see fresh data. I achieve this with code that goes something like:



sqs.receiveMessage((data) => {
    if (data === 1) {
        dataForWebPage = something;
        fs.writeFileSync("dataFile.json", JSON.stringify(dataForWebPage, null, 2), "utf8");
    }
    if (data === 2) {
        dataForWebPage = somethingDifferent;
        fs.writeFileSync("dataFile.json", JSON.stringify(dataForWebPage, null, 2), "utf8");
    }
});

sqs.deleteMessage(data);


When testing this on Windows using Visual Studio Code, it works well. Running 'nodemon myscript.js' and opening localhost:3000 displays the page. As events come in, nodemon restarts, the page updates seamlessly, and the events are purged from the queue.



However, if I zip up the files and modules and move the script over to a Linux machine, running the identical script via SSH, I can still view the webpage, the page gets updated, and nodemon restarts and behaves the way I expect, but the messages from the SQS queue do not get deleted. They simply stay in the queue and are never removed. Moments later, my script pulls them again, making the webpage inaccurate. They loop like this forever and are never deleted.



If I do not use nodemon, or if I comment out the fs.writeFileSync, the app works and the events are deleted from the SQS queue as expected. However, my webpage is then not updated.



I had a theory that this was due to nodemon restarting the service, causing the script to stop and restart before it reached the 'deleteMessage' part. However, simply moving the delete so that it happens before any restart does not solve the problem. For example, the following code is still broken on Linux but, like the previous version, DOES work on Windows:



sqs.receiveMessage((data) => {
    if (data === 1) {
        dataForWebPage = something;
        sqs.deleteMessage(data);
        fs.writeFileSync("dataFile.json", JSON.stringify(dataForWebPage, null, 2), "utf8");
    }
    if (data === 2) {
        dataForWebPage = somethingDifferent;
        sqs.deleteMessage(data);
        fs.writeFileSync("dataFile.json", JSON.stringify(dataForWebPage, null, 2), "utf8");
    }
});


It seems that if I use the asynchronous version of this call, fs.writeFile, the SQS events are deleted as expected. However, as I receive a lot of events, I have been using the synchronous version to make sure writes do not queue up and the file is always updated immediately.
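For reference, the asynchronous variant that behaves correctly looks roughly like this (just a sketch, assuming the AWS SDK v2 callback API; QUEUE_URL and the message handling stand in for my real code):

const AWS = require("aws-sdk");
const fs = require("fs");

const sqs = new AWS.SQS();

sqs.receiveMessage({ QueueUrl: QUEUE_URL, MaxNumberOfMessages: 1 }, (err, data) => {
    if (err || !data.Messages) return;
    const message = data.Messages[0];
    const dataForWebPage = JSON.parse(message.Body);

    // Async write: the event loop stays free while the file is written.
    fs.writeFile("dataFile.json", JSON.stringify(dataForWebPage, null, 2), "utf8", (writeErr) => {
        if (writeErr) return console.error(writeErr);
        // Delete only once the write has completed.
        sqs.deleteMessage({ QueueUrl: QUEUE_URL, ReceiptHandle: message.ReceiptHandle }, (delErr) => {
            if (delErr) console.error(delErr);
        });
    });
});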



Later in the code I use fs.readFileSync, and that does not seem to interfere with the call to delete the SQS events.



My questions are:



1) What is happening, and why is it happening?



2) Why only Linux, and not Windows?



3) What's the best way to solve this to ensure I get live updates to the page, but events are deleted as expected?

node.js linux fs nodemon

asked Dec 30 '18 at 14:32 by HDCerberus, edited Dec 30 '18 at 14:41


1 Answer


1) What is happening, and why is it happening?




Guessing: deleteMessage is asynchronous, and the synchronous file write blocks the event loop, so your deleteMessage HTTP call may be held up; and because you then restart the process, it never actually executes.
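You can demonstrate the principle with a tiny standalone sketch (hypothetical code, not yours): an async callback queued just before a synchronous write never runs if the process dies right after the write, which is exactly what happens when nodemon sees dataFile.json change and restarts you:

const fs = require("fs");

setImmediate(() => console.log("async work ran")); // stands in for the deleteMessage round trip

fs.writeFileSync("dataFile.json", "{}"); // blocks the loop; also the change nodemon reacts to
process.exit(0); // stands in for nodemon killing the process

// "async work ran" is never printed.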




2) Why only Linux, and not Windows?




          No idea.




3) What's the best way to solve this to ensure I get live updates to the page, but events are deleted as expected?




I will be blunt: you have to redo the architecture of your system. Deliberately crashing your web server and restarting it to refresh a web page won't scale to more than one user, and apparently not even to one. It's not meant to work that way. Depending on the constraints of the system you are trying to build (scale, speed, etc.), many different solutions can work.



To stay as simple as possible:



A first improvement could be to keep your file storage but expose it through an API: get the data on the frontend with an AJAX request and poll at a regular interval. You will have a lot more requests, but a lot fewer problems. It's maybe less "live", but few systems actually need updates faster than every few seconds.
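A rough sketch of that, assuming Express (the route name and status handling are just examples):

const express = require("express");
const fs = require("fs");
const app = express();

// Serve the latest snapshot instead of restarting the server on every event.
app.get("/api/data", (req, res) => {
    fs.readFile("dataFile.json", "utf8", (err, json) => {
        if (err) return res.sendStatus(500);
        res.type("application/json").send(json);
    });
});

app.listen(3000);

On the page, something like setInterval(() => fetch("/api/data").then(r => r.json()).then(render), 2000) then replaces the nodemon restart entirely.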



Secondly, don't do synchronous operations in Node.js: they are a huge performance bottleneck, leading to strange errors and huge latencies.
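With the promise APIs the non-blocking version stays readable (sketch; assumes the sqs client and QUEUE_URL from the question, plus AWS SDK v2 request objects, which expose .promise()):

const fsp = require("fs").promises;

async function handleMessage(message) {
    const body = JSON.parse(message.Body);
    // Non-blocking write; delete only after it has succeeded.
    await fsp.writeFile("dataFile.json", JSON.stringify(body, null, 2), "utf8");
    await sqs.deleteMessage({ QueueUrl: QUEUE_URL, ReceiptHandle: message.ReceiptHandle }).promise();
}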



When that works: file storage is usually a pain, and not really performant, so ask yourself whether you need a database or a memcached/Redis instance. Also check whether you should replace polling the API from the webpage with a socket, which avoids a lot of requests and allows sub-second updates.
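If you go the socket route, a minimal push sketch (assuming the ws package; fs.watch behaviour varies by platform, so treat this as illustrative):

const fs = require("fs");
const WebSocket = require("ws");

const wss = new WebSocket.Server({ port: 8080 });

// Push a fresh snapshot to every connected page whenever the file changes.
fs.watch("dataFile.json", () => {
    fs.readFile("dataFile.json", "utf8", (err, json) => {
        if (err) return;
        for (const client of wss.clients) {
            if (client.readyState === WebSocket.OPEN) client.send(json);
        }
    });
});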



answered Dec 30 '18 at 17:43 by Boris Charpentier