Loading a video dataset (Keras)

I'm trying to implement an LRCN (a CNN + LSTM network) to classify emotions in videos. My dataset is split into two folders, "train_set" and "valid_set".
Inside either of them you find three folders: "positive", "negative" and "surprise". Each of these three folders contains video folders, and each video folder holds the frames of one video as .jpg files. The videos have different lengths, so one video folder may hold 200 frames, the next 1200, another 700. To load the dataset I am using flow_from_directory. Here, I need a few clarifications:




  1. In my case, will flow_from_directory load the videos one by one, sequentially? And their frames in order?

  2. If I load in batches, does flow_from_directory build each batch from the sequential ordering of the images in a video?

  3. If video_1 is a folder of 5 images and video_2 a folder of 3 images, and the batch size is 7, will flow_from_directory produce two batches of 5 and 3 images, or will it overlap the videos, taking all 5 images from the first folder plus 2 from the second? Will it mix my videos?

  4. Is the dataset loading thread-safe? Does worker one fetch video frames sequentially from folder 1, worker 2 from folder 2, and so on, or can each worker take frames from any folder, which would spoil my sequential reading?

  5. If I enable shuffle, will it shuffle the order in which the video folders are read, or will it fetch frames in random order from random folders?

  6. What does the TimeDistributed layer do? I cannot really picture it from the documentation. What happens if I apply it to a CNN's dense layer, or to every layer of a CNN?

Tags: python tensorflow video keras






asked Jan 2 at 10:27 by KDX2 · edited Jan 2 at 14:12 by Daniel Möller
1 Answer

1. flow_from_directory is made for images, not movies. It will not understand your directory structure and will not create a "frames" dimension. You need your own generator; it is usually better to implement a keras.utils.Sequence (a minimal sketch follows after this list).



2. You can only load in batches if:

  • you load movies one at a time, because of their different lengths, or
  • you pad your videos with empty frames so they all share one length (the sketch below pads this way).



3. Same as 1: flow_from_directory treats every frame as an independent sample, so it would fill a batch across video boundaries (your 5 + 2 example) and, with shuffle on, mix frames from anywhere.


4. If you make your own generator implementing keras.utils.Sequence, thread safety is preserved as long as your implementation keeps track of which frames belong to which movie.


5. It would shuffle individual images, because single images are all it loads, so your frame ordering would be lost.



6. TimeDistributed lets a layer accept data with an extra dimension at index 1. Example: a layer that usually takes (batch_size, ...other dims...) will take (batch_size, extra_dim, ...other dims...). This extra dimension may mean anything, not necessarily time, and it passes through untouched (see the model sketch after this list).




  • Recurrent layers don't need this (unless you really want an extra dimension there for unusual reasons); they already treat index 1 as time.

  • CNNs will work exactly the same on each frame; the wrapper just lets you organize your data in the format (batch_size, video_frames, height, width, channels).
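
To make points 1 and 4 concrete, here is a minimal sketch of such a custom generator. It is an illustration, not official Keras API: the class name VideoFrameSequence, the max_frames padding/truncation length and the frame size are all assumptions, and it presumes that sorting the .jpg filenames reproduces the frame order.

    import os
    import numpy as np
    from tensorflow import keras

    class VideoFrameSequence(keras.utils.Sequence):
        """Yields batches shaped (batch, frames, height, width, channels).

        Assumes root/<class>/<video_folder>/*.jpg, with frames ordered by
        filename. Shorter videos are padded with black frames, longer ones
        truncated, so every clip has exactly max_frames frames.
        """

        def __init__(self, root, classes, batch_size=4, max_frames=200, size=(112, 112)):
            self.samples = []  # (video_folder_path, class_index) pairs
            for label, cls in enumerate(classes):
                cls_dir = os.path.join(root, cls)
                for video in sorted(os.listdir(cls_dir)):
                    self.samples.append((os.path.join(cls_dir, video), label))
            self.batch_size = batch_size
            self.max_frames = max_frames
            self.size = size
            self.num_classes = len(classes)

        def __len__(self):
            return int(np.ceil(len(self.samples) / self.batch_size))

        def _load_video(self, folder):
            names = sorted(f for f in os.listdir(folder) if f.lower().endswith(".jpg"))
            clip = np.zeros((self.max_frames, *self.size, 3), dtype="float32")  # black padding
            for i, name in enumerate(names[: self.max_frames]):
                img = keras.preprocessing.image.load_img(
                    os.path.join(folder, name), target_size=self.size)
                clip[i] = keras.preprocessing.image.img_to_array(img) / 255.0
            return clip

        def __getitem__(self, idx):
            batch = self.samples[idx * self.batch_size : (idx + 1) * self.batch_size]
            x = np.stack([self._load_video(path) for path, _ in batch])
            y = keras.utils.to_categorical([label for _, label in batch], self.num_classes)
            return x, y

        def on_epoch_end(self):
            np.random.shuffle(self.samples)  # shuffles whole videos, never single frames

    # Usage (folder and class names taken from the question):
    # train_seq = VideoFrameSequence("train_set", ["negative", "positive", "surprise"])
    # model.fit(train_seq, epochs=10)

Because on_epoch_end shuffles whole videos rather than single frames, the frame order inside each clip is never disturbed, which is exactly the order-preserving behaviour questions 4 and 5 worry about.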




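For point 6, a sketch of the LRCN wiring itself: TimeDistributed wraps the per-frame CNN so the same convolutions run on every frame, and the LSTM then consumes the resulting feature sequence. It assumes the fixed-length clips produced by the generator above; all layer sizes are arbitrary placeholders.

    from tensorflow import keras
    from tensorflow.keras import layers

    frames, height, width, channels = 200, 112, 112, 3  # match the generator above

    model = keras.Sequential([
        keras.Input(shape=(frames, height, width, channels)),
        # TimeDistributed runs the wrapped layer once per frame (the dimension at index 1):
        layers.TimeDistributed(layers.Conv2D(32, 3, activation="relu")),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Flatten()),  # -> (batch, frames, features)
        layers.LSTM(64),                           # treats index 1 as time and drops it
        layers.Dense(3, activation="softmax"),     # one label per video
    ])
    model.summary()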




• Thank you! Hence, if I am reading video frames sequentially (assume batch size 1 for now), each frame gets classified by the CNN's dense layer. Iteration 1 finishes, and the frame is fed into the RNN. At iteration 2, the second video frame runs through the CNN, and TimeDistributed now extends the output tensor in the time dimension to 2, making it point to time-step 2? Then that gets fed as a new step to the LSTM? Is it like that, per iteration? If so, does the CNN's dense layer re-input all its accumulated snapshots to the (in this case) RNN, or does it just pass the newest? – KDX2, Jan 2 at 14:27






• One batch should be (1, frames, height, width, channels) into TimeDistributed(Conv2D(...)), or (1, frames, features) into an RNN or Dense. There is no visible iteration; each layer does its job as it should. (The iteration over frames happens invisibly inside the RNN.) – Daniel Möller, Jan 2 at 14:43






• You can set return_sequences=True in the RNNs to keep the frames dimension until the end; the last layers will then classify every frame. Or leave it off in the last RNN layer to eliminate the frames dimension and get a single "video classification". A small snippet of the difference follows this comment. – Daniel Möller, Jan 2 at 14:45
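
A minimal illustration of that choice, assuming per-frame feature vectors of size 128 and 3 classes (both placeholders):

    from tensorflow import keras
    from tensorflow.keras import layers

    inp = keras.Input(shape=(None, 128))               # (batch, frames, features)

    # Per-frame classification: keep the frames dimension to the end.
    seq = layers.LSTM(64, return_sequences=True)(inp)  # -> (batch, frames, 64)
    per_frame = layers.TimeDistributed(layers.Dense(3, activation="softmax"))(seq)

    # Per-video classification: the last RNN collapses the frames dimension.
    vec = layers.LSTM(64)(inp)                         # -> (batch, 64)
    per_video = layers.Dense(3, activation="softmax")(vec)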






• As said in the answer, your shape should be (batch, frames, height, width, channels) for TimeDistributed(Conv2D()) or (batch, frames, allTheRest) for RNNs and Dense. – Daniel Möller, Jan 5 at 20:57








• batch = movies; each movie is one sample/example. Unfortunately there is no automatic folder reading: Keras doesn't have a standard generator for movies. You must control your folder structure and image loading yourself in a custom generator. So it doesn't matter what your folder structure is, as long as you load and keep track of which frames belong to which movies. – Daniel Möller, Jan 7 at 1:38










