Running successive or multi-threaded model.fit causes ValueError in keras

I am using hyperopt and Keras to build neural networks automatically, running one asynchronous task per target value that I want to regress on the same dataset. (I'm aware that a single network can be trained to output multiple regression targets, but this is how I'm doing it right now.)



Let's say I'm using one worker. After each task finishes, the next task raises:



Traceback (most recent call last):
  File "C:\Program Files\Python36\lib\threading.py", line 916, in _bootstrap_inner
    self.run()
  File "C:\Program Files\Python36\lib\threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Program Files\Python36\lib\multiprocessing\pool.py", line 489, in _handle_results
    task = get()
  File "C:\Program Files\Python36\lib\multiprocessing\connection.py", line 251, in recv
    return _ForkingPickler.loads(buf.getbuffer())
  File "C:\Program Files\Python36\lib\site-packages\keras\engine\network.py", line 1266, in __setstate__
    model = saving.unpickle_model(state)
  File "C:\Program Files\Python36\lib\site-packages\keras\engine\saving.py", line 435, in unpickle_model
    return _deserialize_model(f)
  File "C:\Program Files\Python36\lib\site-packages\keras\engine\saving.py", line 258, in _deserialize_model
    .format(len(layer_names), len(filtered_layers))
ValueError: You are trying to load a weight file containing 21 layers into a model with 0 layers


I've read somewhere that this can be caused by not specifying the correct input shape; however, before adding any other layers I always add an InputLayer(X) to the model.



The traceback doesn't pass through any of my own code, so I was rather puzzled.



Eventually I figured out that this is caused by returning the model from the function called via apply_async. Both .wait() and .get() produce the same error.
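For context, this failure mode can be reproduced in miniature with pickle alone: Pool sends a task's return value back to the parent by pickling it, so an object whose unpickling fails blows up in the pool's result-handler thread rather than in your own code, exactly as in the traceback above. Here FragileModel and _explode are hypothetical stand-ins for the Keras model and keras' internal _deserialize_model:

```python
import pickle

def _explode():
    # Mimics keras' _deserialize_model raising during unpickling
    raise ValueError("You are trying to load a weight file containing "
                     "21 layers into a model with 0 layers")

class FragileModel:
    """Hypothetical stand-in: pickles fine, but fails to unpickle."""
    def __reduce__(self):
        # Unpickling this object calls _explode(), which raises
        return (_explode, ())

blob = pickle.dumps(FragileModel())  # succeeds in the worker process
try:
    pickle.loads(blob)               # fails on the receiving (parent) side
except ValueError as exc:
    print(exc)                       # the keras-style ValueError message
```

The point is that the pickling happens in the worker and the failing unpickling happens in the parent's result-handler thread, which is why the traceback never reaches user code.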



My question therefore is: is returning a value viable when using apply_async and multiprocessing.Pool with Keras (per the Python docs, .get() does work for me with simple functions), or should I e.g. store the results by index in a global variable, or am I doing this in an inherently wrong way?
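For what it's worth, one common workaround is to never pass the Keras model object itself through the pool: save it to disk inside the worker and return only plainly picklable data such as the loss and the file path, then reload whichever model you need in the parent with keras.models.load_model. A minimal sketch, where build_model and the .h5 paths are hypothetical placeholders:

```python
import multiprocessing as mp

def train_one(target):
    # Inside the worker (hypothetical Keras part, commented out here):
    #   model = build_model()                    # your hyperopt-built model
    #   model.fit(X, y[:, target])
    #   model.save("model_%d.h5" % target)       # keep the model out of pickle
    loss = target / 10.0                         # stand-in for the real loss
    # Return only plain Python data, which pickles safely back to the parent
    return {"target": target, "loss": loss, "path": "model_%d.h5" % target}

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        handles = [pool.apply_async(train_one, (t,)) for t in range(4)]
        results = [h.get() for h in handles]     # no Keras objects cross here
    print(sorted(r["target"] for r in results))  # -> [0, 1, 2, 3]
```

With this shape, .get() only ever unpickles dicts of built-in types, so the Keras deserialization path is never exercised in the result-handler thread.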










  • I've currently rolled back to a single threaded solution, I'll see if I can fix it eventually... :)

    – Joery
    Jan 2 at 15:28
















python multithreading keras hyperopt






asked Jan 2 at 11:32









Joery
