Graph disconnected when trying to create models in Keras with .get_layer method
Normally I do something like the following, and everything works fine:
from keras.layers import Input, Dense
from keras.models import Model
import keras.backend as K
import numpy as np
import tensorflow as tf
from sklearn.datasets import make_blobs

X, y = make_blobs(500, 50, 2)

def make_network1():
    input_layer = Input((50,))
    layer1 = Dense(100, name='network1_dense1')(input_layer)
    output = Dense(50, name='network1_dense2')(layer1)
    model = Model(input_layer, output)
    return model

def make_network2():
    input_layer = Input((50,))
    layer1 = Dense(100, name='network2_dense1')(input_layer)
    output = Dense(1, name='network2_output')(layer1)
    model = Model(input_layer, output)
    return model

network1 = make_network1()
network2 = make_network2()
output = network2(network1.output)
model = Model(network1.input, output)
Now I want to experiment with the .get_layer method and the .output attribute in Keras by replacing the last line of code with:

model = Model(network1.input, network2.get_layer('network2_output').output)
Then it gives me the following error:

Graph disconnected: cannot obtain value for tensor Tensor("input_4:0", shape=(?, 50), dtype=float32) at layer "input_4". The following previous layers were accessed without issue:
My Question

However, shouldn't output and network2.get_layer('network2_output').output be the same thing? When I print both of them, I get:

Tensor("model_14/network2_output/BiasAdd:0", shape=(?, 1), dtype=float32)

and

Tensor("network2_output_1/BiasAdd:0", shape=(?, 1), dtype=float32)

Also, network2 has already been connected to the output of network1, so I don't understand why the graph is disconnected. How can I make the code work with the .get_layer method and the .output attribute?

I am using keras==2.24 and tensorflow-gpu==1.5.
python tensorflow keras neural-network keras-layer
2 Answers
After running this line:

output = network2(network1.output)

the network2 model has two computation flows: one is the original flow constructed when running make_network2(), and the other is the flow that takes network1.output as its input, constructed when running the line above. Therefore, the model has two outputs, one for each of these computation flows:
>>> network2.get_output_at(0)
<tf.Tensor 'network2_output_4/BiasAdd:0' shape=(?, 1) dtype=float32>
>>> network2.get_output_at(1)
<tf.Tensor 'model_14/network2_output/BiasAdd:0' shape=(?, 1) dtype=float32>
Therefore, when you want to go from network1.input to the output of the network2 model, you must use the second output, which is the one connected to network1.input:

model = Model(network1.input, network2.get_output_at(1))

Essentially, network2.get_output_at(1) is equivalent to the output obtained in the line output = network2(network1.output).
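As a quick end-to-end check (a sketch only, not part of the original answer; it assumes network1, network2, X and y from the question are already in scope, and uses adam/mse purely as placeholders to confirm the graph wires up):

# Build the composite model from the second (connected) output node and
# confirm it compiles and trains on the blobs data from the question.
model = Model(network1.input, network2.get_output_at(1))
model.compile(optimizer='adam', loss='mse')
model.fit(X, y.reshape(-1, 1), epochs=1, batch_size=32, verbose=0)
print(model.output_shape)  # (None, 1)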
I understand it now. But what if I want to get the output of a middle layer? I can't find any equivalent of .get_layer_output_at(). Just curious whether similar things can be done with the middle layers instead of the final output layer.
– Raven Cheuk
Dec 31 '18 at 18:46
@RavenCheuk No, because you have applied the whole network2 model on network1.output, and since the Model class inherits from the Layer class in Keras, a model can be applied to a tensor as a single unit. Further, the call method of models is implemented such that the internal attributes of the middle layers are not updated; the layers are only called to compute their outputs. So we only have access to the new symbolic output of the model, not to those of its middle layers.
– today
Dec 31 '18 at 19:27
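For the middle layers, one possible workaround (a sketch, not from the original thread) is to call network2's internal layers individually on network1.output instead of calling the whole model, so every intermediate tensor is exposed; the names middle, final, middle_model and full_model are illustrative only.

# Call network2's layers one by one; get_layer returns the existing layer
# objects, so their weights are reused, but new graph nodes are created.
x = network1.output
middle = network2.get_layer('network2_dense1')(x)         # intermediate tensor is now accessible
final = network2.get_layer('network2_output')(middle)
middle_model = Model(network1.input, middle)              # model up to the middle layer
full_model = Model(network1.input, final)                 # same mapping as network2(network1.output)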
shouldn't output and network2.get_layer('network2_output').output be the same thing?

No, they are not the same thing.
Let me explain what is happening here:

network1 = make_network1()
network2 = make_network2()
output = network2(network1.output)

First you create two models, each with its own input layer, and then you feed the last layer's output of the first model into the second model in place of its input. This makes the input of the output variable the first model's input, so network1.input and output are connected.

But in the following line there is no connection between network1.input and network2.get_layer('network2_output').output, because that tensor still belongs to the graph built inside make_network2():

model = Model(network1.input, network2.get_layer('network2_output').output)
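To see the disconnection concretely, a short check (a sketch, not part of the original answer) shows that the layer's .output still traces back to the Input created inside make_network2(), not to network1:

layer = network2.get_layer('network2_output')
print(layer.output)               # Tensor("network2_output_1/BiasAdd:0", ...), fed by network2's own Input
print(network2.get_input_at(0))   # the Input((50,)) built inside make_network2()
print(network1.input)             # a different Input tensor, hence "Graph disconnected"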
Is it possible to force the new input to be the sole input of network2? I want to create my neural network by piecing several components together (one function per component). However, I am not sure if that is the correct way to do it.
– Raven Cheuk
Jan 2 at 4:08
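One common pattern for that kind of composition (a sketch under the assumption that sharing a single Input is acceptable; the names component1 and component2 are illustrative, not from the thread) is to have each building-block function take a tensor and return a tensor, so the whole network lives in one graph with a single input:

from keras.layers import Input, Dense
from keras.models import Model

def component1(x):
    # same layers as make_network1(), but applied to a caller-supplied tensor
    # (these are fresh layers; weights are not shared with the earlier models)
    x = Dense(100, name='network1_dense1')(x)
    return Dense(50, name='network1_dense2')(x)

def component2(x):
    # same layers as make_network2()
    x = Dense(100, name='network2_dense1')(x)
    return Dense(1, name='network2_output')(x)

inp = Input((50,))
out = component2(component1(inp))
model = Model(inp, out)   # one connected graph, no "Graph disconnected" error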