How to fit different inputs into different models?
I have 2 numpy arrays of images with the same shape but different content,
array1 and array2. The following are two different functions:
from keras.layers import (Input, Conv2D, BatchNormalization, MaxPooling2D,
                          Flatten, Dense, Dropout)
from keras.models import Model

def c_model(input_shape, name):
    c_conv1a = Conv2D(64, kernel_size=(7, 7), activation='relu')(input_shape)
    c_conv1a = BatchNormalization(axis=-1)(c_conv1a)
    c_conv1a = MaxPooling2D(pool_size=(2, 2))(c_conv1a)
    flatten = Flatten()(c_conv1a)
    fc = Dense(128, activation='relu')(flatten)
    fc = Dropout(0.3)(fc)
    fc = Dense(256, activation='relu')(fc)
    fc = Dropout(0.3)(fc)
    c_fc = Dense(1, activation='sigmoid', name=name)(fc)
    return c_fc

def g_model(input_shape, name):
    g_conv1a = Conv2D(64, kernel_size=(5, 5), activation='relu')(input_shape)
    g_conv1a = BatchNormalization(axis=-1)(g_conv1a)
    g_conv1a = MaxPooling2D(pool_size=(2, 2))(g_conv1a)
    flatten = Flatten()(g_conv1a)
    fc = Dense(128, activation='relu')(flatten)
    fc = Dropout(0.3)(fc)
    fc = Dense(256, activation='relu')(fc)
    fc = Dropout(0.3)(fc)
    g_fc = Dense(1, activation='sigmoid', name=name)(fc)
    return g_fc
After the following lines:
shape1 = Input(shape=(64,64,3))
shape2 = Input(shape=(64,64,3))
cmodel = c_model(shape1, "c")
gmodel = g_model(shape2, "g")
m = Model(inputs=[shape1, shape2], outputs=[cmodel, gmodel])
m.compile(...)
m.fit(x=[array1, array2], y=[output1, output2])
How do I make sure that array1 is being fitted to cmodel and array2 to gmodel?
Tags: python-3.x, keras
asked Jan 2 at 17:04 by mMargegaj, edited Jan 2 at 17:17 by Daniel Möller
3 Answers
Your computation graph already ensures that is the case: you have two disjoint models, c and g, bound to an outer model with 2 inputs and 2 outputs. The only way array1 can affect output1 is through the c model, and similarly for array2; therefore, when you train, the gradients with respect to each output will only update the corresponding model.
What you have is equivalent to:
shape1 = Input(shape=(64,64,3))
shape2 = Input(shape=(64,64,3))
cmodel_out = c_model(shape1, "c")
gmodel_out = g_model(shape2, "g")
cmodel = Model(shape1, cmodel_out)
gmodel = Model(shape2, gmodel_out)
# ... compile models
cmodel.fit(array1, output1)
gmodel.fit(array2, output2)
as far as the computation graph is concerned.
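One way to see this disjointness concretely is to ask which weights actually receive a gradient from each output. Below is a minimal sketch assuming TensorFlow 2.x's bundled tf.keras; the tiny 8x8 inputs and 4-filter convolutions are placeholders standing in for the question's real architecture:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.models import Model

# Two small disjoint branches, mirroring the structure in the question.
in1 = Input(shape=(8, 8, 3))
in2 = Input(shape=(8, 8, 3))
c_out = Dense(1, activation='sigmoid', name='c')(Flatten()(Conv2D(4, (3, 3))(in1)))
g_out = Dense(1, activation='sigmoid', name='g')(Flatten()(Conv2D(4, (3, 3))(in2)))
m = Model(inputs=[in1, in2], outputs=[c_out, g_out])

x1 = np.random.rand(2, 8, 8, 3).astype('float32')
x2 = np.random.rand(2, 8, 8, 3).astype('float32')

with tf.GradientTape() as tape:
    y_c, y_g = m([x1, x2])
    loss_c = tf.reduce_sum(y_c)  # a loss that depends on the c output only
grads = tape.gradient(loss_c, m.trainable_weights)

# The g branch's weights are unconnected to loss_c, so their gradient is None:
# the model has 8 weight tensors in total, and the 4 belonging to the g branch
# (Conv2D kernel/bias and Dense kernel/bias) come back as None.
none_count = sum(g is None for g in grads)
print(none_count)
```

During a real fit, each output's loss therefore only pushes updates into its own branch, which is exactly the guarantee asked about.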
answered Jan 2 at 17:17 by nuric

Thank you, I understand that, but I wanted to stack all the layers in one Model()...
– mMargegaj, Jan 2 at 18:31
It will be in the same order you defined. You defined [shape1, shape2], so that is the input order; you passed [array1, array2], so that is the order the arrays are consumed in. Likewise, you defined [cmodel, gmodel], so the targets you pass as [output1, output2] follow that same order.
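In other words, Keras pairs the lists positionally, exactly like zip would. A plain-Python sketch of the matching (the strings here stand in for the actual layers and arrays):

```python
# Order passed to Model(inputs=...) / Model(outputs=...):
inputs  = ["shape1", "shape2"]
outputs = ["cmodel", "gmodel"]
# Order passed to m.fit(x=...) / m.fit(y=...):
x = ["array1", "array2"]
y = ["output1", "output2"]

# The i-th input layer gets the i-th array; same for outputs and targets.
input_pairing = dict(zip(inputs, x))
output_pairing = dict(zip(outputs, y))
print(input_pairing)   # {'shape1': 'array1', 'shape2': 'array2'}
print(output_pairing)  # {'cmodel': 'output1', 'gmodel': 'output2'}
```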
answered Jan 2 at 17:18 by Daniel Möller

So the order is important? What if I have another function, let's say a_model: which numpy array will go to a_model?
– mMargegaj, Jan 2 at 18:25

You choose. You define the order.
– Daniel Möller, Jan 2 at 18:46
If I understand your question correctly, the way you are doing it already guarantees what you want. As already stated in the other answers, the order of the list elements determines which input numpy array will be fed into which input layer and which output layer will be compared against which output numpy array.
If you add a third input in the model constructor, e.g. m = Model(inputs=[shape1, shape2, shape3], ...), you will also need a third input numpy array, m.fit(x=[array1, array2, array3], ...); otherwise you will get an error. The same goes for outputs: with m = Model(outputs=[cmodel, gmodel, amodel], ...) you will also need a third output numpy array, m.fit(y=[output1, output2, output3], ...); otherwise you will get an error.
Note that there is no technical reason to have the same number of input and output layers. Only the two lists passed for inputs and x, and the two lists passed for outputs and y, must have the same size.
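For instance, a model can take two inputs and produce a single output by merging the branches. A small sketch assuming tf.keras (the layer sizes, optimizer, and loss are illustrative choices, not from the question):

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras.models import Model

# Two inputs, one output: inputs/x have length 2, outputs/y have length 1.
in1 = Input(shape=(4,))
in2 = Input(shape=(4,))
merged = Concatenate()([in1, in2])
out = Dense(1, activation='sigmoid', name='joint')(merged)

m = Model(inputs=[in1, in2], outputs=out)
m.compile(optimizer='adam', loss='binary_crossentropy')

x1 = np.random.rand(8, 4)
x2 = np.random.rand(8, 4)
y = np.random.randint(0, 2, size=(8, 1))
m.fit(x=[x1, x2], y=y, epochs=1, verbose=0)  # two x arrays, one y array
```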
If, for whatever reason, you don't want to rely on this matching by list item position, you can instead pass dictionaries to m.fit that map the names of the input and output layers to the corresponding numpy arrays:
shape1 = Input(shape=(64,64,3), name="input1")
shape2 = Input(shape=(64,64,3), name="input2")
cmodel = c_model(shape1, "c")
gmodel = g_model(shape2, "g")
m = Model(inputs=[shape1, shape2], outputs=[cmodel, gmodel])
m.compile(...)
m.fit(x={"input2": array2, "input1": array1}, y={"c": output1, "g": output2})
A few side notes: I recommend naming your variables differently. Your variables shape1 and shape2 are not shapes; they are input layers (that happen to have a certain shape), so I would rather call them input1 and input2, or input_layer1 and input_layer2.
Similarly, your variables cmodel and gmodel are not models; they are output layers of a model. Instead, m is your model.
As already mentioned in another answer, your two "models" are completely isolated, so I don't see a reason to combine them into one model (unless, of course, there is some connection that you didn't further explain to keep the question short).
I also recommend having a look at the Keras docs on multi-input and multi-output models.
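Putting the dictionary-based matching together, here is an end-to-end sketch assuming tf.keras; the tiny dense layers, the adam optimizer, the losses, and the random data are all illustrative stand-ins for the question's real setup:

```python
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Named input and output layers, so fit() can match by name instead of order.
input1 = Input(shape=(4,), name="input1")
input2 = Input(shape=(4,), name="input2")
c = Dense(1, activation="sigmoid", name="c")(Dense(8, activation="relu")(input1))
g = Dense(1, activation="sigmoid", name="g")(Dense(8, activation="relu")(input2))

m = Model(inputs=[input1, input2], outputs=[c, g])
m.compile(optimizer="adam",
          loss={"c": "binary_crossentropy", "g": "binary_crossentropy"})

array1 = np.random.rand(16, 4)
array2 = np.random.rand(16, 4)
output1 = np.random.randint(0, 2, size=(16, 1))
output2 = np.random.randint(0, 2, size=(16, 1))

# Key order in the dicts does not matter; the layer names do the matching.
history = m.fit(x={"input2": array2, "input1": array1},
                y={"c": output1, "g": output2},
                epochs=1, verbose=0)
```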
answered Jan 3 at 9:33 by sebrockm

Thank you very much for the detailed answer and side notes...
– mMargegaj, Jan 3 at 23:49