Keras model with tf.contrib.losses.metric_learning.triplet_semihard_loss: AssertionError
I am using Python 3 with Anaconda, and I am trying to use a tf.contrib loss function with a Keras model.
The code is the following:
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from keras.models import Sequential
from tensorflow.contrib.losses import metric_learning
model = Sequential()
model.add(Flatten(input_shape=input_shape))
model.add(Dense(50, activation="relu"))
model.compile(loss=metric_learning.triplet_semihard_loss, optimizer=Adam())
I get the following error:
File "/home/user/.local/lib/python3.6/site-packages/keras/engine/training_utils.py", line 404, in weighted
    score_array = fn(y_true, y_pred)
File "/home/user/anaconda3/envs/siamese/lib/python3.6/site-packages/tensorflow/contrib/losses/python/metric_learning/metric_loss_ops.py", line 179, in triplet_semihard_loss
    assert lshape.shape == 1
AssertionError
When I use the same network with a Keras loss function it works fine. I tried to wrap the TF loss function in a function like so:
def func(y_true, y_pred):
    import tensorflow as tf
    return tf.contrib.losses.metric_learning.triplet_semihard_loss(y_true, y_pred)
And I am still getting the same error.
What am I doing wrong here?
Update:
When I change func to return the following:
return K.categorical_crossentropy(y_true, y_pred)
everything works fine!
But I can't get it to work with this specific TF loss function...
When I go into tf.contrib.losses.metric_learning.triplet_semihard_loss and remove this line of code: assert lshape.shape == 1, it runs fine.
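For reference, a minimal sketch of a wrapper that keeps the assert in place, assuming the problem is that Keras hands the loss a 2-D y_true while triplet_semihard_loss expects 1-D integer labels (that is my guess, not something I have verified):

import tensorflow as tf

def triplet_loss_wrapper(y_true, y_pred):
    # flatten the labels Keras passes in to shape [batch_size] and cast to int32
    labels = tf.cast(tf.reshape(y_true, [-1]), tf.int32)
    return tf.contrib.losses.metric_learning.triplet_semihard_loss(
        labels=labels, embeddings=y_pred)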
Thanks
python tensorflow machine-learning keras deep-learning
Still not clear where exactly your error pops up; is it during fit? During compile? Posting the full error trace would be a good idea... – desertnaut, Jan 1 at 15:17
@desertnaut the error is in the compile function. When I go into tf.contrib.losses.metric_learning.triplet_semihard_loss and remove this line of code: assert lshape.shape == 1, it runs fine. – thebeancounter, Jan 1 at 15:21
Hello, I had the same problem, but the solution turned out to be easy: you just swap the arguments. First pass the labels, then the embeddings. – Жасулан Бердибеков, Jan 20 at 14:49
2 Answers
The problem is that you pass the wrong input to the loss function.
According to the triplet_semihard_loss docstring, you need to pass labels and embeddings.
So your code has to be:
def func(y, embeddings):
    return tf.contrib.losses.metric_learning.triplet_semihard_loss(labels=y, embeddings=embeddings)
And two more notes about the network for embeddings:
The last dense layer has to be without activation.
Don't forget to normalise the output vector:
model.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))
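Putting these points together, a minimal end-to-end sketch of the model from the question (the input shape, the flattening of y_true, and the use of integer class labels are assumptions of mine, not part of this answer):

import tensorflow as tf
from keras import backend as K
from keras.layers import Dense, Flatten, Lambda
from keras.models import Sequential
from keras.optimizers import Adam

def triplet_loss(y_true, y_pred):
    # y_true: integer class labels, flattened to shape [batch_size]
    # y_pred: the (already l2-normalised) embedding vectors
    labels = tf.cast(tf.reshape(y_true, [-1]), tf.int32)
    return tf.contrib.losses.metric_learning.triplet_semihard_loss(
        labels=labels, embeddings=y_pred)

input_shape = (28, 28)  # placeholder; substitute your real input shape

model = Sequential()
model.add(Flatten(input_shape=input_shape))
model.add(Dense(50))                                    # embedding layer, no activation
model.add(Lambda(lambda x: K.l2_normalize(x, axis=1)))  # l2-normalise the embeddings
model.compile(loss=triplet_loss, optimizer=Adam())

With this setup, the y passed to model.fit would be the integer class labels rather than one-hot vectors.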
Why would you say the input is wrong? labels=y_true seems to me similar to what you are writing here. – thebeancounter, Feb 12 at 8:38
It seems that your problem comes from an incorrect input to the loss function. In fact, the triplet loss wants these parameters:
Args:
labels: 1-D tf.int32 `Tensor` with shape [batch_size] of
multiclass integer labels.
embeddings: 2-D float `Tensor` of embedding vectors. Embeddings should
be l2 normalized.
Are you sure that y_true has the correct shape? Can you give us more details about the tensors you are using?
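To see the shapes the docstring describes, a small standalone sketch (TF 1.x-style; the batch size, label values, and embedding dimension are made up for illustration) that calls the loss directly:

import numpy as np
import tensorflow as tf

# labels: 1-D int32 tensor of shape [batch_size]
labels = tf.constant([0, 1, 0, 2], dtype=tf.int32)

# embeddings: 2-D float tensor of shape [batch_size, embedding_dim] with l2-normalized rows
emb = np.random.rand(4, 50).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
embeddings = tf.constant(emb)

loss = tf.contrib.losses.metric_learning.triplet_semihard_loss(labels, embeddings)
with tf.Session() as sess:
    print(sess.run(loss))  # a scalar loss value; no AssertionError with 1-D labels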
This is crashing before even getting y_true or any input, for that matter. It crashes in the compile stage. – thebeancounter, Jan 1 at 20:20
It crashes before running any computation because TensorFlow builds the computation graph first, so if there is a mismatch among the tensors' dimensions it will raise an error even before any placeholder is fed a value. That is absolutely normal; any other information may help to debug your code. – gabriele, Jan 1 at 22:17
The issue is that in the loss function there is an assert checking that the y_true shape is 2-D or more, but this loss function uses y_true as a one-dimensional vector of integers (labels), so it does not make any sense. I removed the assert and it's working fine @gabriele. – thebeancounter, Jan 2 at 10:26