How to train an image classification model combined with numeric data using TensorFlow

Forgive my attempts; I'm still a noob in deep learning and TensorFlow.



I have two streams of training data: one is images, the other is continuous numeric features. I use the Keras ImageDataGenerator for the images and tf.estimator.inputs.numpy_input_fn for the numerics. The two streams are ordered consistently with each other (sample i in one corresponds to sample i in the other).



Credit goes to https://github.com/keras-team/keras/issues/8130#issuecomment-336855177 for the idea of a single generator that combines both.



From there, I came up with the following, where:



slice_and_order: yields the right subset of images from the folder depending on whether it is the training or the validation set.



import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator

input_imgen = ImageDataGenerator(rescale=1./255,
                                 shear_range=0.2,
                                 zoom_range=0.2,
                                 rotation_range=5.,
                                 horizontal_flip=True)

test_imgen = ImageDataGenerator(rescale=1./255)


def slice_and_order(gen, cut, train_boolean):
    # Yield only the training (or only the validation) part of the directory
    # iterator, based on each sample's position in the (unshuffled) file list.
    filenames = gen.filenames  # [idx : idx + gen.batch_size]
    filenames = [int(filename.split('/')[1].split('.')[0]) for filename in filenames]
    co = -1
    length = len(filenames)
    for i in gen:
        idx = (gen.batch_index - 1) * gen.batch_size
        current = int(gen.filenames[idx].split('/')[1].split('.')[0])
        co += 1
        if train_boolean:
            if co % length < cut:
                yield i
            else:
                continue
        else:
            if co % length < cut:
                continue
            else:
                yield i


def generate_generator_multiple(generator, dir1, batch_size, img_height, img_width, train_boolean):
    genX1 = generator.flow_from_directory(dir1,
                                          target_size=(img_height, img_width),
                                          class_mode='categorical',
                                          batch_size=1,
                                          shuffle=False,
                                          seed=7)
    # ind_train, res_train_flow and res_valid_flow are defined elsewhere in my code.
    if train_boolean:
        genX1_ = slice_and_order(genX1, len(ind_train), train_boolean)
    else:
        genX1_ = slice_and_order(genX1, len(ind_train), train_boolean)

    with tf.Session() as session:
        if train_boolean:
            genX2 = tf.estimator.inputs.numpy_input_fn(
                res_train_flow['x'], res_train_flow['y'], batch_size=1, shuffle=False, num_epochs=1)
        else:
            genX2 = tf.estimator.inputs.numpy_input_fn(
                res_valid_flow['x'], res_valid_flow['y'], batch_size=1, shuffle=False, num_epochs=1)

        while True:
            val = genX1.next()
            features, target = genX2()
            yield [val[0], features], target  # yield both inputs and their mutual label


inputgenerator = generate_generator_multiple(generator=input_imgen,
                                             dir1="images/",
                                             batch_size=1,
                                             img_height=96,
                                             img_width=96,
                                             train_boolean=True)

validgenerator = generate_generator_multiple(generator=test_imgen,
                                             dir1="images/",
                                             batch_size=1,
                                             img_height=96,
                                             img_width=96,
                                             train_boolean=False)
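
For reference, a combined generator like this would then be consumed roughly as follows (as in the linked Keras issue); `model` here is just a placeholder for a two-input model, and the step and epoch counts are illustrative rather than my real settings:

# Sketch only: `model` stands for a (not yet defined) Keras model that takes
# [image, numeric_features] as its inputs; step/epoch counts are placeholders.
model.fit_generator(inputgenerator,
                    steps_per_epoch=len(ind_train),
                    epochs=10,
                    validation_data=validgenerator,
                    validation_steps=100)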


Now my problem is the heterogeneity of the input data, which looks like this:



([array([[[[0.8980393 , 0.9215687 , 0.9058824 ],
[0.8965367 , 0.9200661 , 0.90437984],
[0.89043367, 0.9139631 , 0.8982768 ],
...,
[0.7747699 , 0.79437774, 0.77084833],
[0.77384806, 0.7934559 , 0.7699265 ],
[0.7729261 , 0.79253393, 0.7690045 ]],
...,

[[0.7760461 , 0.7721245 , 0.7525167 ],
[0.77819717, 0.7742756 , 0.75466776],
[0.7803483 , 0.77642673, 0.7568189 ],
...,
[0.7188979 , 0.7385057 , 0.7502704 ],
[0.70090973, 0.7205176 , 0.7322823 ],
[0.7019608 , 0.72156864, 0.73333335]],

[[0.8027476 , 0.79882604, 0.7792182 ],
[0.8032237 , 0.7997209 , 0.78011304],
[0.8016872 , 0.7991063 , 0.77949846],
...,
[0.7273756 , 0.74698347, 0.7587482 ],
[0.7003445 , 0.7199524 , 0.7317171 ],
[0.7019608 , 0.72156864, 0.73333335]]]], dtype=float32),
{'max_lat': <tf.Tensor 'fifo_queue_DequeueUpTo_9:1' shape=(?,) dtype=float64>,
'max_lon': <tf.Tensor 'fifo_queue_DequeueUpTo_9:2' shape=(?,) dtype=float64>,
'min_lat': <tf.Tensor 'fifo_queue_DequeueUpTo_9:3' shape=(?,) dtype=float64>,
'min_lon': <tf.Tensor 'fifo_queue_DequeueUpTo_9:4' shape=(?,) dtype=float64>}],
<tf.Tensor 'fifo_queue_DequeueUpTo_9:5' shape=(?,) dtype=float64>)


Can I use one model with this single combined input? Or should I keep the generators separate, with two models (convolutional for the images, logistic for the numerics), feed each model separately, and then combine both for the output? That seems complex to me.

python tensorflow keras

asked Dec 30 '18 at 12:19
Curcuma_

  • You can use one model for both image and numeric inputs. The key is how your network structure connects the features.

    – giser_yugang
    Dec 30 '18 at 12:26
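
For illustration, a minimal sketch of the kind of single multi-input model the comment suggests, using the Keras functional API. The layer sizes are arbitrary, num_classes is a placeholder, and the four numeric features (max_lat, max_lon, min_lat, min_lon) are taken from the sample output in the question:

from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate

num_classes = 10  # placeholder: number of image classes

# Image branch: a small convolutional stack over the 96x96 RGB input.
image_input = Input(shape=(96, 96, 3), name='image')
x = Conv2D(32, (3, 3), activation='relu')(image_input)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)

# Numeric branch: the four continuous features as a plain vector.
numeric_input = Input(shape=(4,), name='numeric')  # max_lat, max_lon, min_lat, min_lon
y = Dense(16, activation='relu')(numeric_input)

# Connect the two branches and classify.
combined = concatenate([x, y])
z = Dense(64, activation='relu')(combined)
output = Dense(num_classes, activation='softmax')(z)

model = Model(inputs=[image_input, numeric_input], outputs=output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

A generator feeding such a model would need to yield plain NumPy arrays for both inputs, i.e. ([image_batch, numeric_batch], label_batch), rather than the symbolic tensors produced by numpy_input_fn.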