How can I multithread use of the “ImageAI” module's detectObjectsFromImage method?

I managed to get the following code to take a list of images from a directory, apply object detection to them using the ImageAI Python module, and save the "processed" images to a new directory. This works great when I specify only a single additional worker thread in the code below.



When I attempt to increase the number of threads (the range() value in the code below), it starts throwing this exception:




Cannot interpret feed_dict key as Tensor: Tensor
Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an
element of this graph.




I wasn't able to find anything in the docs that says multithreading isn't possible.
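From what I've gathered so far, this error usually means a Keras/TensorFlow 1.x model is being called from a thread whose default graph (and session) is not the one the model was built in. Below is a minimal, untested sketch of the workaround I've seen suggested most often for this situation: load a single detector in the main thread, capture tf.get_default_graph(), and have every worker re-enter that graph around the call. The detect_lock and the graph capture are my own additions, not documented ImageAI behaviour, and I'm assuming the underlying Keras model is not safe to call from several threads at once.

import os
import threading
from queue import Queue

import tensorflow as tf
from imageai.Detection import ObjectDetection

cwd = os.getcwd()
image_dir = os.path.join(cwd, 'TrainingImages')
processed_dir = os.path.join(cwd, 'ProcessedImages')

# Load ONE detector in the main thread and remember the graph it was built in.
detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(cwd, "resnet50_coco_best_v2.0.1.h5"))
detector.loadModel(detection_speed="fast")
graph = tf.get_default_graph()

# Serialize calls into the shared model; assuming it is not thread-safe.
detect_lock = threading.Lock()

def worker(q):
    while True:
        image_filename = q.get()
        try:
            if image_filename.endswith(".JPG"):
                # Re-enter the graph the model lives in before calling it.
                with graph.as_default(), detect_lock:
                    detections = detector.detectObjectsFromImage(
                        input_image=os.path.join(image_dir, image_filename),
                        output_image_path=os.path.join(processed_dir, 'processed_' + image_filename))
                for obj in detections:
                    print("{} : {}".format(obj["name"], obj["percentage_probability"]))
        except ValueError as e:
            print("Error processing {}: {}".format(image_filename, e))
        finally:
            q.task_done()

if __name__ == '__main__':
    q = Queue()
    for image_filename in os.listdir(image_dir):
        q.put(image_filename)

    for i in range(5):
        t = threading.Thread(target=worker, args=(q,))
        t.daemon = True
        t.start()

    q.join()

If this removes the error but doesn't actually speed anything up, that would make sense: the lock serializes the GPU work, so the threads mostly just overlap file handling.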



Some additional console output in case it's helpful (this is output when attempting 5 threads):



C:\Users\Me\Desktop\Project>cd c:\Users\Me\Desktop\Project && cmd /C "set "PYTHONIOENCODING=UTF-8" && set "PYTHONUNBUFFERED=1" && C:\Users\Me\AppData\Local\Programs\Python\Python36\python.exe c:\Users\Me\.vscode\extensions\ms-python.python-2018.12.1\pythonFiles\ptvsd_launcher.py --default --client --host localhost
--port 61076 c:\Users\Me\Desktop\Project\app.py "
Using TensorFlow backend.
2019-01-02 21:52:37.108429: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1070 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:03:00.0
totalMemory: 8.00GiB freeMemory: 6.61GiB
2019-01-02 21:52:37.113537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-01-02 21:52:38.396715: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-01-02 21:52:38.399994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-01-02 21:52:38.402189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-01-02 21:52:38.404115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6368
MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-01-02 21:52:38.419623: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-01-02 21:52:38.422946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-01-02 21:52:38.425552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-01-02 21:52:38.427139: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-01-02 21:52:38.428894: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6368
MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-01-02 21:52:38.436241: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-01-02 21:52:38.439749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-01-02 21:52:38.442604: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-01-02 21:52:38.444264: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-01-02 21:52:38.445908: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6368
MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-01-02 21:52:38.460772: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-01-02 21:52:38.463329: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-01-02 21:52:38.466661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-01-02 21:52:38.468411: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-01-02 21:52:38.470123: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6368
MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-01-02 21:52:38.476201: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2019-01-02 21:52:38.478613: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-01-02 21:52:38.484133: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0
2019-01-02 21:52:38.493996: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N
2019-01-02 21:52:38.496042: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6368
MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
Backend TkAgg is interactive backend. Turning interactive mode on.
----Errors out on this line----


Working code for single thread:



import os
import cv2
import threading
import subprocess
import tensorflow as tf

from queue import Queue

from imageai.Detection import ObjectDetection

cwd = os.getcwd()
image_dir = os.path.join(cwd, 'TrainingImages')
processed_dir = os.path.join(cwd, 'ProcessedImages')

def process_image(image_filename):
    detector = ObjectDetection()
    detector.setModelTypeAsRetinaNet()
    detector.setModelPath(os.path.join(cwd, "resnet50_coco_best_v2.0.1.h5"))
    detector.loadModel(detection_speed="fast")

    if image_filename.endswith(".JPG"):
        try:
            detections = detector.detectObjectsFromImage(input_image=os.path.join(image_dir, image_filename), output_image_path=os.path.join(processed_dir, 'processed_' + image_filename))
            return detections
        except ValueError as e:
            print("Error processing: {}: {}".format(os.path.join(image_dir, image_filename), e))
            return False


def worker(q):
    ''' The worker thread pulls an item from the queue and processes it '''
    while True:
        image_filename = q.get()
        detections = process_image(image_filename)

        if detections:
            for obj in detections:
                print("{} : {}".format(obj["name"], obj["percentage_probability"]))
        else:
            print("No objects detected")

        q.task_done()


if __name__ == '__main__':
    # Get all files in image directory
    image_list = os.listdir(image_dir)

    # Create Queue for images
    q = Queue()

    for image_filename in image_list:
        q.put(image_filename)

    for i in range(1):
        t = threading.Thread(target=worker, args=(q, ))
        t.daemon = True
        t.start()

    q.join()


Project utilizes the following:



Python version 3.6.5
Package Version
------------------- ---------
absl-py 0.6.1
astor 0.7.1
astroid 1.6.4
certifi 2018.4.16
chardet 3.0.4
colorama 0.3.9
cycler 0.10.0
gast 0.2.0
grpcio 1.17.1
h5py 2.9.0
idna 2.6
imageai 2.0.2
isort 4.3.4
Keras 2.2.4
Keras-Applications 1.0.6
Keras-Preprocessing 1.0.5
kiwisolver 1.0.1
lazy-object-proxy 1.3.1
Markdown 3.0.1
matplotlib 3.0.2
mccabe 0.6.1
numpy 1.15.4
opencv-python 3.4.5.20
Pillow 5.4.0
pip 18.1
pipenv 2018.5.18
protobuf 3.6.1
pylint 1.9.1
pyparsing 2.3.0
python-dateutil 2.7.5
PyYAML 3.13
requests 2.18.4
scipy 1.2.0
setuptools 39.0.1
six 1.11.0
tensorboard 1.12.1
tensorflow-gpu 1.12.0
termcolor 1.1.0
urllib3 1.22
virtualenv 16.0.0
virtualenv-clone 0.3.0
Werkzeug 0.14.1
wheel 0.32.3
wrapt 1.10.11


Question:



Can someone help me understand how to multithread the code above to allow processing multiple images asynchronously?



I tried to include as much info as possible.



Some additional info: it DOES appear to still run with multiple threads; however, every thread after the first throws the error above. It then continues on to process the first image and then essentially "hangs" (it just doesn't do anything). I'm not familiar enough with the terminology yet to know what the TensorFlow "Graph" means or what to search for to troubleshoot further. (A process-based alternative I'm also considering is sketched after the output below.)



Here's the output with 5 threads:



2019-01-02 22:23:53.417469: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6368 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
Error thrown: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an element of this graph.
Error thrown: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an element of this graph.
Error thrown: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an element of this graph.
Error thrown: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(7, 7, 3, 64), dtype=float32) is not an element of this graph.
person : 78.54841947555542
truck : 56.28065466880798

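For completeness, here is the process-based alternative mentioned above, equally untested: since each process gets its own interpreter, it also gets its own TensorFlow graph and session, which sidesteps the graph-sharing problem entirely. The init_worker helper and the pool size of 2 are just illustrative guesses; each process loads the full RetinaNet model and TensorFlow claims GPU memory per process, so a large pool would likely exhaust the 8 GB card.

import os
from multiprocessing import Pool

from imageai.Detection import ObjectDetection

cwd = os.getcwd()
image_dir = os.path.join(cwd, 'TrainingImages')
processed_dir = os.path.join(cwd, 'ProcessedImages')

detector = None  # one detector per worker process, created in init_worker()

def init_worker():
    # Runs once in each child process: build that process's own detector
    # (and therefore its own TensorFlow graph/session).
    global detector
    detector = ObjectDetection()
    detector.setModelTypeAsRetinaNet()
    detector.setModelPath(os.path.join(cwd, "resnet50_coco_best_v2.0.1.h5"))
    detector.loadModel(detection_speed="fast")

def process_image(image_filename):
    if not image_filename.endswith(".JPG"):
        return image_filename, []
    detections = detector.detectObjectsFromImage(
        input_image=os.path.join(image_dir, image_filename),
        output_image_path=os.path.join(processed_dir, 'processed_' + image_filename))
    return image_filename, detections

if __name__ == '__main__':
    # Pool size of 2 is a guess; raise it only if GPU memory allows.
    with Pool(processes=2, initializer=init_worker) as pool:
        for filename, detections in pool.imap_unordered(process_image, os.listdir(image_dir)):
            if detections:
                for obj in detections:
                    print("{} - {} : {}".format(filename, obj["name"], obj["percentage_probability"]))
            else:
                print("{} - no objects detected".format(filename))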