How does “conda install” handle the installation of CUDA and cuDNN for TensorFlow GPU?

I’m running:

- Python 3.6.7
- Anaconda 4.5.12
- GPU: Zotac RTX 2080 Ti
- Windows 10

I’m trying to verify that my system is actually using CUDA 9 and cuDNN, all of which I installed with a single command:

conda install -c anaconda tensorflow-gpu

This package seems to include CUDA 9 and cuDNN itself, so I assumed everything was working fine: I can train some GANs, my GPU temperature rises to roughly 70 degrees, and I can hear it working. :-)
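For context on what `conda install` does here: the `tensorflow-gpu` conda package pulls in `cudatoolkit` and `cudnn` as dependency packages, so conda places the CUDA and cuDNN libraries inside the environment itself rather than relying on a system-wide install. A minimal sketch to see whether those bundled libraries are present (an assumption-laden heuristic: it assumes the standard conda env layout, and file names vary by platform and version):

```python
import os
import sys

def find_bundled_cuda_libs(prefix=sys.prefix):
    """Look in the conda env's library directories for files whose
    names suggest the CUDA runtime or cuDNN (a heuristic, not an API)."""
    candidates = [
        os.path.join(prefix, "Library", "bin"),  # conda layout on Windows
        os.path.join(prefix, "lib"),             # conda layout on Linux/macOS
    ]
    hits = []
    for d in candidates:
        if os.path.isdir(d):
            for name in sorted(os.listdir(d)):
                low = name.lower()
                if "cudart" in low or "cudnn" in low:
                    hits.append(os.path.join(d, name))
    return hits

print(find_bundled_cuda_libs())
```

If this prints cudart/cudnn paths inside the environment, TensorFlow is loading the conda-bundled copies; `conda list` shows the same information from the package side.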



This is the output I get when running the code below to check whether TensorFlow can communicate with the GPU.



from tensorflow.python.client import device_lib

# List every compute device TensorFlow can see (CPU and GPU).
print(device_lib.list_local_devices())



2018-12-30 09:38:19.922499: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2018-12-30 09:38:20.282040: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.665 pciBusID: 0000:07:00.0 totalMemory: 11.00GiB freeMemory: 8.99GiB
2018-12-30 09:38:20.287430: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2018-12-30 09:38:21.762777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-30 09:38:21.766780: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
2018-12-30 09:38:21.768893: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
2018-12-30 09:38:21.770801: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/device:GPU:0 with 8665 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:07:00.0, compute capability: 7.5)
[name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 5100127316371337047, name: "/device:GPU:0" device_type: "GPU" memory_limit: 9086468751 locality { bus_id: 1 links { } } incarnation: 17768141356003925426 physical_device_desc: "device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:07:00.0, compute capability: 7.5"]




So, my question is: would training work at all (for example, training a GAN as described above) even if TensorFlow were not using cuDNN? And if so, how can I check? The reason I’m worried is that yesterday, running some different code, I saw an error that said something like “You might not be using cuDNN for this”. Sadly I couldn’t reproduce it, but it got me worried, because I don’t want to train anything more slowly than I potentially could.



I’ve also tried the following, but I still wasn’t sure how to interpret the result:



>>> import ctypes
>>> ctypes.WinDLL("cudnn64_7.dll")
<WinDLL 'cudnn64_7.dll', handle 7fff66600000 at 0x1674cc82128>
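A slightly more robust variant of the `ctypes` check above (a sketch, not anything TensorFlow itself does) wraps the load attempt so it yields a plain True/False, and uses the cross-platform `CDLL`. Treating `libcudnn.so.7` as the Linux counterpart of `cudnn64_7.dll` is an assumption based on cuDNN 7 naming conventions:

```python
import ctypes

def can_load(libname):
    """Return True if the dynamic loader can resolve `libname`."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# cudnn64_7.dll is the cuDNN 7 DLL name on Windows; libcudnn.so.7 is
# the usual name on Linux (assumption: cuDNN major version 7).
print(can_load("cudnn64_7.dll") or can_load("libcudnn.so.7"))
```

Note the limitation either way: a successful load only proves the loader can find the library on the search path, not that TensorFlow is actually dispatching convolution ops through it.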


PS: I know similar topics have been discussed extensively, but the RTX 2080 seems to be a special case because of a few common driver-incompatibility issues around it.



Thanks very much.
Tags: python tensorflow anaconda cudnn rtx
edited Dec 30 '18 at 16:28 by LaSul
asked Dec 30 '18 at 12:33 by unknownplayer