ModuleNotFoundError: No module named 'pyodbc' although package successfully installed




















I am trying to deploy Python code to Azure Functions. The function downloads some files from an FTP server, saves them in the function app directory of the host machine, and uploads them to an Azure SQL database.



The first part works correctly; the files are saved in the same directory as the main Python file.



When the SQLAlchemy methods are invoked (specifically create_engine()),
a ModuleNotFoundError: No module named 'pyodbc' is thrown
(the failing call inside SQLAlchemy is return __import__('pyodbc')).



When I try to import pyodbc directly, the same ModuleNotFoundError is thrown.



The code works fine locally on my machine.



I appended the path to the modules in the code, which is why SQLAlchemy loads (and other modules too, like feedparser).
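
To narrow this down, a quick diagnostic along these lines can be dropped into the function to see which interpreter runs it and whether the import system can locate pyodbc at all (find_spec is from the standard library):

import importlib.util
import sys

# which interpreter executes the function, and where it searches for modules
print(sys.executable)
print("\n".join(sys.path))

# find_spec returns None if the import system cannot locate the module;
# a spec pointing into env/Lib/site-packages means it is found on disk
# but may still fail to load for another reason
print(importlib.util.find_spec("pyodbc"))
print(importlib.util.find_spec("sqlalchemy"))

Note that pyodbc, unlike SQLAlchemy or feedparser, is a compiled C extension: a .pyd built for a different interpreter version or bitness is simply not matched by the module finder, which also surfaces as ModuleNotFoundError rather than a DLL load error.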



I installed the modules through the Kudu console, following this tutorial (tl;dr: create a venv and install with pip):



https://github.com/yokawasa/azure-functions-python-samples/blob/master/docs/install-python-modules.md
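
Roughly, the steps from that tutorial, run from the function's folder under D:\home\site\wwwroot in the Kudu console (the exact paths are illustrative):

python -m venv env
env\Scripts\activate.bat
python -m pip install pyodbc sqlalchemy feedparser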



OS of host machine: Microsoft Windows NT 10.0.14393.0 (64-bit)



# -*- coding: utf-8 -*-
import sys
import os

# make the venv's site-packages visible to the function's interpreter
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), 'env/Lib/site-packages')))

from ftplib import FTP
from ast import literal_eval
import csv
import json
from sqlalchemy import create_engine, MetaData, Table
import pyodbc


def post_processing(input, output, csv_counter, header):
    '''Removes unnecessary headers from the csv files and writes the data
    into a csv file'''
    # code...
    return csv_counter, header


def download():
    # first gets all the dates as the base for the full paths
    # afterwards build each possible path to a csv file
    # structure or dimensions of folders has to be known in advance
    # advantage: specific folders are not hardcoded
    # code...
    return response


def upload():
    postreqdata = json.loads(open(os.environ['req']).read())
    response = open(os.environ['res'], 'w')
    response.write("Finito" + postreqdata["name"])
    response.close()

    try:
        # dev settings (raw string so the backslash is not treated as an escape)
        f = open(r"connection\con_info_sink.txt", "r")
        connection_info = literal_eval(f.read())
        f.close()
    except FileNotFoundError:
        # prod settings
        connection_info = {
            "db_user": os.environ["DB_USER"],
            "db_password": os.environ["DB_PASSWORD"],
            "db_name": os.environ["DB_NAME"],
            "db_host": os.environ["DB_HOST"],
        }

    # note: no space before ?driver -- a stray space breaks the URL
    engine = create_engine(
        ("mssql+pyodbc://{db_user}:{db_password}@{db_host}/{db_name}"
         "?driver=ODBC+Driver+13+for+SQL+Server").format(**connection_info),
        convert_unicode=True)

    # orm mapping
    metadata = MetaData(engine)
    pcards = Table("pcardlog", metadata, autoload=True, schema='staging')

    col_names_pcard = pcards.columns.keys()
    fakedata_pcard = []

    with open('source.csv', newline='') as csvfile:
        reader = csv.reader(csvfile, delimiter=';')
        for row in reader:
            # map the first seven csv columns onto the table's columns
            fakedata_pcard.append(
                {col_names_pcard[i]: row[i] for i in range(7)})

    con = engine.connect()
    con.execute(pcards.insert(), fakedata_pcard)

    return response


if __name__ == "__main__":
    response = download()
    response = upload()


I expected pyodbc to load without problems, as I have already deployed a web app on Azure once; there I installed pyodbc from a wheelhouse, directly from the binaries.



I tried installing pyodbc from wheels for this project, too, but that changed nothing.
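
For reference, that attempt was essentially the following (the wheel filename is illustrative for a 64-bit CPython 3.6 build):

python -m pip install wheelhouse\pyodbc-4.0.25-cp36-cp36m-win_amd64.whl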



Update 1:
I found this post on Stack Overflow stating that installing custom drivers is not possible on Azure function apps:



ODBC or OleDB database drivers under Azure App Service



Update 2: After stumbling over a GitHub issue for the Azure Functions CLI

https://github.com/Azure/azure-functions-python-worker/issues/249#issuecomment-452478272

I tried deploying the function in a Linux environment with Docker, and pyodbc finally loaded. However, the driver specification for SQLAlchemy is now wrong.
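
On Linux, the driver name has to match whatever is registered in /etc/odbcinst.ini inside the container. A sketch of an adjusted engine URL, assuming the image ships Microsoft's msodbcsql17 driver (an assumption about the container, not something verified here):

import urllib.parse
from sqlalchemy import create_engine

# same shape as connection_info in upload(); values are placeholders
connection_info = {
    "db_user": "user",
    "db_password": "secret",
    "db_host": "myserver.database.windows.net",
    "db_name": "mydb",
}

# build the raw ODBC connection string and URL-encode it wholesale;
# "ODBC Driver 17 for SQL Server" must match the name in /etc/odbcinst.ini
odbc_str = (
    "DRIVER={{ODBC Driver 17 for SQL Server}};"
    "SERVER={db_host};DATABASE={db_name};"
    "UID={db_user};PWD={db_password}"
).format(**connection_info)

engine = create_engine(
    "mssql+pyodbc:///?odbc_connect=" + urllib.parse.quote_plus(odbc_str))

Passing the fully encoded string via odbc_connect sidesteps quoting problems in the driver parameter of the plain URL form.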



I will deploy the function in an Azure Web App instead, as I know it will work there.
Azure Functions with Python is in preview, so the functionality might change in the future.

python azure-functions python-import

edited Jan 9 at 15:59
asked Jan 4 at 17:13
Alexander Dadiani (113)

  • does the module show up when you try python -m pip list?

    – jmugz3
    Jan 8 at 21:31











  • @jmugz3 Yes it does. In fact, sqlalchemy does too and is imported without issues:

        PS> py -V
        Python 3.6.6
        PS> py -m pip list
        Package    Version
        ---------- -------
        feedparser 5.2.1
        pip        18.1
        pymssql    2.1.4
        pyodbc     4.0.25
        setuptools 39.0.1
        SQLAlchemy 1.2.15

    – Alexander Dadiani
    Jan 8 at 22:47

















