Locking a file in Python
I need to lock a file for writing in Python. It will be accessed from multiple Python processes at once. I have found some solutions online, but most fail for my purposes, as they are often only Unix-based or Windows-based.
python file-locking
edited Sep 29 '13 at 22:29 by user212218
asked Jan 28 '09 at 23:20 by Evan Fosmark
12 Answers
Alright, so I ended up going with the code I wrote here, on my website (link is dead; view it on archive.org, also available on GitHub). I can use it in the following fashion:
from filelock import FileLock

with FileLock("myfile.txt"):
    # work with the file as it is now locked
    print("Lock acquired.")
answered Jan 31 '09 at 8:30 by Evan Fosmark; edited Feb 19 '18 at 7:40 by Reigel

As noted by a comment at the blog post, this solution isn't "perfect", in that it's possible for the program to terminate in such a way that the lock is left in place and you have to manually delete the lock before the file becomes accessible again. However, that aside, this is still a good solution. – leetNightshade, Nov 8 '12 at 21:27

The link is now dead, unfortunately. – bk0, Jan 21 '14 at 18:57

Yet another improved version of Evan's FileLock can be found here: github.com/ilastik/lazyflow/blob/master/lazyflow/utility/… – Stuart Berg, Feb 20 '14 at 16:21

OpenStack did publish their own (well, Skip Montanaro's) implementation, pylockfile. Very similar to the ones mentioned in previous comments, but still worth taking a look. – jweyrich, Dec 19 '14 at 13:40

@jweyrich OpenStack's pylockfile is now deprecated. It is advised to use fasteners or oslo.concurrency instead. – harbun, Apr 28 '16 at 9:10
There is a cross-platform file locking module here: Portalocker
Although as Kevin says, writing to a file from multiple processes at once is something you want to avoid if at all possible.
If you can shoehorn your problem into a database, you could use SQLite. It supports concurrent access and handles its own locking.
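If you go the SQLite route, a minimal sketch using only the standard-library sqlite3 module might look like the following (the database file and table names are made up for illustration). SQLite serializes writers itself, and the timeout argument tells a connection how long to wait on a competing writer's lock before raising "database is locked":

```python
import sqlite3

# Each process opens its own connection; SQLite coordinates the writers.
# timeout: seconds to wait if another process holds the write lock.
conn = sqlite3.connect("shared.db", timeout=10.0)
conn.execute("CREATE TABLE IF NOT EXISTS log (ts TEXT, message TEXT)")

# Each INSERT runs inside its own transaction, so competing processes
# never interleave partial writes.
with conn:
    conn.execute("INSERT INTO log VALUES (datetime('now'), ?)", ("hello",))

rows = conn.execute("SELECT COUNT(*) FROM log").fetchone()
print(rows[0])
conn.close()
```

This sidesteps file locking entirely: every process simply opens the same database file, and contention shows up only as a short wait inside sqlite3.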
answered Jan 29 '09 at 1:01 by John Fouhy; edited Jun 22 '15 at 9:34 by Wolph

+1 -- SQLite is almost always the way to go in these kinds of situations. – cdleary, Jan 29 '09 at 5:38

Portalocker requires the Python for Windows Extensions, by the way. – n611x007, Feb 21 '13 at 9:59

@naxa there is a variant of it which relies only on msvcrt and ctypes; see roundup.hg.sourceforge.net/hgweb/roundup/roundup/file/tip/… – Shmil The Cat, Apr 15 '13 at 21:21

@n611x007 Portalocker has just been updated so it doesn't require any extensions on Windows anymore :) – Wolph, Sep 6 '16 at 0:03
I prefer lockfile — Platform-independent file locking
answered Jul 27 '10 at 13:04 by ferrdo; edited Mar 15 '13 at 16:05 by Janus Troelsen

This library seems well written, but there's no mechanism for detecting stale lock files. It tracks the PID that created the lock, so it should be possible to tell if that process is still running. – sherbang, Dec 28 '11 at 19:06

@sherbang: what about remove_existing_pidfile? – Janus Troelsen, Mar 15 '13 at 16:06

@JanusTroelsen the pidlockfile module doesn't acquire locks atomically. – sherbang, Mar 15 '13 at 20:25

@sherbang Are you sure? It opens the lock file with mode O_CREAT|O_EXCL. – mhsmith, Jun 21 '13 at 14:53

@rgove You're correct, and I misspoke. Yes, it obtains locks atomically. What I should have said is that it doesn't allow for an atomic way to deal with stale locks, although I can't recall now whether there is a way to handle that atomically. – sherbang, Jun 24 '13 at 8:43
Locking is platform- and device-specific, but generally, you have a few options:
- Use flock(), or an equivalent (if your OS supports it). This is advisory locking: unless you check for the lock, it's ignored.
- Use a lock-copy-move-unlock methodology, where you copy the file, write the new data, then move it into place (move, not copy; move is an atomic operation on Linux, but check your OS), and you check for the existence of the lock file.
- Use a directory as a "lock". This is necessary if you're writing to NFS, since NFS doesn't support flock().
- There's also the possibility of using shared memory between the processes, but I've never tried that; it's very OS-specific.
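To make the first option concrete, here is a minimal sketch of advisory flock() locking on POSIX systems (the file name is made up; this will not run on Windows, where msvcrt.locking is the rough equivalent):

```python
import fcntl
import os
from contextlib import contextmanager

@contextmanager
def locked(path):
    # Open the file (creating it if needed) and take an exclusive
    # advisory lock. Other processes calling flock() on the same file
    # block until we release it; processes that never call flock() are
    # unaffected, which is exactly what "advisory" means.
    fd = os.open(path, os.O_RDWR | os.O_CREAT)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)
        yield fd
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)

# Critical section: exclusive access among cooperating processes.
with locked("shared.txt") as fd:
    os.write(fd, b"exclusive access\n")
```

Because the lock lives on the open file descriptor, the kernel drops it automatically if the process crashes, which avoids the stale-lock-file problem of sentinel-based schemes.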
For all these methods, you'll have to use a spin-lock (retry-after-failure) technique for acquiring and testing the lock. This leaves a small window for mis-synchronization, but it's generally small enough not to be a major issue.
If you're looking for a solution that is cross platform, then you're better off logging to another system via some other mechanism (the next best thing is the NFS technique above).
Note that sqlite is subject to the same constraints over NFS that normal files are, so you can't write to an sqlite database on a network share and get synchronization for free.
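The directory-as-lock option works because mkdir is atomic: it either creates the directory or fails, never half-succeeds, even on many NFS setups. A minimal sketch combining it with the spin-lock retry described above (the directory name and timings are illustrative):

```python
import os
import time

LOCKDIR = "mydata.lock"

def acquire(timeout=10.0, delay=0.05):
    # Spin until we manage to create the lock directory, or give up.
    deadline = time.monotonic() + timeout
    while True:
        try:
            os.mkdir(LOCKDIR)  # atomic: only one process can succeed
            return
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError("could not acquire " + LOCKDIR)
            time.sleep(delay)  # retry-after-failure (spin lock)

def release():
    os.rmdir(LOCKDIR)

acquire()
try:
    # Critical section: write the shared file here.
    held = os.path.isdir(LOCKDIR)
finally:
    release()
```

The weakness, as noted in the comments on other answers, is that a crash between acquire() and release() leaves a stale lock directory that must be detected (e.g. by age) or removed by hand.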
4
Note: Move/Rename is not atomic in Win32. Reference: stackoverflow.com/questions/167414/… – sherbang, Dec 27 '11 at 21:28

New note: os.rename is now atomic in Win32 since Python 3.3: bugs.python.org/issue8828 – Ghostkeeper, Aug 29 '16 at 1:27
The other solutions cite a lot of external code bases. If you would prefer to do it yourself, here is some code for a cross-platform solution that uses the respective file locking tools on POSIX and Windows systems.
try:
    # POSIX-based file locking (Linux, Ubuntu, MacOS, etc.)
    import fcntl, os
    def lock_file(f):
        fcntl.lockf(f, fcntl.LOCK_EX)
    def unlock_file(f):
        fcntl.lockf(f, fcntl.LOCK_UN)
except ModuleNotFoundError:
    # Windows file locking
    import msvcrt, os
    def file_size(f):
        return os.path.getsize(os.path.realpath(f.name))
    def lock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_RLCK, file_size(f))
    def unlock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, file_size(f))

# Class for ensuring that all file operations are atomic; treat
# initialization like a standard call to 'open' that happens to be atomic.
# This file opener *must* be used in a "with" block.
class AtomicOpen:
    # Open the file with arguments provided by user. Then acquire
    # a lock on that file object (WARNING: advisory locking).
    def __init__(self, path, *args, **kwargs):
        # Open the file and acquire a lock on it before operating.
        self.file = open(path, *args, **kwargs)
        # Lock the opened file.
        lock_file(self.file)

    # Return the opened file object (knowing a lock has been obtained).
    def __enter__(self, *args, **kwargs):
        return self.file

    # Unlock the file and close the file object.
    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        # Flush to make sure all buffered contents are written to file.
        self.file.flush()
        os.fsync(self.file.fileno())
        # Release the lock on the file.
        unlock_file(self.file)
        self.file.close()
        # Handle exceptions that may have come up during execution; by
        # default any exceptions are raised to the user.
        return exc_type is None
Now, AtomicOpen can be used in a with block where one would normally use an open statement.
WARNING: If running on Windows and Python crashes before __exit__ is called, I'm not sure what the lock behavior would be.
WARNING: The locking provided here is advisory, not absolute. All potentially competing processes must use the "AtomicOpen" class.
Shouldn't unlock_file on Linux call fcntl again, with the LOCK_UN flag? – eadmaster, Nov 16 '18 at 15:26

The unlock automatically happens when the file object is closed. However, it was bad programming practice of me not to include it. I've updated the code and added the fcntl unlock operation! – Thomas Lux, Dec 3 '18 at 15:36

In __exit__ you close outside of the lock, after unlock_file. I believe the runtime could flush (i.e., write) data during close. I believe one must flush and fsync under the lock to make sure no additional data is written outside the lock during close. – Benjamin Bannier, Jan 7 at 8:44

Thanks for the correction! I verified that there is the possibility of a race condition without the flush and fsync. I've added the two lines you suggested before calling unlock. I re-tested and the race condition appears to be resolved. – Thomas Lux, Jan 8 at 23:51

The only thing that will go "wrong" is that by the time process 1 locks the file, its contents will be truncated (contents erased). You can test this yourself by adding another file "open" with a "w" to the code above before the lock. This is unavoidable, though, because you must open the file before locking it. To clarify, the "atomic" is in the sense that only legitimate file contents will be found in the file: you will never get a file with contents from multiple competing processes mixed together. – Thomas Lux, Jan 15 at 19:27
Coordinating access to a single file at the OS level is fraught with all kinds of issues that you probably don't want to solve.
Your best bet is to have a separate process that coordinates read/write access to that file.
"separate process that coordinates read/write access to that file" - in other words, implement a database server :-) – Eli Bendersky, Jan 31 '09 at 8:39

This is actually the best answer. Just saying "use a database server" is overly simplified, as a db is not always going to be the right tool for the job. What if it needs to be a plain text file? A good solution might be to spawn a child process and then access it via a named pipe, unix socket, or shared memory. – Brendon Crawford, Jul 22 '11 at 4:55

-1 because this is just FUD without explanation. Locking a file for writing seems like a pretty straightforward concept that OSes offer up with functions like flock. An approach of "roll your own mutexes and a daemon process to manage them" seems like a rather extreme and complicated way to solve... a problem you haven't actually told us about, but just scarily suggested exists. – Mark Amery, May 10 '16 at 11:38
I have been looking at several solutions for this, and my choice is oslo.concurrency. It's powerful and relatively well documented. It's based on fasteners.

Other solutions:
- Portalocker: requires pywin32, which is an exe installation, so not possible via pip
- fasteners: poorly documented
- lockfile: deprecated
- flufl.lock: NFS-safe file locking for POSIX systems
- simpleflock: last update 2013-07
- zc.lockfile: last update 2016-06 (as of 2017-03)
- lock_file: last update 2007-10
Re: Portalocker, you can now install pywin32 through pip via the pypiwin32 package. – Timothy Jannace, Sep 18 '18 at 18:23
Locking a file is usually a platform-specific operation, so you may need to allow for the possibility of running on different operating systems. For example:
import os

def my_lock(f):
    if os.name == "posix":
        # Unix or macOS specific locking here
        ...
    elif os.name == "nt":
        # Windows specific locking here
        ...
    else:
        print("Unknown operating system, lock unavailable")
You may already know this, but the platform module is also available to obtain information on the running platform: platform.system(). docs.python.org/library/platform.html – monkut, Jan 29 '09 at 0:54
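As the comment suggests, platform.system() is another standard-library way to dispatch on the host OS. A minimal sketch (the branch bodies here are placeholder strings, not real locking calls):

```python
import platform

system = platform.system()  # e.g. "Linux", "Darwin", or "Windows"

if system == "Windows":
    locker = "msvcrt.locking"  # placeholder for Windows-specific locking
else:
    locker = "fcntl.lockf"     # placeholder for POSIX-specific locking

print(system, "->", locker)
```

Unlike os.name, which collapses everything POSIX into "posix", platform.system() distinguishes Linux from macOS ("Darwin") if that matters for your locking strategy.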
The scenario is as follows:
The user requests a file to do something with it. If the user then sends the same request again, they are informed that the second request will not be done until the first request finishes. That's why I use a lock mechanism to handle this issue.
Here is my working code:
from lockfile import LockFile

def try_lock(lock_file_path):
    # Wrapped in a function so the return statement is valid Python.
    lock = LockFile(lock_file_path)
    if not lock.is_locked():
        lock.acquire()
        status = lock.path + ' is locked.'
    else:
        status = lock.path + " is already locked."
    print(status)
    return status
I have been working on a situation like this where I run multiple copies of the same program from within the same directory/folder and logging errors. My approach was to write a "lock file" to the disc before opening the log file. The program checks for the presence of the "lock file" before proceeding, and waits for its turn if the "lock file" exists.
Here is the code:
from os.path import exists
from os import stat, remove
from time import time
from datetime import datetime

def errlogger(error):
    while True:
        if not exists('errloglock'):
            lock = open('errloglock', 'w')
            if exists('errorlog'):
                log = open('errorlog', 'a')
            else:
                log = open('errorlog', 'w')
            log.write(str(datetime.utcnow())[0:-7] + ' ' + error + '\n')
            log.close()
            remove('errloglock')
            return
        else:
            check = stat('errloglock')
            if time() - check.st_ctime > 0.01:
                remove('errloglock')
            print('waiting my turn')
EDIT: After thinking over some of the comments about stale locks above, I edited the code to add a check for staleness of the "lock file." Timing several thousand iterations of this function on my system gave an average of 0.002066... seconds from just before:
lock = open('errloglock', 'w')
to just after:
remove('errloglock')
so I figured I would start with 5 times that amount to indicate staleness and monitor the situation for problems.
Also, as I was working with the timing, I realized that I had a bit of code that was not really necessary:
lock.close()
which I had immediately following the open statement, so I have removed it in this edit.
I found a simple implementation that actually works, from grizzled-python.
Simply using os.open(..., O_EXCL) + os.close() didn't work on Windows.
The O_EXCL option is not related to locking. – Sergei, Apr 16 '14 at 10:31
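For reference, the os.open-based approach being discussed looks roughly like this. As the comment notes, O_EXCL does not take an OS-level lock; it only makes creation of a separate sentinel file atomic, which other processes must check by convention (the sentinel name here is illustrative):

```python
import os

SENTINEL = "myfile.txt.lockfile"

def try_acquire():
    # O_CREAT | O_EXCL fails atomically if the file already exists,
    # so at most one process can create the sentinel at any moment.
    try:
        fd = os.open(SENTINEL, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release():
    os.remove(SENTINEL)

first = try_acquire()   # sentinel did not exist yet
second = try_acquire()  # sentinel now exists, so this fails
release()
```

Like the other sentinel-file schemes on this page, this leaves a stale sentinel behind if the holder crashes before release(), which has to be handled separately.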
You may find pylocker very useful. It can be used to lock a file or for locking mechanisms in general and can be accessed from multiple Python processes at once.
If you simply want to lock a file here's how it works:
import uuid
from pylocker import Locker

# create a unique lock pass. This can be any string.
lpass = str(uuid.uuid1())

# create locker instance.
FL = Locker(filePath='myfile.txt', lockPass=lpass, mode='w')

# acquire the lock
with FL as r:
    # get the result
    acquired, code, fd = r
    # check if acquired.
    if fd is not None:
        print(fd)
        fd.write("I have successfully acquired the lock!")

# no need to release anything or to close the file descriptor;
# the with statement takes care of that. Let's print fd and verify that.
print(fd)
12 Answers
12
active
oldest
votes
12 Answers
12
active
oldest
votes
active
oldest
votes
active
oldest
votes
Alright, so I ended up going with the code I wrote here, on my websitelink is dead, view on archive.org (also available on GitHub). I can use it in the following fashion:
from filelock import FileLock
with FileLock("myfile.txt"):
# work with the file as it is now locked
print("Lock acquired.")
5
As noted by a comment at the blog post, this solution isn't "perfect", in that it's possible for the program to terminate in such a way that the lock is left in place and you have to manually delete the lock before the file becomes accessible again. However, that aside, this is still a good solution.
– leetNightshade
Nov 8 '12 at 21:27
7
link is now dead, unfortunately
– bk0
Jan 21 '14 at 18:57
3
Yet another improved version of Evan's FileLock can be found here: github.com/ilastik/lazyflow/blob/master/lazyflow/utility/…
– Stuart Berg
Feb 20 '14 at 16:21
3
OpenStack did publish their own (well, Skip Montanaro's) implementation - pylockfile - Very similar to the ones mentioned in previous comments, but still worth taking a look.
– jweyrich
Dec 19 '14 at 13:40
6
@jweyrich Openstacks pylockfile is now deprecated. It is advised to use fasteners or oslo.concurrency instead.
– harbun
Apr 28 '16 at 9:10
|
show 9 more comments
Alright, so I ended up going with the code I wrote here, on my websitelink is dead, view on archive.org (also available on GitHub). I can use it in the following fashion:
from filelock import FileLock
with FileLock("myfile.txt"):
# work with the file as it is now locked
print("Lock acquired.")
5
As noted by a comment at the blog post, this solution isn't "perfect", in that it's possible for the program to terminate in such a way that the lock is left in place and you have to manually delete the lock before the file becomes accessible again. However, that aside, this is still a good solution.
– leetNightshade
Nov 8 '12 at 21:27
7
link is now dead, unfortunately
– bk0
Jan 21 '14 at 18:57
3
Yet another improved version of Evan's FileLock can be found here: github.com/ilastik/lazyflow/blob/master/lazyflow/utility/…
– Stuart Berg
Feb 20 '14 at 16:21
3
OpenStack did publish their own (well, Skip Montanaro's) implementation - pylockfile - Very similar to the ones mentioned in previous comments, but still worth taking a look.
– jweyrich
Dec 19 '14 at 13:40
6
@jweyrich Openstacks pylockfile is now deprecated. It is advised to use fasteners or oslo.concurrency instead.
– harbun
Apr 28 '16 at 9:10
|
show 9 more comments
Alright, so I ended up going with the code I wrote here, on my websitelink is dead, view on archive.org (also available on GitHub). I can use it in the following fashion:
from filelock import FileLock
with FileLock("myfile.txt"):
# work with the file as it is now locked
print("Lock acquired.")
Alright, so I ended up going with the code I wrote here, on my websitelink is dead, view on archive.org (also available on GitHub). I can use it in the following fashion:
from filelock import FileLock
with FileLock("myfile.txt"):
# work with the file as it is now locked
print("Lock acquired.")
edited Feb 19 '18 at 7:40
Reigel
53.6k1999125
53.6k1999125
answered Jan 31 '09 at 8:30
Evan FosmarkEvan Fosmark
55.8k2894114
55.8k2894114
5
As noted by a comment at the blog post, this solution isn't "perfect", in that it's possible for the program to terminate in such a way that the lock is left in place and you have to manually delete the lock before the file becomes accessible again. However, that aside, this is still a good solution.
– leetNightshade
Nov 8 '12 at 21:27
7
link is now dead, unfortunately
– bk0
Jan 21 '14 at 18:57
3
Yet another improved version of Evan's FileLock can be found here: github.com/ilastik/lazyflow/blob/master/lazyflow/utility/…
– Stuart Berg
Feb 20 '14 at 16:21
3
OpenStack did publish their own (well, Skip Montanaro's) implementation - pylockfile - Very similar to the ones mentioned in previous comments, but still worth taking a look.
– jweyrich
Dec 19 '14 at 13:40
6
@jweyrich Openstacks pylockfile is now deprecated. It is advised to use fasteners or oslo.concurrency instead.
– harbun
Apr 28 '16 at 9:10
|
show 9 more comments
5
As noted by a comment at the blog post, this solution isn't "perfect", in that it's possible for the program to terminate in such a way that the lock is left in place and you have to manually delete the lock before the file becomes accessible again. However, that aside, this is still a good solution.
– leetNightshade
Nov 8 '12 at 21:27
7
link is now dead, unfortunately
– bk0
Jan 21 '14 at 18:57
3
Yet another improved version of Evan's FileLock can be found here: github.com/ilastik/lazyflow/blob/master/lazyflow/utility/…
– Stuart Berg
Feb 20 '14 at 16:21
3
OpenStack did publish their own (well, Skip Montanaro's) implementation - pylockfile - Very similar to the ones mentioned in previous comments, but still worth taking a look.
– jweyrich
Dec 19 '14 at 13:40
6
@jweyrich Openstacks pylockfile is now deprecated. It is advised to use fasteners or oslo.concurrency instead.
– harbun
Apr 28 '16 at 9:10
5
5
As noted by a comment at the blog post, this solution isn't "perfect", in that it's possible for the program to terminate in such a way that the lock is left in place and you have to manually delete the lock before the file becomes accessible again. However, that aside, this is still a good solution.
– leetNightshade
Nov 8 '12 at 21:27
As noted by a comment at the blog post, this solution isn't "perfect", in that it's possible for the program to terminate in such a way that the lock is left in place and you have to manually delete the lock before the file becomes accessible again. However, that aside, this is still a good solution.
– leetNightshade
Nov 8 '12 at 21:27
7
7
link is now dead, unfortunately
– bk0
Jan 21 '14 at 18:57
link is now dead, unfortunately
– bk0
Jan 21 '14 at 18:57
3
3
Yet another improved version of Evan's FileLock can be found here: github.com/ilastik/lazyflow/blob/master/lazyflow/utility/…
– Stuart Berg
Feb 20 '14 at 16:21
Yet another improved version of Evan's FileLock can be found here: github.com/ilastik/lazyflow/blob/master/lazyflow/utility/…
– Stuart Berg
Feb 20 '14 at 16:21
3
3
OpenStack did publish their own (well, Skip Montanaro's) implementation - pylockfile - Very similar to the ones mentioned in previous comments, but still worth taking a look.
– jweyrich
Dec 19 '14 at 13:40
OpenStack did publish their own (well, Skip Montanaro's) implementation - pylockfile - Very similar to the ones mentioned in previous comments, but still worth taking a look.
– jweyrich
Dec 19 '14 at 13:40
6
6
@jweyrich Openstacks pylockfile is now deprecated. It is advised to use fasteners or oslo.concurrency instead.
– harbun
Apr 28 '16 at 9:10
@jweyrich Openstacks pylockfile is now deprecated. It is advised to use fasteners or oslo.concurrency instead.
– harbun
Apr 28 '16 at 9:10
|
show 9 more comments
There is a cross-platform file locking module here: Portalocker
Although as Kevin says, writing to a file from multiple processes at once is something you want to avoid if at all possible.
If you can shoehorn your problem into a database, you could use SQLite. It supports concurrent access and handles its own locking.
13
+1 -- SQLite is almost always the way to go in these kinds of situations.
– cdleary
Jan 29 '09 at 5:38
2
Portalocker requires Python Extensions for Windows, on that.
– n611x007
Feb 21 '13 at 9:59
1
@naxa there is a variant of it which relies only on msvcrt and ctypes, see roundup.hg.sourceforge.net/hgweb/roundup/roundup/file/tip/…
– Shmil The Cat
Apr 15 '13 at 21:21
@n611x007 Portalocker has just been updated so it doesn't require any extensions on Windows anymore :)
– Wolph
Sep 6 '16 at 0:03
add a comment |
There is a cross-platform file locking module here: Portalocker
Although as Kevin says, writing to a file from multiple processes at once is something you want to avoid if at all possible.
If you can shoehorn your problem into a database, you could use SQLite. It supports concurrent access and handles its own locking.
13
+1 -- SQLite is almost always the way to go in these kinds of situations.
– cdleary
Jan 29 '09 at 5:38
2
Portalocker requires Python Extensions for Windows, on that.
– n611x007
Feb 21 '13 at 9:59
1
@naxa there is a variant of it which relies only on msvcrt and ctypes, see roundup.hg.sourceforge.net/hgweb/roundup/roundup/file/tip/…
– Shmil The Cat
Apr 15 '13 at 21:21
@n611x007 Portalocker has just been updated so it doesn't require any extensions on Windows anymore :)
– Wolph
Sep 6 '16 at 0:03
add a comment |
There is a cross-platform file locking module here: Portalocker
Although as Kevin says, writing to a file from multiple processes at once is something you want to avoid if at all possible.
If you can shoehorn your problem into a database, you could use SQLite. It supports concurrent access and handles its own locking.
There is a cross-platform file locking module here: Portalocker
Although as Kevin says, writing to a file from multiple processes at once is something you want to avoid if at all possible.
If you can shoehorn your problem into a database, you could use SQLite. It supports concurrent access and handles its own locking.
edited Jun 22 '15 at 9:34
Wolph
56.9k7105133
56.9k7105133
answered Jan 29 '09 at 1:01
John FouhyJohn Fouhy
30.3k145271
30.3k145271
13
+1 -- SQLite is almost always the way to go in these kinds of situations.
– cdleary
Jan 29 '09 at 5:38
2
Portalocker requires Python Extensions for Windows, on that.
– n611x007
Feb 21 '13 at 9:59
1
@naxa there is a variant of it which relies only on msvcrt and ctypes, see roundup.hg.sourceforge.net/hgweb/roundup/roundup/file/tip/…
– Shmil The Cat
Apr 15 '13 at 21:21
@n611x007 Portalocker has just been updated so it doesn't require any extensions on Windows anymore :)
– Wolph
Sep 6 '16 at 0:03
add a comment |
13
+1 -- SQLite is almost always the way to go in these kinds of situations.
– cdleary
Jan 29 '09 at 5:38
2
Portalocker requires Python Extensions for Windows, on that.
– n611x007
Feb 21 '13 at 9:59
1
@naxa there is a variant of it which relies only on msvcrt and ctypes, see roundup.hg.sourceforge.net/hgweb/roundup/roundup/file/tip/…
– Shmil The Cat
Apr 15 '13 at 21:21
@n611x007 Portalocker has just been updated so it doesn't require any extensions on Windows anymore :)
– Wolph
Sep 6 '16 at 0:03
13
13
+1 -- SQLite is almost always the way to go in these kinds of situations.
– cdleary
Jan 29 '09 at 5:38
+1 -- SQLite is almost always the way to go in these kinds of situations.
– cdleary
Jan 29 '09 at 5:38
2
2
Portalocker requires Python Extensions for Windows, on that.
– n611x007
Feb 21 '13 at 9:59
Portalocker requires Python Extensions for Windows, on that.
– n611x007
Feb 21 '13 at 9:59
1
1
@naxa there is a variant of it which relies only on msvcrt and ctypes, see roundup.hg.sourceforge.net/hgweb/roundup/roundup/file/tip/…
– Shmil The Cat
Apr 15 '13 at 21:21
@naxa there is a variant of it which relies only on msvcrt and ctypes, see roundup.hg.sourceforge.net/hgweb/roundup/roundup/file/tip/…
– Shmil The Cat
Apr 15 '13 at 21:21
@n611x007 Portalocker has just been updated so it doesn't require any extensions on Windows anymore :)
– Wolph
Sep 6 '16 at 0:03
@n611x007 Portalocker has just been updated so it doesn't require any extensions on Windows anymore :)
– Wolph
Sep 6 '16 at 0:03
add a comment |
I prefer lockfile — Platform-independent file locking
3
This library seems well written, but there's no mechanism for detecting stale lock files. It tracks the PID that created the lock, so should be possible to tell if that process is still running.
– sherbang
Dec 28 '11 at 19:06
1
@sherbang: what about remove_existing_pidfile?
– Janus Troelsen
Mar 15 '13 at 16:06
@JanusTroelsen the pidlockfile module doesn't acquire locks atomically.
– sherbang
Mar 15 '13 at 20:25
@sherbang Are you sure? It opens the lock file with mode O_CREAT|O_EXCL.
– mhsmith
Jun 21 '13 at 14:53
@rgove You're correct, and I misspoke. Yes, it obtains locks atomically. What I should have said was that it doesn't allow for an atomic way to deal with stale locks. Although, I can't recall now if there is a way to handle that atomically.
– sherbang
Jun 24 '13 at 8:43
|
show 3 more comments
I prefer lockfile — Platform-independent file locking
3
This library seems well written, but there's no mechanism for detecting stale lock files. It tracks the PID that created the lock, so should be possible to tell if that process is still running.
– sherbang
Dec 28 '11 at 19:06
1
@sherbang: what about remove_existing_pidfile?
– Janus Troelsen
Mar 15 '13 at 16:06
@JanusTroelsen the pidlockfile module doesn't acquire locks atomically.
– sherbang
Mar 15 '13 at 20:25
@sherbang Are you sure? It opens the lock file with mode O_CREAT|O_EXCL.
– mhsmith
Jun 21 '13 at 14:53
@rgove You're correct, and I misspoke. Yes, it obtains locks atomically. What I should have said was that it doesn't allow for an atomic way to deal with stale locks. Although, I can't recall now if there is a way to handle that atomically.
– sherbang
Jun 24 '13 at 8:43
|
show 3 more comments
I prefer lockfile — Platform-independent file locking
I prefer lockfile — Platform-independent file locking
edited Mar 15 '13 at 16:05
Janus Troelsen
13.7k596157
13.7k596157
answered Jul 27 '10 at 13:04
ferrdoferrdo
15912
15912
3
This library seems well written, but there's no mechanism for detecting stale lock files. It tracks the PID that created the lock, so should be possible to tell if that process is still running.
– sherbang
Dec 28 '11 at 19:06
1
@sherbang: what about remove_existing_pidfile?
– Janus Troelsen
Mar 15 '13 at 16:06
@JanusTroelsen the pidlockfile module doesn't acquire locks atomically.
– sherbang
Mar 15 '13 at 20:25
@sherbang Are you sure? It opens the lock file with mode O_CREAT|O_EXCL.
– mhsmith
Jun 21 '13 at 14:53
@rgove You're correct, and I misspoke. Yes, it obtains locks atomically. What I should have said was that it doesn't allow for an atomic way to deal with stale locks. Although, I can't recall now if there is a way to handle that atomically.
– sherbang
Jun 24 '13 at 8:43
Locking is platform- and device-specific, but generally, you have a few options:
- Use flock(), or an equivalent (if your OS supports it). This is advisory locking: unless you check for the lock, it's ignored.
- Use a lock-copy-move-unlock methodology, where you copy the file, write the new data, then move it into place (move, not copy; move is an atomic operation on Linux, but check your OS), and you check for the existence of the lock file.
- Use a directory as a "lock". This is necessary if you're writing to NFS, since NFS doesn't support flock().
- There's also the possibility of using shared memory between the processes, but I've never tried that; it's very OS-specific.
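The directory-as-a-lock option works because mkdir is atomic: exactly one process can succeed in creating the directory, and that holds even over NFS. A minimal sketch with a retry-after-failure loop (function names and timeouts are illustrative):

```python
import os
import tempfile
import time

def acquire_dir_lock(lock_dir, timeout=10.0, delay=0.05):
    """Spin until lock_dir is created, or give up after timeout seconds."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            os.mkdir(lock_dir)  # atomic: only one process can create it
            return
        except FileExistsError:
            if time.monotonic() >= deadline:
                raise TimeoutError("could not acquire %s" % lock_dir)
            time.sleep(delay)   # retry after failure (spin lock)

def release_dir_lock(lock_dir):
    os.rmdir(lock_dir)

# Demo in a throwaway temp directory.
lock_path = os.path.join(tempfile.mkdtemp(), "myfile.lock")
acquire_dir_lock(lock_path)
try:
    pass  # write to the protected file here
finally:
    release_dir_lock(lock_path)
```

A crashed holder leaves the directory behind, so real code usually records a PID inside it and breaks locks whose owner is gone.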
For all these methods, you'll have to use a spin-lock (retry-after-failure) technique for acquiring and testing the lock. This does leave a small window for mis-synchronization, but it's generally small enough not to be a major issue.
If you're looking for a solution that is cross-platform, then you're better off logging to another system via some other mechanism (the next best thing is the NFS technique above).
Note that SQLite is subject to the same constraints over NFS as normal files, so you can't write to an SQLite database on a network share and get synchronization for free.
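The write-then-move step from the second option above can be sketched with a temporary file and os.replace, which renames atomically (on POSIX, and on Windows since Python 3.3, per the comment below). This covers only the atomic-replace part: readers see either the old or the new contents, but it does not by itself serialize competing writers.

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace path with new contents so readers never see a partial write."""
    dir_name = os.path.dirname(os.path.abspath(path))
    # The temp file must live on the same filesystem for rename to be atomic.
    fd, tmp = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes are on disk first
        os.replace(tmp, path)     # atomic rename over the destination
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on any failure
        raise

target = os.path.join(tempfile.mkdtemp(), "demo.txt")
atomic_write(target, "hello\n")
```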
Note: Move/Rename is not atomic in Win32. Reference: stackoverflow.com/questions/167414/…
– sherbang
Dec 27 '11 at 21:28
New note: os.rename is now atomic in Win32 since Python 3.3: bugs.python.org/issue8828
– Ghostkeeper
Aug 29 '16 at 1:27
edited May 8 '18 at 9:10
Eric O Lebigot
answered Jan 29 '09 at 8:46
Richard Levasseur
The other solutions cite a lot of external code bases. If you would prefer to do it yourself, here is some code for a cross-platform solution that uses the respective file locking tools on POSIX / Windows systems.

try:
    # POSIX-based file locking (Linux, Ubuntu, MacOS, etc.)
    import fcntl, os
    def lock_file(f):
        fcntl.lockf(f, fcntl.LOCK_EX)
    def unlock_file(f):
        fcntl.lockf(f, fcntl.LOCK_UN)
except ImportError:
    # Windows file locking (fcntl is unavailable, so the import above fails)
    import msvcrt, os
    def file_size(f):
        return os.path.getsize(os.path.realpath(f.name))
    def lock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_RLCK, file_size(f))
    def unlock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, file_size(f))

# Class for ensuring that all file operations are atomic; treat
# initialization like a standard call to 'open' that happens to be atomic.
# This file opener *must* be used in a "with" block.
class AtomicOpen:
    # Open the file with arguments provided by the user, then acquire
    # a lock on that file object (WARNING: advisory locking).
    def __init__(self, path, *args, **kwargs):
        # Open the file and acquire a lock on it before operating.
        self.file = open(path, *args, **kwargs)
        # Lock the opened file.
        lock_file(self.file)

    # Return the opened file object (knowing a lock has been obtained).
    def __enter__(self, *args, **kwargs):
        return self.file

    # Unlock the file and close the file object.
    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        # Flush to make sure all buffered contents are written to the file.
        self.file.flush()
        os.fsync(self.file.fileno())
        # Release the lock on the file.
        unlock_file(self.file)
        self.file.close()
        # Handle exceptions that may have come up during execution; by
        # default any exceptions are raised to the user.
        if exc_type is not None:
            return False
        return True
Now, AtomicOpen can be used in a with block where one would normally use an open statement.
WARNING: If running on Windows and Python crashes before __exit__ is called, I'm not sure what the lock behavior would be.
WARNING: The locking provided here is advisory, not absolute. All potentially competing processes must use the "AtomicOpen" class.
Shouldn't unlock_file on Linux call fcntl again with the LOCK_UN flag?
– eadmaster
Nov 16 '18 at 15:26
The unlock automatically happens when the file object is closed. However, it was bad programming practice of me not to include it. I've updated the code and added the fcntl unlock operation!
– Thomas Lux
Dec 3 '18 at 15:36
In __exit__ you close outside of the lock after unlock_file. I believe the runtime could flush (i.e., write) data during close. I believe one must flush and fsync under the lock to make sure no additional data is written outside the lock during close.
– Benjamin Bannier
Jan 7 at 8:44
Thanks for the correction! I verified that there is the possibility of a race condition without the flush and fsync. I've added the two lines you suggested before calling unlock. I re-tested and the race condition appears to be resolved.
– Thomas Lux
Jan 8 at 23:51
The only thing that will go "wrong" is that by the time process 1 locks the file its contents will be truncated (contents erased). You can test this yourself by adding another file "open" with a "w" to the code above before the lock. This is unavoidable though, because you must open the file before locking it. To clarify, the "atomic" is in the sense that only legitimate file contents will be found in a file. This means that you will never get a file with contents from multiple competing processes mixed together.
– Thomas Lux
Jan 15 at 19:27
edited Jan 8 at 23:48
answered Sep 25 '17 at 14:12
Thomas Lux
Coordinating access to a single file at the OS level is fraught with all kinds of issues that you probably don't want to solve.
Your best bet is to have a separate process that coordinates read/write access to that file.
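One way to sketch such a coordinator: a single writer owns the file and drains a queue, and every other worker enqueues write requests instead of opening the file itself. Shown with threads for brevity (all names are illustrative); the same pattern carries over to a separate process with multiprocessing.Process and multiprocessing.Queue, or a named pipe / socket as the comments suggest:

```python
import os
import queue
import tempfile
import threading

def writer(path, q):
    """The only code that ever opens the file for writing."""
    with open(path, "a") as f:
        while True:
            line = q.get()
            if line is None:   # sentinel: shut the writer down
                return
            f.write(line)
            f.flush()

path = os.path.join(tempfile.mkdtemp(), "log.txt")
q = queue.Queue()
t = threading.Thread(target=writer, args=(path, q))
t.start()

# Any number of producers can enqueue without further coordination.
for i in range(3):
    q.put("line %d\n" % i)
q.put(None)
t.join()
```

Because only one piece of code ever writes, no file locking is needed at all; the queue serializes the writes.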
"separate process that coordinates read/write access to that file" - in other words, implement a database server :-)
– Eli Bendersky
Jan 31 '09 at 8:39
This is actually the best answer. To just say "use a database server" is overly simplified, as a db is not always going to be the right tool for the job. What if it needs to be a plain text file? A good solution might be to spawn a child process and then access it via a named pipe, unix socket, or shared memory.
– Brendon Crawford
Jul 22 '11 at 4:55
-1 because this is just FUD without explanation. Locking a file for writing seems like a pretty straightforward concept to me that OSes offer up with functions like flock for it. An approach of "roll your own mutexes and a daemon process to manage them" seems like a rather extreme and complicated approach to take to solve... a problem you haven't actually told us about, but just scarily suggested exists.
– Mark Amery
May 10 '16 at 11:38
answered Jan 29 '09 at 0:24
Kevin
I have been looking at several solutions to do that, and my choice has been oslo.concurrency. It's powerful and relatively well documented. It's based on fasteners.
Other solutions:
Portalocker: requires pywin32, which is an exe installation, so not possible via pip
fasteners: poorly documented
lockfile: deprecated
flufl.lock: NFS-safe file locking for POSIX systems
simpleflock: last update 2013-07
zc.lockfile: last update 2016-06 (as of 2017-03)
lock_file: last update 2007-10
re: Portalocker, you can now install pywin32 through pip via the pypiwin32 package.
– Timothy Jannace
Sep 18 '18 at 18:23
answered Dec 6 '15 at 23:09 by Maxime Viargues
Locking a file is usually a platform-specific operation, so you may need to allow for the possibility of running on different operating systems. For example:
import os

def my_lock(f):
    if os.name == "posix":
        # Unix or OS X specific locking here
        pass
    elif os.name == "nt":
        # Windows specific locking here
        pass
    else:
        print("Unknown operating system, lock unavailable")
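To make the sketch above concrete, here is one hedged way to fill in the two branches with standard-library calls (fcntl.flock on POSIX, msvcrt.locking on Windows); the function names are my own, not from any library:

```python
import os

if os.name == "posix":
    import fcntl

    def my_lock(f):
        fcntl.flock(f, fcntl.LOCK_EX)      # exclusive advisory lock

    def my_unlock(f):
        fcntl.flock(f, fcntl.LOCK_UN)

elif os.name == "nt":
    import msvcrt

    def my_lock(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_LOCK, 1)   # lock 1 byte

    def my_unlock(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, 1)  # unlock the same byte
else:
    raise RuntimeError("Unknown operating system, lock unavailable")

with open("platform_demo.txt", "w") as f:
    my_lock(f)
    f.write("locked write")
    my_unlock(f)
```

Note the two mechanisms have different semantics (flock is advisory and whole-file; msvcrt.locking is mandatory and byte-range), so this only papers over the platform difference for simple exclusive-writer cases.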
You may already know this, but the platform module is also available to obtain information on the running platform. platform.system(). docs.python.org/library/platform.html.
– monkut
Jan 29 '09 at 0:54
answered Jan 28 '09 at 23:45 by Greg Hewgill
The scenario is like this:
The user requests a file to do something. Then, if the user sends the same request again, it informs the user that the second request will not be done until the first request finishes. That's why I use a lock mechanism to handle this issue.
Here is my working code:
from lockfile import LockFile

def handle_request(lock_file_path):
    lock = LockFile(lock_file_path)
    if not lock.is_locked():
        lock.acquire()
        status = lock.path + ' is locked.'
    else:
        status = lock.path + " is already locked."
    print(status)
    return status
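If the lockfile package isn't available, the same try-and-report pattern can be sketched on POSIX with a non-blocking flock: LOCK_NB makes the call fail immediately instead of waiting. The helper names here are made up for illustration:

```python
import fcntl

def try_acquire(path):
    """Return an open file holding an exclusive lock, or None if busy."""
    f = open(path, "w")
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f
    except OSError:          # the lock is held elsewhere: report, don't wait
        f.close()
        return None

def release(f):
    fcntl.flock(f, fcntl.LOCK_UN)
    f.close()
```

A second try_acquire on the same path (even from the same process, since it opens a new file description) returns None until release is called, which maps directly onto the "second request is refused while the first is running" scenario above.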
edited Aug 29 '17 at 7:35, community wiki: Günay Gültekin
I have been working on a situation like this where I run multiple copies of the same program from within the same directory/folder, logging errors. My approach was to write a "lock file" to the disc before opening the log file. The program checks for the presence of the "lock file" before proceeding, and waits for its turn if the "lock file" exists.
Here is the code:
from datetime import datetime
from os import remove, stat
from os.path import exists
from time import time

def errlogger(error):
    while True:
        if not exists('errloglock'):
            lock = open('errloglock', 'w')
            if exists('errorlog'):
                log = open('errorlog', 'a')
            else:
                log = open('errorlog', 'w')
            log.write(str(datetime.utcnow())[0:-7] + ' ' + error + '\n')
            log.close()
            remove('errloglock')
            return
        else:
            # break a stale lock left behind by a crashed copy
            check = stat('errloglock')
            if time() - check.st_ctime > 0.01:
                remove('errloglock')
            print('waiting my turn')
EDIT---
After thinking over some of the comments about stale locks above, I edited the code to add a check for staleness of the "lock file." Timing several thousand iterations of this function on my system gave an average of 0.002066... seconds from just before:
lock = open('errloglock', 'w')
to just after:
remove('errloglock')
so I figured I will start with 5 times that amount to indicate staleness and monitor the situation for problems.
Also, as I was working with the timing, I realized that I had a bit of code that was not really necessary:
lock.close()
which I had immediately following the open statement, so I have removed it in this edit.
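The exists()/open() pair above can race: two processes may both see no lock file and both "acquire" the lock. One hedged fix is to let the OS do the check-and-create in a single atomic step with os.open(..., O_CREAT | O_EXCL); the function names and timing constants below are illustrative, not from the answer:

```python
import os
import time

def acquire_lockfile(path, timeout=5.0, stale_after=1.0):
    """Atomically create `path`; return True once we own it, False on timeout."""
    deadline = time.time() + timeout
    while True:
        try:
            # O_EXCL makes the create fail if the file already exists,
            # so check-and-create is one atomic step.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)        # the file's existence is the lock
            return True
        except FileExistsError:
            try:
                # break locks left behind by a crashed process
                if time.time() - os.stat(path).st_ctime > stale_after:
                    os.remove(path)
            except FileNotFoundError:
                pass            # another waiter removed it first
            if time.time() > deadline:
                return False
            time.sleep(0.01)

def release_lockfile(path):
    os.remove(path)
```

The staleness check keeps the spirit of the EDIT above, but the atomic create removes the window between "check" and "create" that the original loop leaves open.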
answered Aug 7 '14 at 1:01 by whitebeard (edited May 6 '18 at 10:58 by Eric O Lebigot)
I found a simple implementation that worked(!) in grizzled-python.
Simply using os.open(..., O_EXCL) + os.close() didn't work on Windows.
O_EXCL option is not related to lock
– Sergei
Apr 16 '14 at 10:31
answered Aug 19 '13 at 15:22 by Speq
You may find pylocker very useful. It can be used to lock a file or for locking mechanisms in general and can be accessed from multiple Python processes at once.
If you simply want to lock a file here's how it works:
import uuid
from pylocker import Locker

# create a unique lock pass. This can be any string.
lpass = str(uuid.uuid1())

# create locker instance.
FL = Locker(filePath='myfile.txt', lockPass=lpass, mode='w')

# acquire the lock
with FL as r:
    # get the result
    acquired, code, fd = r
    # check if acquired.
    if fd is not None:
        print(fd)
        fd.write("I have successfully acquired the lock!")

# no need to release anything or to close the file descriptor;
# the with statement takes care of that. Let's print fd and verify.
print(fd)
answered Sep 26 '16 at 16:41 by Cobry (edited Dec 21 '16 at 10:26 by rrao)