For more information check out the official documentation.
tox-dev / filelock
A platform-independent file lock for Python.
Home Page: https://py-filelock.readthedocs.io
License: The Unlicense
I'm using a Lustre file system. It looks like it doesn't support the flock syscall.
Here is a small script I'm trying to run:
import filelock
t = filelock.FileLock('test.lock')
with t:
    pass
Here is the output:
$ python test.py
... (Runs indefinitely)
Let's try running under strace:
$ strace python test.py
<A lot of garbage>
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
open("test.lock", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0777) = 3
flock(3, LOCK_EX|LOCK_NB) = -1 ENOSYS (Function not implemented)
close(3) = 0
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
open("test.lock", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0777) = 3
flock(3, LOCK_EX|LOCK_NB) = -1 ENOSYS (Function not implemented)
close(3) = 0
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
open("test.lock", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0777) = 3
flock(3, LOCK_EX|LOCK_NB) = -1 ENOSYS (Function not implemented)
close(3) = 0
select(0, NULL, NULL, NULL, {0, 50000}) = 0 (Timeout)
open("test.lock", O_RDWR|O_CREAT|O_TRUNC|O_CLOEXEC, 0777) = 3
flock(3, LOCK_EX|LOCK_NB) = -1 ENOSYS (Function not implemented)
close(3) = 0
<And so on>
Solution: inside UnixFileLock, do not ignore all exceptions. If an OSError is raised with errno=ENOSYS, then show a warning, delete the lock file, and fall back to SoftFileLock.
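A minimal sketch of that proposed check, using only the standard library (the function name, warning text, and return convention are illustrative, not filelock's actual internals):

```python
import errno
import fcntl
import os
import warnings

def acquire_or_detect_noflock(lock_file):
    """Try a non-blocking flock; distinguish 'flock unsupported here'
    (e.g. Lustre, NFS mounted with nolock) from 'lock held by someone'."""
    fd = os.open(lock_file, os.O_RDWR | os.O_CREAT | os.O_TRUNC)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError as exc:
        os.close(fd)
        if exc.errno == errno.ENOSYS:
            # The filesystem does not implement flock: warn instead of
            # silently retrying forever, so callers can fall back to
            # SoftFileLock.
            warnings.warn("flock not supported; falling back to SoftFileLock")
        return None  # not acquired
    return fd  # acquired; caller must close the fd to release
```

On a filesystem that does support flock, the function returns an acquired file descriptor; on Lustre it would warn once instead of looping forever.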
On Python 3 Readiness, filelock shows up as not supporting Python 3, even though the examples are written in Python 3. Python 3 Readiness uses caniusepython3, so it is possible some people are erroneously holding up their upgrades waiting for filelock to show as ready.
Py-filelock should publish its Python 3 status.
The acquire function defines
class ReturnProxy(object):
inside its body.
This results in an obscure circular reference on the class (at least in Python 2), and memory is leaked. What leaks is the class (not the instance of the class).
I noticed because we enable gc debugging when we run our test suite. To reproduce, run the code below; it will print the objects being leaked. If you want to prove memory is indeed being leaked, just change the if 0 to if 1 and watch your RAM consumption shoot up.
For now I worked around it myself by taking the class out of the function.
from filelock import FileLock
import os
import gc

gc.enable()
gc.set_debug(gc.DEBUG_LEAK)

class Blah(object):
    """Reproduce leak outside of filelock"""
    def something(self):
        class Inner(object):
            a = range(1000000)  # Make it a big class
            pass
        return None

def test():
    if 0:
        # If you want to drive your process out of memory, run
        # this simple test (watch RAM consumption shoot up in the Windows Task Manager)
        for i in range(100000):
            # If it does not leak, then c
            # will be destroyed and cleared from memory
            c = Blah()
            c.something()
    if 1:
        lockPath = r'C:\temp\temp.txt.lock'
        lock = FileLock(lockPath)
        ret = lock.acquire()
        print 'hola'
        lock.release()
        #ret.lock = None
        #with lock:
        #    print 'hola'

test()
gc.collect()

print "\nGARBAGE OBJECTS:"
cnt = 0
for x in gc.garbage:
    cnt += 1
    if cnt > 1000:
        print "Still more garbage. Quitting ..."
        break
    try:
        s = str(x)
    except:
        s = 'ERROR TO STRING'
    print type(x),
    if len(s) > 120:
        s = s[:117] + '...'
    print 'o="%s"' % s
Hello! I think this is a great package. Maybe you are interested in an outsider's view of the README.
Use case: I was about to write two processes which were supposed to run at different times and would both update the same file. Since I cannot guarantee that one process would not take much longer than anticipated, I thought about ways for these processes to communicate. I heard about lock files, so I googled for Python implementations, as I also heard one needs to be careful with this stuff and use a lot of atomic operations ... stuff I have no idea about.
I found these packages:
Then I read the README (on pypi)
[...] This means, when locking the same lock object twice, it will not block.
This puzzled me a lot. I thought: "Isn't blocking exactly the expected behaviour of a lock file when somebody has already acquired the lock?"
So I tested it using two ipython sessions:
Upper:
In [1]: from filelock import FileLock
In [2]: lock = FileLock('.lock')
In [3]: lock.acquire()
Out[3]: <filelock.BaseFileLock.acquire.<locals>.ReturnProxy at 0x...>
Lower:
In [1]: from filelock import FileLock
In [2]: lock = FileLock('.lock')
In [3]: lock.acquire()
# it blocks!
So your package works exactly as I expected it to work before reading the README. After reading the README I was puzzled, and after testing I was happy again.
Now after reading a bit more, I think I get the point that sentence is trying to make. Apparently, inside the same process, acquire()-ing the same lock multiple times does not block the process itself. As a beginner I cannot imagine why one would actually want to acquire the lock multiple times within the same process, but I guess there are use cases.
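The re-entrant behaviour described there can be sketched with a lock counter; this is an illustrative stand-in, not filelock's actual code:

```python
import threading

class CountingLock(object):
    """Sketch of a re-entrant lock counter: the same object can be
    acquired repeatedly in one process; only the 0 -> 1 transition
    would take the real file lock, and only 1 -> 0 would release it."""

    def __init__(self):
        self._thread_lock = threading.Lock()
        self._counter = 0

    def acquire(self):
        with self._thread_lock:
            self._counter += 1
            if self._counter == 1:
                pass  # here the real OS-level lock would be taken
        return self

    def release(self):
        with self._thread_lock:
            self._counter -= 1
            if self._counter == 0:
                pass  # here the real OS-level lock would be dropped

    def __enter__(self):
        return self.acquire()

    def __exit__(self, *exc):
        self.release()

lock = CountingLock()
with lock:
    with lock:  # same object, same process: does not block
        assert lock._counter == 2
assert lock._counter == 0
```

A second process never sees this counter; it only sees the OS-level lock, which is why it blocks as expected.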
I was wondering if you would like to smooth out the path for using your great package for beginners a little bit. Maybe one could be a bit more elaborate in the first part of the README. So after this:
This package contains a single module, which implements a platform independent file lock in Python.
one could maybe say:
File locking is a mechanism that restricts access to a computer file by allowing only one process to access it at any given time. Another process trying to acquire the same FileLock will block.
And then further down
The lock includes a lock counter and is thread safe. This means, when [the same process] is acquiring the same lock object twice, it will not block.
I could make a PR for this if you like.
By default this package will always print to a logger, but for my use case it only ends up cluttering the logs with "released, acquired, released, acquired, etc". I'd appreciate it if you could add the option to disable the logger :)
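Until such an option exists, the chatter can be suppressed through the standard logging module, assuming the library logs via a logger named "filelock":

```python
import logging

# Raise the threshold of filelock's (assumed) logger so the routine
# "acquired"/"released" messages at DEBUG/INFO level are dropped,
# without touching any other logger in the application.
logging.getLogger("filelock").setLevel(logging.WARNING)
```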
H:\>python test.py
..EE......E
======================================================================
ERROR: test_default_timeout (__main__.TestFileLock)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 213, in test_default_timeout
self.assertRaises(filelock.Timeout, lock2.acquire)
File "C:\Python27\lib\unittest\case.py", line 475, in assertRaises
callableObj(*args, **kwargs)
File "H:\filelock.py", line 230, in acquire
self._acquire()
File "H:\filelock.py", line 312, in _acquire
msvcrt.locking(fd, msvcrt.LK_NBLCK, 1)
IOError: [Errno 13] Permission denied
======================================================================
ERROR: test_del (__main__.TestFileLock)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 265, in test_del
self.assertRaises(filelock.Timeout, lock2.acquire, timeout = 0)
File "C:\Python27\lib\unittest\case.py", line 475, in assertRaises
callableObj(*args, **kwargs)
File "H:\filelock.py", line 230, in acquire
self._acquire()
File "H:\filelock.py", line 312, in _acquire
msvcrt.locking(fd, msvcrt.LK_NBLCK, 1)
IOError: [Errno 13] Permission denied
======================================================================
ERROR: test_timeout (__main__.TestFileLock)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test.py", line 192, in test_timeout
self.assertRaises(filelock.Timeout, lock2.acquire, timeout=1)
File "C:\Python27\lib\unittest\case.py", line 475, in assertRaises
callableObj(*args, **kwargs)
File "H:\filelock.py", line 230, in acquire
self._acquire()
File "H:\filelock.py", line 312, in _acquire
msvcrt.locking(fd, msvcrt.LK_NBLCK, 1)
IOError: [Errno 13] Permission denied
----------------------------------------------------------------------
Ran 11 tests in 0.988s
FAILED (errors=3)
According to https://docs.python.org/2/library/msvcrt.html :
msvcrt.locking "Raises IOError on failure.", not OSError.
After changing the OSError to IOError, all tests passed.
H:\>python test.py
...........
----------------------------------------------------------------------
Ran 11 tests in 3.170s
OK
I have an application whose files may change paths when users rename directories.
For example, a user may rename C:\...\Desktop\MyFolder\sensitivefile.txt
to C:\...\Desktop\AnotherFolder\sensitivefile.txt
I've noticed that, at least on Windows, acquiring a file lock in a directory that does not exist (or no longer exists because it was renamed) causes the program to hang indefinitely as it polls infinitely trying to acquire the lock.
fl = FileLock("this\\path\\does\\not\\exist.lock")
with fl:
    print("Filelock acquired!")
    # Filelock is never acquired, program blocks indefinitely
This can be an issue if a race condition occurs where a user changes the directory name at the right time (which has occurred for me).
Instead of blocking forever, it would be better to throw some sort of error letting the developer know that the path no longer exists, so the lock file can't be created at the specified location.
One proposal: the problem is that we ignore OSError in the _acquire method. Instead, we should let FileNotFoundError propagate, as follows:
def _acquire(self):
    open_mode = os.O_RDWR | os.O_CREAT | os.O_TRUNC
    try:
        fd = os.open(self._lock_file, open_mode)
    except FileNotFoundError:
        # Will err if the path to the parent directory does not exist. If we do not
        # raise this here, the acquire() statement will block indefinitely.
        raise
    except OSError:
        pass
    else:
        try:
            msvcrt.locking(fd, msvcrt.LK_NBLCK, 1)
        except (IOError, OSError):
            os.close(fd)
        else:
            self._lock_file_fd = fd
    return None
Hi there!
I'd like to use this module, but it won't work in my case since I'm dealing with some NFS mounts which have 'nolock' set. In this case, flock() will be a no-op.
There's more info here:
http://linux.die.net/man/5/nfs
Thanks!
I have a general question about the implementation of the _acquire methods. I apologize if this is not the best place to ask this question.
The portions of code concerning my questions:
WindowsFileLock._acquire:
try:
    fd = os.open(self._lock_file, open_mode)
except OSError:
    pass
else:
    ...
UnixFileLock._acquire:
fd = os.open(self._lock_file, open_mode)
...
What resources/documentation should I be looking at to understand the behavior of os.open under different operating systems? When is OSError raised? Why is it unnecessary to handle the exception when running on Unix systems?
I'm maintaining py-filelock for Debian (https://packages.qa.debian.org/p/python-filelock.html). We are running test.py in a clean chroot during the package build to verify that all tests pass in the version to be packaged. However, we recently noticed that current versions of the library fail their test.py runs with random assertions being hit:
PYTHONPATH=. python2 /build/python-filelock-3.0.0/debian/test.py
.........FF..F........
======================================================================
FAIL: test_threaded1 (__main__.FileLockTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/build/python-filelock-3.0.0/debian/test.py", line 233, in test_threaded1
threads1[i].join()
File "/build/python-filelock-3.0.0/debian/test.py", line 66, in join
raise (wrapper_ex.__class__, wrapper_ex, self.ex[2])
AssertionError
======================================================================
FAIL: test_timeout (__main__.FileLockTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/build/python-filelock-3.0.0/debian/test.py", line 253, in test_timeout
self.assertRaises(filelock.Timeout, lock2.acquire, timeout=1)
AssertionError: Timeout not raised
======================================================================
FAIL: test_default_timeout (__main__.SoftFileLockTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/build/python-filelock-3.0.0/debian/test.py", line 278, in test_default_timeout
self.assertRaises(filelock.Timeout, lock2.acquire)
AssertionError: Timeout not raised
----------------------------------------------------------------------
Ran 22 tests in 6.045s
Patching the source to revert cb1d83d seems to reliably fix the issue, but I'm not sure whether stray lock files are then left lying around. Any comments?
I was trying out this library, but found that opening a file for reading while under a lock would completely erase the file's contents.
The following code reproduces the issue:
#!/usr/bin/python3
import filelock
with open("test.txt", "w") as f:
    f.write("This is some testing text")

lock = filelock.FileLock("test.txt")
with lock:
    with open("test.txt", "r") as f:
        a = f.read()
    print(a)
Is it possible to push tags (releases) to the GitHub repo when incrementing version numbers, as is done on PyPI?
It would be nice if a timeout could be specified at creation time as well, e.g.:
lock = FileLock('.lock', timeout=10)
with lock:
    ...
Would it be possible to publish a release and push it to PyPI? I'm mostly interested in the recently added trove classifiers :-)
It would be nice to keep PyPI releases and git tags in sync :)
Hello, I am trying to implement locking across machines. However, SoftFileLock can sometimes deadlock if a process ends abruptly. Are there any other solutions that can ensure that deadlocks do not occur?
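One common heuristic, sketched below with the standard library only (lock_is_stale is a hypothetical helper, not filelock API): store the owner's PID in the lock file and treat the lock as stale when that process no longer exists. Note this only works when all lockers run on the same host, so it does not solve the cross-machine case by itself:

```python
import os

def lock_is_stale(lock_path):
    """Heuristic: assumes the lock file contains the owning PID."""
    try:
        with open(lock_path) as f:
            pid = int(f.read().strip())
    except (OSError, ValueError):
        return False  # unreadable or malformed: make no claim
    try:
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
    except ProcessLookupError:
        return True  # owner is gone; the lock file was left behind
    except PermissionError:
        return False  # process exists but belongs to another user
    return False
```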
Hi,
It seems that the latest docs (https://filelock.readthedocs.io/en/latest/) are not aligned with the code.
I wanted to try to acquire a lock without blocking if it is already locked. For this to happen, I expected the timeout param of acquire to be equal to 0 (as in many other locking primitives).
The docs say (for the acquire method of BaseFileLock) that:
If timeout <= 0, there is no timeout and this method will block until the lock could be acquired
According to the source code (both the docstring of BaseFileLock.acquire and the method body), if timeout == 0 the method won't block.
BaseFileLock.acquire docstring:
:arg float timeout:
    The maximum time waited for the file lock.
    If ``timeout < 0``, there is no timeout and this method will
    block until the lock could be acquired.
    If ``timeout`` is None, the default :attr:`~timeout` is used.
BaseFileLock.acquire code:
elif timeout >= 0 and time.time() - start_time > timeout:
    logger().debug('Timeout on acquiring lock %s on %s', lock_id, lock_filename)
    raise Timeout(self._lock_file)
Why are the latest docs not aligned? Are they simply out of date? What is needed to regenerate them so that they match the code?
Thanks!
Does this support multiple processes?
Hi,
if I do
with lock.acquire(timeout = 5):
    fp = open(flnm, "a")
    print >> fp, "some text"
    fp.close()
and then look at the file, I only see one line of text in it (no matter what was in it before).
What am I doing wrong?
I am attaching my code that demonstrates the issue. Please run it: you will see that the file gets truncated.
And the only difference is one line:
with lock.acquire(timeout = 5):
Hi,
OS: Win10/7
Python: 3.6.8
py-filelock: 3.0.12
I try to acquire a file lock without blocking (i.e. timeout=0). When the file is locked, this operation takes 1 second, although it should return immediately.
I did some deep diving here and figured out that the reason is the msvcrt.locking function (which is implemented using _locking); even in a pure C++ project, it takes 1 second to try to acquire an already-locked file using _locking.
Using the winapi LockFile doesn't have this problem: it returns immediately when the file is already locked.
Do you think it is worth changing the py-filelock implementation to use LockFile instead of _locking?
In my use case, I'm trying to acquire the lock, but if it is already locked I can ignore it and do other stuff. Paying 1 second for each such check is too costly for me.
Thanks!
Once the lock has been released, what is the best-practice method for deleting the lock file? Any chance of adding that to the docs?
Cheers!
After they are released, an attempt is made to delete the lock files in both the WindowsFileLock and SoftFileLock classes, but not in UnixFileLock. Is there a reason this is missing from UnixFileLock? It seems strange that lock files clean up after themselves on some systems but not on others.
Is SoftFileLock reliable over NFS? What NFS settings are required for SoftFileLock to work properly?
Is there a way to add an argument to the LockFile class, such as "writer" or "reader", in order to:
allow multiple readers across processes and threads (like with LockFile("file.txt", mode="reader"):), while still excluding writers
disallow reads while a writer holds the lock (like with LockFile("file.txt", mode="writer"):)
Thanks
Edit: the operations do not refer to the file "file.txt"; that file is just used as a lock.
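On Unix, the kernel already distinguishes these two modes, so a hypothetical mode="reader"/"writer" argument could map onto fcntl.flock's shared and exclusive flags; flock_mode below is an illustrative helper, not part of filelock:

```python
import fcntl

def flock_mode(fileobj, mode):
    """Map a reader/writer mode onto fcntl.flock flags."""
    if mode == "reader":
        # Shared lock: many readers may hold it at once, writers wait.
        fcntl.flock(fileobj.fileno(), fcntl.LOCK_SH)
    elif mode == "writer":
        # Exclusive lock: excludes both readers and other writers.
        fcntl.flock(fileobj.fileno(), fcntl.LOCK_EX)
    else:
        raise ValueError("mode must be 'reader' or 'writer'")
```

Windows would need a different mapping (LockFileEx with or without LOCKFILE_EXCLUSIVE_LOCK), which is presumably why filelock keeps a single exclusive mode.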
===> Configuring for py27-filelock-3.0.11
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "setup.py", line 42, in <module>
license_ = open("LICENSE.rst").read()
IOError: [Errno 2] No such file or directory: 'LICENSE.rst'
*** Error code 1
FreeBSD, Python 2.7, tarball from the PyPI site.
I'd like to be able to use the with statement and specify a timeout.
Doing this:
with self.lock.acquire(timeout=10):
results in a nested lock (since __enter__ acquires a lock too). I couldn't see a way to do this without explicitly releasing the lock.
Any suggestions for how to use with and timeout together?
Thanks
I'm writing a program with a lot of file reading/writing, so I'm trying to make the code more manageable.
I'd like a function like this:
def locked_open(filename, mode='r'):
    lock = FileLock(filename)
    with lock:
        <not sure what to put here, return open(filename, mode) doesn't work>
....
with locked_open(filename, mode):
    <do stuff>
Can you think of a way to do this?
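One way to get this shape is a generator-based context manager, which can hold the lock while yielding the open file; a plain return open(...) cannot, because the with lock: block would exit before the caller runs. The sketch below uses the stdlib fcntl module in place of FileLock to stay self-contained; with filelock you would instead wrap the open() call in with FileLock(filename + '.lock'):

```python
import fcntl
from contextlib import contextmanager

@contextmanager
def locked_open(filename, mode="r"):
    """Yield an open, flock-protected file; the lock is held for the
    whole body of the caller's `with` block and released on exit."""
    with open(filename, mode) as f:
        fcntl.flock(f.fileno(), fcntl.LOCK_EX)
        try:
            yield f
        finally:
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)

# usage
with locked_open("demo.txt", "w") as f:
    f.write("some data")
```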
Both cases of test_del hang indefinitely on PyPy. This is probably due to differences in how PyPy does (or rather does not) perform GC.
Here's a traceback from pytest after adding a timeout:
$ pytest -v --timeout=15 test.py
=============================================================== test session starts ===============================================================
platform linux2 -- Python 2.7.13[pypy-6.0.0-final], pytest-3.8.2, py-1.5.4, pluggy-0.7.1 -- /usr/bin/pypy
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/tmp/py-filelock/.hypothesis/examples')
rootdir: /tmp/py-filelock, inifile:
plugins: virtualenv-1.2.11, timeout-1.2.1, shutil-1.2.11, mock-1.10.0, hypothesis-3.74.3, backports.unittest-mock-1.4
timeout: 15.0s method: signal
collected 23 items
test.py::FileLockTest::test_context PASSED [ 4%]
test.py::FileLockTest::test_context1 PASSED [ 8%]
test.py::FileLockTest::test_default_timeout PASSED [ 13%]
test.py::FileLockTest::test_del FAILED [ 17%]
test.py::FileLockTest::test_nested PASSED [ 21%]
test.py::FileLockTest::test_nested1 PASSED [ 26%]
test.py::FileLockTest::test_nested_forced_release PASSED [ 30%]
test.py::FileLockTest::test_simple PASSED [ 34%]
test.py::FileLockTest::test_threaded PASSED [ 39%]
test.py::FileLockTest::test_threaded1 PASSED [ 43%]
test.py::FileLockTest::test_timeout PASSED [ 47%]
test.py::SoftFileLockTest::test_cleanup PASSED [ 52%]
test.py::SoftFileLockTest::test_context PASSED [ 56%]
test.py::SoftFileLockTest::test_context1 PASSED [ 60%]
test.py::SoftFileLockTest::test_default_timeout PASSED [ 65%]
test.py::SoftFileLockTest::test_del FAILED [ 69%]
test.py::SoftFileLockTest::test_nested PASSED [ 73%]
test.py::SoftFileLockTest::test_nested1 PASSED [ 78%]
test.py::SoftFileLockTest::test_nested_forced_release PASSED [ 82%]
test.py::SoftFileLockTest::test_simple PASSED [ 86%]
test.py::SoftFileLockTest::test_threaded PASSED [ 91%]
test.py::SoftFileLockTest::test_threaded1 PASSED [ 95%]
test.py::SoftFileLockTest::test_timeout PASSED [100%]
==================================================================== FAILURES =====================================================================
______________________________________________________________ FileLockTest.test_del ______________________________________________________________
self = <test.FileLockTest testMethod=test_del>
    def test_del(self):
        """
        Tests, if the lock is released, when the object is deleted.
        """
        lock1 = self.LOCK_TYPE(self.LOCK_PATH)
        lock2 = self.LOCK_TYPE(self.LOCK_PATH)
        # Acquire lock 1.
        lock1.acquire()
        self.assertTrue(lock1.is_locked)
        self.assertFalse(lock2.is_locked)
        # Try to acquire lock 2.
        self.assertRaises(filelock.Timeout, lock2.acquire, timeout = 1) # FIXME (SoftFileLock)
        # Delete lock 1 and try to acquire lock 2 again.
        del lock1
>       lock2.acquire()
test.py:355:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <filelock.UnixFileLock object at 0x000055eb24d21328>, timeout = -1.0, poll_intervall = 0.05
    def acquire(self, timeout=None, poll_intervall=0.05):
        """
        Acquires the file lock or fails with a :exc:`Timeout` error.
        .. code-block:: python
            # You can use this method in the context manager (recommended)
            with lock.acquire():
                pass
            # Or use an equivalent try-finally construct:
            lock.acquire()
            try:
                pass
            finally:
                lock.release()
        :arg float timeout:
            The maximum time waited for the file lock.
            If ``timeout <= 0``, there is no timeout and this method will
            block until the lock could be acquired.
            If ``timeout`` is None, the default :attr:`~timeout` is used.
        :arg float poll_intervall:
            We check once in *poll_intervall* seconds if we can acquire the
            file lock.
        :raises Timeout:
            if the lock could not be acquired in *timeout* seconds.
        .. versionchanged:: 2.0.0
            This method returns now a *proxy* object instead of *self*,
            so that it can be used in a with statement without side effects.
        """
        # Use the default timeout, if no timeout is provided.
        if timeout is None:
            timeout = self.timeout
        # Increment the number right at the beginning.
        # We can still undo it, if something fails.
        with self._thread_lock:
            self._lock_counter += 1
        lock_id = id(self)
        lock_filename = self._lock_file
        start_time = time.time()
        try:
            while True:
                with self._thread_lock:
                    if not self.is_locked:
                        logger().debug('Attempting to acquire lock %s on %s', lock_id, lock_filename)
                        self._acquire()
                if self.is_locked:
                    logger().info('Lock %s acquired on %s', lock_id, lock_filename)
                    break
                elif timeout >= 0 and time.time() - start_time > timeout:
                    logger().debug('Timeout on acquiring lock %s on %s', lock_id, lock_filename)
                    raise Timeout(self._lock_file)
                else:
                    logger().debug(
                        'Lock %s not acquired on %s, waiting %s seconds ...',
                        lock_id, lock_filename, poll_intervall
                    )
>                   time.sleep(poll_intervall)
E                   Failed: Timeout >15.0s
filelock.py:284: Failed
____________________________________________________________ SoftFileLockTest.test_del ____________________________________________________________
self = <test.SoftFileLockTest testMethod=test_del>
    def test_del(self):
        """
        Tests, if the lock is released, when the object is deleted.
        """
        lock1 = self.LOCK_TYPE(self.LOCK_PATH)
        lock2 = self.LOCK_TYPE(self.LOCK_PATH)
        # Acquire lock 1.
        lock1.acquire()
        self.assertTrue(lock1.is_locked)
        self.assertFalse(lock2.is_locked)
        # Try to acquire lock 2.
        self.assertRaises(filelock.Timeout, lock2.acquire, timeout = 1) # FIXME (SoftFileLock)
        # Delete lock 1 and try to acquire lock 2 again.
        del lock1
>       lock2.acquire()
test.py:355:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <filelock.SoftFileLock object at 0x00007fb06012fbb0>, timeout = -1.0, poll_intervall = 0.05
    def acquire(self, timeout=None, poll_intervall=0.05):
        """
        Acquires the file lock or fails with a :exc:`Timeout` error.
        .. code-block:: python
            # You can use this method in the context manager (recommended)
            with lock.acquire():
                pass
            # Or use an equivalent try-finally construct:
            lock.acquire()
            try:
                pass
            finally:
                lock.release()
        :arg float timeout:
            The maximum time waited for the file lock.
            If ``timeout <= 0``, there is no timeout and this method will
            block until the lock could be acquired.
            If ``timeout`` is None, the default :attr:`~timeout` is used.
        :arg float poll_intervall:
            We check once in *poll_intervall* seconds if we can acquire the
            file lock.
        :raises Timeout:
            if the lock could not be acquired in *timeout* seconds.
        .. versionchanged:: 2.0.0
            This method returns now a *proxy* object instead of *self*,
            so that it can be used in a with statement without side effects.
        """
        # Use the default timeout, if no timeout is provided.
        if timeout is None:
            timeout = self.timeout
        # Increment the number right at the beginning.
        # We can still undo it, if something fails.
        with self._thread_lock:
            self._lock_counter += 1
        lock_id = id(self)
        lock_filename = self._lock_file
        start_time = time.time()
        try:
            while True:
                with self._thread_lock:
                    if not self.is_locked:
                        logger().debug('Attempting to acquire lock %s on %s', lock_id, lock_filename)
                        self._acquire()
                if self.is_locked:
                    logger().info('Lock %s acquired on %s', lock_id, lock_filename)
                    break
                elif timeout >= 0 and time.time() - start_time > timeout:
                    logger().debug('Timeout on acquiring lock %s on %s', lock_id, lock_filename)
                    raise Timeout(self._lock_file)
                else:
                    logger().debug(
                        'Lock %s not acquired on %s, waiting %s seconds ...',
                        lock_id, lock_filename, poll_intervall
                    )
>                   time.sleep(poll_intervall)
E                   Failed: Timeout >15.0s
filelock.py:284: Failed
====================================================== 2 failed, 21 passed in 38.50 seconds =======================================================
We deployed a new version of our system, it grabbed the new version of filelock, and this piece of code now actually deletes the file referenced by the file_path variable!
file_path = get_project_path(project_slug) + '/' + settings.PROJECT_JSON
with FileLock(file_path):
    project_file = open(file_path, 'w')
    project_file.write(json.dumps(data))
    project_file.close()
I've put a print of os.path.exists just before and after this piece of code, and this is from the log:
True
2017-12-30 21:00:33,622 - filelock - DEBUG - Attempting to acquire lock 140185207182392 on /.../project.json
2017-12-30 21:00:33,622 - filelock - INFO - Lock 140185207182392 acquired on /.../project.json
2017-12-30 21:00:33,622 - filelock - DEBUG - Attempting to release lock 140185207182392 on /.../project.json
2017-12-30 21:00:33,623 - filelock - INFO - Lock 140185207182392 released on /.../project.json
False
DO NOT INSTALL LATEST VERSION or you might have your files deleted!
poll_intervall should be spelled poll_interval (but leave the old one for compatibility!)
I use this function in several different scripts in a project:
@contextmanager
def locked_file(filename: str, mode: str = 'r') -> Generator:
    if (mode == 'r' or mode == 'rb') and not os.path.exists(filename):
        raise OSError(f'File {filename} not found.')
    lock_path = filename + '.lock'
    lock = FileLock(lock_path, timeout=10)  # throw error after 10 seconds
    with lock, open(filename, mode) as f:
        try:
            yield f
        finally:
            try:
                os.unlink(lock_path)
            except NameError:
                raise
            except FileNotFoundError:
                pass

# usage example
with locked_file('wow.txt', 'w') as f:
    f.write('hello there')
A little background: this is used on a project that several people use on several different machines that share a filesystem. There are also cron jobs running scripts that use locked_file, and subprocesses are sometimes spawned off that use locked_file. Not sure if any of this is relevant, just putting it out there.
Relatively frequently, users get an error that indicates the lock already exists, but it doesn't wait the 10-second timeout to see if it can acquire the lock; it just throws the following exception. Any idea why?
The error goes away upon re-running whatever program originally broke, but it's very frustrating.
In terms of debugging this, I can't reliably reproduce it. I tried writing a script that sleeps for 10 seconds in the with block and running it simultaneously from two different machines, but it worked as expected consistently.
Traceback (most recent call last):
File "/gpfs/main/course/cs1470/admin/course-grading/htabin/../hta/handin/check_submissions.py", line 67, in <module>
with locked_file(data_file) as f:
File "/local/projects/python3.7/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/gpfs/main/course/cs1470/admin/course-grading/hta/handin/helpers.py", line 84, in locked_file
with lock.acquire(timeout=10), open(filename, mode) as f:
File "/gpfs/main/course/cs1470/admin/course-grading/ta/venv/lib/python3.7/site-packages/filelock.py", line 251, in acquire
self._acquire()
File "/gpfs/main/course/cs1470/admin/course-grading/ta/venv/lib/python3.7/site-packages/filelock.py", line 383, in _acquire
fd = os.open(self._lock_file, open_mode)
FileExistsError: [Errno 17] File exists: '/course/cs1470/admin/course-grading/ta/assignments.json.lock'
py-filelock documentation has following information.
timeout (float) – The maximum time waited for the file lock. If timeout <= 0, there is no timeout and this method will block until the lock could be acquired. If timeout is None, the default timeout is used.
However, in the actual code, when timeout = 0, filelock doesn't wait and raises a Timeout exception if the lock is held by someone else, which seems right but doesn't match the documentation.
So the documentation should say the following:
timeout (float) – The maximum time waited for the file lock. If timeout < 0, there is no timeout and this method will block until the lock could be acquired. If timeout is None, the default timeout is used.
This would allow users to pass in an exponential backoff generator instead of a fixed period. I can make a PR if you agree to do this.
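For illustration, a poll-interval argument that accepted an iterable could be fed by a generator like this (backoff is a hypothetical helper; nothing like it exists in filelock today):

```python
def backoff(start=0.05, factor=2.0, cap=2.0):
    """Yield an unbounded sequence of poll delays that doubles each
    time until it reaches `cap`, instead of a fixed poll period."""
    delay = start
    while True:
        yield delay
        delay = min(delay * factor, cap)

delays = backoff()
schedule = [next(delays) for _ in range(5)]
# schedule == [0.05, 0.1, 0.2, 0.4, 0.8]
```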
Hi there,
When I run the following script, it runs successfully on Linux but fails on Windows 10:
import filelock

lock = filelock.FileLock('test.txt')
with lock:
    f = open('test.txt', 'w')
    f.write('hello')
    f.close()
The error on Windows is:
PermissionError: [Errno 13] Permission denied
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "lock.py", line 9, in <module>
f.close()
PermissionError: [Errno 13] Permission denied
However, if I move f.close() outside of the context manager, the script runs successfully on both Windows and Linux:
lock = filelock.FileLock('test.txt')
with lock:
    f = open('test.txt', 'w')
    f.write('hello')
f.close()
Why exactly must the f.close() be placed outside the context manager on Windows, but not on Linux? Is it related to msvcrt.locking()?
I'm using filelock 2.0.7, Windows 10 (x64) and Debian Jessie (x64).
Hey Benedikt,
I'm really new to Python and am currently working on state monitoring for a Raspberry Pi and various functions with the internal watchdog.
I tried the following, but it doesn't work: I get either permission denied on Windows, or the file no longer exists afterwards (Raspbian):
path = 'ConfigLED.json'
lock = filelock.FileLock(path)
with lock:
    with open(path) as config:
        data = json.load(config)
        print(data)
lock.release(True)
I think the problem is that the lock is on the file itself, and only through the object that holds the lock can I load the data. But the property (fd) is private, right? Can you help me figure out how to do this, or is it not possible?
BR Jens
I was wondering if you had any recommendations of using this package in a multi user environment.
We are trying to lock an abstract resource (well, a user-land device driver) using filelock between multiple users potentially connected to a given Linux server.
Our original idea was to put a lock file in /tmp. Unfortunately, it seems the sticky bit restricts the usage of the same file by two users.
$ python test_filelock.py
$ sudo su j -c "/home/mark/miniforge3/envs/dev/bin/python test_filelock.py "
Traceback (most recent call last):
File "test_filelock.py", line 5, in <module>
os.chmod(lock_filename, 0o777)
PermissionError: [Errno 1] Operation not permitted: '/tmp/my_lock'
$ ls -lah /tmp/my_lock
-rwxrwxrwx 1 mark mark 0 Jun 23 14:31 /tmp/my_lock
test_filelock.py contains:
from filelock import FileLock
import os
lock_filename = '/tmp/my_lock'
f = FileLock(lock_filename)
os.chmod(lock_filename, 0o777)
f.acquire()
f.release()
If we do find a multi-user solution, would you be interested in merging it in?
As a note, we tried to use SoftFileLock, but the fact that the file can persist when Python crashes makes it unusable for creating a more stable driver. Using the help from the kernel through fcntl makes FileLock the preferred solution.
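One possible multi-user pattern is to create the lock file world-writable up front and let later users simply reuse it, avoiding the chmod on a file another user owns. This is only a sketch of an idea, not filelock's API: the umask handling and the /tmp path reuse are my additions:

```python
import os

from filelock import FileLock

lock_filename = '/tmp/my_lock'

# Create the lock file world-writable; clear the umask so the 0o777
# mode actually sticks. If another user already created the file,
# os.open with O_CREAT reuses it and leaves its mode untouched, so
# no chmod (and no EPERM) is needed.
old_umask = os.umask(0)
try:
    fd = os.open(lock_filename, os.O_CREAT | os.O_RDWR, 0o777)
    os.close(fd)
finally:
    os.umask(old_umask)

lock = FileLock(lock_filename)
lock.acquire()
try:
    pass  # critical section shared across users
finally:
    lock.release()
```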
... hence not able to run tests when using the PyPI tarball. The latest update breaks pip install on Python 2.7 and breaks installing tox for testing, so this is rather a critical issue!
Collecting filelock<4,>=3.0.0 (from tox)
Downloading https://files.pythonhosted.org/packages/ce/e1/7d404a13ed831b178a5af635c8b4923fcdff269925cb5839949edf11bd19/filelock-3.0.11.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-CYvThf/filelock/setup.py", line 42, in <module>
license_ = open("LICENSE.rst").read()
IOError: [Errno 2] No such file or directory: 'LICENSE.rst'
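The immediate cause is that LICENSE.rst is missing from the sdist; the proper fix is to ship it (e.g. via MANIFEST.in), but setup.py could also be made defensive so a missing file does not abort installation. A sketch only; the fallback string and the relative-path handling are my own, not filelock's actual setup.py:

```python
import os

# Resolve the license file relative to setup.py, not the current
# working directory, and tolerate its absence in stripped tarballs.
here = os.path.abspath(os.path.dirname(__file__))
try:
    with open(os.path.join(here, "LICENSE.rst")) as fh:
        license_ = fh.read()
except IOError:  # file absent from the tarball: fall back gracefully
    license_ = "The Unlicense"
```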
https://fangpenlin.com/posts/2012/08/26/good-logging-practice-in-python/ explains why this is not a good practice.
To avoid the case where lock files accidentally get left behind and lock things forever, we could have the process that acquires the lock set an expiration time on it. If the lock has expired, another process can assume the owner died, delete the file, and re-acquire the lock.
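A minimal sketch of that idea on top of a soft (file-existence) lock; the function names, the mtime-based age check, and the 0.05 s poll interval are my own, not part of filelock:

```python
import os
import time

def acquire_with_ttl(path, ttl=300.0, poll=0.05):
    """Acquire a soft lock, stealing it if it is older than `ttl` seconds."""
    while True:
        try:
            # O_EXCL makes creation atomic: success means we own the lock.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return
        except FileExistsError:
            try:
                age = time.time() - os.stat(path).st_mtime
            except FileNotFoundError:
                continue  # released between open() and stat(); retry
            if age > ttl:
                try:
                    os.unlink(path)  # expired: assume the owner died
                except FileNotFoundError:
                    pass
                continue
            time.sleep(poll)

def release(path):
    os.unlink(path)
```

Note the obvious caveat: a process that is merely slow, not dead, can have its lock stolen, so the TTL must be chosen well above the longest legitimate hold time.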
"This package contains a single Python module that implements platform independent file locking as a means of inter-process communication"
I’m not clear about the following:
"The FileLock is platform dependent while the SoftFileLock is not. Use the FileLock if all instances of your application are running on the same host and a SoftFileLock otherwise."
Platform dependence usually means that the code depends on a particular OS, which is not consistent with the second sentence.
Hi,
Does this work across independent processes? So lock a file in one process, then tomorrow spawn another, independent process and find the file locked?
setup.py currently says "Python :: 3"
Also, in the first line of setup.py, the shebang should be changed from python3 to #!/usr/bin/env python to make it universal.
Is there anything preventing the version 2.0.8 to be published on Pypi?
$ virtualenv env2
$ env2/bin/pip install filelock
$ ls env2
bin include lib LICENSE.rst local pip-selfcheck.json README.rst share
These LICENSE.rst and README.rst appear because they are listed in filelock's setup.py as data_files. It's better to list them in a MANIFEST.in instead, so that they are included in a source distribution but are not actually installed.
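A sketch of the MANIFEST.in that would do this, using the file names from the listing above:

```
include LICENSE.rst
include README.rst
```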
The filelock wheels on pypi.org are not universal:
https://pypi.org/simple/filelock/
e.g. filelock-3.0.12-py3-none-any.whl
It would be nice to have universal wheels. They can be built with python3 setup.py bdist_wheel --universal, which produces wheels named filelock-3.0.12-py2.py3-none-any.whl instead of the current filelock-3.0.12-py3-none-any.whl.
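Equivalently, this can be made the default by adding a setup.cfg section, so a plain bdist_wheel invocation builds the py2.py3 wheel:

```
[bdist_wheel]
universal = 1
```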
Would it be possible to release wheel files for PyPy too, alongside the source distribution? Or alternatively, consider adding a pyproject.toml specifying build requirements 👍
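A minimal pyproject.toml for a setuptools project looks like this (a sketch; the exact setuptools version pin is my assumption):

```
[build-system]
requires = ["setuptools>=40.8.0", "wheel"]
build-backend = "setuptools.build_meta"
```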
In the method acquire(): even if the call to _acquire() succeeds, I still have to wait a single poll interval.
For an application that has no hope of waiting for a lock to become usable (in my case, avoiding the simultaneous execution of two server processes on the same state file), I'd need locks that fail immediately when held by another process.
The trivial timeout value I'd like to pass, 0, has a different meaning, though: "poll indefinitely"; the None value that would lend itself to the desired behavior is also used differently (meaning "use the default for that lock").
My current workaround is to just pass 0.001, and it works. (Given that the user story is "start the server, see that it fails immediately", that's immediate enough.)
It would be nice to have a documented way of getting immediate-return behavior. For API-stability reasons, redefining either 0 or None is not viable. One way to introduce the desired behavior is a filelock.RETURN_IMMEDIATELY value (an object(), a string, or even 0.000000001) that can be passed as the timeout argument to indicate that success or failure should be reported immediately.
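The workaround described in the report can be sketched as follows; the lock path and the error message are illustrative, while Timeout is filelock's existing exception type:

```python
from filelock import FileLock, Timeout

lock = FileLock('/tmp/server.state.lock')
try:
    # Near-zero timeout approximates "fail immediately if another
    # process holds the lock" (the workaround from the report).
    lock.acquire(timeout=0.001)
except Timeout:
    raise SystemExit('another server instance is already running')
try:
    pass  # exclusive work on the state file goes here
finally:
    lock.release()
```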