Locking a file in Python



I need to lock a file for writing in Python. It will be accessed from multiple Python processes at once. I have found some solutions online, but most fail for my purposes, as they are often only Unix based or only Windows based.


Alright, so I ended up going with the code I wrote, on my website (the link there is dead; view it on archive.org, and it is also available on GitHub). I can use it in the following fashion:

from filelock import FileLock

with FileLock("myfile.txt"):
    # work with the file as it is now locked
    print("Lock acquired.")

There is a cross-platform file locking module: Portalocker.
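
As a point of reference, basic Portalocker usage looks roughly like this (a minimal sketch, assuming the portalocker package is installed; the file name is illustrative):

import portalocker

with open("myfile.txt", "a") as f:
    portalocker.lock(f, portalocker.LOCK_EX)  # exclusive lock; blocks until available
    f.write("only one process writes this line at a time\n")
    f.flush()
    portalocker.unlock(f)  # release before the file is closed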

As Kevin says, writing to a file from multiple processes at once is something you want to avoid if at all possible.

If you can shoehorn your problem into a database, you could use SQLite. It supports concurrent access and handles its own locking.
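
A minimal sketch of that idea using the standard-library sqlite3 module (the database file and table names here are made up for illustration):

import sqlite3

# Each process opens its own connection; SQLite serializes writers itself.
# The timeout makes a writer wait (instead of failing) while another process
# holds the write lock.
conn = sqlite3.connect("log.db", timeout=10)
conn.execute("CREATE TABLE IF NOT EXISTS log (ts TEXT, message TEXT)")
with conn:  # commits the transaction on success, rolls back on error
    conn.execute("INSERT INTO log VALUES (datetime('now'), ?)",
                 ("hello from one process",))
conn.close()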


The other solutions cite a lot of external code bases. If you would prefer to do it yourself, here is some code for a cross-platform solution that uses the respective file locking tools on Linux / DOS systems.

try:
    # Posix based file locking (Linux, Ubuntu, MacOS, etc.)
    import fcntl, os
    def lock_file(f):
        fcntl.lockf(f, fcntl.LOCK_EX)
    def unlock_file(f):
        fcntl.lockf(f, fcntl.LOCK_UN)
except ModuleNotFoundError:
    # Windows file locking
    import msvcrt, os
    def file_size(f):
        return os.path.getsize( os.path.realpath(f.name) )
    def lock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_RLCK, file_size(f))
    def unlock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, file_size(f))


# Class for ensuring that all file operations are atomic, treat
# initialization like a standard call to 'open' that happens to be atomic.
# This file opener *must* be used in a "with" block.
class AtomicOpen:
    # Open the file with arguments provided by user. Then acquire
    # a lock on that file object (WARNING: Advisory locking).
    def __init__(self, path, *args, **kwargs):
        # Open the file and acquire a lock on the file before operating
        self.file = open(path,*args, **kwargs)
        # Lock the opened file
        lock_file(self.file)

    # Return the opened file object (knowing a lock has been obtained).
    def __enter__(self, *args, **kwargs): return self.file

    # Unlock the file and close the file object.
    def __exit__(self, exc_type=None, exc_value=None, traceback=None):        
        # Flush to make sure all buffered contents are written to file.
        self.file.flush()
        os.fsync(self.file.fileno())
        # Release the lock on the file.
        unlock_file(self.file)
        self.file.close()
        # Handle exceptions that may have come up during execution, by
        # default any exceptions are raised to the user.
        if exc_type is not None:
            return False
        return True

Now, AtomicOpen can be used in a with block where one would normally use an open statement.
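
For example, appending a line under the lock might look like this (the file name is just for illustration):

with AtomicOpen("myfile.txt", "a") as f:
    f.write("this append happens while the advisory lock is held\n")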

WARNING: If you are running on Windows and Python crashes before __exit__ is called, I'm not sure what the lock behavior would be.

WARNING: The locking provided here is advisory, not absolute. All potentially competing processes must use the "AtomicOpen" class.


I prefer lockfile, the "platform-independent file locking" package.


Locking is platform and device specific, but generally, you have a few options:

  1. Use flock(), or equivalent (if your OS supports it). This is advisory locking: unless you check for the lock, it's ignored.
  2. Use a lock-copy-move-unlock methodology, where you copy the file, write the new data, then move it (move, not copy - move is an atomic operation in Linux -- check your OS), and you check for the existence of the lock file.
  3. Use a directory as a "lock". This is necessary if you're writing to NFS, since NFS doesn't support flock().
  4. There's also the possibility of using shared memory between the processes, but I've never tried that; it's very OS-specific.

For all these methods, you'll have to use a spin-lock (retry-after-failure) technique for acquiring and testing the lock. This does leave a small window for mis-synchronization, but it's generally small enough not to be a major issue.
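
A minimal sketch of the directory-as-lock idea combined with that retry loop; the lock-directory name and retry delay are illustrative assumptions, not part of any particular library:

import errno, os, time

def acquire_dir_lock(lock_dir="mylog.lock", retry_delay=0.05):
    # os.mkdir is atomic, so only one process can create the directory;
    # everyone else spins until it disappears.
    while True:
        try:
            os.mkdir(lock_dir)
            return
        except OSError as e:
            if e.errno != errno.EEXIST:
                raise
            time.sleep(retry_delay)

def release_dir_lock(lock_dir="mylog.lock"):
    os.rmdir(lock_dir)

acquire_dir_lock()
try:
    with open("mylog.txt", "a") as f:
        f.write("one writer at a time\n")
finally:
    release_dir_lock()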

If you're looking for a solution that is cross platform, then you're better off logging to another system via some other mechanism (the next best thing is the NFS technique above).

Note that sqlite is subject to the same constraints over NFS that normal files are, so you can't write to an sqlite database on a network share and get synchronization for free.


I have been looking at several solutions to do that, and my choice has been oslo.concurrency.

It's powerful and relatively well documented. It's based on fasteners.
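
Since oslo.concurrency is built on fasteners, the underlying primitive looks roughly like this (a sketch assuming the fasteners package is installed; the lock-file path is illustrative):

import fasteners

# One lock file shared by all cooperating processes.
lock = fasteners.InterProcessLock("/tmp/myapp.lock")

lock.acquire()   # blocks until the inter-process lock is held
try:
    with open("shared.txt", "a") as f:
        f.write("guarded by fasteners\n")
finally:
    lock.release()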

Other solutions:


Coordinating access to a single file at the OS level is fraught with all kinds of issues that you probably don't want to solve.

Your best bet is to have a separate process that coordinates read/write access to that file.
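
One way to sketch that with only the standard library: a single writer process owns the file and every other process sends it lines over a multiprocessing.Queue (all names here are illustrative):

import multiprocessing as mp

def writer(queue, path):
    # The only process that ever touches the file.
    with open(path, "a") as f:
        for line in iter(queue.get, None):  # None is the shutdown sentinel
            f.write(line + "\n")
            f.flush()

def worker(queue, n):
    queue.put("message from worker %d" % n)

if __name__ == "__main__":
    q = mp.Queue()
    w = mp.Process(target=writer, args=(q, "coordinated.txt"))
    w.start()
    workers = [mp.Process(target=worker, args=(q, i)) for i in range(4)]
    for p in workers: p.start()
    for p in workers: p.join()
    q.put(None)   # tell the writer to stop
    w.join()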


Locking a file is usually a platform-specific operation, so you may need to allow for the possibility of running on different operating systems. For example:

import os

def my_lock(f):
    if os.name == "posix":
        # Unix or OS X specific locking here
        pass
    elif os.name == "nt":
        # Windows specific locking here
        pass
    else:
        print("Unknown operating system, lock unavailable")

I have been working on a situation like this where I run multiple copies of the same program from within the same directory/folder and logging errors. My approach was to write a "lock file" to the disc before opening the log file. The program checks for the presence of the "lock file" before proceeding, and waits for its turn if the "lock file" exists.

Here is the code:

from datetime import datetime
from os import remove, stat
from os.path import exists
from time import time

def errlogger(error):

    while True:
        if not exists('errloglock'):
            # Create the lock file, append the message to the log, then release.
            lock = open('errloglock', 'w')
            if exists('errorlog'): log = open('errorlog', 'a')
            else: log = open('errorlog', 'w')
            log.write(str(datetime.utcnow())[0:-7] + ' ' + error + '\n')
            log.close()
            remove('errloglock')
            return
        else:
            # Treat a lock file older than 0.01 s (about 5x the measured
            # write time) as stale and remove it.
            check = stat('errloglock')
            if time() - check.st_ctime > 0.01: remove('errloglock')
            print('waiting my turn')

EDIT--- After thinking over some of the comments about stale locks above, I edited the code to add a check for staleness of the "lock file." Timing several thousand iterations of this function on my system gave an average of 0.002066... seconds from just before:

lock = open('errloglock', 'w')

to just after:

remove('errloglock')

so I figured I would start with 5 times that amount to indicate staleness and monitor the situation for problems.

Also, as I was working with the timing, I realized that I had a bit of code that was not really necessary:

lock.close()

which I had immediately following the open statement, so I have removed it in this edit.


The scenario is like this: the user requests a file to do something. Then, if the user sends the same request again, it informs the user that the second request will not be done until the first request finishes. That's why I use a lock mechanism to handle this issue.

Here is my working code:

from lockfile import LockFile

# This snippet runs inside a request-handling function;
# lock_file_path is defined elsewhere in the program.
lock = LockFile(lock_file_path)
status = ""
if not lock.is_locked():
    lock.acquire()
    status = lock.path + ' is locked.'
    print(status)
else:
    status = lock.path + " is already locked."
    print(status)

return status

I found a simple implementation that worked(!) in grizzled-python.

Simply using os.open(..., O_EXCL) + os.close() did not work on Windows.
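
For context, the os.open + O_EXCL approach that, per the answer above, did not hold up on Windows looks roughly like this (the lock-file name is illustrative):

import os

lock_path = "myfile.txt.lock"
try:
    # O_CREAT | O_EXCL fails atomically if the lock file already exists.
    fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
except FileExistsError:
    print("another process holds the lock")
else:
    try:
        with open("myfile.txt", "a") as f:
            f.write("guarded write\n")
    finally:
        os.close(fd)
        os.remove(lock_path)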


You may find pylocker very useful. It can be used to lock a file or for locking mechanisms in general and can be accessed from multiple Python processes at once.

If you simply want to lock a file here's how it works:

import uuid
from pylocker import Locker

# create a unique lock pass. This can be any string.
lpass = str(uuid.uuid1())

# create locker instance.
FL = Locker(filePath='myfile.txt', lockPass=lpass, mode='w')

# acquire the lock
with FL as r:
    # get the result
    acquired, code, fd = r

    # check if acquired.
    if fd is not None:
        print(fd)
        fd.write("I have successfully acquired the lock!")

# no need to release anything or to close the file descriptor,
# the with statement takes care of that. let's print fd and verify that.
print(fd)

Reference URL: https://stackoverflow.com/questions/489861/locking-a-file-in-python
