bauman / python-idzip
Seekable, gzip compatible, compression format
License: MIT License
python-idzip version 0.3
Sample code to reproduce the issue
import idzip

with open("/tmp/r.txt", mode="wb") as f:
    zipfile = idzip.IdzipFile(fileobj=f, mode="wb")
    zipfile.write(b"\x00ed")
    zipfile.close()
Stacktrace
Traceback (most recent call last):
  File "/tmp/trial.py", line 4, in <module>
    zipfile = idzip.IdzipFile(fileobj=f, mode="wb")
  File "/home/arun/src/gh/stardict/.venv/lib/python3.6/site-packages/idzip/api.py", line 39, in __init__
    self._impl = self._make_writer(fileobj, sync_size=sync_size, mtime=mtime)
  File "/home/arun/src/gh/stardict/.venv/lib/python3.6/site-packages/idzip/api.py", line 53, in _make_writer
    return IdzipWriter(filespec, sync_size=sync_size, mtime=mtime)
  File "/home/arun/src/gh/stardict/.venv/lib/python3.6/site-packages/idzip/compressor.py", line 266, in __init__
    "`output` must be a file-like object supporting "
TypeError: `output` must be a file-like object supporting write, tell, flush, and close!
Exception ignored in: <bound method IOStreamWrapperMixin.__del__ of <idzip.compressor.IdzipWriter object at 0x7f6039c2d4a8>>
Traceback (most recent call last):
  File "/home/arun/src/gh/stardict/.venv/lib/python3.6/site-packages/idzip/_stream.py", line 22, in __del__
    if not self.closed:
  File "/home/arun/src/gh/stardict/.venv/lib/python3.6/site-packages/idzip/_stream.py", line 7, in closed
    return self.stream.closed
  File "/home/arun/src/gh/stardict/.venv/lib/python3.6/site-packages/idzip/compressor.py", line 293, in stream
    return self.output
AttributeError: 'IdzipWriter' object has no attribute 'output'
Profiling random access to a large number of positions in a large file shows that more time is spent linearly searching for _Member objects than actually performing the complex computation on them. I propose optimizing IdzipReader._select_member to detect when the requested position lies within the set of already-parsed members and to use a binary search to select the correct member in O(log n) time rather than O(n) time.
Is it at any point reasonable to just load all _Members in a single pass?
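The binary-search idea above can be sketched with the standard library's bisect module. This is a minimal, self-contained illustration, not idzip's actual implementation: Member here is a hypothetical stand-in for idzip's _Member, assumed to record its uncompressed start offset and length, with the member list sorted by start offset (which holds when members are parsed sequentially from the file).

```python
import bisect

class Member:
    # Hypothetical stand-in for idzip's _Member: records where the
    # member's uncompressed data starts and how many bytes it spans.
    def __init__(self, start_offset, length):
        self.start_offset = start_offset
        self.length = length

def select_member(members, pos):
    """Pick the member containing uncompressed offset `pos` in O(log n).

    `members` must be sorted by start_offset.
    """
    starts = [m.start_offset for m in members]
    # bisect_right finds the first member starting *after* pos;
    # the one just before it is the member that contains pos.
    index = bisect.bisect_right(starts, pos) - 1
    if index < 0 or pos >= members[index].start_offset + members[index].length:
        raise ValueError("position %d is outside the parsed members" % pos)
    return members[index]
```

In a real patch the sorted start-offset list would be maintained incrementally as members are parsed, so it is not rebuilt on every lookup.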
I noticed while compressing a large file with a sloppily written loop that if I did not use a chunk size of MAX_MEMBER_SIZE, or an integral fraction thereof, the content of the file could be corrupted.
import idzip

src = '...'
dest = '...'

with open(src, 'rb') as infh, idzip.open(dest, 'wb') as outfh:
    # chunk_size = idzip.MAX_MEMBER_SIZE
    chunk_size = 2 ** 28
    chunk = infh.read(chunk_size)
    while chunk:
        outfh.write(chunk)
        chunk = infh.read(chunk_size)
I think the issue is in how IdzipWriter.write tries to handle large buffers gracefully, but I need to investigate further.
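Until the underlying cause is confirmed, one workaround is to split data into safe-sized pieces before handing it to the writer. The sketch below is a generic, hypothetical chunking helper (iter_chunks is not part of idzip); the caller would pass idzip.MAX_MEMBER_SIZE, or an integral fraction of it, as the chunk size.

```python
def iter_chunks(data, chunk_size):
    """Yield successive chunk_size-sized slices of data.

    The final slice may be shorter than chunk_size; an empty input
    yields nothing.
    """
    for start in range(0, len(data), chunk_size):
        yield data[start:start + chunk_size]
```

With this helper, a loop like `for chunk in iter_chunks(buf, chunk_size): outfh.write(chunk)` never passes the writer a buffer larger than the chosen chunk size, sidestepping the corruption seen with arbitrary write sizes.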