pytorch / data
A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries.
License: BSD 3-Clause "New" or "Revised" License
This issue is generated from the TODO line
https://github.com/pytorch/data/blob/f102d25f9f444de3380c6d49bf7aaf52c213bb1f/build/lib/torchdata/datapipes/iter/transform/bucketbatcher.py#L10
cc @ejguan
Currently we use an import inside each DataPipe class to make the import lazy, e.g.:
data/torchdata/datapipes/iter/load/iopath.py
Lines 21 to 29 in 4802a35
As more third-party libraries are used in TorchData to support different functionalities, we could add a method that lazily imports a module into the global namespace. Then we wouldn't need to duplicate the import inside each class that uses the same third-party module.
Features needed:
from ... import ... as ...
import xxx.yyy
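A minimal sketch of what such a helper could look like, assuming a hypothetical lazy_import function (not an existing TorchData API):

import importlib
import types


def lazy_import(name: str) -> types.ModuleType:
    """Return a module proxy that performs the real import on first use."""

    class _LazyModule(types.ModuleType):
        def __getattr__(self, attr):
            module = importlib.import_module(name)
            # Cache the real attributes so later lookups skip this hook.
            self.__dict__.update(module.__dict__)
            return getattr(module, attr)

    return _LazyModule(name)


# At module level, instead of duplicating the import in every DataPipe:
# iopath = lazy_import("iopath")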
This is the initial draft. I will complete it shortly.
The state of the iterator is attached to each IterDataPipe instance. This is super useful because:
- A generator (returned by __iter__) is not picklable, so attaching the state to the instance helps multiprocessing and snapshotting.
- A __next__ call can be used to do a fast-forward.
- The state of iteration is attached to the DataPipe instance, rather than to a temporary object created by __iter__, whose internal state we couldn't track. (We can easily track state like RNG, iteration number, buffer, etc., as it is attached to the self instance.)
Implementation options:
- Use _iterator as the placeholder for __iter__ calls and implement __next__. (My preference.)
- Have __iter__ return self (Forker(self) may be another option, not 100% sure).
DataLoader should raise an error if there are two DataPipe instances with the same id in the graph. (Another option is for DataLoader to fork automatically.)
Users should use Forker for each DataPipe they want to appear twice in the graph. A sketch of the first option follows below.
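A minimal sketch of the _iterator option, with hypothetical names (not the actual implementation):

from torch.utils.data import IterDataPipe


class SingletonWrapperIterDataPipe(IterDataPipe):
    # Hypothetical sketch: keeps the single active iterator on self so its
    # state (iteration number, buffers, RNG, ...) stays trackable.
    def __init__(self, source_datapipe):
        self.source_datapipe = source_datapipe
        self._iterator = None  # placeholder for the single active iterator
        self._number_of_samples_yielded = 0

    def __iter__(self):
        # Starting a new epoch replaces (and invalidates) the old iterator.
        self._iterator = iter(self.source_datapipe)
        self._number_of_samples_yielded = 0
        return self

    def __next__(self):
        value = next(self._iterator)
        self._number_of_samples_yielded += 1
        return value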
Currently, when iteration over a DataPipe starts and an error is raised, the traceback reports each __iter__ method, pointing to the DataPipe class file.
It's hard to figure out which part of the DataPipe graph is broken, especially when multiple instances of the same DataPipe exist in the pipeline.
Since developers normally iterate over the sequence of DataPipes for debugging, we can't rely on DataLoader to handle this case.
I am not sure how to reference the self object from each Iterator instance. https://docs.python.org/3/reference/expressions.html?highlight=generator#generator-iterator-methods
(I guess this is also one thing we need to think about: a singleton iterator should be able to reference back to its object.)
As #29 has landed, all csv-related DataPipes are located in https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/util/plain_text_reader.py.
However, we forgot to remove the old DataPipe in https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/util/csvparser.py
Discussed with @NivekT about a wrapper class for all streams (see the sketch below):
Pros:
- A __del__ method can close the file stream automatically when the wrapper's ref count becomes 0. It would eliminate all warnings.
- A unified API to read streams. (For OnDiskCache, I would prefer a unified API to read streams; otherwise I have to handle all the different cases.) For example, a unified read() to read everything into memory: with stream=True for a large file, requests.Response doesn't support read; it only supports iter_content or __iter__ to read chunk by chunk.
Cons:
Reference: #35 (comment), #65 (comment)
cc: @VitalyFedyunin
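A minimal sketch of such a wrapper, with hypothetical names (not the actual torchdata StreamWrapper):

class StreamWrapperSketch:
    # Hypothetical sketch: forwards reads to the underlying stream and
    # closes it once the wrapper itself is garbage-collected.
    def __init__(self, stream):
        self.stream = stream

    def read(self, *args, **kwargs):
        return self.stream.read(*args, **kwargs)

    def close(self):
        if not getattr(self.stream, "closed", False):
            self.stream.close()

    def __del__(self):
        # Ref count hitting zero closes the stream, silencing
        # unclosed-resource warnings.
        self.close()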
Based on the internal tests, several signals need to be removed from our ignore list.
This issue is generated from the TODO line
Since IoPath can download from different sources (HTTP, S3, Google, etc.), we should add an IoPathDownloader to handle all the different sources.
We should have some checks/tests that flag when a DataPipe has an attribute/method that shares the same name as an existing functional DataPipe. Those names are the ones defined inside the decorator @functional_datapipe('NAME'), such as map, batch, and zip.
For example, Batcher (or BatcherIterDataPipe) has the functional DataPipe name batch. However, currently there is nothing to prevent other IterDataPipes from using batch as the name of an attribute or method.
The change in the following PR is a good example.
If this feature is not implemented, then a DataPipe can have multiple attributes/methods with the same name, potentially causing confusion and bugs.
Ideally, we should be able to flag this issue during development (within IDEs).
If we cannot automatically prevent this during development, we can have a check in register_datapipe_as_function
or a CI check that ensures all attributes and methods are compliant.
mypy
also should be able to flag this issue if the .pyi
file has a complete set of method interfaces for all built-in DataPipes (including those in TorchData). This becomes trickier for user-defined DataPipes.
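A hedged sketch of what a check inside register_datapipe_as_function could look like, assuming a collection of registered functional names is available (all names here are illustrative):

def check_functional_name_collisions(datapipe_cls, functional_names):
    # functional_names: the names registered via @functional_datapipe('NAME'),
    # e.g. {"map", "batch", "zip", ...}
    collisions = [
        name for name in vars(datapipe_cls) if name in functional_names
    ]
    if collisions:
        raise TypeError(
            f"{datapipe_cls.__name__} defines {collisions}, which shadow "
            "registered functional DataPipe names"
        )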
per title.
This would help us notify the domain libraries about BC-breaking changes.
Reference:
cc: @NivekT
This issue is generated from the TODO line
Line 4 in f102d25
https://github.com/pytorch/data/blob/main/torchdata/__init__.py#L4
It seems like there are some messed-up import paths? Some things are torch.data whereas others are torchdata?
(3.7.11) tristanr@tristanr-arch2 ~> pip install -e git+https://github.com/pytorch/data#egg=torchdata
Obtaining torchdata from git+https://github.com/pytorch/data#egg=torchdata
Cloning https://github.com/pytorch/data to ./venvs/3.7.11/src/torchdata
Running command git clone --filter=blob:none -q https://github.com/pytorch/data /home/tristanr/venvs/3.7.11/src/torchdata
Resolved https://github.com/pytorch/data to commit 2d94ebc6e95d4bd475a98e947781e58410386a10
Preparing metadata (setup.py) ... done
Requirement already satisfied: requests in ./venvs/3.7.11/lib/python3.7/site-packages (from torchdata) (2.26.0)
Requirement already satisfied: torch in ./venvs/3.7.11/lib/python3.7/site-packages (from torchdata) (1.10.0)
Requirement already satisfied: certifi>=2017.4.17 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (2021.10.8)
Requirement already satisfied: charset-normalizer~=2.0.0 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (2.0.7)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (1.26.7)
Requirement already satisfied: idna<4,>=2.5 in ./venvs/3.7.11/lib/python3.7/site-packages (from requests->torchdata) (3.3)
Requirement already satisfied: typing-extensions in ./venvs/3.7.11/lib/python3.7/site-packages (from torch->torchdata) (3.10.0.2)
Installing collected packages: torchdata
Attempting uninstall: torchdata
Found existing installation: torchdata 0.2.0
Uninstalling torchdata-0.2.0:
Successfully uninstalled torchdata-0.2.0
Running setup.py develop for torchdata
Successfully installed torchdata-0.1.0a0+2d94ebc
(3.7.11) tristanr@tristanr-arch2 ~> python -c "import torchdata"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/__init__.py", line 2, in <module>
from . import datapipes
File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/__init__.py", line 3, in <module>
from . import iter
File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/iter/__init__.py", line 24, in <module>
from torchdata.datapipes.iter.load.online import (
File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/iter/load/online.py", line 10, in <module>
from torchdata.datapipes.utils import StreamWrapper
File "/home/tristanr/venvs/3.7.11/src/torchdata/torchdata/datapipes/utils/__init__.py", line 2, in <module>
from torch.utils.data.datapipes.utils.common import StreamWrapper
ImportError: cannot import name 'StreamWrapper' from 'torch.utils.data.datapipes.utils.common' (/home/tristanr/venvs/3.7.11/lib/python3.7/site-packages/torch/utils/data/datapipes/utils/common.py)
Steps to reproduce the behavior:
pip install -e git+https://github.com/pytorch/data#egg=torchdata
python -c "import torchdata"
Expected behavior: no import error
PyTorch version: 1.10.0+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 11.1.0
Clang version: Could not collect
CMake version: version 3.22.0
Libc version: glibc-2.33
Python version: 3.7.11 (default, Nov 22 2021, 11:26:35) [GCC 11.1.0] (64-bit runtime)
Python platform: Linux-5.14.16-arch1-1-x86_64-with-arch
Is CUDA available: False
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Versions of relevant libraries:
[pip3] botorch==0.5.1
[pip3] gpytorch==1.5.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.21.4
[pip3] torch==1.10.0
[pip3] torchdata==0.1.0a0+2d94ebc
[pip3] torchmetrics==0.6.0
[pip3] torchvision==0.11.1
[pip3] torchx==0.1.2.dev0
[conda] blas 1.0 mkl
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-include 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] mypy_extensions 0.4.3 py39h06a4308_0
[conda] numpy 1.20.3 py39hf144106_0
[conda] numpy-base 1.20.3 py39h74d4b33_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
Should we change the name of FileLoader to FileOpener?
We split the file-loading functionality into three steps:
1. FileLister
2. FileLoader (I personally feel like the name is incorrect.)
3. dp.map(fn=lambda x: x.read(), input_col=1)
That would remove the need to import these from three different sources.
Before:
from torch.utils.data import IterDataPipe
from torch.utils.data.datapipes.iter import (
Demultiplexer,
Filter,
Mapper,
TarArchiveReader,
Shuffler,
)
from torchdata.datapipes.iter import KeyZipper
After:
from torchdata.datapipes.iter import (
KeyZipper,
IterDataPipe,
Demultiplexer,
Filter,
Mapper,
TarArchiveReader,
Shuffler,
)
I can send a PR if we want this.
The Vision team found a series of DataPipes required to revamp their datasets; some of them are general-purpose and live in https://github.com/pytorch/vision/blob/main/torchvision/prototype/datasets/utils/_internal.py
The behavior seems to differ between AWS and macOS, and likely across Python environments.
It may be related to the write order of test files differing across OSes/systems.
Specific tests that may have issues:
This issue is generated from the TODO line
https://github.com/pytorch/data/blob/f102d25f9f444de3380c6d49bf7aaf52c213bb1f/build/lib/torchdata/datapipes/iter/util/hashchecker.py#L39
cc @VitalyFedyunin
This issue is generated from the TODO line
https://github.com/pytorch/data/blob/f102d25f9f444de3380c6d49bf7aaf52c213bb1f/build/lib/torchdata/datapipes/iter/transform/bucketbatcher.py#L56
This issue is generated from the TODO line
Line 19 in f102d25
Since iopath provides an API that can replace open and supports saving to multiple destinations (including S3), we should probably reimplement SaverIterDataPipe to rely on iopath instead.
We could even rename the functional API so that it's called save instead of save_to_disk.
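A hedged sketch of the idea, using iopath's PathManager (the function name and pipeline shape here are illustrative, not the actual SaverIterDataPipe):

from iopath.common.file_io import PathManager


def save_via_iopath(path_data_pairs):
    # PathManager.open works like the built-in open but resolves
    # destinations (local paths, S3 URIs, ...) via registered handlers.
    pm = PathManager()
    for path, data in path_data_pairs:
        with pm.open(path, "wb") as f:
            f.write(data)
        yield path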
Add a new loader similar to the iopath loader that uses fsspec.
https://github.com/pytorch/data/blob/main/torchdata/datapipes/iter/load/iopath.py
https://filesystem-spec.readthedocs.io/en/latest/
It would be nice to have fsspec in addition to iopath for loading data from general data sources. A lot of projects already use and support it, which makes it a good addition to torchdata for uniform support.
PyTorch Lightning, TensorBoard, and TorchX already support fsspec. It's quite easy to add support for a new storage provider, and many common ones are available already. Internally there's a Manifold provider which is used with many PyTorch/STL projects.
For common storage providers such as S3 there's generally already support in most projects, but for custom or less-used storage providers a user would have to implement support for each different system. iopath does provide a similar abstraction, but fsspec seems to have more OSS adoption, so it would be nice to have a unified interface across PyTorch projects.
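A minimal sketch of what an fsspec-based loader could look like (hypothetical class, not the eventual torchdata API):

import fsspec
from torch.utils.data import IterDataPipe


class FSSpecFileOpenerSketch(IterDataPipe):
    # Hypothetical sketch: fsspec.open resolves the storage provider from
    # the URL prefix (s3://, gcs://, file://, ...).
    def __init__(self, source_datapipe, mode="rb"):
        self.source_datapipe = source_datapipe  # yields URLs or paths
        self.mode = mode

    def __iter__(self):
        for path in self.source_datapipe:
            with fsspec.open(path, mode=self.mode) as f:
                # Reads eagerly for simplicity; a real DataPipe would
                # likely yield the stream itself.
                yield path, f.read()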
Is the PyPI package torchdata associated with this project? I naively ran pip install torchdata when trying to install this package and got an unexpected, unrelated project.
https://pypi.org/project/torchdata/
That package doesn't seem to have any GitHub sources corresponding to it, so I'm wondering what's going on with it.
We could support different DataPipes with the same functionality under a single functional API using a router DataPipe.
For open, for example, we could support:
Option 1:
Based on the input (type), we could route to the corresponding DataPipe via an extra argument to functional_datapipe:
register_functional_api("open", router_fn=...)
@functional_datapipe("open", route_class=...)
class HTTPReader(IterDataPipe):
...
Option 2:
Implement a dispatcher for the functional API. This may be more suitable but needs to be designed carefully. A sketch of the routing idea follows below.
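A hedged sketch of the routing idea (all names hypothetical):

_OPEN_REGISTRY = {}


def register_open_route(scheme):
    # Decorator recording which DataPipe class handles which URL scheme.
    def decorator(cls):
        _OPEN_REGISTRY[scheme] = cls
        return cls
    return decorator


def open_router(source_datapipe, url):
    # Dispatch to the DataPipe registered for the input's scheme.
    scheme = url.split("://", 1)[0] if "://" in url else "file"
    try:
        cls = _OPEN_REGISTRY[scheme]
    except KeyError:
        raise ValueError(f"No DataPipe registered for scheme {scheme!r}")
    return cls(source_datapipe)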
Resources should either be closed or the warnings should be caught.
Self-explanatory. I believe the modes should be the same for ease of exchangeability.
PyTorch's FileLoader uses b by default, while IoPathFileLoader uses r.
This issue is generated from the TODO line
Line 70 in f102d25
Close all streams within a DataPipe after reading (if the DataPipe's output isn't a stream). This is applicable to DataPipes such as CSVParserIterDataPipe and JsonParserIterDataPipe.
This feature prevents streams that have been exhausted from remaining open. Since the end of the stream has been reached, the user must reset or reopen the stream before it can be reused. It makes sense to close such streams; users can re-open them outside the DataPipe if desired.
One alternative is to only close streams that are not seekable. The users would still have to re-open/reset those streams outside of the DataPipe.
Note that if the DataPipe returns a stream (e.g., TarArchiveReader), the behavior will be different (it won't be closed) because it is uncertain when that output stream will be read.
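A minimal sketch of the proposed behavior, assuming binary input streams (illustrative, not the actual CSVParser implementation):

import csv
import io


def parse_csv_and_close(path_stream_pairs):
    # The output is parsed rows, not a stream, so the exhausted input
    # stream is closed as soon as parsing finishes.
    for path, stream in path_stream_pairs:
        try:
            text = io.TextIOWrapper(stream, encoding="utf-8")
            for row in csv.reader(text):
                yield path, row
        finally:
            stream.close()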
per title
Currently, multiple stacked KeyZippers create a recursively nested data structure:
dp = KeyZipper(dp, ref_dp1, lambda x: x)
dp = KeyZipper(dp, ref_dp2, lambda x: x[0])
dp = KeyZipper(dp, ref_dp3, lambda x: x[0][0])
This is super annoying if we use the same key for each KeyZipper. At the end, it yields (((dp, ref_dp1), ref_dp2), ref_dp3).
We should either accept multiple reference DataPipes in KeyZipper to preserve the same key, or have some expand or collate function to convert the result to (dp, (ref_dp1, ref_dp2, ref_dp3)).
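A minimal sketch of such a collate step (hypothetical helper; it assumes the nesting comes only from the stacked KeyZippers):

def flatten_keyzipper_output(nested):
    # Unwind left-nested 2-tuples: (((dp, r1), r2), r3) -> (dp, (r1, r2, r3))
    refs = []
    while isinstance(nested, tuple) and len(nested) == 2:
        nested, ref = nested
        refs.append(ref)
    return nested, tuple(reversed(refs))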
Annotation includes:
source_datapipe
TODO lines are frequently left unattended; we should consider requiring TODO lines to be bound to issues (like # TODO(issue_id)). A sketch of a possible CI check follows below.
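A hedged sketch of a CI check for this (the exact TODO format is illustrative):

import re

UNBOUND_TODO = re.compile(r"#\s*TODO(?!\()")  # TODO without an (issue_id)


def find_unbound_todos(source_text):
    # Return 1-based line numbers whose TODOs lack an issue reference.
    return [
        lineno
        for lineno, line in enumerate(source_text.splitlines(), start=1)
        if UNBOUND_TODO.search(line)
    ]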
I've been looking at how we might go about supporting torchdata within TorchX and with components. I was wondering what the serialization options are for transforms and what that might look like.
There are a couple of common patterns that would be nice to support:
For the general transforms and handling arbitrary user data, we were wondering how we might go about serializing the DataPipes and transforms for use in a pipeline with TorchX.
There are a couple of options here:
Has there been any thought about how to support this well? Is there extra work that should be done here to make this better?
Are DataPipes guaranteed to be pickle-safe, and is there anything that needs to be done to support that?
I was also wondering if there are multiprocessing-based DataPipes and how that works, since this seems comparable. I did see https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py but didn't see any examples on how to use it to achieve traditional PyTorch DataLoader-style workers.
P.S. Should this be on the PyTorch discussion forums instead? It's half feature request, half questions, so I wasn't sure where best to put it.
cc @kiukchung
This issue was discovered as part of #40. The TarArchiveReader implementation is likely wrong:
TarArchiveReader fails immediately after HTTPReader because the HTTP stream does not support the operation seek:
file_url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
http_reader_dp = HttpReader(IterableWrapper([file_url]))
tar_dp = http_reader_dp.read_from_tar()
for fname, stream in tar_dp:
print(f"{fname}: {stream.read()}")
It returns an error that looks something like this:
Traceback (most recent call last):
File "/Users/ktse/data/test/test_stream.py", line 66, in <module>
for fname, stream in tar_dp:
File "/Users/.../data/torchdata/datapipes/iter/util/tararchivereader.py", line 62, in __iter__
raise e
File "/Users/.../data/torchdata/datapipes/iter/util/tararchivereader.py", line 48, in __iter__
tar = tarfile.open(fileobj=cast(Optional[IO[bytes]], data_stream), mode=self.mode)
File "/Users/.../miniconda3/envs/pytorch/lib/python3.9/tarfile.py", line 1609, in open
saved_pos = fileobj.tell()
io.UnsupportedOperation: seek
Currently, you can work around this by downloading the file in advance (or caching it with OnDiskCacheHolderIterDataPipe). In those cases, TarArchiveReader works as intended.
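Another possible workaround, sketched here (not the current TarArchiveReader behavior), is tarfile's streaming mode, which reads sequentially and does not require seek():

import tarfile


def iter_tar_members(data_stream):
    # mode "r|*" processes the archive as a stream of blocks, so a
    # non-seekable HTTP response can be consumed directly, at the cost
    # of forward-only access to the members.
    with tarfile.open(fileobj=data_stream, mode="r|*") as tar:
        for member in tar:
            f = tar.extractfile(member)
            if f is not None:
                yield member.name, f.read()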
TarArchiveReader also doesn't work with GDriveReader because of the return type:
amazon_review_url = "https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM"
gdrive_reader_dp = OnlineReader(IterableWrapper([amazon_review_url]))
tar_dp = gdrive_reader_dp.read_from_tar()
This is because validate_pathname_binary_tuple requires BufferedIOBase. Perhaps it should accept HTTP responses as well?
data/torchdata/datapipes/utils/common.py
Lines 66 to 76 in 85d8bbe
test/test_stream.py:None (test/test_stream.py)
test_stream.py:79: in <module>
for fname, stream in tar_dp:
../torchdata/datapipes/iter/util/tararchivereader.py:43: in __iter__
validate_pathname_binary_tuple(data)
../torchdata/datapipes/utils/common.py:74: in validate_pathname_binary_tuple
raise TypeError(
E TypeError: pathname binary tuple should have BufferedIOBase based binary type, but got <class 'urllib3.response.HTTPResponse'>
per title
We have many tests for existing DataPipes (both in PyTorch Core and TorchData). However, over time, they have become less organized. Moreover, as the testing requirements expand, older DataPipes may not have tests covering the newly added requirements.
This issue aims to track the status of tests for all DataPipes.
We want to ensure test coverage for all DataPipes is complete, to reduce bugs and unexpected behavior.
We should also create some testing templates for IterDataPipe and MapDataPipe that can be widely applied; a sketch of one such template follows below.
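A hedged sketch of one reusable template, the serializability check (the helper name is hypothetical):

import pickle

from torch.utils.data.datapipes.iter import IterableWrapper


def assert_serializable(datapipe):
    # Round-trip through pickle and verify the restored pipe yields the
    # same elements as the original.
    restored = pickle.loads(pickle.dumps(datapipe))
    assert list(restored) == list(datapipe)


# Example: assert_serializable(IterableWrapper(range(10)).map(str))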
IterDataPipe Tracker
X - Done
NA - Not Applicable
Blank - Not Done/Unclear
Test definitions:
Functional - unit test to ensure that the DataPipe works properly with various input arguments
Reset - DataPipe can be reset/restart after being read
__len__ - the __len__ method is implemented whenever possible (or explicitly not implemented)
Serializable - DataPipe is serializable
Graph (future) - can be traversed as part of a DataPipe graph
Snapshot (future) - can be saved/loaded as a checkpoint/snapshot
Name | Module | Functional Test | Reset | __len__ | Serializable (Picklable) | Graph | Snapshot
---|---|---|---|---|---|---|---
Batcher | Core | X | X | X | X | ||
Collator | Core | X | X | X | X | ||
Concater | Core | X | X | X | X | ||
Demultiplexer | Core | X | X | X | X | ||
FileLister | Core | X | X | X | X | ||
FileOpener | Core | X | X | X | X | ||
Filter | Core | X | X | X | X | ||
Forker | Core | X | X | X | X | ||
Grouper | Core | X | X | X | |||
IterableWrapper | Core | X | X | X | X | ||
Mapper | Core | X | X | X | X | ||
Multiplexer | Core | X | X | X | X | ||
RoutedDecoder | Core | X | X | X | X | ||
Sampler | Core | X | X | X | X | ||
Shuffler | Core | X | X | X | X | ||
StreamReader | Core | X | X | X | X | ||
UnBatcher | Core | X | X | X | |||
Zipper | Core | X | X | X | X | ||
BucketBatcher | Data | X | X | X | X | ||
CSVDictParser | Data | X | X | X | X | ||
CSVParser | Data | X | X | X | X | ||
Cycler | Data | X | X | X | X | ||
DataFrameMaker | Data | X | X | X | X | ||
Decompressor | Data | X | X | X | X | ||
Enumerator | Data | X | X | X | X | ||
FlatMapper | Data | X | X | X | X | ||
FSSpecFileLister | Data | X | X | X | X | ||
FSSpecFileOpener | Data | X | X | X | X | ||
FSSpecSaver | Data | X | X | X | X | ||
GDriveReader | Data | X | X | X | X | ||
HashChecker | Data | X | X | X | X | ||
Header | Data | X | X | X | X | ||
HttpReader | Data | X | X | X | X | ||
InMemoryCacheHolder | Data | X | X | X | X | ||
IndexAdder | Data | X | X | X | X | ||
IoPathFileLister | Data | X | X | X | X | ||
IoPathFileOpener | Data | X | X | X | X | ||
IoPathSaver | Data | X | X | X | X | ||
IterKeyZipper | Data | X | X | X | X | ||
JsonParser | Data | X | X | X | X | ||
LineReader | Data | X | X | X | X | ||
MapKeyZipper | Data | X | X | X | X | ||
OnDiskCacheHolder | Data | X | X | X | X | ||
OnlineReader | Data | X | X | X | X | ||
ParagraphAggregator | Data | X | X | X | X | ||
ParquetDataFrameLoader | Data | X | X | X | X | ||
RarArchiveLoader | Data | X | X | X | X | ||
Rows2Columnar | Data | X | X | X | X | ||
SampleMultiplexer | Data | X | X | X | X | ||
Saver | Data | X | X | X | X | ||
TarArchiveLoader | Data | X | X | X | X | ||
UnZipper | Data | X | X | X | X | ||
XzFileLoader | Data | X | X | X | X | ||
ZipArchiveLoader | Data | X | X | X | X |
MapDataPipe Tracker
X - Done
NA - Not Applicable
Blank - Not Done/Unclear
Name | Module | Functional Test | __len__ | Serializable (Picklable) | Graph | Snapshot
---|---|---|---|---|---|---
Batcher | Core | X | X | |||
Concater | Core | X | X | |||
Mapper | Core | X | X | X | ||
SequenceWrapper | Core | X | X | X | ||
Shuffler | Core | X | X | |||
Zipper | Core | X | X |
AssertionError: Lists differ: [('1.json', ['foo', {'bar': ['baz', None, 1.[61 chars] 2})] != [('2.json', {'__complex__': True, 'real': 1,[61 chars]]}])]
at
Line 637 in f83b8a1
Create .pyi files for DataPipes to provide information for IDEs
As @ejguan mentioned, codegen is needed in order for all the comments and argument types to be automatically attached to the .pyi file. We also need to account for the fact that operations exist in both Core and TorchData.
These files will allow IDEs to provide more information to users when they are using DataPipes, and will enable features such as code autocompletion.
This issue is generated from the TODO line
Line 301 in f102d25
python setup.py clean
couldn't remove the package from my environment, no matter whether I installed the package in develop mode or release mode.
In order to remove it, I have to use pip uninstall torchdata.
(If the package was installed in develop mode via python setup.py develop, I have to re-install it in release mode and then pip uninstall.)
The setup.py script probably has a bug in clean.
cc: @NivekT
per title
This issue is generated from the TODO line
https://github.com/pytorch/data/blob/f102d25f9f444de3380c6d49bf7aaf52c213bb1f/build/lib/torchdata/datapipes/iter/util/ziparchivereader.py#L12
cc @VitalyFedyunin
CSVParser has it, but JsonParser and LineReader don't. For me this often leads to something like this:
dp = dp.parse_json_files().map(lambda data: data[1])
This means I get warned not to use lambdas, but that hardly justifies a separate function.
Some of the __init__.py imports are getting out of control; we should consider using https://pycqa.github.io/isort/ as a hard linter and/or for suggestions.
This issue is generated from the TODO line
data/examples/vision/imagefolder.py
Line 80 in f102d25
Raise a warning if users want to do an in-memory cache over file handles.
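A minimal sketch of such a check (hypothetical helper, not the actual cache holder code):

import io
import warnings


def warn_if_stream(item):
    # Caching an open file handle in memory keeps it alive (and possibly
    # exhausted), so flag it early.
    if isinstance(item, io.IOBase):
        warnings.warn(
            "Caching an open file handle in memory; consider caching the "
            "file contents or the path instead."
        )
    return item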
Using HashCheckerIterDataPipe to implement an SST2 dataset within torchtext causes test failures for unittest_linux_py3.6 and for all Python versions on the Windows platform.
HashCheckerIterDataPipe is used here: code pointer.
I believe there may be changes to how io.seek() works from Python 3.6 to 3.7 that could be causing the failures in unittest_linux_py3.6 and unittest_windows_py3.6. I'm not really sure why the other Windows unit tests are failing.
Steps to reproduce the behavior:
unittest_linux_py3.6 and unittest_windows_py3.6:
self = <torchdata.datapipes.iter.util.hashchecker.HashCheckerIterDataPipe object at 0x7f937f867ba8>
def __iter__(self):
for file_name, stream in self.source_datapipe:
if self.hash_type == "sha256":
hash_func = hashlib.sha256()
else:
hash_func = hashlib.md5()
while True:
# Read by chunk to avoid filling memory
chunk = stream.read(1024 ** 2)
if not chunk:
break
hash_func.update(chunk)
# TODO(VitalyFedyunin): this will not work (or work crappy for non-seekable steams like http)
if self.rewind:
> stream.seek(0)
E io.UnsupportedOperation: seek
env/lib/python3.6/site-packages/torchdata-0.1.0a0+7772406-py3.6.egg/torchdata/datapipes/iter/util/hashchecker.py:51: UnsupportedOperation
unittest_windows_py*
self = <torchdata.datapipes.iter.util.hashchecker.HashCheckerIterDataPipe object at 0x000001929F2B5548>
def __iter__(self):
for file_name, stream in self.source_datapipe:
if self.hash_type == "sha256":
hash_func = hashlib.sha256()
else:
hash_func = hashlib.md5()
while True:
# Read by chunk to avoid filling memory
chunk = stream.read(1024 ** 2)
if not chunk:
break
hash_func.update(chunk)
# TODO(VitalyFedyunin): this will not work (or work crappy for non-seekable steams like http)
if self.rewind:
stream.seek(0)
if file_name not in self.hash_dict:
> raise RuntimeError("Unspecified hash for file {}".format(file_name))
E RuntimeError: Unspecified hash for file C:\Users\circleci\.torchtext\cache\SST2\SST-2\train.tsv
env\lib\site-packages\torchdata-0.1.0a0+7772406-py3.7.egg\torchdata\datapipes\iter\util\hashchecker.py:54: RuntimeError
Expect all tests to pass
Tests pass in the devserver environment but fail on CircleCI.