fastestimator's Issues

Unable to use LeNet tensorflow model

I tried running the example from https://www.fastestimator.org/examples/r1.2/image_classification/mnist/mnist on Google Colab. When I reach the Model Construction step:

from fastestimator.architecture.tensorflow import LeNet

model = fe.build(model_fn=LeNet, optimizer_fn="adam")

It throws

ValueError                                Traceback (most recent call last)

<ipython-input-8-f81106ddd095> in <module>()
      1 from fastestimator.architecture.tensorflow import LeNet
      2 
----> 3 model = fe.build(model_fn=LeNet, optimizer_fn="adam")

1 frames

/usr/local/lib/python3.7/dist-packages/fastestimator/network.py in build(model_fn, optimizer_fn, weights_path, model_name, mixed_precision)
    896     # create optimizer
    897     for idx, (model, optimizer_def, weight, name) in enumerate(zip(models, optimizer_fn, weights_path, model_name)):
--> 898         models[idx] = trace_model(_fe_compile(model, optimizer_def, weight, name, mixed_precision),
    899                                   model_idx=idx if len(models) > 1 else -1,
    900                                   model_fn=model_fn,

/usr/local/lib/python3.7/dist-packages/fastestimator/network.py in _fe_compile(model, optimizer_fn, weight, name, mixed_precision)
    929         framework = "torch"
    930     else:
--> 931         raise ValueError("unrecognized model format: {}".format(type(model)))
    932     # torch multi-gpu handling
    933     if framework == "torch" and torch.cuda.device_count() > 1:

ValueError: unrecognized model format: <class 'tensorflow.python.keras.engine.sequential.Sequential'>

https://colab.research.google.com/drive/1TZuVYFUV7JQsKO961IduXVtd0JGtFQ5w?usp=sharing
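A likely cause (an assumption, not confirmed from fastestimator's source): `fe.build` decides the framework with `isinstance` checks, and `isinstance` returns False when the "same" class is loaded from two different module copies, which is the usual symptom of mixed TensorFlow/Keras installations on Colab (note the error shows the internal `tensorflow.python.keras` class rather than the public `tf.keras` one). A minimal stdlib-only illustration of that failure mode:

```python
import sys
import types

# Hypothetical stand-in modules: two separate copies of a "keras" module,
# each defining its own Model class with the same name.
def make_module(name):
    mod = types.ModuleType(name)

    class Model:  # stand-in for tf.keras.Model
        pass

    mod.Model = Model
    sys.modules[name] = mod
    return mod

copy_a = make_module("keras_copy_a")
copy_b = make_module("keras_copy_b")

# Same class name, but distinct class objects -> isinstance check fails,
# just as fe.build's framework detection would.
print(isinstance(copy_a.Model(), copy_b.Model))  # False
```

If this is the cause, aligning the TensorFlow version with the one the installed fastestimator release was built against (rather than Colab's preinstalled TF) would be the fix to try first.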

An error extracting .zip data in UNet example under win10

Hi!

I'm not exactly sure if I'm posting in the correct place; please let me know if it isn't. Greatly appreciated!

My system is win10, with python 3.7 virtual environment.

I'm a new user of FastEstimator. When I was trying to run the UNet lung segmentation example notebook, I got a KeyError while extracting the data. I was wondering if you could take a look, thanks a lot!

batch_size = 4
epochs = 25
max_train_steps_per_epoch = None
max_eval_steps_per_epoch = None
save_dir = tempfile.mkdtemp()
data_dir = None

csv = montgomery.load_data(root_dir=data_dir)

Extracting file ...
Traceback (most recent call last):

File "C:\anaconda_install\envs\fe_env\lib\sre_parse.py", line 1015, in parse_template
this = chr(ESCAPES[this][1])

KeyError: '\l'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

File "<stdin>", line 8, in <module>
csv = montgomery.load_data(root_dir=data_dir)

File "C:\anaconda_install\envs\fe_env\lib\site-packages\fastestimator\dataset\data\montgomery.py", line 75, in load_data
df['mask_left'] = df['image'].str.replace('CXR_png', os.path.join('ManualMask', 'leftMask'))

File "C:\anaconda_install\envs\fe_env\lib\site-packages\pandas\core\strings.py", line 1954, in wrapper
return func(self, *args, **kwargs)

File "C:\anaconda_install\envs\fe_env\lib\site-packages\pandas\core\strings.py", line 2777, in replace
self._parent, pat, repl, n=n, case=case, flags=flags, regex=regex

File "C:\anaconda_install\envs\fe_env\lib\site-packages\pandas\core\strings.py", line 726, in str_replace
return _na_map(f, arr, dtype=str)

File "C:\anaconda_install\envs\fe_env\lib\site-packages\pandas\core\strings.py", line 131, in _na_map
return _map_object(f, arr, na_mask=True, na_value=na_result, dtype=dtype)

File "C:\anaconda_install\envs\fe_env\lib\site-packages\pandas\core\strings.py", line 216, in _map_object
result = lib.map_infer_mask(arr, f, mask.view(np.uint8), convert)

File "pandas\_libs\lib.pyx", line 2287, in pandas._libs.lib.map_infer_mask

File "C:\anaconda_install\envs\fe_env\lib\site-packages\pandas\core\strings.py", line 714, in <lambda>
f = lambda x: compiled.sub(repl=repl, string=x, count=n)

File "C:\anaconda_install\envs\fe_env\lib\re.py", line 309, in _subx
template = _compile_repl(template, pattern)

File "C:\anaconda_install\envs\fe_env\lib\re.py", line 300, in _compile_repl
return sre_parse.parse_template(repl, pattern)

File "C:\anaconda_install\envs\fe_env\lib\sre_parse.py", line 1018, in parse_template
raise s.error('bad escape %s' % this, len(this))

error: bad escape \l
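The traceback points at a Windows-specific bug in `montgomery.py`: `os.path.join('ManualMask', 'leftMask')` yields `ManualMask\leftMask` on Windows, and pandas' regex-based `str.replace` hands that string to `re.sub` as a replacement template, where `\l` is an invalid escape. A minimal stdlib reproduction and the fix sketch (the `df['image']` call below is the line from `montgomery.py`; passing `regex=False` is a suggested workaround, not the project's confirmed fix):

```python
import re

# What os.path.join('ManualMask', 'leftMask') produces on Windows:
repl = r'ManualMask\leftMask'

# Reproduces the reported failure: '\l' is not a valid escape in a
# re.sub replacement template (an error since Python 3.7).
try:
    re.sub('CXR_png', repl, 'CXR_png/img1.png')
except re.error as err:
    print(err)  # e.g. "bad escape \l at position 10"

# A literal (non-regex) replacement sidesteps escape handling entirely.
# In pandas this would be:
#   df['mask_left'] = df['image'].str.replace('CXR_png', repl, regex=False)
fixed = 'CXR_png/img1.png'.replace('CXR_png', repl)
```

On Linux/macOS the joined path contains a forward slash, so the escape never occurs, which is why the example works there but fails on win10.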

ImportError inside Docker container

Requirements

  1. Docker

Steps to reproduce

  1. Run Python docker image:
docker run -it python:3.8 /bin/bash

The issue can also be reproduced inside ubuntu:18.04 after:

apt-get install python3.8
apt-get install python-dev python-pip
  2. Install FastEstimator:
pip install fastestimator tensorflow==2.4.1
  3. Run Python:
python
  4. Import FastEstimator:
>>> import fastestimator
2021-06-03 14:35:36.543734: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-06-03 14:35:36.543761: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/site-packages/fastestimator/__init__.py", line 15, in <module>        
    from fastestimator import architecture, backend, dataset, layers, op, schedule, summary, trace, util, xai
  File "/usr/local/lib/python3.8/site-packages/fastestimator/op/__init__.py", line 15, in <module>     
    from fastestimator.op import numpyop, tensorop
  File "/usr/local/lib/python3.8/site-packages/fastestimator/op/numpyop/__init__.py", line 15, in <module>
    from fastestimator.op.numpyop import meta, multivariate, univariate
  File "/usr/local/lib/python3.8/site-packages/fastestimator/op/numpyop/multivariate/__init__.py", line 17, in <module>
    from fastestimator.op.numpyop.multivariate.center_crop import CenterCrop
  File "/usr/local/lib/python3.8/site-packages/fastestimator/op/numpyop/multivariate/center_crop.py", line 18, in <module>
    from albumentations.augmentations.transforms import CenterCrop as CenterCropAlb
ImportError: cannot import name 'CenterCrop' from 'albumentations.augmentations.transforms' (/usr/local/lib/python3.8/site-packages/albumentations/augmentations/transforms.py)

I suppose this error occurs due to the latest release of albumentations, because pinning to albumentations==0.5.2 fixes it.
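Based on the pin reported above (the exact version bounds of the albumentations API change are an assumption), the workaround is to install the last albumentations release that still exposes `CenterCrop` at the old import path:

```shell
# Hedged workaround: pin albumentations alongside the versions from the
# reproduction steps, so fastestimator's old import path keeps working.
pip install fastestimator tensorflow==2.4.1 "albumentations==0.5.2"
```

The longer-term fix would be for fastestimator to update its import to match the new albumentations module layout.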

Attempting to Inference on a CPU only machine fails

Experimenting with FastEstimator.

I trained a model using FE on a GPU instance (SageMaker).

I downloaded the saved .pt file and am trying to create a Python file for inference. The code is based on the UNet example on the FastEstimator site.

Since my laptop is CPU-only, I am using the CPU docker image (fastestimator/fastestimator:nightly-cpu).
I get the following error:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

for

model = fe.build(model_fn=lambda: UNet(input_size=(1, 512, 512)),
                 optimizer_fn=lambda x: torch.optim.Adam(params=x, lr=0.0001),
                 model_name="model_name",
                 weights_path=weights_path)

Could you let me know if I am missing anything?
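As the error message itself suggests, `torch.load` defaults to the device the tensors were saved on, so CUDA-trained weights fail to load on a CPU-only box unless storages are remapped with `map_location`. A workaround sketch, assuming fe's `weights_path` loader calls `torch.load` without `map_location` (not verified against fastestimator's source): load and remap the state dict yourself, then pass the model to `fe.build` without `weights_path`. The `nn.Linear` below is a toy stand-in for UNet, and the file path is illustrative:

```python
import os
import tempfile

import torch
from torch import nn

# Toy stand-in for the real UNet and its saved .pt file.
net = nn.Linear(4, 2)
path = os.path.join(tempfile.mkdtemp(), "model.pt")
torch.save(net.state_dict(), path)

# The key step: remap all storages to CPU when loading, which is exactly
# what the RuntimeError message recommends for CPU-only machines.
state = torch.load(path, map_location=torch.device("cpu"))
net.load_state_dict(state)
```

After loading this way, `fe.build(model_fn=..., optimizer_fn=...)` can be called without `weights_path`, with the weights already applied to the model instance.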

Improper use of cpu_count() for spawning sub processes

Use of multiprocessing.cpu_count() leads to inconsistent/unexpected results (slowdowns), because cpu_count() returns the total number of CPUs in the system and doesn't guarantee that the spawned subprocesses will be scheduled on the available CPUs.
There are two ways to handle this:

  1. Create a pool of subprocesses using multiprocessing.Pool(), and use/reuse the processes via .map()
  2. Use os.sched_getaffinity(0) to get the actual number of processors available to the parent process for spawning subprocesses

The current implementation is known to fail and produce inconsistent results on different OS platforms/architectures.

Link to an issue arising when cpu_count() is used instead of the actual CPU affinity:
https://bugs.python.org/issue23530
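Option 2 above can be sketched as a small helper (a suggested pattern, not fastestimator's current code; `os.sched_getaffinity` only exists on Linux, hence the fallback):

```python
import multiprocessing
import os

def usable_cpu_count() -> int:
    """Count the CPUs this process may actually run on."""
    try:
        # Linux: respects taskset/cgroup affinity masks, unlike cpu_count().
        return len(os.sched_getaffinity(0))
    except AttributeError:
        # macOS/Windows: sched_getaffinity is unavailable, fall back to
        # the system-wide total.
        return multiprocessing.cpu_count()
```

Sizing a `multiprocessing.Pool(usable_cpu_count())` with this value avoids oversubscription when the process is restricted to a subset of the machine's cores (e.g. in containers or under taskset).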

M1 installation fails

The requirement of an old scipy (1.4.1), which in turn requires an old numpy, is not compatible with the M1 architecture, which needs a much newer numpy (1.21). It would work with scipy 1.8.0.

It would be nice to have it working on M1s, as they are selling quite well.

Thanks!
