
course22's Introduction

Welcome to fastai


Installing

You can use fastai without any installation by using Google Colab. In fact, every page of this documentation is also available as an interactive notebook - click “Open in colab” at the top of any page to open it (be sure to change the Colab runtime to “GPU” to have it run fast!). See the fast.ai documentation on Using Colab for more information.

You can install fastai on your own machines with conda (highly recommended), as long as you’re running Linux or Windows (NB: Mac is not supported). For Windows, please see the “Windows Support” section below for important notes.

We recommend using miniconda (or miniforge). First install PyTorch using the conda command from the PyTorch website, and then run:

conda install -c fastai fastai

To install with pip, use: pip install fastai.
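
Either way, a quick sanity check that the install worked is to import the library and print its version (the exact output depends on what you installed):

import fastai
import torch
print(fastai.__version__)           # the installed fastai version
print(torch.cuda.is_available())    # True if PyTorch can see a GPU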

If you plan to develop fastai yourself, or want to be on the cutting edge, you can use an editable install (if you do this, you should also use an editable install of fastcore to go with it.) First install PyTorch, and then:

git clone https://github.com/fastai/fastai
pip install -e "fastai[dev]"

Learning fastai

The best way to get started with fastai (and deep learning) is to read the book, and complete the free course.

To see what’s possible with fastai, take a look at the Quick Start, which shows how to use around 5 lines of code to build an image classifier, an image segmentation model, a text sentiment model, a recommendation system, and a tabular model. For each of the applications, the code is much the same.
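
For instance, the image classifier looks roughly like this (a sketch along the lines of the documented Quick Start, using the Oxford-IIIT Pets sample, where cat images have filenames starting with an uppercase letter):

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()   # label rule used by this dataset

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

learn = vision_learner(dls, resnet34, metrics=error_rate)   # pretrained ResNet
learn.fine_tune(1)                                          # one epoch of fine-tuning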

Read through the Tutorials to learn how to train your own models on your own datasets. Use the navigation sidebar to look through the fastai documentation. Every class, function, and method is documented here.

To learn about the design and motivation of the library, read the peer-reviewed paper.

About fastai

fastai is a deep learning library which provides practitioners with high-level components that can quickly and easily provide state-of-the-art results in standard deep learning domains, and provides researchers with low-level components that can be mixed and matched to build new approaches. It aims to do both things without substantial compromises in ease of use, flexibility, or performance. This is possible thanks to a carefully layered architecture, which expresses common underlying patterns of many deep learning and data processing techniques in terms of decoupled abstractions. These abstractions can be expressed concisely and clearly by leveraging the dynamism of the underlying Python language and the flexibility of the PyTorch library. fastai includes:

  • A new type dispatch system for Python along with a semantic type hierarchy for tensors
  • A GPU-optimized computer vision library which can be extended in pure Python
  • An optimizer which refactors out the common functionality of modern optimizers into two basic pieces, allowing optimization algorithms to be implemented in 4–5 lines of code
  • A novel 2-way callback system that can access any part of the data, model, or optimizer and change it at any point during training
  • A new data block API (a small sketch follows this list)
  • And much more…
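
To give a flavor of the data block API mentioned in the list above, here is a minimal sketch for an image classification dataset laid out as one folder per class (the path variable is an assumption standing in for your own dataset folder):

from fastai.vision.all import *

dblock = DataBlock(
    blocks=(ImageBlock, CategoryBlock),               # inputs are images, targets are categories
    get_items=get_image_files,                        # how to collect the items
    splitter=RandomSplitter(valid_pct=0.2, seed=42),  # random train/validation split
    get_y=parent_label,                               # the label is the parent folder name
    item_tfms=Resize(224))
# dls = dblock.dataloaders(path)                      # path: the root folder of your dataset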

fastai is organized around two main design goals: to be approachable and rapidly productive, while also being deeply hackable and configurable. It is built on top of a hierarchy of lower-level APIs which provide composable building blocks. This way, a user wanting to rewrite part of the high-level API or add particular behavior to suit their needs does not have to learn how to use the lowest level.

Layered API

Migrating from other libraries

It’s very easy to migrate from plain PyTorch, Ignite, or any other PyTorch-based library, or even to use fastai in conjunction with other libraries. Generally, you’ll be able to keep all your existing data processing code, reduce the amount of code you need for training, and more easily take advantage of modern best practices. Migration guides from several popular libraries are available in the fastai documentation to help you on your way.
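
As a rough sketch of what such a migration looks like, existing PyTorch DataLoaders and an existing model can be wrapped directly in fastai objects (the toy dataset and tiny model below are stand-ins for your own pipeline):

import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader as TorchDL
from fastai.vision.all import *

# Stand-ins for an existing PyTorch pipeline
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
train_dl = TorchDL(TensorDataset(x[:200], y[:200]), batch_size=32, shuffle=True)
valid_dl = TorchDL(TensorDataset(x[200:], y[200:]), batch_size=32)
model = nn.Sequential(nn.Linear(20, 50), nn.ReLU(), nn.Linear(50, 2))

# Wrap them and train with fastai's modern defaults
dls = DataLoaders(train_dl, valid_dl)
learn = Learner(dls, model, loss_func=nn.CrossEntropyLoss(), metrics=accuracy)
learn.fit_one_cycle(1)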

Windows Support

Due to Python multiprocessing issues on Jupyter and Windows, num_workers of DataLoader is reset to 0 automatically to avoid Jupyter hanging. This makes tasks such as computer vision in Jupyter on Windows many times slower than on Linux. This limitation doesn’t exist if you use fastai from a script.
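
As an illustration of the script route, the training code can live in a plain .py file guarded by __main__ so that multiple workers can be spawned on Windows (the dataset and worker count below are just example choices):

# train.py - run with: python train.py
from fastai.vision.all import *

def is_cat(x): return x[0].isupper()   # module-level so worker processes can pickle it

def main():
    path = untar_data(URLs.PETS)/'images'
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224),
        num_workers=4)                 # multiple workers are fine in a script
    learn = vision_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)

if __name__ == '__main__':             # required on Windows for multiprocessing
    main()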

See this example to fully leverage the fastai API on Windows.

We recommend using Windows Subsystem for Linux (WSL) instead – if you do that, you can use the regular Linux installation approach, and you won’t have any issues with num_workers.

Tests

To run the tests in parallel, launch:

nbdev_test

For all the tests to pass, you’ll need to install the dependencies specified as part of dev_requirements in settings.ini:

pip install -e .[dev]

Tests are written using nbdev; for example, see the documentation for test_eq.
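
For anyone unfamiliar with the fastcore test helpers, here are two of them (test_eq asserts equality, test_fail asserts that a callable raises):

from fastcore.test import test_eq, test_fail

test_eq(2 + 2, 4)                       # passes silently
test_fail(lambda: test_eq(2 + 2, 5))    # the inner check fails, so test_fail passes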

Contributing

After you clone this repository, make sure you have run nbdev_install_hooks in your terminal. This installs Jupyter and git hooks to automatically clean, trust, and fix merge conflicts in notebooks.

After making changes in the repo, you should run nbdev_prepare and make any additional changes necessary to pass all the tests.

Docker Containers

For those interested, official Docker containers for this project are available.

course22's People

Contributors

harukadoyu, jph00, lucasvw, misteroak, samruddhikhandale


course22's Issues

BUG: lesson1 notebook ddg image fetch returns 403

from fastcore.all import *
import requests, re, json, time   # explicit imports so the snippet is self-contained (requests in particular is not provided by fastcore)

def search_images(term, max_images=200):
    url = 'https://duckduckgo.com/'
    res = urlread(url,data={'q':term})
    searchObj = re.search(r'vqd=([\d-]+)\&', res)
    requestUrl = url + 'i.js'
    headers = {
        'dnt': '1',
        'accept-encoding': 'gzip, deflate, sdch, br',
        'x-requested-with': 'XMLHttpRequest',
        'accept-language': 'en-GB,en-US;q=0.8,en;q=0.6,ms;q=0.4',
        'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36',
        'accept': 'application/json, text/javascript, */*; q=0.01',
        'referer': 'https://duckduckgo.com/',
        'authority': 'duckduckgo.com',
    }

    params = (
        ('l', 'wt-wt'),
        ('o', 'json'),
        ('q', term),
        ('vqd', searchObj.group(1)),
        ('f', ',,,'),
        ('p', '2')
    )

    urls,data = set(),{'next':1}
    while len(urls)<max_images and 'next' in data:
        res = requests.get(requestUrl, headers=headers, params=params)
        data = json.loads(res.text)

        urls.update(L(data['results']).itemgot('image'))
        requestUrl = url + data['next']
        time.sleep(0.2)
    return L(urls)[:max_images]
And this is the error: the request comes back with an HTTP 403 response from DuckDuckGo (screenshot omitted).

Lesson 1, Step 3 potentially uses photo from training set

Given that we obtain bird.jpg from search_images('bird photos', max_size=1) and we obtain the training set from search_images('bird photo') it's possible that bird.jpg will be used in the training set (depending on whether duckduckgo returns it both times and on the 80% probability of this having been chosen for the training set).

It would be better to use a completely new picture to evaluate our model's performance.

lesson 1 notebook - dependency errors when installing latest packages

For the first notebook (is_it_a_bird) I get the following error when installing fastai and duckduckgo_search:

!pip install -Uqq fastai duckduckgo_search
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-io 0.21.0 requires tensorflow-io-gcs-filesystem==0.21.0, which is not installed.
explainable-ai-sdk 1.3.2 requires xai-image-widget, which is not installed.
dask-cudf 21.10.1 requires cupy-cuda114, which is not installed.
beatrix-jupyterlab 3.1.6 requires google-cloud-bigquery-storage, which is not installed.
tensorflow 2.6.2 requires numpy~=1.19.2, but you have numpy 1.20.3 which is incompatible.
tensorflow 2.6.2 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.
tensorflow 2.6.2 requires typing-extensions~=3.7.4, but you have typing-extensions 3.10.0.2 which is incompatible.
tensorflow 2.6.2 requires wrapt~=1.12.1, but you have wrapt 1.13.3 which is incompatible.
tensorflow-transform 1.5.0 requires absl-py<0.13,>=0.9, but you have absl-py 0.15.0 which is incompatible.
tensorflow-transform 1.5.0 requires numpy<1.20,>=1.16, but you have numpy 1.20.3 which is incompatible.
tensorflow-transform 1.5.0 requires pyarrow<6,>=1, but you have pyarrow 6.0.1 which is incompatible.
tensorflow-transform 1.5.0 requires tensorflow!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<2.8,>=1.15.2, but you have tensorflow 2.6.2 which is incompatible.
tensorflow-serving-api 2.7.0 requires tensorflow<3,>=2.7.0, but you have tensorflow 2.6.2 which is incompatible.
gcsfs 2021.11.1 requires fsspec==2021.11.1, but you have fsspec 2022.2.0 which is incompatible.
flake8 4.0.1 requires importlib-metadata<4.3; python_version < "3.8", but you have importlib-metadata 4.11.3 which is incompatible.
featuretools 1.6.0 requires numpy>=1.21.0, but you have numpy 1.20.3 which is incompatible.
dask-cudf 21.10.1 requires dask==2021.09.1, but you have dask 2022.2.0 which is incompatible.
dask-cudf 21.10.1 requires distributed==2021.09.1, but you have distributed 2022.2.0 which is incompatible.
apache-beam 2.34.0 requires dill<0.3.2,>=0.3.1.1, but you have dill 0.3.4 which is incompatible.
apache-beam 2.34.0 requires httplib2<0.20.0,>=0.8, but you have httplib2 0.20.2 which is incompatible.
apache-beam 2.34.0 requires pyarrow<6.0.0,>=0.15.1, but you have pyarrow 6.0.1 which is incompatible.
aioitertools 0.10.0 requires typing_extensions>=4.0; python_version < "3.10", but you have typing-extensions 3.10.0.2 which is incompatible.
aiobotocore 2.1.2 requires botocore<1.23.25,>=1.23.24, but you have botocore 1.24.20 which is incompatible.

However, I am still able to run the remaining cells in the notebook and complete the training of the model. But it might not be obvious to other users that you can still do the rest of the exercise.

broken hyperlinks pointing to non-existent .md pages

Hi! I noticed that most of the hyperlinks across the course.fast.ai homepage are broken because they point to non-existent .md files. For example:

https://course.fast.ai/Resources/book.md - Broken, linked on the page
https://course.fast.ai/Resources/book.html - Not broken, but only linked on the side navigation bar

Possible Error

In the notebook named "How does a neural net really work?", there is a point where the parameters of a parabola are found using gradients.
There is a cell with this content:

for i in range(10):
    loss = quad_mae(abc)
    loss.backward()
    with torch.no_grad(): abc -= abc.grad*0.01
    print(f'step={i}; loss={loss:.2f}')

If you run this loop for more than 10 iterations the loss starts growing again.

In the text, it's said that this is because the learning rate must be progressively decreased in practice.
In my opinion, this is because every time loss.backward() is executed the gradients are "accumulated" rather than recomputed. If the gradients are reset to zero after each iteration, it converges to a minimum:
Proposed code:
for i in range(10):
    loss = quad_mae(abc)
    loss.backward()
    with torch.no_grad():
        abc -= abc.grad*0.01
        abc.grad.fill_(0)  # new line: reset the accumulated gradients
    print(f'step={i}; loss={loss:.2f}')
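
To make the point reproducible outside the notebook, here is a self-contained sketch; the quadratic, the noisy data, and the starting parameters are stand-ins for the notebook's versions:

import torch

def quad(a, b, c, x): return a*x**2 + b*x + c

torch.manual_seed(42)
x = torch.linspace(-2, 2, 20)
y = quad(3., 2., 1., x) + torch.randn(20)*0.1   # noisy samples of a known quadratic

def quad_mae(params):
    a, b, c = params
    return (quad(a, b, c, x) - y).abs().mean()

abc = torch.tensor([1.0, 1.0, 1.0], requires_grad=True)
for i in range(30):
    loss = quad_mae(abc)
    loss.backward()
    with torch.no_grad():
        abc -= abc.grad * 0.01
        abc.grad.zero_()   # without this the gradients accumulate and the loss eventually rises
    print(f'step={i}; loss={loss:.2f}')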

Let me conclude by congratulating you on this very clear explanation.

Regards

Problem with Chapter 1 in Colab

Trying to work through Ch1 in Colab, training the cat recognition model has taken up to 40 minutes per epoch and produced some error messages about deprecated variable names (screenshot omitted).
Trying to test the model with an uploaded file, or even the default one, I get a message stating there is no such directory or file (screenshot omitted).

Issue Chapter 2, From Model to Production

Hi,

In the video we are told that using Microsoft Azure to download images is a big issue because of "the key" (the required API key). The solution is to use another method, search_images_ddg. However, this new method is not shown in the notebook (screenshot omitted).
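
For reference, a rough sketch of the DuckDuckGo-based approach using the search_images_ddg helper from fastbook (the query is just an example):

from fastbook import search_images_ddg

urls = search_images_ddg('grizzly bear', max_images=100)
len(urls), urls[0]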

Could you fix it?

Best regards,

Anibal

code repository

Where can I find the code repository for the code provided in the book? Thanks!

Chapter 4, MNIST Basics Math Error (Quick Fix)

Hi

Under the subheading "Computing Metrics Using Broadcasting" there is an incorrect average calculation (screenshot omitted).

The correct average needs to account for the number of examples in each category of the validation set (screenshot of the corrected calculation omitted).
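
As a small numeric illustration (the accuracies are made up and the per-class validation counts are only illustrative), the overall accuracy should be the example-weighted mean rather than the plain mean of the two per-class accuracies:

acc3, n3 = 0.95, 1010   # hypothetical accuracy and count for the 3s
acc7, n7 = 0.98, 1028   # hypothetical accuracy and count for the 7s
unweighted = (acc3 + acc7) / 2                  # what the notebook computes
weighted = (acc3*n3 + acc7*n7) / (n3 + n7)      # accounts for the class sizes
print(unweighted, weighted)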

Part 1 image result crash

In Part 1 of the course, when you try to download the image from DuckDuckGo, it returns a different result than in the video, and that result raises an exception.

I worked around it by downloading the 2nd result instead.

01_intro - Image.open() issue

Hi, there is an issue with loading an image to predict in 01_intro, and the same issue in Is it a bird? - I would probably encounter it in any other image notebook as well.

When I try to run the code, I get an error from the Image.open() part of the code - from what I understood, since I already create an image in the first line of the following code, it cannot be "opened" again inside the .predict() method.

img = PILImage.create(uploader.data[0])
is_cat,_,probs = learn.predict(img)
print(f"Is this a cat?: {is_cat}.")
print(f"Probability it's a cat: {probs[1].item():.6f}")

I have made no changes to the code, I have installed the latest fastai, I run all the cells in the notebook, one after the other.


And here is the whole error:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Input In [10], in <cell line: 2>()
      1 img = PILImage.create(uploader.data[0])
----> 2 is_cat,_,probs = learn.predict(img)
      3 print(f"Is this a cat?: {is_cat}.")
      4 print(f"Probability it's a cat: {probs[1].item():.6f}")

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\learner.py:321, in Learner.predict(self, item, rm_type_tfms, with_input)
    319 def predict(self, item, rm_type_tfms=None, with_input=False):
    320     dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
--> 321     inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
    322     i = getattr(self.dls, 'n_inp', -1)
    323     inp = (inp,) if i==1 else tuplify(inp)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\learner.py:308, in Learner.get_preds(self, ds_idx, dl, with_input, with_decoded, with_loss, act, inner, reorder, cbs, **kwargs)
    306 if with_loss: ctx_mgrs.append(self.loss_not_reduced())
    307 with ContextManagers(ctx_mgrs):
--> 308     self._do_epoch_validate(dl=dl)
    309     if act is None: act = getcallable(self.loss_func, 'activation')
    310     res = cb.all_tensors()

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\learner.py:244, in Learner._do_epoch_validate(self, ds_idx, dl)
    242 if dl is None: dl = self.dls[ds_idx]
    243 self.dl = dl
--> 244 with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\learner.py:199, in Learner._with_events(self, f, event_type, ex, final)
    198 def _with_events(self, f, event_type, ex, final=noop):
--> 199     try: self(f'before_{event_type}');  f()
    200     except ex: self(f'after_cancel_{event_type}')
    201     self(f'after_{event_type}');  final()

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\learner.py:205, in Learner.all_batches(self)
    203 def all_batches(self):
    204     self.n_iter = len(self.dl)
--> 205     for o in enumerate(self.dl): self.one_batch(*o)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\load.py:127, in DataLoader.__iter__(self)
    125 self.before_iter()
    126 self.__idxs=self.get_idxs() # called in context of main process (not workers/subprocesses)
--> 127 for b in _loaders[self.fake_l.num_workers==0](self.fake_l):
    128     # pin_memory causes tuples to be converted to lists, so convert them back to tuples
    129     if self.pin_memory and type(b) == list: b = tuple(b)
    130     if self.device is not None: b = to_device(b, self.device)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\dataloader.py:628, in _BaseDataLoaderIter.__next__(self)
    625 if self._sampler_iter is None:
    626     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    627     self._reset()  # type: ignore[call-arg]
--> 628 data = self._next_data()
    629 self._num_yielded += 1
    630 if self._dataset_kind == _DatasetKind.Iterable and \
    631         self._IterableDataset_len_called is not None and \
    632         self._num_yielded > self._IterableDataset_len_called:

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\dataloader.py:671, in _SingleProcessDataLoaderIter._next_data(self)
    669 def _next_data(self):
    670     index = self._next_index()  # may raise StopIteration
--> 671     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    672     if self._pin_memory:
    673         data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\utils\data\_utils\fetch.py:43, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)
     41         raise StopIteration
     42 else:
---> 43     data = next(self.dataset_iter)
     44 return self.collate_fn(data)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\load.py:138, in DataLoader.create_batches(self, samps)
    136 if self.dataset is not None: self.it = iter(self.dataset)
    137 res = filter(lambda o:o is not None, map(self.do_item, samps))
--> 138 yield from map(self.do_batch, self.chunkify(res))

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastcore\basics.py:230, in chunked(it, chunk_sz, drop_last, n_chunks)
    228 if not isinstance(it, Iterator): it = iter(it)
    229 while True:
--> 230     res = list(itertools.islice(it, chunk_sz))
    231     if res and (len(res)==chunk_sz or not drop_last): yield res
    232     if len(res)<chunk_sz: return

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\load.py:153, in DataLoader.do_item(self, s)
    152 def do_item(self, s):
--> 153     try: return self.after_item(self.create_item(s))
    154     except SkipItemException: return None

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\load.py:160, in DataLoader.create_item(self, s)
    159 def create_item(self, s):
--> 160     if self.indexed: return self.dataset[s or 0]
    161     elif s is None:  return next(self.it)
    162     else: raise IndexError("Cannot index an iterable dataset numerically - must use `None`.")

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\core.py:458, in Datasets.__getitem__(self, it)
    457 def __getitem__(self, it):
--> 458     res = tuple([tl[it] for tl in self.tls])
    459     return res if is_indexer(it) else list(zip(*res))

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\core.py:458, in <listcomp>(.0)
    457 def __getitem__(self, it):
--> 458     res = tuple([tl[it] for tl in self.tls])
    459     return res if is_indexer(it) else list(zip(*res))

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\core.py:417, in TfmdLists.__getitem__(self, idx)
    415 res = super().__getitem__(idx)
    416 if self._after_item is None: return res
--> 417 return self._after_item(res) if is_indexer(idx) else res.map(self._after_item)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\data\core.py:377, in TfmdLists._after_item(self, o)
--> 377 def _after_item(self, o): return self.tfms(o)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastcore\transform.py:208, in Pipeline.__call__(self, o)
--> 208 def __call__(self, o): return compose_tfms(o, tfms=self.fs, split_idx=self.split_idx)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastcore\transform.py:158, in compose_tfms(x, tfms, is_enc, reverse, **kwargs)
    156 for f in tfms:
    157     if not is_enc: f = f.decode
--> 158     x = f(x, **kwargs)
    159 return x

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastcore\transform.py:81, in Transform.__call__(self, x, **kwargs)
     79 @property
     80 def name(self): return getattr(self, '_name', _get_name(self))
---> 81 def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
     82 def decode  (self, x, **kwargs): return self._call('decodes', x, **kwargs)
     83 def __repr__(self): return f'{self.name}:\nencodes: {self.encodes}decodes: {self.decodes}'

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastcore\transform.py:91, in Transform._call(self, fn, x, split_idx, **kwargs)
     89 def _call(self, fn, x, split_idx=None, **kwargs):
     90     if split_idx!=self.split_idx and self.split_idx is not None: return x
---> 91     return self._do_call(getattr(self, fn), x, **kwargs)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastcore\transform.py:97, in Transform._do_call(self, f, x, **kwargs)
     95     if f is None: return x
     96     ret = f.returns(x) if hasattr(f,'returns') else None
---> 97     return retain_type(f(x, **kwargs), x, ret)
     98 res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
     99 return retain_type(res, x)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastcore\dispatch.py:120, in TypeDispatch.__call__(self, *args, **kwargs)
    118 elif self.inst is not None: f = MethodType(f, self.inst)
    119 elif self.owner is not None: f = MethodType(f, self.owner)
--> 120 return f(*args, **kwargs)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\vision\core.py:125, in PILBase.create(cls, fn, **kwargs)
    123 if isinstance(fn,bytes): fn = io.BytesIO(fn)
    124 if isinstance(fn,Image.Image) and not isinstance(fn,cls): return cls(fn)
--> 125 return cls(load_image(fn, **merge(cls._open_args, kwargs)))

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\fastai\vision\core.py:98, in load_image(fn, mode)
     96 def load_image(fn, mode=None):
     97     "Open and load a `PIL.Image` and convert to `mode`"
---> 98     im = Image.open(fn)
     99     im.load()
    100     im = im._new(im.im)

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\Image.py:3101, in open(fp, mode, formats)
   3098     fp = io.BytesIO(fp.read())
   3099     exclusive_fp = True
-> 3101 prefix = fp.read(16)
   3103 preinit()
   3105 accept_warnings = []

File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\PIL\Image.py:517, in Image.__getattr__(self, name)
    515     deprecate("Image categories", 10, "is_animated", plural=True)
    516     return self._category
--> 517 raise AttributeError(name)

AttributeError: read

The HuggingFace Spaces Pets repository may need update

If I try to look at the app I get a Runtime error
https://huggingface.co/spaces/jph00/pets

Maybe a dependency issue?

Here is the log

Container logs:

mq92j 2023-03-16T09:49:18.819Z /home/user/.local/lib/python3.8/site-packages/gradio/inputs.py:256: UserWarning: Usage of gradio.inputs is deprecated, and will not be supported in the future, please import your component from gradio.components
mq92j 2023-03-16T09:49:18.819Z   warnings.warn(
mq92j 2023-03-16T09:49:18.819Z /home/user/.local/lib/python3.8/site-packages/gradio/deprecation.py:40: UserWarning: `optional` parameter is deprecated, and it has no effect
mq92j 2023-03-16T09:49:18.819Z   warnings.warn(value)
mq92j 2023-03-16T09:49:18.819Z /home/user/.local/lib/python3.8/site-packages/gradio/outputs.py:196: UserWarning: Usage of gradio.outputs is deprecated, and will not be supported in the future, please import your components from gradio.components
mq92j 2023-03-16T09:49:18.819Z   warnings.warn(
mq92j 2023-03-16T09:49:18.819Z /home/user/.local/lib/python3.8/site-packages/gradio/deprecation.py:40: UserWarning: The 'type' parameter has been deprecated. Use the Number component instead.
mq92j 2023-03-16T09:49:18.819Z   warnings.warn(value)
mq92j 2023-03-16T09:49:18.875Z /home/user/.local/lib/python3.8/site-packages/gradio/interface.py:313: UserWarning: Currently, only the 'default' theme is supported.
mq92j 2023-03-16T09:49:18.875Z   warnings.warn("Currently, only the 'default' theme is supported.")
mq92j 2023-03-16T09:49:19.412Z IMPORTANT: You are using gradio version 3.1.1, however version 3.14.0 is available, please upgrade.
mq92j 2023-03-16T09:49:19.412Z --------
mq92j 2023-03-16T09:49:19.413Z Caching examples at: '/home/user/app/gradio_cached_examples/12/log.csv'
mq92j 2023-03-16T09:49:19.439Z Traceback (most recent call last):
mq92j 2023-03-16T09:49:19.439Z   File "app.py", line 26, in <module>
mq92j 2023-03-16T09:49:19.439Z     intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
mq92j 2023-03-16T09:49:19.439Z   File "/home/user/.local/lib/python3.8/site-packages/gradio/interface.py", line 599, in __init__
mq92j 2023-03-16T09:49:19.440Z     self.examples_handler = Examples(
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/gradio/examples.py", line 154, in __init__
mq92j 2023-03-16T09:49:19.440Z     self.cache_interface_examples()
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/gradio/examples.py", line 189, in cache_interface_examples
mq92j 2023-03-16T09:49:19.440Z     raise e
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/gradio/examples.py", line 185, in cache_interface_examples
mq92j 2023-03-16T09:49:19.440Z     prediction = self.process_example(example_id)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/gradio/examples.py", line 205, in process_example
mq92j 2023-03-16T09:49:19.440Z     predictions = self.fn(*processed_input)
mq92j 2023-03-16T09:49:19.440Z   File "app.py", line 17, in classify_image
mq92j 2023-03-16T09:49:19.440Z     pred,idx,probs = learn.predict(img)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 321, in predict
mq92j 2023-03-16T09:49:19.440Z     inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 308, in get_preds
mq92j 2023-03-16T09:49:19.440Z     self._do_epoch_validate(dl=dl)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 244, in _do_epoch_validate
mq92j 2023-03-16T09:49:19.440Z     with torch.no_grad(): self._with_events(self.all_batches, 'validate', CancelValidException)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 199, in _with_events
mq92j 2023-03-16T09:49:19.440Z     try: self(f'before_{event_type}');  f()
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 205, in all_batches
mq92j 2023-03-16T09:49:19.440Z     for o in enumerate(self.dl): self.one_batch(*o)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 235, in one_batch
mq92j 2023-03-16T09:49:19.440Z     self._with_events(self._do_one_batch, 'batch', CancelBatchException)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 199, in _with_events
mq92j 2023-03-16T09:49:19.440Z     try: self(f'before_{event_type}');  f()
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 216, in _do_one_batch
mq92j 2023-03-16T09:49:19.440Z     self.pred = self.model(*self.xb)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
mq92j 2023-03-16T09:49:19.440Z     return forward_call(*input, **kwargs)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
mq92j 2023-03-16T09:49:19.440Z     input = module(input)
mq92j 2023-03-16T09:49:19.440Z   File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
mq92j 2023-03-16T09:49:19.441Z     return forward_call(*input, **kwargs)
mq92j 2023-03-16T09:49:19.441Z   File "/home/user/.local/lib/python3.8/site-packages/fastai/vision/learner.py", line 177, in forward
mq92j 2023-03-16T09:49:19.441Z     def forward(self,x): return self.model.forward_features(x) if self.needs_pool else self.model(x)
mq92j 2023-03-16T09:49:19.441Z   File "/home/user/.local/lib/python3.8/site-packages/timm/models/convnext.py", line 397, in forward_features
mq92j 2023-03-16T09:49:19.441Z     x = self.stem(x)
mq92j 2023-03-16T09:49:19.441Z   File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
mq92j 2023-03-16T09:49:19.441Z     return forward_call(*input, **kwargs)
mq92j 2023-03-16T09:49:19.441Z   File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/container.py", line 141, in forward
mq92j 2023-03-16T09:49:19.441Z     input = module(input)
mq92j 2023-03-16T09:49:19.441Z   File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
mq92j 2023-03-16T09:49:19.441Z     return forward_call(*input, **kwargs)
mq92j 2023-03-16T09:49:19.441Z   File "/home/user/.local/lib/python3.8/site-packages/timm/models/layers/norm.py", line 67, in forward
mq92j 2023-03-16T09:49:19.441Z     if self._fast_norm:
mq92j 2023-03-16T09:49:19.441Z   File "/home/user/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1185, in __getattr__
mq92j 2023-03-16T09:49:19.441Z     raise AttributeError("'{}' object has no attribute '{}'".format(
mq92j 2023-03-16T09:49:19.441Z AttributeError: 'LayerNorm2d' object has no attribute '_fast_norm'

No Free Option for Paperspace Gradient

Not sure if this is the place to file comments about resources for the course, but just wanted to relay my experience. I came here via the book > website > https://course.fast.ai/#how-do-i-get-started. It says:

If you don’t have a Paperspace account yet, sign up with [this link](https://console.paperspace.com/signup?R=lg6rnx) to get $10 credit – and we get a credit too.

Happy to support any team that offers resources for free. I registered and tried to create a notebook, and no matter which "free" option I choose, it asks for a credit card. It's not even clear whether they just want the card or if they are going to charge you. I tried Kaggle and Colab, and neither asked me for a credit card to at least create a notebook. I've already dealt with surprise bills from AWS, so I'm not interested in finding out whether that's possible with Paperspace. It's really too bad as well, since the interface for Paperspace looked really good. But I'm not giving them my credit card number until I can at least try out their service.

Just wanted to relay this to the authors since they are recommending the service. Feel free to close this issue as appropriate, and I appreciate the time and attention.

How to make the code run using the local gpu

I run the notebook on my PC. When I use learn.fine_tune(), the time taken is far higher than mentioned in the notes. I found that this is because training uses the CPU instead of the GPU: when I run the code, the GPU memory usage doesn't increase. How can I solve this problem?
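
A quick way to diagnose this is to check whether PyTorch can see a CUDA device at all; if it cannot, you are most likely running a CPU-only PyTorch build (a generic sketch, not specific to this notebook):

import torch

print(torch.__version__)                  # CPU-only pip builds usually carry a '+cpu' suffix
print(torch.cuda.is_available())          # must be True for fastai to train on the GPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # which GPU PyTorch will use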

Typo - Part 2 overview

Throughout the course, we’ll PyTorch to implement our models, and will create our own...

I think the word use might be missing before PyTorch.

Minor typo in course22 welcome page

As I couldn't find the relevant file to fix and push the changes myself, I'll open an issue here.

There is a tiny typo in The software you will be using section of the course's website.
The following sentence is missing an is:

The fastai library is one of the most popular libraries for adding this higher-level functionality on top of PyTorch

Cheers and thank you all for the amazing work!

second cell of first lesson is broken

On the first page of the interactive notebook for the class: https://www.kaggle.com/code/jdlong/is-it-a-bird-creating-a-model-from-your-own-data/edit

the second cell installs fastai if it's missing:

import os
iskaggle = os.environ.get('KAGGLE_KERNEL_RUN_TYPE', '')

if iskaggle:
    !pip install -Uqq fastai

This fails pretty miserably with the following error:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
tensorflow-io 0.21.0 requires tensorflow-io-gcs-filesystem==0.21.0, which is not installed.
explainable-ai-sdk 1.3.2 requires xai-image-widget, which is not installed.
tensorflow 2.6.2 requires numpy~=1.19.2, but you have numpy 1.20.3 which is incompatible.
tensorflow 2.6.2 requires six~=1.15.0, but you have six 1.16.0 which is incompatible.
tensorflow 2.6.2 requires typing-extensions~=3.7.4, but you have typing-extensions 3.10.0.2 which is incompatible.
tensorflow 2.6.2 requires wrapt~=1.12.1, but you have wrapt 1.13.3 which is incompatible.
tensorflow-transform 1.5.0 requires absl-py<0.13,>=0.9, but you have absl-py 0.15.0 which is incompatible.
tensorflow-transform 1.5.0 requires numpy<1.20,>=1.16, but you have numpy 1.20.3 which is incompatible.
tensorflow-transform 1.5.0 requires pyarrow<6,>=1, but you have pyarrow 6.0.1 which is incompatible.
tensorflow-transform 1.5.0 requires tensorflow!=2.0.*,!=2.1.*,!=2.2.*,!=2.3.*,!=2.4.*,!=2.5.*,!=2.6.*,<2.8,>=1.15.2, but you have tensorflow 2.6.2 which is incompatible.
tensorflow-serving-api 2.7.0 requires tensorflow<3,>=2.7.0, but you have tensorflow 2.6.2 which is incompatible.
flake8 4.0.1 requires importlib-metadata<4.3; python_version < "3.8", but you have importlib-metadata 4.11.3 which is incompatible.
apache-beam 2.34.0 requires dill<0.3.2,>=0.3.1.1, but you have dill 0.3.4 which is incompatible.
apache-beam 2.34.0 requires httplib2<0.20.0,>=0.8, but you have httplib2 0.20.2 which is incompatible.
apache-beam 2.34.0 requires pyarrow<6.0.0,>=0.15.1, but you have pyarrow 6.0.1 which is incompatible.
aioitertools 0.10.0 requires typing_extensions>=4.0; python_version < "3.10", but you have typing-extensions 3.10.0.2 which is incompatible.
aiobotocore 2.1.2 requires botocore<1.23.25,>=1.23.24, but you have botocore 1.24.20 which is incompatible.

While that error looks fairly bad, it seems like the code following it still runs, including the fastai bits. So maybe it's not critical? Still, it's a pretty harsh intro to the stack for a total newbie.

Collaborative Deep Learning Implementation

Jeremy - In the deep learning implementation of collaborative filtering, the input is the concatenated embeddings of users and items. However, my understanding is that the model is not learning the embedding matrices here; instead it's only learning the weights (176 * 100) in the first layer and (100 * 1) in the second layer. Am I missing something? Appreciate your input.
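
For what it's worth, a minimal sketch of such a model shows that the embedding matrices are ordinary learnable parameters trained together with the linear layers (all sizes here are placeholders, not the course's actual ones):

import torch
from torch import nn

class CollabNN(nn.Module):
    def __init__(self, n_users, n_items, emb_sz=50, n_hidden=100):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_sz)   # learnable user embedding matrix
        self.item_emb = nn.Embedding(n_items, emb_sz)   # learnable item embedding matrix
        self.layers = nn.Sequential(
            nn.Linear(emb_sz*2, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1))
    def forward(self, user, item):
        x = torch.cat([self.user_emb(user), self.item_emb(item)], dim=1)
        return self.layers(x)

model = CollabNN(n_users=1000, n_items=2000)
# Both embedding weights show up in model.parameters(), so the optimizer updates them too
for name, p in model.named_parameters(): print(name, tuple(p.shape))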

bad link

On the page https://course.fast.ai, the link in the text "Start watching [lesson 1](https://course.fast.ai/Lessons/lesson1.md) now!" returns a 404. This link is present ~7 times (!) on that page.

Possibly the proper link is https://course.fast.ai/Lessons/lesson1.html ... note the .html extension instead of .md.

After some browsing I see this bad link everywhere, for example https://course.fast.ai/Resources/testimonials.html, so it would be better to run that regex fix over the whole site.

Clean folder missing

I don't see the clean folder in the master branch any more. It may have been deleted accidentally.

tensor creation KeyError: 5

(screenshot of the failing tensor-creation cell omitted)

For lesson 05-linear-model-and-neural-net-from-scratch.ipynb the above command gives an exception (KeyError: 5) on Kaggle. You need to use .values, as in df.Survived.values, for this to work correctly. Kindly update.
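
A tiny reproduction of the suggested fix with stand-in data; the point is that .values hands torch a plain NumPy array instead of a pandas Series:

import pandas as pd
import torch

df = pd.DataFrame({'Survived': [0, 1, 1, 0, 1, 0]})   # stand-in for the Titanic data
t = torch.tensor(df.Survived.values)                  # works reliably across pandas versions
print(t)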

Thanks

TypeError for format string in "Is it a bird?" notebook

I'm following the lesson 1 notebook Is it a bird? Creating a model from your own data linked to in the resources page. I've copied this on Kaggle and run it myself. The notebook works and seems to finish; however, I get quite a lot of error messages about formatting strings. This seems to be an issue with __repr__(x:Image.Image), where there is no argument supplied for the 0x%X placeholder. I assume this should be supplied or the placeholder removed from the format string.
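
A sketch of the fix the report suggests, using fastcore's patch decorator as fastai's source does (passing id(x) is one plausible value for the missing 0x%X argument):

from fastcore.basics import patch
from PIL import Image

@patch
def __repr__(x: Image.Image):
    # supply a sixth value (the object id) so the argument count matches the six placeholders
    return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % (
        x.__class__.__module__, x.__class__.__name__,
        x.mode, x.size[0], x.size[1], id(x))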

The output is:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/IPython/core/formatters.py in __call__(self, obj)
    700                 type_pprinters=self.type_printers,
    701                 deferred_pprinters=self.deferred_printers)
--> 702             printer.pretty(obj)
    703             printer.flush()
    704             return stream.getvalue()

/opt/conda/lib/python3.7/site-packages/IPython/lib/pretty.py in pretty(self, obj)
    392                         if cls is not object \
    393                                 and callable(cls.__dict__.get('__repr__')):
--> 394                             return _repr_pprint(obj, self, cycle)
    395 
    396             return _default_pprint(obj, self, cycle)

/opt/conda/lib/python3.7/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
    698     """A pprint that just redirects to the normal repr function."""
    699     # Find newlines and replace them with p.break_()
--> 700     output = repr(obj)
    701     lines = output.splitlines()
    702     with p.group():

/opt/conda/lib/python3.7/site-packages/fastai/vision/core.py in __repr__(x)
     26 @patch
     27 def __repr__(x:Image.Image):
---> 28     return "<%s.%s image mode=%s size=%dx%d at 0x%X>" % (x.__class__.__module__, x.__class__.__name__, x.mode, x.size[0], x.size[1])
     29 
     30 # %% ../nbs/07_vision.core.ipynb 11

TypeError: not enough arguments for format string

Pet Classifier Broken

Hi Jeremy,

I just wanted to let you know that the Hugging Face Pet Breed Classifier is broken. I think it is a problem with Gradio.
