
walkwithfastai.github.io's Introduction

Walk with fastai

Something is Coming...

Find out more

What is this project?

Welcome to Walk with fastai! This project was started by me (Zachary Mueller) as a way to collect interesting techniques dotted throughout the fast.ai forums, my own course materials, and the fantastic work of others into one centralized place.

The goal is to have something much more than a "fastai recipe book": a place where authors can explore topics involving the fastai library in depth.

There is a central framework and theme for each super chapter, as you can see in the Table of Contents menu at the top of this screen. Topics are broken down by their super topic, such as:

  • Vision
    • Classification
      • Single Label

This collection of articles ranges from the simple, showing a few lines of code for a use case, to the advanced, introducing and explaining a topic in full.

At the end of the day, I want this site to be a nice resource for folks to quickly look up how to do some technique with the library that perhaps isn't 100% integrated or supported in the library's current state.

walkwithfastai.github.io's People

Contributors

arampacha, carbocation, d-miketa, dependabot[bot], deven367, dreamflasher, frankfletcher, isaac-flath, jansobus, muellerzr, sakibsadmanshajib, slm37102, vrodriguezf


walkwithfastai.github.io's Issues

BBoxBlock issue with PyTorch 1.8.x

There's an issue when running nbs/course2020/vision/06_Object_Detection.ipynb using PyTorch 1.8.x.

Creating a DataBlock (with BBoxBlock and BBoxLblBlock as blocks) using the getters specified will cause some issues.

Running DataBlock.summary() will give the following error:
TypeError: no implementation found for 'torch.Tensor.__getitem__' on types that implement __torch_function__: [TensorMultiCategory, TensorBBox]
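A possible workaround (an untested sketch, mirroring the `register_func` pattern fastai itself uses, which a later issue on this page quotes from `00_torch_core.ipynb`): explicitly register `__getitem__` for this pair of tensor subclasses.

```python
# Hedged workaround sketch: register __getitem__ across the two subclasses so
# fastai's __torch_function__ dispatch accepts the mixed indexing call.
from torch import Tensor
from fastai.torch_core import TensorBase, TensorMultiCategory
from fastai.vision.core import TensorBBox

TensorBase.register_func(Tensor.__getitem__, TensorMultiCategory, TensorBBox)
```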

Something strange in timm.py

Hi,

Thank you for your amazing work with wwf!

I think there's something strange at line 49: the third positional argument of create_timm_model is cut, but what's being passed in that position on that line is default_split.

It doesn't raise an error only because at line 29 the parameter cut is forced to None.

Best

Course 2020 > 07 Binary Segmentation Can't be Run

Hi all,

There are actually two issues with running https://walkwithfastai.com/Binary_Segmentation in Google Colab.

First Issue:

The download is broken: the Google Drive URL no longer works directly. You have to download the file yourself and then upload it, because Google asks you to confirm that you really want to download it.

Second Issue:

Once I manually uploaded the zip file, I ran the notebook; it gets stuck loading the images and gives a very unhelpful error on the line dls = binary.dataloaders(path/'images_data_crop', bs=8).

The error looks like this:

AttributeError: read

I was curious if anyone else has run into this?
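For what it's worth, "AttributeError: read" during dataloading often means a corrupt or truncated image file (plausible here given the broken download). A quick hedged check, assuming fastai's standard utilities and the notebook's path:

```python
from fastai.vision.all import get_image_files, verify_images

# Flag images PIL cannot open; these typically surface as "AttributeError: read".
fns = get_image_files(path/'images_data_crop')
failed = verify_images(fns)
print(len(failed), failed[:5])
# failed.map(Path.unlink)  # optionally delete the bad files, then rebuild the DataLoaders
```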

Model can't predict the data because of device mismatch

In file https://github.com/walkwithfastai/walkwithfastai.github.io/blob/ffcc0b98c2b4d62777e042cb722a5f47c4f40702/nbs/03_tab.ae.ipynb

Section: Getting the compressed representations

outs = []
for batch in dl:
    with torch.no_grad():
        learn.model.eval()
        learn.model.cuda()
        out = learn.model(*batch[:2], encode=True).cpu().numpy()
        outs.append(out)
outs = np.concatenate(outs)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-29-42e3a106393f> in <module>()
      4         learn.model.eval()
      5         learn.model.cuda()
----> 6         out = learn.model(*batch[:2], encode=True).cpu().numpy()
      7         outs.append(out)
      8 outs = np.concatenate(outs)

6 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

<ipython-input-23-b5be2cb63886> in forward(self, x_cat, x_cont, encode)
     28             x_cat = self.noise(x_cat)
     29             x_cont = self.noise(x_cont)
---> 30         encoded = super().forward(x_cat, x_cont)
     31         if encode: return encoded # return the representation
     32         decoded_trunk = self.decoder(encoded)

/usr/local/lib/python3.7/dist-packages/fastai/tabular/model.py in forward(self, x_cat, x_cont)
     47     def forward(self, x_cat, x_cont=None):
     48         if self.n_emb != 0:
---> 49             x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
     50             x = torch.cat(x, 1)
     51             x = self.emb_drop(x)

/usr/local/lib/python3.7/dist-packages/fastai/tabular/model.py in <listcomp>(.0)
     47     def forward(self, x_cat, x_cont=None):
     48         if self.n_emb != 0:
---> 49             x = [e(x_cat[:,i]) for i,e in enumerate(self.embeds)]
     50             x = torch.cat(x, 1)
     51             x = self.emb_drop(x)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
    158         return F.embedding(
    159             input, self.weight, self.padding_idx, self.max_norm,
--> 160             self.norm_type, self.scale_grad_by_freq, self.sparse)
    161 
    162     def extra_repr(self) -> str:

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2041         # remove once script supports set_grad_enabled
   2042         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2043     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   2044 
   2045 

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking arugment for argument index in method wrapper_index_select)
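A hedged fix: the DataLoader here yields CPU batches while the model has been moved to CUDA, so moving each batch to the model's device resolves the mismatch. A sketch, assuming the notebook's dl and learn:

```python
import numpy as np
import torch
from fastai.torch_core import to_device

learn.model.eval().cuda()             # model on GPU, as in the notebook
outs = []
for batch in dl:
    batch = to_device(batch, 'cuda')  # move the CPU batch to the model's device
    with torch.no_grad():
        out = learn.model(*batch[:2], encode=True).cpu().numpy()
        outs.append(out)
outs = np.concatenate(outs)
```

Alternatively, setting the DataLoader's device up front (e.g. building it with the Learner's dls) avoids the per-batch move.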

Running models takes hours

Since the issue yesterday and today, just running resnet50 on an image set takes my models 4 hours per epoch. It used to take about 4 minutes. I'm running with GPU and higher RAM on Colab, and I've verified the GPU is active. Is something up?

tab_ae.ipynb; could not do one pass in your dataloader

where: 03_tab.ae.ipynb,

cell/command:
to = TabularPandasIdentity(df, [Categorify, FillMissing, Normalize], cat_names, cont_names, splits=RandomSplitter(seed=32)(df))
dls = to.dataloaders(bs=1024)

error: .iloc requires numeric indexers, got [None]

cannot do _one_pass

Object Detection is unable to build a dataloader

https://walkwithfastai.com/Object_Detection

Using the current version of fastai, attempting to show a batch produces this error:

RuntimeError: Error when trying to collate the data into batches with fa_collate, at least two tensors in the batch are not the same size.

Mismatch found on axis 1 of the batch and is of type `TensorBBox`:
	Item at index 0 has shape: torch.Size([6, 4])
	Item at index 1 has shape: torch.Size([1, 4])

Please include a transform in `after_item` that ensures all data of type TensorBBox is the same size

If I install the versions listed on the page (fastai==2.1.10 fastcore==1.3.13 wwf==0.0.8), I get an initial import error:
ModuleNotFoundError: No module named 'torchvision.models.utils'
... for which there is a workaround, which then leaves me with this error for pascal.summary:

TypeError: no implementation found for 'torch.Tensor.__getitem__' on types that implement __torch_function__: [<class 'fastai.torch_core.TensorMultiCategory'>, <class 'fastai.vision.core.TensorBBox'>]

bbox_to_activ in metrics.py

def bbox_to_activ(bboxes, anchors, flatten=True):
    "Return the target of the model on `anchors` for the `bboxes`."
    if flatten:
        t_centers = (bboxes[...,:2] - anchors[...,:2]) / anchors[...,2:]
        t_sizes = torch.log(bboxes[...,2:] / anchors[...,2:] + 1e-8)
        return torch.cat([t_centers, t_sizes], -1).div_(bboxes.new_tensor([[0.1, 0.1, 0.2, 0.2]]))
    else: return [activ_to_bbox(act,anc) for act,anc in zip(acts, anchors)]
    return res

Shouldn't line 23 be

else: return [bbox_to_activ(bbox, anc) for bbox, anc in zip(bboxes, anchors)]
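For reference, the function with that fix applied (and the unreachable trailing `return res` dropped) would presumably read:

```python
def bbox_to_activ(bboxes, anchors, flatten=True):
    "Return the target of the model on `anchors` for the `bboxes`."
    if flatten:
        t_centers = (bboxes[...,:2] - anchors[...,:2]) / anchors[...,2:]
        t_sizes = torch.log(bboxes[...,2:] / anchors[...,2:] + 1e-8)
        return torch.cat([t_centers, t_sizes], -1).div_(bboxes.new_tensor([[0.1, 0.1, 0.2, 0.2]]))
    else:
        # recurse per-item instead of calling the undefined activ_to_bbox/acts
        return [bbox_to_activ(bbox, anc) for bbox, anc in zip(bboxes, anchors)]
```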

Ch7 Super Resolution unet_config not defined

The notebook currently uses unet_config, an obsolete way of passing learner params.

#Problem:
cfg = unet_config(blur=True, norm_type=NormType.Weight, self_attention=True,
                  y_range=y_range)

def create_gen_learner():
    return unet_learner(dls_gen, bbone, loss_func=loss_gen, config=cfg)

#Fix: remove the cfg line and feed the params directly to the learner
def create_gen_learner():
    return unet_learner(dls_gen, bbone, loss_func=loss_gen,
                        blur=True, norm_type=NormType.Weight, self_attention=True,
                        y_range=y_range)

Bug in wwf.utils.state_version

It uses the now-deprecated pkg_resources package (see googleapis/python-api-core#27). It should use importlib.metadata (https://docs.python.org/3/library/importlib.metadata.html#distribution-versions), backported as importlib_metadata for Python < 3.8 (Colab uses 3.6.9).

For example, running 03_tab.ae.ipynb on Colab installs the wwf package, which requires fastai 2 (and installs 2.0.16), but state_version reports 1.0.61.

Running 03_tab.stats.ipynb on Colab, executing state_versions(['fastai', 'fastcore', 'wwf']) throws
ContextualVersionConflict: (fastai 1.0.61 (/usr/local/lib/python3.6/dist-packages), Requirement.parse('fastai>=2.0.0'), {'wwf'}) for the same reason.
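A minimal sketch of what a fixed state_versions could look like, assuming importlib.metadata (stdlib from Python 3.8) with the importlib_metadata backport on older interpreters:

```python
import sys

if sys.version_info >= (3, 8):
    from importlib.metadata import version
else:
    from importlib_metadata import version  # pip install importlib_metadata

def state_versions(packages):
    "Print the installed version of each package, as reported by its own metadata."
    for pkg in packages:
        print(f'{pkg} : {version(pkg)}')

state_versions(['fastai', 'fastcore', 'wwf'])
```

Reading the distribution metadata directly avoids the stale working-set that pkg_resources caches at import time, which is what produces the 1.0.61 report above.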

unsupported operand type(s) for *: 'TensorMultiCategory' and 'TensorImage'

Hi, I'm trying to use the wwf lib for my own dataset. I am using the following versions in colab:

  • fastai : 2.4.1

  • fastcore : 1.3.25

  • wwf : 0.0.16

I can create the dataset with no problems. The output from the DataBlock.summary() looks fine to me:

Setting-up type transforms pipelines
Collecting items from /content/gdrive/MyDrive/train_images_temp
Found 718 items
2 datasets of sizes 575,143
Setting up Pipeline: PILBase.create
Setting up Pipeline: get_bbox -> TensorBBox.create
Setting up Pipeline: get_lbl -> MultiCategorize -- {'vocab': None, 'sort': True, 'add_na': True}

Building one sample
Pipeline: PILBase.create
starting from
/content/gdrive/MyDrive/train_images_temp/A03_05Da_26.tiff
applying PILBase.create gives
PILImage mode=RGB size=256x256
Pipeline: get_bbox -> TensorBBox.create
starting from
/content/gdrive/MyDrive/train_images_temp/A03_05Da_26.tiff
applying get_bbox gives
[[54, 7, 104, 57]]
applying TensorBBox.create gives
TensorBBox of size 1x4
Pipeline: get_lbl -> MultiCategorize -- {'vocab': None, 'sort': True, 'add_na': True}
starting from
/content/gdrive/MyDrive/train_images_temp/A03_05Da_26.tiff
applying get_lbl gives
[mitosis]
applying MultiCategorize -- {'vocab': None, 'sort': True, 'add_na': True} gives
TensorMultiCategory([1])

Final sample: (PILImage mode=RGB size=256x256, TensorBBox([[ 54., 7., 104., 57.]]), TensorMultiCategory([1]))

Collecting items from /content/gdrive/MyDrive/train_images_temp
Found 718 items
2 datasets of sizes 575,143
Setting up Pipeline: PILBase.create
Setting up Pipeline: get_bbox -> TensorBBox.create
Setting up Pipeline: get_lbl -> MultiCategorize -- {'vocab': None, 'sort': True, 'add_na': True}
Setting up after_item: Pipeline: BBoxLabeler -> PointScaler -> ToTensor
Setting up before_batch: Pipeline: bb_pad
Setting up after_batch: Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1}

Building one batch
Applying item_tfms to the first sample:
Pipeline: BBoxLabeler -> PointScaler -> ToTensor
starting from
(PILImage mode=RGB size=256x256, TensorBBox of size 1x4, TensorMultiCategory([1]))
applying BBoxLabeler gives
(PILImage mode=RGB size=256x256, TensorBBox of size 1x4, TensorMultiCategory([1]))
applying PointScaler gives
(PILImage mode=RGB size=256x256, TensorBBox of size 1x4, TensorMultiCategory([1]))
applying ToTensor gives
(TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1]))

Adding the next 3 samples

Applying before_batch to the list of samples
Pipeline: bb_pad
starting from
[(TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1])), (TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1])), (TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1])), (TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1]))]
applying bb_pad gives
[(TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1])), (TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1])), (TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1])), (TensorImage of size 3x256x256, TensorBBox of size 1x4, TensorMultiCategory([1]))]

Collating items in a batch

Applying batch_tfms to the batch built
Pipeline: IntToFloatTensor -- {'div': 255.0, 'div_mask': 1}
starting from
(TensorImage of size 4x3x256x256, TensorBBox of size 4x1x4, TensorMultiCategory of size 4x1)
applying IntToFloatTensor -- {'div': 255.0, 'div_mask': 1} gives
(TensorImage of size 4x3x256x256, TensorBBox of size 4x1x4, TensorMultiCategory of size 4x1)

I am creating the RetinaNet as described in 06_Object_Detection.ipynb:

encoder = create_body(resnet34, pretrained=True)
model = RetinaNet(encoder, get_c(dls), chs=128, final_bias=-4)

ratios = [1/2,1,2]
scales = [1,2**(-1/3), 2**(-2/3)]
crit = RetinaNetFocalLoss(scales=scales, ratios=ratios)

# function to split model into encoder and head
def _retinanet_split(m): return L(m.encoder,nn.Sequential(m.c5top6, m.p6top7, m.merges, m.smoothers, m.classifier, m.box_regressor)).map(params)

learn = Learner(dls, model, loss_func=crit, splitter=_retinanet_split)
learn.freeze()

However, when I do the following:

# run learning rate finder
learn.lr_find()
learn.recorder.plot()

I am getting the following TypeError message:

/usr/local/lib/python3.7/dist-packages/wwf/vision/object_detection/metrics.py in _focal_loss(self, clas_pred, clas_tgt)
    118         encoded_tgt = encode_class(clas_tgt, clas_pred.size(1))
    119         ps = torch.sigmoid(clas_pred.detach())
--> 120         weights = encoded_tgt * (1-ps) + (1-encoded_tgt) * ps
    121         alphas = (1-encoded_tgt) * self.alpha + encoded_tgt * (1-self.alpha)
    122         weights.pow_(self.gamma).mul_(alphas)

TypeError: unsupported operand type(s) for *: 'TensorMultiCategory' and 'TensorImage'

When I look into the fastai source in 00_torch_core.ipynb, I find that multiplication is only defined for TensorImage and TensorMask, but not for TensorMultiCategory:

#export
for o in Tensor.__getitem__, Tensor.__ne__,Tensor.__eq__,Tensor.add,Tensor.sub,Tensor.mul,Tensor.div,Tensor.__rsub__,Tensor.__radd__,Tensor.matmul,Tensor.bmm:
    TensorBase.register_func(o, TensorMask, TensorImageBase)
    TensorBase.register_func(o, TensorImageBase, TensorMask)

TensorMask.register_func(torch.einsum, str, TensorImageBase, TensorMask)
TensorMask.register_func(torch.einsum, str, TensorMask, TensorImageBase)

Can anyone help me out here? Is there perhaps a quick workaround to fix it?

Thanks in advance ;)
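One possible workaround (a hedged, untested sketch) is to mirror the registrations quoted above for TensorMultiCategory, so elementwise ops between the two subclasses dispatch cleanly:

```python
from torch import Tensor
from fastai.torch_core import TensorBase, TensorImageBase, TensorMultiCategory

# Untested sketch: register elementwise ops for the failing type pair, in both orders,
# following the same register_func pattern fastai uses for TensorMask above.
for o in (Tensor.mul, Tensor.add, Tensor.sub):
    TensorBase.register_func(o, TensorMultiCategory, TensorImageBase)
    TensorBase.register_func(o, TensorImageBase, TensorMultiCategory)
```

Alternatively, casting `clas_pred` (or `ps`) to a plain `TensorBase` inside the loss sidesteps the dispatch check entirely.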

MobileNet-V3 Compatibility with timm

MobileNet-V3 has a Conv2D layer after the pooling layer, so the default cut function chops off the last bit of the model.

Here's the last bit of the actual body (screenshot omitted).

The model is cut correctly if you pass in cut=-1 though. However, I don't think most users would be aware of this beforehand. So, maybe something like this would be nice:

def create_timm_body(arch:str, pretrained=True, cut=None, n_in=3):
    "Creates a body from any model in the `timm` library."
    if 'mobilenetv3' in arch and cut is None:
        raise ValueError("Due to the special architecture of MobileNet-V3, you need to pass cut=-1 to use it")
    model = create_model(arch, pretrained=pretrained, num_classes=0, global_pool='', exportable=True)
    _update_first_layer(model, n_in, pretrained)
    if cut is None:
        ll = list(enumerate(model.children()))
        cut = next(i for i,o in reversed(ll) if has_pool_type(o))
    if isinstance(cut, int): return nn.Sequential(*list(model.children())[:cut])
    elif callable(cut): return cut(model)
    else: raise NameError("cut must be either integer or function")

Or you could hard-code it and print a warning that cut is being overridden whenever the user passes in a value.

[03_Multi_Label.ipynb] KeyError: "Labels 'blow_down' were not included in the training dataset"

When running the 03_Multi_Label notebook, I got this exception when running fit_one_cycle:
KeyError: "Labels 'blow_down' were not included in the training dataset"

When checking the tags, I see that only 1 element has this 'blow_down' label, so I guess it was bad luck to have this element land in the validation set.

This particular tag has just a single instance:

df['blow_down'] = df['tags'].str.contains('blow_down')
df['blow_down'].value_counts()
# False    999
# True       1

A simple workaround is to skip this row when building the dataloaders (a sketch follows below), but there may be better ways to handle it.
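For example, a hedged sketch of that workaround, using the column names from the notebook:

```python
# Drop the single 'blow_down' row before splitting, so no label exists only in
# the validation set. Brute-force, but it unblocks fit_one_cycle.
df = df[~df['tags'].str.contains('blow_down')].reset_index(drop=True)
```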

1.8 pytorch problem in 06_Object_Detection

Executing
dls = pascal.dataloaders(path/'train')
results in
Could not do one pass in your dataloader, there is something wrong in it

Running the summary gives:

/usr/local/lib/python3.7/dist-packages/fastai/vision/data.py in clip_remove_empty(bbox, label)
     33     bbox = torch.clamp(bbox, -1, 1)
     34     empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) <= 0.)
---> 35     return (bbox[~empty], label[~empty])
     36
     37 # Cell

TypeError: no implementation found for 'torch.Tensor.__getitem__' on types that implement __torch_function__: [TensorMultiCategory, TensorBBox]

Fastai wwf, timm trained (exported) model: loading in SageMaker endpoint errors

I am trying to deploy a fastai-trained model (built using wwf (Walk with fastai) and timm) to a SageMaker endpoint.
I have also included the required libraries in requirements.txt.
But the model cannot be loaded by load_learner:

This is my model_fn:

![Screenshot 2021-05-26 160613](https://user-images.githubusercontent.com/62832721/119653690-7ba81580-bddc-11eb-84be-fa6757e9825b.png)

# loads the model into memory from disk and returns it
def model_fn(model_dir):
  logger.info('model_fn')
  path_model = Path(model_dir)
  logger.debug(f'Loading model from path: {str(path_model/EXPORT_MODEL_NAME)}')
  print(path_model/EXPORT_MODEL_NAME)
  defaults.device = torch.device('cpu')
  print("Trying to Load Model::::")
  learn = load_learner(path_model/EXPORT_MODEL_NAME, cpu=True)
  print('MODEL-LOADED')
  logger.info('model loaded successfully')
  return learn

So, in the screenshot you can see that "MODEL-LOADED" is never printed.
By comparison, a simple fastai-trained demo model loads fine.

So kindly help me with this.

Thanks a lot.
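A hedged guess at the usual cause: load_learner must unpickle against the same library versions used at export time, so the endpoint's requirements.txt should pin them exactly. The versions below are placeholders, not known-good values:

```
fastai==2.3.1
fastcore==1.3.20
timm==0.4.5
wwf==0.0.14
```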

07_Binary_Segmentation.ipynb gives an error ```NameError: name 'p2c' is not defined```

When running 07_Binary_Segmentation.ipynb, the cell dls = binary.dataloaders(path/'images_data_crop', bs=8) gives an error

<ipython-input-22-cd6c82b0e03c> in <lambda>(o)
----> 1 get_y = lambda o: get_msk(o, p2c)

NameError: name 'p2c' is not defined

I could solve this issue by changing the cell above from vals = n_codes(lbl_names) to p2c = n_codes(lbl_names).
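That is, the corrected cell would read:

```python
p2c = n_codes(lbl_names)            # was: vals = n_codes(lbl_names)
get_y = lambda o: get_msk(o, p2c)   # now p2c is defined when get_y runs
```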

03_Cross_Validation - type object 'Normalize' has no attribute 'from_tab'

The Normalize object does not have a "from_tab" attribute.


AttributeError                            Traceback (most recent call last)
<ipython-input> in <cell line: 1>()
----> 1 procs = [Categorify(classes), Normalize.from_tab(means, stds), FillMissing(fill_strategy=FillStrategy.median, fill_vals=fill_vals, na_dict=na_dict)]

AttributeError: type object 'Normalize' has no attribute 'from_tab'

fastai : 2.7.12
fastcore : 1.5.29
scikit-learn : 1.2.2
iterative-stratification : 0.1.7

RuntimeError: solve: MAGMA library not found in compilation. Please rebuild with MAGMA.

To be fair, I do not know if this is wwf, fastai, Kaggle, PyTorch, or what. But it ran once for me. I then re-ran it later and started to get:
"Could not do one pass in your dataloader, there is something wrong in it"
and show_batch gives me:
RuntimeError: solve: MAGMA library not found in compilation. Please rebuild with MAGMA.

Here is my notebook; I'm completely baffled, as I just ran it and it works in Colab.
https://www.kaggle.com/crained/fastai-timm/

grid_sampler(): expected input and grid to have same dtype, but input has float and grid has c10::Half

Hi,

I am trying to run an image regression task.

dls = ImageDataLoaders.from_df(train, path=Path('./train'), fn_col='fn', label_col='wind_speed',
                               y_block=RegressionBlock, seed=2020, item_tfms=Resize(224),
                               batch_tfms=aug_transforms(size=224), bs=15)
dls.show_batch()

Above code works fine.

But when I run the code below, training finishes smoothly but the start of validation causes this error:

grid_sampler(): expected input and grid to have same dtype, but input has float and grid has c10::Half

Maybe related to issues with PyTorch.

archs = ['efficientnet']

prediction_array = np.zeros(shape=(test.shape[0],1))

for arch in archs:
    print('Model ', arch)
    cbs = SaveModelCallback(every_epoch=False, fname='/content/drive/My Drive/wind-speed/models/effnet')
    learn = timm_learner(dls, 'efficientnet_b3', loss_func=MSELossFlat(), metrics=[rmse], cbs=cbs)
    #learn = cnn_learner(dls, arch, loss_func=MSELossFlat(), metrics=[rmse], cbs=cbs).to_fp16()
    learn.fine_tune(5)
    tdl = learn.dls.test_dl(test)
    preds = learn.get_preds(dl=tdl)
    prediction_array += preds[0].numpy()
    print('Prediction completed !!!')

Model efficientnet

epoch   train_loss   valid_loss   _rmse   time
(training progress widgets omitted)

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>()
     10 learn = timm_learner(dls, 'efficientnet_b3', loss_func=MSELossFlat(), metrics=[rmse], cbs=cbs)
     11 #learn = cnn_learner(dls, arch, loss_func=MSELossFlat(), metrics=[rmse], cbs=cbs).to_fp16()
---> 12 learn.fine_tune(5)

28 frames
(fine_tune -> fit_one_cycle -> fit -> _do_epoch -> _do_epoch_validate -> all_batches
-> DataLoader.__iter__ -> after_batch Pipeline -> affine-transform encodes; the
intermediate fastai/fastcore dispatch frames are elided here for readability)

/usr/local/lib/python3.6/dist-packages/fastai/vision/augment.py in affine_coord(x, mat, coord_tfm, sz, mode, pad_mode, align_corners)
    319     coords = affine_grid(mat, x.shape[:2] + size, align_corners=align_corners)
    320     if coord_tfm is not None: coords = coord_tfm(coords)
--> 321     return TensorImage(_grid_sample(x, coords, mode=mode, padding_mode=pad_mode, align_corners=align_corners))

/usr/local/lib/python3.6/dist-packages/fastai/vision/augment.py in _grid_sample(x, coords, mode, padding_mode, align_corners)
    304     else:
    305         x = F.interpolate(x, scale_factor=1/d, mode='area')
--> 306     return F.grid_sample(x, coords, mode=mode, padding_mode=padding_mode, align_corners=align_corners)

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in grid_sample(input, grid, mode, padding_mode, align_corners)
   3389         align_corners = False
   3390
-> 3391     return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)

RuntimeError: grid_sampler(): expected input and grid to have same dtype, but input has float and grid has c10::Half

03_Cross_Validation - splits w/ inconsistent #samples

The cross-validation loop attempts to split without all the image data. The data should include both the train images and the test images.

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-44-7f86ded87910> in <module>()
      2 tst_preds = []
      3 skf = StratifiedKFold(n_splits=10, shuffle=True)
----> 4 for _, val_idx in kf.split(np.array(train_imgs), train_labels):
      5   splits = IndexSplitter(val_idx)
      6   split = splits(train_imgs)

2 frames
/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
    210     if len(uniques) > 1:
    211         raise ValueError("Found input variables with inconsistent numbers of"
--> 212                          " samples: %r" % [int(l) for l in lengths])
    213 
    214 

ValueError: Found input variables with inconsistent numbers of samples: [9025, 12954]

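A hedged sketch of the constraint (note also that the quoted cell defines skf but calls kf.split, which may be a separate slip): the two arrays handed to split() must describe the same samples, so they must be the same length.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# X and y must be the same length; per the report, train_imgs here is missing
# part of the data that train_labels covers.
assert len(train_imgs) == len(train_labels), (len(train_imgs), len(train_labels))

skf = StratifiedKFold(n_splits=10, shuffle=True)
for _, val_idx in skf.split(np.array(train_imgs), train_labels):
    splits = IndexSplitter(val_idx)
    split = splits(train_imgs)
    # ... build the DataLoaders from `split` and train, as in the notebook
```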

Lesson 7 - Audio model to use in C++ framework (ONNX or ScriptModule via libtorch)

Hello Zachary,

Could you please help me export the audio model from FastAudio to ONNX or ScriptModule format, and explain how to run inference with the exported model? (I will be able to generate a spectrogram from an audio file and re-sample it to the right format using some C++ audio tools... hopefully!)

(My question on the inference side is how to build an input with the right dimensions and pass it to the model.)

Thanks,

Victor
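Not an authoritative answer, but a minimal sketch of the usual PyTorch route, assuming learn is the trained fastai Learner and that the model takes a single spectrogram tensor. The 1x1x128x128 shape is an assumption; match it to your pipeline:

```python
import torch

learn.model.eval()
dummy = torch.randn(1, 1, 128, 128)          # hypothetical spectrogram shape
torch.onnx.export(
    learn.model.cpu(), dummy, 'audio_model.onnx',
    input_names=['spectrogram'], output_names=['logits'],
    opset_version=11,
)
# ScriptModule route instead:
# torch.jit.trace(learn.model.cpu(), dummy).save('audio_model.pt')
```

From C++, the inference input would then be a tensor of that same shape and dtype, fed to ONNX Runtime or libtorch respectively.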

Lesson 5 - EfficientNet and Custom Pretrained Models errors on Google Colab

I went to https://walkwithfastai.com/EfficientNet_and_Custom_Weights and clicked the "Open in Colab" button. In Colab I set hardware acceleration to GPU and started executing the cells.

The notebook runs as expected up to the instruction:

body = create_timm_body('efficientnet_b3a', pretrained=True)

which generates the error:

NameError: name '_update_first_layer' is not defined

It's probably a versioning issue. The website shows:

  • fastai: 2.1.10
  • fastcore: 1.3.13
  • wwf: 0.0.7
  • timm: 0.3.2

but on Colab it gets:

  • fastai : 2.3.1
  • fastcore : 1.3.20
  • wwf : 0.0.14
  • timm : 0.4.5

Probably the newer fastai version hides the _update_first_layer function. In fact, adding a cell with the function definition pasted from here allows the notebook to run as expected; a version-pinning alternative is sketched below.
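A hedged alternative to pasting the function in: pin the versions listed on the page before importing wwf. Untested here, but it matches the site's stated environment:

```python
# Colab cell: pin the environment the notebook was written against.
!pip install fastai==2.1.10 fastcore==1.3.13 wwf==0.0.7 timm==0.3.2
```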

No module named 'timm.models.layers.se'

Following is my code:

from fastai.vision.all import *
from wwf.vision.timm import *
learn = load_learner('models/predict.pkl')

which gives the following error: No module named 'timm.models.layers.se'. Full error report:

ModuleNotFoundError Traceback (most recent call last)
in
----> 1 learn = load_learner('models/predict.pkl')

~/anaconda3/envs/fastjul/lib/python3.8/site-packages/fastai/learner.py in load_learner(fname, cpu, pickle_module)
382 "Load a Learner object in fname, optionally putting it on the cpu"
383 distrib_barrier()
--> 384 res = torch.load(fname, map_location='cpu' if cpu else None, pickle_module=pickle_module)
385 if hasattr(res, 'to_fp32'): res = res.to_fp32()
386 if cpu: res.dls.cpu()

~/anaconda3/envs/fastjul/lib/python3.8/site-packages/torch/serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
605 opened_file.seek(orig_position)
606 return torch.jit.load(opened_file)
--> 607 return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
608 return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
609

~/anaconda3/envs/fastjul/lib/python3.8/site-packages/torch/serialization.py in _load(zip_file, map_location, pickle_module, pickle_file, **pickle_load_args)
880 unpickler = UnpicklerWrapper(data_file, **pickle_load_args)
881 unpickler.persistent_load = persistent_load
--> 882 result = unpickler.load()
883
884 torch._utils._validate_loaded_sparse_tensors()

~/anaconda3/envs/fastjul/lib/python3.8/site-packages/torch/serialization.py in find_class(self, mod_name, name)
873 def find_class(self, mod_name, name):
874 mod_name = load_module_mapping.get(mod_name, mod_name)
--> 875 return super().find_class(mod_name, name)
876
877 # Load the data (which may in turn use persistent_load to load tensors)

ModuleNotFoundError: No module named 'timm.models.layers.se'
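Hedged note: load_learner unpickles the exact classes the model was exported with, so this usually means timm was upgraded since training (its se layer module was later reorganized). Reinstalling the training-time version, whatever it was, should let the pickle resolve; the pin below is a placeholder, not a known-good value:

```python
!pip install timm==0.4.12  # placeholder: use the timm version active when predict.pkl was exported
```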

AttributeError: 'SqueezeExcite' object has no attribute 'gate'

My model was working fine until today, but something has happened: now, after load_learner, it gives the following error while making a prediction:

/usr/local/lib/python3.7/dist-packages/timm/models/efficientnet_blocks.py in forward(self, x)
    120         x = self.act1(x)
    121 
--> 122         x = self.se(x)
    123 
    124         x = self.conv_pw(x)

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/timm/models/efficientnet_blocks.py in forward(self, x)
     45         x_se = self.act1(x_se)
     46         x_se = self.conv_expand(x_se)
---> 47         return x * self.gate(x_se)
     48 
     49 

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1129                 return modules[name]
   1130         raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1131             type(self).__name__, name))
   1132 
   1133     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'SqueezeExcite' object has no attribute 'gate'

Thanks for any help; my model is in production right now, and I don't know how to solve this.

Cannot pull in other efficientnet versions

Hi.
I'm pulling in:
from wwf.vision.timm import *

When I ran your example:
timm.list_models('efficientnet')[:10]

I see various models, but for some reason, even though b4 is listed, when I try to run it I get:
Pretrained model URL is invalid, using random initialization.
<fastai.learner.Learner at 0x7fe259fdd9b0>

But I don't get that with any b3 models.

This is my learner:
learn = timm_learner(dls, 'efficientnet_b4', loss_func=LabelSmoothingCrossEntropy(), metrics=[accuracy], opt_func=ranger)
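A hedged check worth trying: in timm, weights only exist for models listed with pretrained=True, and in releases of that era plain efficientnet_b4 shipped no pretrained URL while the tf_efficientnet_b4 variants did:

```python
import timm

# Only the names returned here actually have downloadable pretrained weights.
print(timm.list_models('*efficientnet_b4*', pretrained=True))
```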
