wandb / edu
Educational materials on deep learning by Weights & Biases
Home Page: http://wandb.ai
License: GNU General Public License v2.0
Matching the questions on slides
Need to refactor this into a base class and two dataloaders: one for classification tasks and one for image tasks.
Line 262 in e2e3347
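One possible shape for that refactor, sketched with hypothetical names (none of these classes exist in the repo; the shared logic shown is a stand-in):

```python
class BaseDataLoader:
    """Shared loading logic lives here; subclasses add task specifics."""

    def __init__(self, batch_size=32):
        self.batch_size = batch_size

    def load(self, raw):
        # shared logic: batching (shuffling, caching, etc. would also go here)
        return [raw[i : i + self.batch_size] for i in range(0, len(raw), self.batch_size)]


class ClassificationDataLoader(BaseDataLoader):
    def load(self, raw):
        batches = super().load(raw)
        # classification-specific handling (e.g. label encoding) would go here
        return batches


class ImageDataLoader(BaseDataLoader):
    def load(self, raw):
        batches = super().load(raw)
        # image-specific handling (e.g. normalization) would go here
        return batches
```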
Based on new art collateral
Ideas
This way, folks are always logged in before they get to the meat of the code.
In particular: how much can I change the sizes of the convs before the adaptive pooling layer complains? How much can I change the target size of the adaptive pooling layer? How deep can I go before MNIST images are too small?
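These questions can be answered without trial and error by tracking the spatial size through each layer. A rough sketch using the standard conv output-size formula (the kernel sizes and block count here are illustrative assumptions, not the notebook's actual architecture):

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv or pooling layer."""
    return (size + 2 * padding - kernel) // stride + 1

# MNIST images are 28x28; each unpadded 3x3 conv shaves 2 pixels,
# and each 2x2 max-pool halves the size.
size = 28
for _ in range(3):  # three conv + pool blocks
    size = conv2d_out(size, kernel=3)             # conv
    size = conv2d_out(size, kernel=2, stride=2)   # pool
    print(size)  # -> 13, then 5, then 1

# AdaptiveAvgPool2d(target) only breaks down once the incoming spatial
# size drops below the target, so the depth limit is hit when size < target.
```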
This was written when I had half as much experience with Lightning as I have now, and before the most recent integration.
I should rethink it, with emphasis on the following:
Right now, the installs are run regardless of environment, but they should only be run in Colab. Just need to move the !pip install commands into the appropriate if branch and then apply %%capture to the cell.
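A common pattern for that if branch (sketch only; in the real notebook cell the first line would be %%capture and the install would use the ! shell escape):

```python
import sys

# True when running inside Google Colab, False in other Jupyter setups
IN_COLAB = "google.colab" in sys.modules

if IN_COLAB:
    # in a notebook cell this would be:  !pip install wandb pytorch-lightning
    pass
```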
On the one hand, it will reduce code duplication across Colabs; on the other, it really needs to be done well if it's going to be used everywhere -- have to make sure it's e.g. DDP-compatible, the logging could be done better (more callbacks?), and want to use PL best practices as much as possible.
See projects/constrained_emotion_classifier.ipynb for an example. This is better than the name-sensitive style being used elsewhere.
beginning to be deprecated
waiting on the branches to be correct
Doesn't need to be provided for convolutional networks.
Should fix and possibly update the docs example.
Adding more content while de-emphasizing existing content -- but we don't want to delete that content, so let's make a nested structure with folders and subfolders:
-- 00_{topic}/
   | exercises.ipynb
   -- extras/
      | {subtopic}.ipynb
-- 01_{topic}/
Retain only the components that are easily autograde-able or could be converted to autograding.
Crib the HTML from the notebooks in the examples repo.
Should be as easy as calling print(model) inside the right hook, which might be on_train_start, with the LoggedLitModule.
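A sketch of that hook, written as a plain class with the Lightning Callback hook signature (in real code it would subclass pytorch_lightning.Callback; the class name is hypothetical):

```python
class ModelPrinter:
    """Prints the module tree once, at the start of training."""

    def on_train_start(self, trainer, pl_module):
        # print() on an nn.Module renders the full layer tree,
        # including the contents of ModuleLists
        print(pl_module)
```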
Caught on a dilemma with saving PyTorch models for viewing in Netron. Saving a .pt file results in unreliable performance by Netron, who can't really be expected to handle all the possible choices in both major libraries, and so prefers ONNX and lets them handle the conversion. But the conversion has trouble with the AdaptiveAvgPool2d layer, which is important for CNNs that are easy to play with. Seems like a pretty fundamental limitation w.r.t. adaptive layers.

In particular, for the MLP and CNN in PyTorch, I want to emphasize reusability and enable easy extension, and so I'm using ModuleLists and custom Modules. Neither is really gonna play nicely with Netron.

For now, I'm sticking with .pt files, which save but aren't visualized well (the ModuleLists aren't expanded, and that's where all the action is!).
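To make the tradeoff concrete, a minimal sketch (the model is illustrative, not the repo's; the ONNX line is left commented precisely because adaptive layers are where export can fail):

```python
import torch
import torch.nn as nn

# Small illustrative CNN: AdaptiveAvgPool2d lets the classifier head
# tolerate different conv configurations upstream.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d((4, 4)),
    nn.Flatten(),
    nn.Linear(8 * 4 * 4, 10),
)

# The .pt route: always saves, but Netron won't expand custom Modules.
torch.save(model.state_dict(), "model.pt")

# The route Netron prefers -- and where adaptive layers cause trouble:
# torch.onnx.export(model, torch.randn(1, 1, 28, 28), "model.onnx")
```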
Would like to drop reliance on the W&B Hub, and Colab seems like the least-bad choice.
The matplotlib inline backend for interactive charts is frustrating but not a showstopper.

torch.nn.Identity serves the same purpose, but is less jank.
See cnn.ipynb.
The profiling tool may only be compatible with Chrome.
Networks that are being pruned with torch.nn.utils.prune break certain assumptions in e.g. the parameter counting, as do (I believe) the quantized networks.
These should be resolved (incorporating fixes from the relevant notebooks, when possible) so that the utils are more robust.
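For reference, the pruning breakage is easy to reproduce: after pruning, the plain weight parameter is replaced by weight_orig plus a weight_mask buffer, which is what throws off naive parameter counting. Sketch:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(4, 2)
print([name for name, _ in layer.named_parameters()])  # ['weight', 'bias']

# prune half the weights by L1 magnitude
prune.l1_unstructured(layer, name="weight", amount=0.5)

# 'weight' is no longer a Parameter: it's recomputed from
# weight_orig * weight_mask on every forward pass
print(sorted(name for name, _ in layer.named_parameters()))
# ['bias', 'weight_orig']
```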
It's interesting, but not autogradable.
This needs to be installed locally, but is not a pip package.
Right now, it doesn't appropriately test whether there are values above 1 or below 0, because the examples don't add up to 1 -- which is what most folks test for. np.array([-1, 2]) and/or np.array([-0.5, 1.1, 0.2, 0.2]) would do it.
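A sketch of the stricter check, with the two suggested arrays as test cases (both sum to 1, so a sum-only check passes them; the function name is made up):

```python
import numpy as np

def check_probabilities(p):
    """Stricter check: entries in [0, 1] AND summing to 1."""
    return bool(np.all(p >= 0) and np.all(p <= 1) and np.isclose(p.sum(), 1))

# A sum-to-1 check alone misses these out-of-range cases:
assert not check_probabilities(np.array([-1, 2]))                # sums to 1!
assert not check_probabilities(np.array([-0.5, 1.1, 0.2, 0.2]))  # sums to 1!
assert check_probabilities(np.array([0.25, 0.75]))
```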
_Note_: renders differently between platforms because Markdown is an incomplete spec, resulting in some very wonky formatting in places; use _Note:_ instead.
Based off of the README in ml-class.
Colab defaults to 2-space indentation, but much of the code is in 4-space -- and other Jupyter instances don't like 2-space indentation.
In calculus exercises: is_little_o, identity, constant.
The SVD material is interesting, but hard to make concrete and compelling with the constraints we have (unless I come up with a slick "LA-as-programming" explanation of kernels and maybe also eigenvalues, which is tougher).
I should move it into a separate notebook.
Should make the dataloaders more configurable for the AbstractMNISTDataModule -- can probably fix pin_memory to True, but should allow configuration of num_workers (with a default of 2 or nproc, depending on how far we want to go).
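A possible constructor signature (hypothetical sketch; the real AbstractMNISTDataModule subclasses LightningDataModule and may differ):

```python
import os

class AbstractMNISTDataModule:
    """Sketch: dataloader knobs exposed, with conservative defaults."""

    def __init__(self, batch_size=64, num_workers=None, pin_memory=True):
        self.batch_size = batch_size
        self.pin_memory = pin_memory  # fixed to True by default
        if num_workers is None:
            num_workers = 2  # or os.cpu_count() for the "nproc" option
        self.num_workers = num_workers
```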
Should be ## Setup Code, # Section X, etc.
This is the core idea of the lecture slides, but there aren't enough exercises for it. They require a certain amount of creativity, but here are a few possibilities:
repeat: use matrix multiplication (an outer product?) to copy the input k times.

Ideas:
Given the gradient and parameters, apply one gradient descent step and return the new parameters.
Check that:
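Rough solutions for those two exercise ideas (sketches in numpy, not the official answers):

```python
import numpy as np

def repeat_via_outer(x, k):
    """Copy the input vector k times via an outer product with ones."""
    return np.outer(np.ones(k), x)  # shape (k, len(x))

def gradient_descent_step(params, grad, lr=0.1):
    """Apply one gradient descent step and return the new parameters."""
    return params - lr * grad
```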
Should move to /content before git clone.
Put the links
Logger is used elsewhere in the Lightning API, so I should avoid the name collision.
The Binder setup needs to be tweaked now that we're in v2. The Dockerfile is no longer in the right place, which is going to be a PITA to fix. That Dockerfile also needs to be updated.
The utils I wrote for the lightning/mlp/ notebooks are useful more broadly in the lightning material. They should perhaps be moved up to the lightning/ folder. This will require changing some code in the extant lightning colabs.
In v2, this is pointing to the v2 branch.