Comments (3)
Hi there, and thanks for your enthusiasm about this book!
The code is all open source, so no worries about forking it. It's actually nice that you are trying to make it accessible to more readers. Thanks!
I also included a note about your project here in the README.
If you could clarify that this is a fork of this project and link to the original GitHub repository at the top of the forked README, that would be much appreciated (just to avoid any potential confusion for readers and the publisher).
Note that I also updated the figures so that they are no longer included directly in the notebooks but are linked from my website, where I'll keep updating them.
from llms-from-scratch.
I'm not a bot. TT
Thanks!! I will do my best.
Related Issues (20)
- In 3.3.1, there seems to be a missing image between "The attention weights and context vector calculation are summarized in the figure below:" and "The code below walks through the figure above step by step." HOT 1
- RuntimeError: size mismatch - ch05/03_bonus_pretraining_on_gutenberg HOT 2
- book feedback HOT 1
- Inconsistencies between the code in the book and the notebooks (2.6 Data sampling with a sliding window) HOT 7
- Output of the cell without variable specified (Embedding Layers and Linear Layers) HOT 1
- Wrong number of token ids specified in the notebook (2.7 Creating token embeddings) HOT 1
- Incorrect description of function torch.arange() (2.8 Encoding word positions) HOT 1
- Inconsistencies in output for dropout section (3.5.2 Masking additional attention weights with dropout) HOT 1
- Probably a typo in multi-head attention description (3.6.1 Stacking multiple single-head attention layers) HOT 1
- Solution for Exercise 3.2 is included in the notebook with main code (3.6.1 Stacking multiple single-head attention layers) HOT 1
- Question about implementation of CausalAttention class (3.5.3 Implementing a compact causal self-attention class) HOT 6
- Inconsistencies in unsqueeze operation description in the book and in notebook and its necessity (3.6.2 Implementing multi-head attention with weight splits) HOT 4
- Solution for Exercise 3.3 is included in the notebook with main code (3.6.2 Implementing multi-head attention with weight splits) HOT 1
- Inconsistencies in MHA Wrapper Implementation Between Chapter 3 Main Content and Bonus Material HOT 1
- Chapter 5 - Context Size and the DataLoaders HOT 2
- Feedback: Stripe output from notebook HOT 2
- About endoftext in ch05/03_bonus_pretraining_on_gutenberg/pretraining_simple.py HOT 14
- Contributions for Chinese simplified version HOT 4
- {Q}: Replacing the Hugging Face LlamaDecoderLayer Class With New LongNet