huggingface / datablations
Scaling Data-Constrained Language Models
Home Page: https://arxiv.org/abs/2305.16264
License: Apache License 2.0
In plotstables/contours.ipynb you keep using `names`, but it doesn't exist; it is never defined anywhere in the notebook.
Hi,
I am reading your paper and I have noticed that Figure 4 and Figure 15 are exactly the same. Are they meant to be identical? I believe Figure 4 shows results for models trained on a different training set than the OSCAR corpus (since you mention in Appendix I that "To ensure our findings are not dataset-dependent, we train models with the same configurations from Figure 4 on the OSCAR corpus"). I wonder if a wrong figure was placed here.
I appreciate your work! I believe it will give researchers and engineers valuable insight when training and developing LLMs.
Matthew
When I check the plotting script for Figure 4 in your repo, I notice that the smoothing methods for the training loss and the validation loss are quite different. Moreover, for the validation loss, you first apply savgol_filter and then an exponential moving average. My questions are as follows.
Appendix (the code snippet):
import os
import pandas as pd
from scipy.signal import savgol_filter

def get_data_csv(model, model_id, smooth_data=False):
    # Download the TensorBoard CSV export if it is not cached locally.
    if LOSS == "training":
        if not os.path.exists(f"{model}_{LOSS}.csv"):
            !wget https://huggingface.co/datasets/datablations/scripts/raw/main/tb{LOSS}/{model}_{LOSS}.csv
    else:
        if not os.path.exists(f"{model}.csv"):
            !wget https://huggingface.co/datasets/datablations/scripts/raw/main/tb{LOSS}/{model}.csv
    path = f"{model}.csv" if LOSS == 'validation' else f"{model}_{LOSS}.csv"
    df = pd.read_csv(path, index_col=0)
    if LOSS == "training":
        # Training loss: heavy exponential moving average only.
        df['value'] = smooth(df["value"].values.tolist(), weight=0.999)
    elif smooth_data:
        # Validation loss: Savitzky-Golay filter first, then a lighter EMA.
        df[["value"]] = df[["value"]].apply(savgol_filter, window_length=3, polyorder=2)
        df['value'] = smooth(df["value"].values.tolist(), weight=0.85)
    return df.step.values, df.value.values
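Note that smooth is not shown in the excerpt above; I assume it is the standard TensorBoard-style exponential moving average, i.e. something like this (a minimal sketch, not the repo's actual helper):

def smooth(scalars, weight):
    # TensorBoard-style EMA; weight in [0, 1), higher = smoother.
    last = scalars[0]
    smoothed = []
    for point in scalars:
        last = last * weight + (1 - weight) * point
        smoothed.append(last)
    return smoothed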
In the contour plots there are many model sizes; is there a place that says how many FLOPs each of those used? (In the other plots there are always 3 sizes, so the one table that specifies them is enough.) Or is there an easy formula to deduce FLOPs from parameters?
Sorry for bothering you... You've got some really useful data lying around here.
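For what it's worth, a common rule of thumb (consistent with the pow(C / 6, ...) terms in the snippet later in this thread) is C ≈ 6·N·D training FLOPs for a model with N parameters trained on D tokens. A minimal sketch under that assumption:

def approx_flops(n_params, n_tokens):
    # Rule of thumb for dense transformers: C ~= 6 * N * D training FLOPs.
    return 6 * n_params * n_tokens

# e.g. 8.7B parameters trained on 178B tokens:
print(f"{approx_flops(8.7e9, 178e9):.2e}")  # ~9.3e21 FLOPs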
"Scaling Data-Constrained Language Models" is a very nice paper, and I learn a lot from this paper.
However, I have a question about this paper:
In the abstract and Figure 1, it recommends we should train 4 epochs.
But Figure 3 shows that we should choose 59 epochs.
So my question is why the optimal epoch is not 4 epochs in Figure 3.
Thanks in advance.
Dear authors, I have recently been reading your amazing paper; however, I encountered a calculation problem while trying to predict the loss.
For Figure 5 (left), I chose 8.7B parameters, 178B training tokens, and a unique-token fraction of 0.14 (U_d = D / 7 in the code) as an example, and followed this program:
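If I understand the paper correctly, the program below implements the data-constrained loss formula (my own transcription, so treat it as an assumption rather than a quote):

\[
L(N, D) = E + \frac{A}{(N')^{\alpha}} + \frac{B}{(D')^{\beta}}, \qquad
N' = U_N + U_N R_N^{*}\bigl(1 - e^{-R_N / R_N^{*}}\bigr), \qquad
D' = U_D + U_D R_D^{*}\bigl(1 - e^{-R_D / R_D^{*}}\bigr)
\]

where U_D is the number of unique tokens, R_D the number of repetitions beyond the first epoch, and R_N^{*}, R_D^{*} the fitted decay constants (R_nn and R_dd in the code).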
import math

# Fit constants; A, B, E are stored in log space, hence math.exp.
alpha = 0.3526596
beta = 0.3526596
A = math.exp(6.255414)
B = math.exp(7.3049974)
E = math.exp(0.6254804)

# Chinchilla-style compute-optimal allocation terms (not needed for the loss itself).
a = beta / (alpha + beta)
b = alpha / (alpha + beta)
G = pow((alpha * A) / (beta * B), 1 / (alpha + beta))
# C = 9.3e21
# D_opt = (1 / G) * pow(C / 6, b)
# N_opt = G * pow(C / 6, a)
# print(f"D_opt: {'{:.2e}'.format(D_opt)}, N_opt: {'{:.2e}'.format(N_opt)}")

N = 8.7e9     # model parameters
D = 178e9     # total training tokens
U_d = D / 7   # unique tokens (unique fraction of 1/7, i.e. ~0.14)
U_n = min(pow(U_d * G, beta / alpha) * G, N)  # "base" parameters matched to the unique data
R_n = max(N / U_n - 1, 0)  # excess parameters, expressed as repetitions
R_d = D / U_d - 1          # number of epochs beyond the first
R_nn = 5.309743   # R*_N: fitted decay constant for repeated parameters
R_dd = 15.387756  # R*_D: fitted decay constant for repeated data
L = E + (A / pow(U_n + R_nn * U_n * (1 - math.exp(-R_n / R_nn)), alpha)) \
      + (B / pow(U_d + U_d * R_dd * (1 - math.exp(-R_d / R_dd)), beta))
print(f"Predicted loss: {L}")
The final output is 2.237666982866588, which is not close to the 2.38 I read off the figure. I think there must be something wrong with the parameters, but I cannot find it. Could you please tell me what is wrong with my understanding of the scaling law?