laiyao1 / chipformer
The release for the ICML 2023 paper.
Dear Authors,
In your paper, particularly Table 10 in the appendix, you state that you use a GCN to encode the circuit tokens within your model; however, I do not see that anywhere in your code. Could you please clarify?
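For reference, here is a minimal sketch of what such a GCN-based circuit-token encoder could look like, using PyTorch Geometric. The input features (e.g. width, height, degree per macro), layer sizes, and class name are illustrative assumptions, not the authors' implementation.

```python
import torch
from torch_geometric.nn import GCNConv

# Hypothetical two-layer GCN that maps per-macro features plus netlist
# connectivity to one circuit-token embedding per macro.
class CircuitTokenEncoder(torch.nn.Module):
    def __init__(self, in_dim=3, hidden_dim=64, token_dim=128):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, token_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)  # one token per macro

# Toy usage: 5 macros with 3 features each, a few netlist edges.
x = torch.rand(5, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
tokens = CircuitTokenEncoder()(x, edge_index)
print(tokens.shape)  # torch.Size([5, 128])
```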
Hello, I have run into a critical problem during pre-training of the ChiPFormer model. When I use adaptec1_small.pkl as the training set and run "python3 run_dt_place.py" to start training, the reported training loss and accuracy are both NaN and the reward sum decreases, which is clearly an abnormal training process. I have made no changes to the original code and use the data from https://drive.google.com/drive/folders/1F7075SvjccYk97i2UWhahN_9krBvDCmr. Could you help me identify the cause of this behavior?
epoch 40 iter 15: train loss nan. lr 9.241834e-05. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.39it/s]
epoch 41 iter 15: train loss nan. lr 6.000000e-05. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.48it/s]
epoch 42 iter 15: train loss nan. lr 6.000000e-05. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.49it/s]
epoch 43 iter 15: train loss nan. lr 8.786797e-05. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.37it/s]
epoch 44 iter 15: train loss nan. lr 2.244066e-04. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.46it/s]
epoch 45 iter 15: train loss nan. lr 3.817385e-04. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.31it/s]
epoch 46 iter 15: train loss nan. lr 5.165868e-04. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.36it/s]
epoch 47 iter 15: train loss nan. lr 5.918590e-04. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.55it/s]
epoch 48 iter 15: train loss nan. lr 5.868500e-04. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.29it/s]
epoch 49 iter 15: train loss nan. lr 5.029378e-04. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.45it/s]
epoch 50 iter 15: train loss nan. lr 3.632038e-04. acc nan: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 16/16 [00:06<00:00, 2.40it/s]
len self.net_min_max_ord 69
T_rewards [-29786.0]
T_scores [-2.255142857142857]
Thank you!
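One way to narrow down where the NaN comes from is to sanity-check the pickled training data itself before blaming the model. The sketch below is hypothetical: the internal structure of adaptec1_small.pkl is not documented here, so it simply walks whatever containers it finds and flags non-finite float arrays.

```python
import pickle
import numpy as np

# Recursively inspect the pickled object and report any NaN/inf values
# that could poison the training loss.
def find_bad_values(obj, path="root"):
    if isinstance(obj, np.ndarray) and np.issubdtype(obj.dtype, np.floating):
        if not np.isfinite(obj).all():
            print(f"{path}: contains NaN or inf")
    elif isinstance(obj, dict):
        for k, v in obj.items():
            find_bad_values(v, f"{path}[{k!r}]")
    elif isinstance(obj, (list, tuple)):
        for i, v in enumerate(obj):
            find_bad_values(v, f"{path}[{i}]")

with open("adaptec1_small.pkl", "rb") as f:
    data = pickle.load(f)
find_bad_values(data)
```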
Dear Authors,
The paper says that allowing macros to remain movable when performing optimization-based placement, as in Table 1, can reach the optimal solution. Are there any detailed descriptions or code that illustrate the layout generation?
Thanks a lot!
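For intuition only, here is a conceptual PyTorch sketch of what "macros remain movable during optimization-based placement" could mean: the macro coordinates are kept as trainable variables next to the cell coordinates, so the optimizer keeps refining them. The toy wirelength proxy, sizes, and random netlist are assumptions, not the paper's actual placer.

```python
import torch

num_macros, num_cells = 10, 100
macro_xy = torch.randn(num_macros, 2, requires_grad=True)  # macros stay movable
cell_xy = torch.randn(num_cells, 2, requires_grad=True)

# Toy wirelength proxy: pull every node toward the center of its (random) net.
nets = [torch.randint(0, num_macros + num_cells, (4,)) for _ in range(50)]

opt = torch.optim.Adam([macro_xy, cell_xy], lr=0.01)
for _ in range(200):
    xy = torch.cat([macro_xy, cell_xy], dim=0)
    loss = sum((xy[n] - xy[n].mean(dim=0)).abs().sum() for n in nets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```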
Dear Lai Yao,
We were able to run the code and obtain some of the results in Table 2. We are grateful for your help. However, some points need to be clarified, and some implementations differ from those described in the paper.
1. The adaptec1_small.pkl file is provided to us. Could the code used to build adaptec1_small.pkl also be made public, so that we can check whether the calculated mask and the other information are correct?
2. We find that in the training data the meta data is negative, even though the meta data is supposed to be the width and height of the macros. I am confused about why it is negative (which is also why we ask whether the code for constructing the training dataset could be released).
3. The paper says the topology information is encoded with a graph VGAE. However, we cannot find the corresponding part of the implementation; the circuit information is simply the combination of the size and degree of each macro. We wonder why the implementation differs from the paper. :-)
I'm looking forward to hearing back from you!
Thank you.
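Regarding point 3 above, a minimal sketch of how the topology could be encoded with a variational graph auto-encoder (VGAE) is shown below, using PyTorch Geometric. The feature and latent dimensions are assumptions, and this is not the released ChiPFormer implementation.

```python
import torch
from torch_geometric.nn import GCNConv, VGAE

# Hypothetical VGAE encoder: a shared GCN layer followed by two GCN heads
# producing the mean and log-std of per-macro latent embeddings.
class Encoder(torch.nn.Module):
    def __init__(self, in_dim=3, hidden_dim=64, latent_dim=32):
        super().__init__()
        self.conv = GCNConv(in_dim, hidden_dim)
        self.conv_mu = GCNConv(hidden_dim, latent_dim)
        self.conv_logstd = GCNConv(hidden_dim, latent_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv(x, edge_index))
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)

# Toy usage: 6 macros, a chain of netlist edges.
x = torch.rand(6, 3)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
model = VGAE(Encoder())
z = model.encode(x, edge_index)                      # per-macro topology embedding
loss = model.recon_loss(z, edge_index) + model.kl_loss()
loss.backward()
print(z.shape)  # torch.Size([6, 32])
```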
When fine-tuning with "python odt.py --benchmark=adaptec1", I found that line 236 of odt.py uses the argument args.model_type, but this argument does not seem to be defined anywhere else, so the script raises an error when run. How can I resolve this?
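A possible workaround, assuming odt.py builds its arguments with argparse like the pre-training script, is to register the missing flag. The default value 'reward_conditioned' follows the convention of the original Decision Transformer code base and is an assumption, not taken from the ChiPFormer code.

```python
import argparse

# Hypothetical patch: odt.py already constructs an argparse parser, so only the
# missing flag needs to be added. Adjust the default if ChiPFormer expects a
# different mode name.
parser = argparse.ArgumentParser()
parser.add_argument('--benchmark', type=str, default='adaptec1')
parser.add_argument('--model_type', type=str, default='reward_conditioned',
                    help='conditioning mode read at odt.py line 236')
args = parser.parse_args()
print(args.benchmark, args.model_type)
```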
Dear Authors,
How long does this method need to train?