ChiPFormer is an offline RL chip placement method that significantly improves design quality and efficiency.
Yao Lai, Jinxin Liu, Zhentao Tang, Bin Wang, Jianye Hao, Ping Luo. "ChiPFormer: Transferable Chip Placement via Offline Decision Transformer." International Conference on Machine Learning, ICML (2023): 18346-18364.
paper | dataset | website | video(English) | video(Mandarin)
Download the offline placement dataset from Google Drive. (We provide placement data for 12 benchmarks: adaptec1-4, bigblue1-4, and ibm01-04; you can download just a single benchmark file for training.) (2024/03/02 updated)
For a quick start, you can directly extract the adaptec1_small.pkl data included in this repo:
tar -zxvf adaptec1_small.pkl.tar.gz
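Once extracted, you can sanity-check the dataset in Python. This is a minimal sketch assuming the file is a standard pickle dump; the exact structure of the stored object is not specified in this README:

```python
import pickle

# Load the offline placement data (assumes a standard pickle dump).
with open("adaptec1_small.pkl", "rb") as f:
    data = pickle.load(f)

# Inspect the top-level structure before training.
print(type(data))
if isinstance(data, dict):
    print(list(data.keys()))
```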
Our test set includes the ISPD05 and ICCAD04 benchmark suites. We provide the adaptec1 benchmark for a quick start; all benchmarks can also be downloaded from the placement_bench.zip file on Google Drive.
ChiPFormer consists of two parts: pretraining and finetuning.
- For pretraining:
python run_dt_place.py
The dataset file used for training can be changed in create_dataset.py. Trained models are saved in the save_models folder.
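For example (the flag values below are illustrative, and the flag spellings assume the argparse-style interface implied by the argument list later in this README):

python run_dt_place.py --seed 42 --context_length 30 --epochs 50 --batch_size 64 --cuda 0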
- For finetuning:
python odt.py --benchmark=adaptec1
The model path for finetuning can be modified in odt.py. Typically, the best model obtained from pretraining is used for finetuning.
(2024/05/25 updated)
To use VAGE, first train the model, then use it to evaluate the given benchmarks:
python graph_train.py
The trained model is then saved in the save_graph_models/ directory.
python graph_eval.py
The circuit embeddings are saved in the file circuit_g_token*.pkl.
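To inspect the saved embeddings, here is a minimal sketch (assuming standard pickle serialization; the layout of the stored objects may differ):

```python
import glob
import pickle

# Load each saved circuit embedding file (assumes standard pickle files).
for path in sorted(glob.glob("circuit_g_token*.pkl")):
    with open(path, "rb") as f:
        emb = pickle.load(f)
    print(path, type(emb))
```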
Then, modify the file mingpt/place_db.py, changing the line `def __init__(self, benchmark = None, offset = 0, is_graph = False):` to `def __init__(self, benchmark = None, offset = 0, is_graph = True):`.
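After this change, constructing the placement database loads the graph embeddings. A hypothetical usage sketch, assuming the class in mingpt/place_db.py is named PlaceDB (the class name is our assumption; check the file for the actual name):

```python
# Hypothetical sketch: the class name PlaceDB is assumed, not stated in this README.
from mingpt.place_db import PlaceDB

# Build the placement database with graph embeddings enabled.
db = PlaceDB(benchmark="adaptec1", offset=0, is_graph=True)
```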
We also provide an example model and embedding file that you can use directly.
For `run_dt_place.py`:
- `seed`: Random seed.
- `context_length`: Maximum context length of the decision transformer.
- `epochs`: Maximum number of training epochs.
- `batch_size`: Batch size.
- `cuda`: Index of the GPU to use.
- `is_eval_only`: Whether to run the evaluation function only. (The evaluation function places all macros rather than only the maximum-context-length number of macros.)
- `test_all_macro`: Whether to place all existing macros.
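For example, to run evaluation only and place all macros (an illustrative invocation; the exact boolean-flag syntax depends on the script's argument parser):

python run_dt_place.py --is_eval_only True --test_all_macro True --cuda 0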
For `odt.py`:
- `replay_size`: Replay buffer size for finetuning.
- `traj_len`: Maximum trajectory length of the decision transformer.
- `batch_size`: Batch size.
- `benchmark`: Circuit benchmark to finetune on.
- `max_online_iters`: Maximum number of finetuning iterations.
- `eval_interval`: Number of iterations between evaluations.
- `exploration_rtg`: Return-to-go value used for exploration.
- `is_fifo`: Whether to use a FIFO buffer or a priority-queue buffer.
- `cuda`: Index of the GPU to use.
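For example (the flag values below are illustrative):

python odt.py --benchmark=adaptec1 --replay_size 20 --max_online_iters 1500 --eval_interval 100 --cuda 0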
The code refers to the following open-source repos:
If you find our paper or code useful in your research, please cite:
@inproceedings{lai2023chipformer,
  author    = {Lai, Yao and Liu, Jinxin and Tang, Zhentao and Wang, Bin and Hao, Jianye and Luo, Ping},
  title     = {ChiPFormer: Transferable Chip Placement via Offline Decision Transformer},
  booktitle = {International Conference on Machine Learning, {ICML} 2023, 23-29 July 2023, Honolulu, Hawaii, {USA}},
  series    = {Proceedings of Machine Learning Research},
  volume    = {202},
  pages     = {18346--18364},
  publisher = {{PMLR}},
  year      = {2023},
  url       = {https://proceedings.mlr.press/v202/lai23c.html},
}