chatglm-maths's People

Contributors

yongzhuo


chatglm-maths's Issues

python3 c00_toy_lora_train_6b.py RuntimeError: Internal: [MASK] is already defined.

python3 c00_toy_lora_train_6b.py
/data/chatglm-ppo/chatglm-maths
/opt/conda/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cuda_cu.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
generator_calculate_line: ('13+75=', '13+75=88')
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/c00_toy_lora_train_6b.py:340 in │
│ │
│ 337 │
│ 338 # The argument trust_remote_code is to be used with Auto classes. It has no effect her │
│ 339 chatglm_config = ChatGLMConfig.from_pretrained(pretrained_model_name_or_path) │
│ ❱ 340 tokenizer = ChatGLMTokenizer.from_pretrained(pretrained_model_name_or_path) │
│ 341 text = ("1、2", "3、4") │
│ 342 x_encode = tokenizer.encode(text[0]) │
│ 343 y_encode = tokenizer.encode(text[1]) │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1811 in │
│ from_pretrained │
│ │
│ 1808 │ │ │ else: │
│ 1809 │ │ │ │ logger.info(f"loading file {file_path} from cache at {resolved_vocab_fil │
│ 1810 │ │ │
│ ❱ 1811 │ │ return cls._from_pretrained( │
│ 1812 │ │ │ resolved_vocab_files, │
│ 1813 │ │ │ pretrained_model_name_or_path, │
│ 1814 │ │ │ init_configuration, │
│ │
│ /opt/conda/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1965 in │
│ _from_pretrained │
│ │
│ 1962 │ │ │
│ 1963 │ │ # Instantiate tokenizer. │
│ 1964 │ │ try: │
│ ❱ 1965 │ │ │ tokenizer = cls(*init_inputs, **init_kwargs) │
│ 1966 │ │ except OSError: │
│ 1967 │ │ │ raise OSError( │
│ 1968 │ │ │ │ "Unable to load vocabulary from file. " │
│ │
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/models/tokenization_chatglm.py:225 in __init__ │
│ │
│ 222 │ │ self.mask_token = mask_token │
│ 223 │ │ self.gMASK_token = gmask_token │
│ 224 │ │ │
│ ❱ 225 │ │ self.sp_tokenizer = SPTokenizer(vocab_file) │
│ 226 │ │ │
│ 227 │ │ """ Initialisation """ │
│ 228 │
│ │
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/models/tokenization_chatglm.py:39 in __init__ │
│ │
│ 36 │ │ self.special_tokens = ["[MASK]", "[gMASK]", "[sMASK]", "<unused_0>", "", "< │
│ 37 │ │ self.max_blank_length = max_blank_length │
│ 38 │ │ self.byte_fallback = byte_fallback │
│ ❱ 39 │ │ self.text_tokenizer = self._build_text_tokenizer(encode_special_tokens=False) │
│ 40 │ │ self.special_text_tokenizer = self._build_text_tokenizer(encode_special_tokens=T │
│ 41 │ │
│ 42 │ @staticmethod
│ │
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/models/tokenization_chatglm.py:72 in │
│ _build_text_tokenizer │
│ │
│ 69 │ │
│ 70 │ def _build_text_tokenizer(self, encode_special_tokens=False): │
│ 71 │ │ tokenizer = TextTokenizer(self.vocab_file) │
│ ❱ 72 │ │ self._configure_tokenizer( │
│ 73 │ │ │ tokenizer, self.special_tokens, self.max_blank_length, self.byte_fallback, e │
│ 74 │ │ ) │
│ 75 │ │ return tokenizer │
│ │
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/models/tokenization_chatglm.py:68 in │
│ _configure_tokenizer │
│ │
│ 65 │ │ │ │ text_tokenizer.proto.pieces.append( │
│ 66 │ │ │ │ │ sp_model.ModelProto.SentencePiece(piece="<0x{:02X}>".format(i), scor │
│ 67 │ │ │ │ ) │
│ ❱ 68 │ │ text_tokenizer.refresh() │
│ 69 │ │
│ 70 │ def _build_text_tokenizer(self, encode_special_tokens=False): │
│ 71 │ │ tokenizer = TextTokenizer(self.vocab_file) │
│ │
│ /opt/conda/lib/python3.10/site-packages/icetk/text_tokenizer.py:31 in refresh │
│ │
│ 28 │ │
│ 29 │ def refresh(self): │
│ 30 │ │ self.sp = spm.SentencePieceProcessor() │
│ ❱ 31 │ │ self.sp.Load(model_proto=self.proto.SerializeToString()) │
│ 32 │ │ self.num_tokens = self.sp.vocab_size() │
│ 33 │ │
│ 34 │ def add_special_tokens(self, tokens): │
│ │
│ /opt/conda/lib/python3.10/site-packages/sentencepiece/__init__.py:904 in Load │
│ │
│ 901 │ if model_file and model_proto: │
│ 902 │ │ raise RuntimeError('model_file and model_proto must be exclusive.') │
│ 903 │ if model_proto: │
│ ❱ 904 │ │ return self.LoadFromSerializedProto(model_proto) │
│ 905 │ return self.LoadFromFile(model_file) │
│ 906 │
│ 907 │
│ │
│ /opt/conda/lib/python3.10/site-packages/sentencepiece/__init__.py:250 in LoadFromSerializedProto │
│ │
│ 247 │ swig_destroy = _sentencepiece.delete_SentencePieceProcessor │
│ 248 │ │
│ 249 │ def LoadFromSerializedProto(self, serialized): │
│ ❱ 250 │ │ return _sentencepiece.SentencePieceProcessor_LoadFromSerializedProto(self, seria │
│ 251 │ │
│ 252 │ def SetEncodeExtraOptions(self, extra_option): │
│ 253 │ │ return _sentencepiece.SentencePieceProcessor_SetEncodeExtraOptions(self, extra_o │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: Internal: [MASK] is already defined.

Could anyone tell me what is causing this problem?
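The traceback shows icetk's `SPTokenizer` appending its special tokens (`[MASK]`, `[gMASK]`, …) to a SentencePiece proto that already defines them, so the serialized model ends up containing `[MASK]` twice. A framework-free sketch of the kind of de-duplication guard that avoids this failure mode (the function and data are hypothetical, not the repo's actual code):

```python
def append_special_tokens(pieces, specials):
    """Append each special token only if the vocab does not already
    define it, avoiding the '[MASK] is already defined' error that
    SentencePiece raises on duplicate pieces."""
    existing = set(pieces)
    for tok in specials:
        if tok not in existing:
            pieces.append(tok)
            existing.add(tok)
    return pieces

vocab = ["<unk>", "hello", "[MASK]"]  # [MASK] already in the vocab
append_special_tokens(vocab, ["[MASK]", "[gMASK]", "[sMASK]"])
```

In practice this error often means the installed icetk/tokenizer version already registers the special tokens, so appending them a second time in `_configure_tokenizer` produces the duplicate; pinning compatible `icetk`/`sentencepiece` versions is another common fix.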

python3 c00_toy_lora_train_6b.py

Do the training samples need [CLS] and <|endofpiece|>?

The Hugging Face inference format is:
[CLS] 凯旋门位于意大利米兰市古城堡旁。1807年为纪念拿破仑军队攻克米兰城而建,门高25米,顶上矗立两武士青铜古兵车铸像。 [gMASK] <|endoftext|> <|startofpiece|> 门上有许多精美的雕刻,其中最引人注目的是一对屹立在门顶上的巨型青铜华表,华表上有两只威风凛凛的雄狮,一只象征拿破仑,一只象征米兰人民。 <|endofpiece|>

However, in the author's latest README the training samples do not include [CLS] or <|endoftext|>. Are these two tokens necessary during training?

Does chatglm-maths use this format at test time? [CLS] ... [gMASK] <|endoftext|> <|startofpiece|> ... <|endofpiece|>
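For reference, the two formats under discussion can be written down side by side. This helper is purely illustrative; the token layout follows the inference example above, not a confirmed training recipe:

```python
def build_example(context, target, with_cls=True, with_eot=True):
    """Assemble a ChatGLM-style sequence string. with_cls/with_eot
    toggle the two optional tokens the question asks about."""
    parts = []
    if with_cls:
        parts.append("[CLS]")
    parts += [context, "[gMASK]"]
    if with_eot:
        parts.append("<|endoftext|>")
    parts += ["<|startofpiece|>", target, "<|endofpiece|>"]
    return " ".join(parts)

full = build_example("13+75=", "13+75=88")            # inference-style
bare = build_example("13+75=", "13+75=88",
                     with_cls=False, with_eot=False)  # README-style
```

Whichever variant is chosen, training and inference must use the same layout, since the model learns to generate after exactly the token pattern it was trained on.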

RuntimeError: probability tensor contains either `inf`, `nan` or element < 0

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /ChatGLM-6B/ChatGLM_math/chatglm_maths/t10_toy_trl_train_ppo.py:215 in │
│ │
│ 212 │ # get model response │
│ 213 │ # print(query_tensor) │
│ 214 │ │
│ ❱ 215 │ response_tensor = respond_to_batch_new(model_ref, query_tensor, txt_len=MAX_LEN, top │
│ 216 │ # define a reward for response │
│ 217 │ # (this could be any reward such as human feedback or output from another model) │
│ 218 │ response_ids = response_tensor.detach().cpu().numpy().tolist() │
│ │
│ /ChatGLM-6B/ChatGLM_math/chatglm_maths/t10_toy_trl_train_ppo.py:62 in │
│ respond_to_batch_new │
│ │
│ 59 │ │ next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p= │
│ 60 │ │ # Sample │
│ 61 │ │ probs = F.softmax(next_token_logits, dim=-1) │
│ ❱ 62 │ │ next_token = torch.multinomial(probs, num_samples=1).squeeze(1) │
│ 63 │ │ start_ids = torch.cat([start_ids, next_token.unsqueeze(-1)], dim=-1) │
│ 64 │ │ # EOS │
│ 65 │ │ if next_token.detach().cpu().numpy()[0] == tokenizer.eos_token_id: │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: probability tensor contains either inf, nan or element < 0
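`torch.multinomial` raises this error when the filtered logits contain `inf`/`nan`, often after fp16 overflow or an aggressive `top_k_top_p_filtering` pass. A common workaround is to mask non-finite logits before the softmax; the sketch below shows the idea in plain Python (the real fix would operate on the torch tensor at line 61, e.g. via `torch.nan_to_num`):

```python
import math

def safe_softmax(logits, floor=-1e9):
    """Replace non-finite logits with a large negative value, then
    apply a numerically stable softmax so every probability is a
    finite, non-negative number that multinomial sampling accepts."""
    cleaned = [x if math.isfinite(x) else floor for x in logits]
    m = max(cleaned)                      # subtract max for stability
    exps = [math.exp(x - m) for x in cleaned]
    z = sum(exps)
    return [e / z for e in exps]

probs = safe_softmax([2.0, float("nan"), float("-inf"), 1.0])
```

Running the model in fp32 (or lowering the learning rate so the PPO updates do not blow up the logits) is the other frequently suggested remedy for this symptom.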

p10_toy_trl_predict_ppo.py loading problem

File "p10_toy_trl_predict_ppo.py", line 55, in load_model_state
model.load_state_dict(torch.load(path_model, map_location=torch.device(device)))
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ChatGLMForCausalLMWithValueHead:
Missing key(s) in state_dict: "pretrained_model.transformer.word_embeddings.weight", "pretrained_model.transformer.layers.0.input_layernorm.weight", "pretrained_model.transformer.layers.0.input_layernorm.bias", "pretrained_model.transformer.layers.0.attention.query_key_value.weight", "pretrained_model.transformer.layers.0.attention.query_key_value.bias", "pretrained_model.transformer.layers.0.attention.dense.weight", "pretrained_model.transformer.layers.0.attention.dense.bias", "pretrained_model.transformer.layers.0.post_attention_layernorm.weight", "pretrained_model.transformer.layers.0.post_attention_layernorm.bias", "pretrained_model.transformer.layers.0.mlp.dense_h_to_4h.weight", "pretrained_model.transformer.layers.0.mlp.dense_h_to_4h.bias", "pretrained_model.transformer.layers.0.mlp.dense_4h_to_h.weight", "pretrained_model.transformer.layers.0.mlp.dense_4h_to_h.bias", ... (the same twelve per-layer keys repeated for layers 1-27) ..., "pretrained_model.transformer.final_layernorm.weight", "pretrained_model.transformer.final_layernorm.bias", "pretrained_model.lm_head.weight".
Unexpected key(s) in state_dict: "transformer.word_embeddings.weight", "transformer.layers.0.input_layernorm.weight", "transformer.layers.0.input_layernorm.bias", "transformer.layers.0.attention.rotary_emb.inv_freq", "transformer.layers.0.attention.query_key_value.weight", "transformer.layers.0.attention.query_key_value.bias", "transformer.layers.0.attention.dense.weight", "transformer.layers.0.attention.dense.bias", "transformer.layers.0.post_attention_layernorm.weight", "transformer.layers.0.post_attention_layernorm.bias", "transformer.layers.0.mlp.dense_h_to_4h.weight", "transformer.layers.0.mlp.dense_h_to_4h.bias", "transformer.layers.0.mlp.dense_4h_to_h.weight", "transformer.layers.0.mlp.dense_4h_to_h.bias", (the same key pattern repeats for transformer.layers.1 through transformer.layers.27), "transformer.final_layernorm.weight", "transformer.final_layernorm.bias", "lm_head.weight".

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "p10_toy_trl_predict_ppo.py", line 131, in <module>
model = load_model_state(model, model_save_dir=model_save_path_ppo)
File "p10_toy_trl_predict_ppo.py", line 61, in load_model_state
raise Exception("load model error")
Exception: load model error
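Every "Missing key(s)" entry above carries a `pretrained_model.` prefix that the "Unexpected key(s)" lack, which suggests the checkpoint was saved from the bare ChatGLM model but is being loaded into trl's value-head wrapper, which nests the base model under `pretrained_model.`. A minimal sketch of one possible workaround (an assumption about the cause, not the project's official fix): re-prefix the saved keys, dropping buffers such as `rotary_emb.inv_freq` that the wrapper does not expect, before calling `load_state_dict`.

```python
# Hypothetical key-remapping workaround for the prefix mismatch shown above.
# `saved` stands in for `torch.load(checkpoint_path)`.
def remap_keys(state_dict):
    out = {}
    for k, v in state_dict.items():
        if k.endswith("attention.rotary_emb.inv_freq"):
            continue  # buffer absent from the wrapper's expected keys
        out["pretrained_model." + k] = v  # nest under the value-head wrapper
    return out

saved = {
    "transformer.word_embeddings.weight": "w0",
    "transformer.layers.0.attention.rotary_emb.inv_freq": "buf",
    "lm_head.weight": "w1",
}
fixed = remap_keys(saved)
```

With real tensors, `model.load_state_dict(remap_keys(torch.load(path)), strict=False)` would then be worth trying; `strict=False` tolerates any remaining benign mismatches.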

How do I use this?

Hi, I don't quite understand. Is this model already trained?
Or do I need to run:
Fine-tuning: python c00_toy_cpu_train_6b.py
Inference: python p00_toy_cpu_predit_6b.py
and then it's ready to use?

ValueError: 130004 is not in list. Could anyone tell me what this error means?

/data/chatglm-ppo/chatglm-maths/chatglm_maths/t10_toy_trl_train_ppo.py:58 in │
│ respond_to_batch_new │
│ │
│ 55 │ for i in range(txt_len): │
│ 56 │ │ try: │
│ 57 │ │ │ # Get Logits │
│ ❱ 58 │ │ │ outputs = model(torch.cat([start_ids, end_ids], dim=-1)) │
│ 59 │ │ │ next_token_logits = outputs[0][:, -1, :] │
│ 60 │ │ │ next_token_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, to │
│ 61 │ │ │ # Sample │
│ │
│ /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /data/trl/trl/models/modeling_value_head.py:161 in forward │
│ │
│ 158 │ │ """ │
│ 159 │ │ kwargs["output_hidden_states"] = True # this had already been set in the LORA / │
│ 160 │ │ │
│ ❱ 161 │ │ base_model_output = self.pretrained_model( │
│ 162 │ │ │ input_ids=input_ids, │
│ 163 │ │ │ past_key_values=past_key_values, │
│ 164 │ │ │ attention_mask=attention_mask, │
│ │
│ /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/models/modeling_chatglm.py:1119 in forward │
│ │
│ 1116 │ │ use_cache = use_cache if use_cache is not None else self.config.use_cache │
│ 1117 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │
│ 1118 │ │ │
│ ❱ 1119 │ │ transformer_outputs = self.transformer( │
│ 1120 │ │ │ input_ids=input_ids, │
│ 1121 │ │ │ position_ids=position_ids, │
│ 1122 │ │ │ attention_mask=attention_mask, │
│ │
│ /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501 in _call_impl │
│ │
│ 1498 │ │ if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks │
│ 1499 │ │ │ │ or _global_backward_pre_hooks or _global_backward_hooks │
│ 1500 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │
│ ❱ 1501 │ │ │ return forward_call(*args, **kwargs) │
│ 1502 │ │ # Do not call functions when jit is used │
│ 1503 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │
│ 1504 │ │ backward_pre_hooks = [] │
│ │
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/models/modeling_chatglm.py:917 in forward │
│ │
│ 914 │ │ │ seq = input_ids[0].tolist() │
│ 915 │ │ │ │
│ 916 │ │ │ if attention_mask is None: │
│ ❱ 917 │ │ │ │ attention_mask = self.get_masks( │
│ 918 │ │ │ │ │ seq=seq, │
│ 919 │ │ │ │ │ device=input_ids.device │
│ 920 │ │ │ │ ) │
│ │
│ /data/chatglm-ppo/chatglm-maths/chatglm_maths/models/modeling_chatglm.py:847 in get_masks │
│ │
│ 844 │ │ self.word_embeddings = new_embeddings │
│ 845 │ │
│ 846 │ def get_masks(self, seq, device): │
│ ❱ 847 │ │ context_length = seq.index(self.config.bos_token_id) │
│ 848 │ │ attention_mask = torch.ones((1, len(seq), len(seq)), device=device) │
│ 849 │ │ attention_mask.tril_() │
│ 850 │ │ attention_mask[..., :context_length] = 1 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: 130004 is not in list
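The traceback shows `get_masks` calling `seq.index(self.config.bos_token_id)`, so the error means token id 130004 (ChatGLM-6B's `bos_token_id`) never appears in the input ids; the tokenizer normally appends it after the gMASK token. A minimal guard, sketched on plain lists under the assumption that 130004 is indeed the configured BOS id, would append it before the sequence is handed to the model:

```python
BOS_TOKEN_ID = 130004  # chatglm-6b config.bos_token_id (assumption from the traceback)

def ensure_bos(ids, bos_id=BOS_TOKEN_ID):
    """Append the BOS token if the encoded sequence lacks it,
    so that seq.index(bos_token_id) in get_masks cannot raise."""
    return ids if bos_id in ids else ids + [bos_id]
```

In `respond_to_batch_new` the equivalent check would go on the token-id list underlying `query_tensor` before the first forward pass.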

Hoping to get in touch

Dear chatglm-maths developer, I'm 尖米, a developer and volunteer in the InternLM community. Your open-source work has been very inspiring to me, and I'd like to discuss the feasibility and implementation path of building chatglm-maths with InternLM. My WeChat is mzm312; I hope we can connect for a deeper exchange.

probs is nan

Traceback (most recent call last):
File "t10_toy_trl_train_ppo.py", line 178, in <module>
response_tensor = respond_to_batch_new(model_ref, query_tensor,
File "t10_toy_trl_train_ppo.py", line 60, in respond_to_batch_new
next_token = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either inf, nan or element < 0
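`torch.multinomial` raises this when the softmax of the logits contains NaN/inf, a common symptom of a diverging PPO policy. One stopgap (a sketch of the principle in pure Python; with tensors, `torch.nan_to_num(logits, nan=-1e4, posinf=1e4, neginf=-1e4)` before the softmax plays the same role) is to replace non-finite logits with a large negative floor so the distribution stays valid. Note this masks the symptom; the underlying divergence (learning rate, KL penalty, reward scale) still needs investigating.

```python
import math

def safe_softmax(logits, floor=-1e4):
    """Softmax that tolerates NaN/inf logits by flooring non-finite values,
    so sampling from the result cannot raise on an invalid distribution."""
    clean = [x if math.isfinite(x) else floor for x in logits]
    m = max(clean)                              # subtract max for numerical stability
    exps = [math.exp(x - m) for x in clean]
    z = sum(exps)
    return [e / z for e in exps]
```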
