
Comments (8)

treeform commented on August 23, 2024

I got it working by looking at your weights and the SD 1.4 weights and matching the tensors up by name. Now any 1.4 or 1.5 model works (including custom models). My script does something a little differently (I have since switched to loading safetensors directly, without conversion), but the main part looks like this; someone could clean it up:
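
For the safetensors route, the loading step could look like this (a minimal, untested sketch, assuming the checkpoint is a .safetensors file and the safetensors package is installed; the key remapping below is unchanged either way):

from safetensors.torch import load_file

# a .safetensors checkpoint is already a flat dict of tensors,
# so there is no "state_dict" wrapper to unpack
s = load_file(inputFile, device="cpu")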

import torch

# for a regular .ckpt checkpoint: load it and pull out its state dict
s = torch.load(inputFile, weights_only=False)["state_dict"]

new = {}
new['diffusion'] = {}
new['encoder'] = {}
new['decoder'] = {}
new['clip'] = {}

new['diffusion']['time_embedding.linear_1.weight'] = s['model.diffusion_model.time_embed.0.weight']
new['diffusion']['time_embedding.linear_1.bias'] = s['model.diffusion_model.time_embed.0.bias']
new['diffusion']['time_embedding.linear_2.weight'] = s['model.diffusion_model.time_embed.2.weight']
new['diffusion']['time_embedding.linear_2.bias'] = s['model.diffusion_model.time_embed.2.bias']
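
# UNet encoder (input_blocks)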
new['diffusion']['unet.encoders.0.0.weight'] = s['model.diffusion_model.input_blocks.0.0.weight']
new['diffusion']['unet.encoders.0.0.bias'] = s['model.diffusion_model.input_blocks.0.0.bias']
new['diffusion']['unet.encoders.1.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.1.0.in_layers.0.weight']
new['diffusion']['unet.encoders.1.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.1.0.in_layers.0.bias']
new['diffusion']['unet.encoders.1.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.1.0.in_layers.2.weight']
new['diffusion']['unet.encoders.1.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.1.0.in_layers.2.bias']
new['diffusion']['unet.encoders.1.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.1.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.1.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.1.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.1.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.1.0.out_layers.0.weight']
new['diffusion']['unet.encoders.1.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.1.0.out_layers.0.bias']
new['diffusion']['unet.encoders.1.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.1.0.out_layers.3.weight']
new['diffusion']['unet.encoders.1.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.1.0.out_layers.3.bias']
new['diffusion']['unet.encoders.1.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.1.1.norm.weight']
new['diffusion']['unet.encoders.1.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.1.1.norm.bias']
new['diffusion']['unet.encoders.1.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.1.1.proj_in.weight']
new['diffusion']['unet.encoders.1.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.1.1.proj_in.bias']
new['diffusion']['unet.encoders.1.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.1.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.1.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.1.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.1.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.1.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.1.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.1.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.1.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.1.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.1.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.1.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.1.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.1.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.1.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.1.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.1.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.1.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.1.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.1.1.proj_out.weight']
new['diffusion']['unet.encoders.1.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.1.1.proj_out.bias']
new['diffusion']['unet.encoders.2.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.2.0.in_layers.0.weight']
new['diffusion']['unet.encoders.2.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.2.0.in_layers.0.bias']
new['diffusion']['unet.encoders.2.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.2.0.in_layers.2.weight']
new['diffusion']['unet.encoders.2.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.2.0.in_layers.2.bias']
new['diffusion']['unet.encoders.2.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.2.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.2.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.2.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.2.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.2.0.out_layers.0.weight']
new['diffusion']['unet.encoders.2.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.2.0.out_layers.0.bias']
new['diffusion']['unet.encoders.2.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.2.0.out_layers.3.weight']
new['diffusion']['unet.encoders.2.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.2.0.out_layers.3.bias']
new['diffusion']['unet.encoders.2.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.2.1.norm.weight']
new['diffusion']['unet.encoders.2.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.2.1.norm.bias']
new['diffusion']['unet.encoders.2.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.2.1.proj_in.weight']
new['diffusion']['unet.encoders.2.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.2.1.proj_in.bias']
new['diffusion']['unet.encoders.2.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.2.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.2.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.2.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.2.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.2.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.2.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.2.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.2.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.2.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.2.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.2.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.2.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.2.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.2.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.2.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.2.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.2.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.2.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.2.1.proj_out.weight']
new['diffusion']['unet.encoders.2.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.2.1.proj_out.bias']
new['diffusion']['unet.encoders.3.0.weight'] = s['model.diffusion_model.input_blocks.3.0.op.weight']
new['diffusion']['unet.encoders.3.0.bias'] = s['model.diffusion_model.input_blocks.3.0.op.bias']
new['diffusion']['unet.encoders.4.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.4.0.in_layers.0.weight']
new['diffusion']['unet.encoders.4.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.4.0.in_layers.0.bias']
new['diffusion']['unet.encoders.4.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.4.0.in_layers.2.weight']
new['diffusion']['unet.encoders.4.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.4.0.in_layers.2.bias']
new['diffusion']['unet.encoders.4.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.4.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.4.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.4.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.4.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.4.0.out_layers.0.weight']
new['diffusion']['unet.encoders.4.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.4.0.out_layers.0.bias']
new['diffusion']['unet.encoders.4.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.4.0.out_layers.3.weight']
new['diffusion']['unet.encoders.4.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.4.0.out_layers.3.bias']
new['diffusion']['unet.encoders.4.0.residual_layer.weight'] = s['model.diffusion_model.input_blocks.4.0.skip_connection.weight']
new['diffusion']['unet.encoders.4.0.residual_layer.bias'] = s['model.diffusion_model.input_blocks.4.0.skip_connection.bias']
new['diffusion']['unet.encoders.4.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.4.1.norm.weight']
new['diffusion']['unet.encoders.4.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.4.1.norm.bias']
new['diffusion']['unet.encoders.4.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.4.1.proj_in.weight']
new['diffusion']['unet.encoders.4.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.4.1.proj_in.bias']
new['diffusion']['unet.encoders.4.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.4.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.4.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.4.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.4.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.4.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.4.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.4.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.4.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.4.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.4.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.4.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.4.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.4.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.4.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.4.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.4.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.4.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.4.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.4.1.proj_out.weight']
new['diffusion']['unet.encoders.4.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.4.1.proj_out.bias']
new['diffusion']['unet.encoders.5.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.5.0.in_layers.0.weight']
new['diffusion']['unet.encoders.5.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.5.0.in_layers.0.bias']
new['diffusion']['unet.encoders.5.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.5.0.in_layers.2.weight']
new['diffusion']['unet.encoders.5.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.5.0.in_layers.2.bias']
new['diffusion']['unet.encoders.5.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.5.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.5.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.5.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.5.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.5.0.out_layers.0.weight']
new['diffusion']['unet.encoders.5.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.5.0.out_layers.0.bias']
new['diffusion']['unet.encoders.5.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.5.0.out_layers.3.weight']
new['diffusion']['unet.encoders.5.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.5.0.out_layers.3.bias']
new['diffusion']['unet.encoders.5.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.5.1.norm.weight']
new['diffusion']['unet.encoders.5.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.5.1.norm.bias']
new['diffusion']['unet.encoders.5.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.5.1.proj_in.weight']
new['diffusion']['unet.encoders.5.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.5.1.proj_in.bias']
new['diffusion']['unet.encoders.5.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.5.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.5.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.5.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.5.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.5.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.5.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.5.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.5.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.5.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.5.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.5.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.5.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.5.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.5.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.5.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.5.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.5.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.5.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.5.1.proj_out.weight']
new['diffusion']['unet.encoders.5.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.5.1.proj_out.bias']
new['diffusion']['unet.encoders.6.0.weight'] = s['model.diffusion_model.input_blocks.6.0.op.weight']
new['diffusion']['unet.encoders.6.0.bias'] = s['model.diffusion_model.input_blocks.6.0.op.bias']
new['diffusion']['unet.encoders.7.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.7.0.in_layers.0.weight']
new['diffusion']['unet.encoders.7.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.7.0.in_layers.0.bias']
new['diffusion']['unet.encoders.7.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.7.0.in_layers.2.weight']
new['diffusion']['unet.encoders.7.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.7.0.in_layers.2.bias']
new['diffusion']['unet.encoders.7.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.7.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.7.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.7.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.7.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.7.0.out_layers.0.weight']
new['diffusion']['unet.encoders.7.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.7.0.out_layers.0.bias']
new['diffusion']['unet.encoders.7.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.7.0.out_layers.3.weight']
new['diffusion']['unet.encoders.7.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.7.0.out_layers.3.bias']
new['diffusion']['unet.encoders.7.0.residual_layer.weight'] = s['model.diffusion_model.input_blocks.7.0.skip_connection.weight']
new['diffusion']['unet.encoders.7.0.residual_layer.bias'] = s['model.diffusion_model.input_blocks.7.0.skip_connection.bias']
new['diffusion']['unet.encoders.7.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.7.1.norm.weight']
new['diffusion']['unet.encoders.7.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.7.1.norm.bias']
new['diffusion']['unet.encoders.7.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.7.1.proj_in.weight']
new['diffusion']['unet.encoders.7.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.7.1.proj_in.bias']
new['diffusion']['unet.encoders.7.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.7.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.7.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.7.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.7.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.7.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.7.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.7.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.7.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.7.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.7.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.7.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.7.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.7.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.7.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.7.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.7.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.7.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.7.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.7.1.proj_out.weight']
new['diffusion']['unet.encoders.7.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.7.1.proj_out.bias']
new['diffusion']['unet.encoders.8.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.8.0.in_layers.0.weight']
new['diffusion']['unet.encoders.8.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.8.0.in_layers.0.bias']
new['diffusion']['unet.encoders.8.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.8.0.in_layers.2.weight']
new['diffusion']['unet.encoders.8.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.8.0.in_layers.2.bias']
new['diffusion']['unet.encoders.8.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.8.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.8.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.8.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.8.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.8.0.out_layers.0.weight']
new['diffusion']['unet.encoders.8.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.8.0.out_layers.0.bias']
new['diffusion']['unet.encoders.8.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.8.0.out_layers.3.weight']
new['diffusion']['unet.encoders.8.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.8.0.out_layers.3.bias']
new['diffusion']['unet.encoders.8.1.groupnorm.weight'] = s['model.diffusion_model.input_blocks.8.1.norm.weight']
new['diffusion']['unet.encoders.8.1.groupnorm.bias'] = s['model.diffusion_model.input_blocks.8.1.norm.bias']
new['diffusion']['unet.encoders.8.1.conv_input.weight'] = s['model.diffusion_model.input_blocks.8.1.proj_in.weight']
new['diffusion']['unet.encoders.8.1.conv_input.bias'] = s['model.diffusion_model.input_blocks.8.1.proj_in.bias']
new['diffusion']['unet.encoders.8.1.attention_1.out_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.encoders.8.1.attention_1.out_proj.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.encoders.8.1.linear_geglu_1.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.encoders.8.1.linear_geglu_1.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.encoders.8.1.linear_geglu_2.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.encoders.8.1.linear_geglu_2.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.encoders.8.1.attention_2.q_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.encoders.8.1.attention_2.k_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.encoders.8.1.attention_2.v_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.encoders.8.1.attention_2.out_proj.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.encoders.8.1.attention_2.out_proj.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.encoders.8.1.layernorm_1.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.encoders.8.1.layernorm_1.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.encoders.8.1.layernorm_2.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.encoders.8.1.layernorm_2.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.encoders.8.1.layernorm_3.weight'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.encoders.8.1.layernorm_3.bias'] = s['model.diffusion_model.input_blocks.8.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.encoders.8.1.conv_output.weight'] = s['model.diffusion_model.input_blocks.8.1.proj_out.weight']
new['diffusion']['unet.encoders.8.1.conv_output.bias'] = s['model.diffusion_model.input_blocks.8.1.proj_out.bias']
new['diffusion']['unet.encoders.9.0.weight'] = s['model.diffusion_model.input_blocks.9.0.op.weight']
new['diffusion']['unet.encoders.9.0.bias'] = s['model.diffusion_model.input_blocks.9.0.op.bias']
new['diffusion']['unet.encoders.10.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.10.0.in_layers.0.weight']
new['diffusion']['unet.encoders.10.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.10.0.in_layers.0.bias']
new['diffusion']['unet.encoders.10.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.10.0.in_layers.2.weight']
new['diffusion']['unet.encoders.10.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.10.0.in_layers.2.bias']
new['diffusion']['unet.encoders.10.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.10.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.10.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.10.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.10.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.10.0.out_layers.0.weight']
new['diffusion']['unet.encoders.10.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.10.0.out_layers.0.bias']
new['diffusion']['unet.encoders.10.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.10.0.out_layers.3.weight']
new['diffusion']['unet.encoders.10.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.10.0.out_layers.3.bias']
new['diffusion']['unet.encoders.11.0.groupnorm_feature.weight'] = s['model.diffusion_model.input_blocks.11.0.in_layers.0.weight']
new['diffusion']['unet.encoders.11.0.groupnorm_feature.bias'] = s['model.diffusion_model.input_blocks.11.0.in_layers.0.bias']
new['diffusion']['unet.encoders.11.0.conv_feature.weight'] = s['model.diffusion_model.input_blocks.11.0.in_layers.2.weight']
new['diffusion']['unet.encoders.11.0.conv_feature.bias'] = s['model.diffusion_model.input_blocks.11.0.in_layers.2.bias']
new['diffusion']['unet.encoders.11.0.linear_time.weight'] = s['model.diffusion_model.input_blocks.11.0.emb_layers.1.weight']
new['diffusion']['unet.encoders.11.0.linear_time.bias'] = s['model.diffusion_model.input_blocks.11.0.emb_layers.1.bias']
new['diffusion']['unet.encoders.11.0.groupnorm_merged.weight'] = s['model.diffusion_model.input_blocks.11.0.out_layers.0.weight']
new['diffusion']['unet.encoders.11.0.groupnorm_merged.bias'] = s['model.diffusion_model.input_blocks.11.0.out_layers.0.bias']
new['diffusion']['unet.encoders.11.0.conv_merged.weight'] = s['model.diffusion_model.input_blocks.11.0.out_layers.3.weight']
new['diffusion']['unet.encoders.11.0.conv_merged.bias'] = s['model.diffusion_model.input_blocks.11.0.out_layers.3.bias']
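
# UNet bottleneck (middle_block)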
new['diffusion']['unet.bottleneck.0.groupnorm_feature.weight'] = s['model.diffusion_model.middle_block.0.in_layers.0.weight']
new['diffusion']['unet.bottleneck.0.groupnorm_feature.bias'] = s['model.diffusion_model.middle_block.0.in_layers.0.bias']
new['diffusion']['unet.bottleneck.0.conv_feature.weight'] = s['model.diffusion_model.middle_block.0.in_layers.2.weight']
new['diffusion']['unet.bottleneck.0.conv_feature.bias'] = s['model.diffusion_model.middle_block.0.in_layers.2.bias']
new['diffusion']['unet.bottleneck.0.linear_time.weight'] = s['model.diffusion_model.middle_block.0.emb_layers.1.weight']
new['diffusion']['unet.bottleneck.0.linear_time.bias'] = s['model.diffusion_model.middle_block.0.emb_layers.1.bias']
new['diffusion']['unet.bottleneck.0.groupnorm_merged.weight'] = s['model.diffusion_model.middle_block.0.out_layers.0.weight']
new['diffusion']['unet.bottleneck.0.groupnorm_merged.bias'] = s['model.diffusion_model.middle_block.0.out_layers.0.bias']
new['diffusion']['unet.bottleneck.0.conv_merged.weight'] = s['model.diffusion_model.middle_block.0.out_layers.3.weight']
new['diffusion']['unet.bottleneck.0.conv_merged.bias'] = s['model.diffusion_model.middle_block.0.out_layers.3.bias']
new['diffusion']['unet.bottleneck.1.groupnorm.weight'] = s['model.diffusion_model.middle_block.1.norm.weight']
new['diffusion']['unet.bottleneck.1.groupnorm.bias'] = s['model.diffusion_model.middle_block.1.norm.bias']
new['diffusion']['unet.bottleneck.1.conv_input.weight'] = s['model.diffusion_model.middle_block.1.proj_in.weight']
new['diffusion']['unet.bottleneck.1.conv_input.bias'] = s['model.diffusion_model.middle_block.1.proj_in.bias']
new['diffusion']['unet.bottleneck.1.attention_1.out_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.bottleneck.1.attention_1.out_proj.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.bottleneck.1.linear_geglu_1.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.bottleneck.1.linear_geglu_1.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.bottleneck.1.linear_geglu_2.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.bottleneck.1.linear_geglu_2.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.bottleneck.1.attention_2.q_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.bottleneck.1.attention_2.k_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.bottleneck.1.attention_2.v_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.bottleneck.1.attention_2.out_proj.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.bottleneck.1.attention_2.out_proj.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.bottleneck.1.layernorm_1.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.bottleneck.1.layernorm_1.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.bottleneck.1.layernorm_2.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.bottleneck.1.layernorm_2.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.bottleneck.1.layernorm_3.weight'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.bottleneck.1.layernorm_3.bias'] = s['model.diffusion_model.middle_block.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.bottleneck.1.conv_output.weight'] = s['model.diffusion_model.middle_block.1.proj_out.weight']
new['diffusion']['unet.bottleneck.1.conv_output.bias'] = s['model.diffusion_model.middle_block.1.proj_out.bias']
new['diffusion']['unet.bottleneck.2.groupnorm_feature.weight'] = s['model.diffusion_model.middle_block.2.in_layers.0.weight']
new['diffusion']['unet.bottleneck.2.groupnorm_feature.bias'] = s['model.diffusion_model.middle_block.2.in_layers.0.bias']
new['diffusion']['unet.bottleneck.2.conv_feature.weight'] = s['model.diffusion_model.middle_block.2.in_layers.2.weight']
new['diffusion']['unet.bottleneck.2.conv_feature.bias'] = s['model.diffusion_model.middle_block.2.in_layers.2.bias']
new['diffusion']['unet.bottleneck.2.linear_time.weight'] = s['model.diffusion_model.middle_block.2.emb_layers.1.weight']
new['diffusion']['unet.bottleneck.2.linear_time.bias'] = s['model.diffusion_model.middle_block.2.emb_layers.1.bias']
new['diffusion']['unet.bottleneck.2.groupnorm_merged.weight'] = s['model.diffusion_model.middle_block.2.out_layers.0.weight']
new['diffusion']['unet.bottleneck.2.groupnorm_merged.bias'] = s['model.diffusion_model.middle_block.2.out_layers.0.bias']
new['diffusion']['unet.bottleneck.2.conv_merged.weight'] = s['model.diffusion_model.middle_block.2.out_layers.3.weight']
new['diffusion']['unet.bottleneck.2.conv_merged.bias'] = s['model.diffusion_model.middle_block.2.out_layers.3.bias']
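
# UNet decoder (output_blocks)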
new['diffusion']['unet.decoders.0.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.0.0.in_layers.0.weight']
new['diffusion']['unet.decoders.0.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.0.0.in_layers.0.bias']
new['diffusion']['unet.decoders.0.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.0.0.in_layers.2.weight']
new['diffusion']['unet.decoders.0.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.0.0.in_layers.2.bias']
new['diffusion']['unet.decoders.0.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.0.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.0.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.0.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.0.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.0.0.out_layers.0.weight']
new['diffusion']['unet.decoders.0.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.0.0.out_layers.0.bias']
new['diffusion']['unet.decoders.0.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.0.0.out_layers.3.weight']
new['diffusion']['unet.decoders.0.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.0.0.out_layers.3.bias']
new['diffusion']['unet.decoders.0.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.0.0.skip_connection.weight']
new['diffusion']['unet.decoders.0.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.0.0.skip_connection.bias']
new['diffusion']['unet.decoders.1.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.1.0.in_layers.0.weight']
new['diffusion']['unet.decoders.1.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.1.0.in_layers.0.bias']
new['diffusion']['unet.decoders.1.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.1.0.in_layers.2.weight']
new['diffusion']['unet.decoders.1.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.1.0.in_layers.2.bias']
new['diffusion']['unet.decoders.1.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.1.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.1.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.1.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.1.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.1.0.out_layers.0.weight']
new['diffusion']['unet.decoders.1.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.1.0.out_layers.0.bias']
new['diffusion']['unet.decoders.1.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.1.0.out_layers.3.weight']
new['diffusion']['unet.decoders.1.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.1.0.out_layers.3.bias']
new['diffusion']['unet.decoders.1.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.1.0.skip_connection.weight']
new['diffusion']['unet.decoders.1.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.1.0.skip_connection.bias']
new['diffusion']['unet.decoders.2.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.2.0.in_layers.0.weight']
new['diffusion']['unet.decoders.2.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.2.0.in_layers.0.bias']
new['diffusion']['unet.decoders.2.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.2.0.in_layers.2.weight']
new['diffusion']['unet.decoders.2.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.2.0.in_layers.2.bias']
new['diffusion']['unet.decoders.2.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.2.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.2.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.2.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.2.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.2.0.out_layers.0.weight']
new['diffusion']['unet.decoders.2.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.2.0.out_layers.0.bias']
new['diffusion']['unet.decoders.2.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.2.0.out_layers.3.weight']
new['diffusion']['unet.decoders.2.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.2.0.out_layers.3.bias']
new['diffusion']['unet.decoders.2.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.2.0.skip_connection.weight']
new['diffusion']['unet.decoders.2.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.2.0.skip_connection.bias']
new['diffusion']['unet.decoders.2.1.conv.weight'] = s['model.diffusion_model.output_blocks.2.1.conv.weight']
new['diffusion']['unet.decoders.2.1.conv.bias'] = s['model.diffusion_model.output_blocks.2.1.conv.bias']
new['diffusion']['unet.decoders.3.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.3.0.in_layers.0.weight']
new['diffusion']['unet.decoders.3.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.3.0.in_layers.0.bias']
new['diffusion']['unet.decoders.3.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.3.0.in_layers.2.weight']
new['diffusion']['unet.decoders.3.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.3.0.in_layers.2.bias']
new['diffusion']['unet.decoders.3.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.3.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.3.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.3.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.3.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.3.0.out_layers.0.weight']
new['diffusion']['unet.decoders.3.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.3.0.out_layers.0.bias']
new['diffusion']['unet.decoders.3.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.3.0.out_layers.3.weight']
new['diffusion']['unet.decoders.3.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.3.0.out_layers.3.bias']
new['diffusion']['unet.decoders.3.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.3.0.skip_connection.weight']
new['diffusion']['unet.decoders.3.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.3.0.skip_connection.bias']
new['diffusion']['unet.decoders.3.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.3.1.norm.weight']
new['diffusion']['unet.decoders.3.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.3.1.norm.bias']
new['diffusion']['unet.decoders.3.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.3.1.proj_in.weight']
new['diffusion']['unet.decoders.3.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.3.1.proj_in.bias']
new['diffusion']['unet.decoders.3.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.3.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.3.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.3.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.3.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.3.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.3.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.3.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.3.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.3.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.3.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.3.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.3.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.3.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.3.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.3.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.3.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.3.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.3.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.3.1.proj_out.weight']
new['diffusion']['unet.decoders.3.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.3.1.proj_out.bias']
new['diffusion']['unet.decoders.4.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.4.0.in_layers.0.weight']
new['diffusion']['unet.decoders.4.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.4.0.in_layers.0.bias']
new['diffusion']['unet.decoders.4.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.4.0.in_layers.2.weight']
new['diffusion']['unet.decoders.4.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.4.0.in_layers.2.bias']
new['diffusion']['unet.decoders.4.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.4.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.4.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.4.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.4.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.4.0.out_layers.0.weight']
new['diffusion']['unet.decoders.4.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.4.0.out_layers.0.bias']
new['diffusion']['unet.decoders.4.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.4.0.out_layers.3.weight']
new['diffusion']['unet.decoders.4.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.4.0.out_layers.3.bias']
new['diffusion']['unet.decoders.4.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.4.0.skip_connection.weight']
new['diffusion']['unet.decoders.4.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.4.0.skip_connection.bias']
new['diffusion']['unet.decoders.4.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.4.1.norm.weight']
new['diffusion']['unet.decoders.4.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.4.1.norm.bias']
new['diffusion']['unet.decoders.4.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.4.1.proj_in.weight']
new['diffusion']['unet.decoders.4.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.4.1.proj_in.bias']
new['diffusion']['unet.decoders.4.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.4.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.4.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.4.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.4.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.4.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.4.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.4.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.4.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.4.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.4.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.4.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.4.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.4.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.4.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.4.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.4.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.4.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.4.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.4.1.proj_out.weight']
new['diffusion']['unet.decoders.4.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.4.1.proj_out.bias']
new['diffusion']['unet.decoders.5.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.5.0.in_layers.0.weight']
new['diffusion']['unet.decoders.5.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.5.0.in_layers.0.bias']
new['diffusion']['unet.decoders.5.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.5.0.in_layers.2.weight']
new['diffusion']['unet.decoders.5.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.5.0.in_layers.2.bias']
new['diffusion']['unet.decoders.5.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.5.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.5.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.5.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.5.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.5.0.out_layers.0.weight']
new['diffusion']['unet.decoders.5.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.5.0.out_layers.0.bias']
new['diffusion']['unet.decoders.5.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.5.0.out_layers.3.weight']
new['diffusion']['unet.decoders.5.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.5.0.out_layers.3.bias']
new['diffusion']['unet.decoders.5.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.5.0.skip_connection.weight']
new['diffusion']['unet.decoders.5.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.5.0.skip_connection.bias']
new['diffusion']['unet.decoders.5.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.5.1.norm.weight']
new['diffusion']['unet.decoders.5.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.5.1.norm.bias']
new['diffusion']['unet.decoders.5.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.5.1.proj_in.weight']
new['diffusion']['unet.decoders.5.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.5.1.proj_in.bias']
new['diffusion']['unet.decoders.5.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.5.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.5.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.5.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.5.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.5.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.5.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.5.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.5.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.5.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.5.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.5.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.5.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.5.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.5.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.5.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.5.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.5.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.5.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.5.1.proj_out.weight']
new['diffusion']['unet.decoders.5.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.5.1.proj_out.bias']
new['diffusion']['unet.decoders.5.2.conv.weight'] = s['model.diffusion_model.output_blocks.5.2.conv.weight']
new['diffusion']['unet.decoders.5.2.conv.bias'] = s['model.diffusion_model.output_blocks.5.2.conv.bias']
new['diffusion']['unet.decoders.6.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.6.0.in_layers.0.weight']
new['diffusion']['unet.decoders.6.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.6.0.in_layers.0.bias']
new['diffusion']['unet.decoders.6.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.6.0.in_layers.2.weight']
new['diffusion']['unet.decoders.6.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.6.0.in_layers.2.bias']
new['diffusion']['unet.decoders.6.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.6.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.6.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.6.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.6.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.6.0.out_layers.0.weight']
new['diffusion']['unet.decoders.6.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.6.0.out_layers.0.bias']
new['diffusion']['unet.decoders.6.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.6.0.out_layers.3.weight']
new['diffusion']['unet.decoders.6.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.6.0.out_layers.3.bias']
new['diffusion']['unet.decoders.6.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.6.0.skip_connection.weight']
new['diffusion']['unet.decoders.6.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.6.0.skip_connection.bias']
new['diffusion']['unet.decoders.6.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.6.1.norm.weight']
new['diffusion']['unet.decoders.6.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.6.1.norm.bias']
new['diffusion']['unet.decoders.6.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.6.1.proj_in.weight']
new['diffusion']['unet.decoders.6.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.6.1.proj_in.bias']
new['diffusion']['unet.decoders.6.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.6.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.6.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.6.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.6.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.6.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.6.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.6.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.6.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.6.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.6.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.6.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.6.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.6.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.6.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.6.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.6.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.6.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.6.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.6.1.proj_out.weight']
new['diffusion']['unet.decoders.6.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.6.1.proj_out.bias']
new['diffusion']['unet.decoders.7.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.7.0.in_layers.0.weight']
new['diffusion']['unet.decoders.7.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.7.0.in_layers.0.bias']
new['diffusion']['unet.decoders.7.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.7.0.in_layers.2.weight']
new['diffusion']['unet.decoders.7.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.7.0.in_layers.2.bias']
new['diffusion']['unet.decoders.7.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.7.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.7.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.7.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.7.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.7.0.out_layers.0.weight']
new['diffusion']['unet.decoders.7.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.7.0.out_layers.0.bias']
new['diffusion']['unet.decoders.7.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.7.0.out_layers.3.weight']
new['diffusion']['unet.decoders.7.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.7.0.out_layers.3.bias']
new['diffusion']['unet.decoders.7.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.7.0.skip_connection.weight']
new['diffusion']['unet.decoders.7.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.7.0.skip_connection.bias']
new['diffusion']['unet.decoders.7.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.7.1.norm.weight']
new['diffusion']['unet.decoders.7.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.7.1.norm.bias']
new['diffusion']['unet.decoders.7.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.7.1.proj_in.weight']
new['diffusion']['unet.decoders.7.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.7.1.proj_in.bias']
new['diffusion']['unet.decoders.7.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.7.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.7.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.7.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.7.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.7.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.7.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.7.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.7.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.7.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.7.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.7.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.7.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.7.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.7.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.7.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.7.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.7.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.7.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.7.1.proj_out.weight']
new['diffusion']['unet.decoders.7.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.7.1.proj_out.bias']
new['diffusion']['unet.decoders.8.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.8.0.in_layers.0.weight']
new['diffusion']['unet.decoders.8.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.8.0.in_layers.0.bias']
new['diffusion']['unet.decoders.8.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.8.0.in_layers.2.weight']
new['diffusion']['unet.decoders.8.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.8.0.in_layers.2.bias']
new['diffusion']['unet.decoders.8.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.8.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.8.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.8.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.8.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.8.0.out_layers.0.weight']
new['diffusion']['unet.decoders.8.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.8.0.out_layers.0.bias']
new['diffusion']['unet.decoders.8.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.8.0.out_layers.3.weight']
new['diffusion']['unet.decoders.8.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.8.0.out_layers.3.bias']
new['diffusion']['unet.decoders.8.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.8.0.skip_connection.weight']
new['diffusion']['unet.decoders.8.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.8.0.skip_connection.bias']
new['diffusion']['unet.decoders.8.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.8.1.norm.weight']
new['diffusion']['unet.decoders.8.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.8.1.norm.bias']
new['diffusion']['unet.decoders.8.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.8.1.proj_in.weight']
new['diffusion']['unet.decoders.8.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.8.1.proj_in.bias']
new['diffusion']['unet.decoders.8.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.8.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.8.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.8.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.8.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.8.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.8.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.8.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.8.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.8.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.8.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.8.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.8.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.8.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.8.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.8.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.8.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.8.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.8.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.8.1.proj_out.weight']
new['diffusion']['unet.decoders.8.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.8.1.proj_out.bias']
new['diffusion']['unet.decoders.8.2.conv.weight'] = s['model.diffusion_model.output_blocks.8.2.conv.weight']
new['diffusion']['unet.decoders.8.2.conv.bias'] = s['model.diffusion_model.output_blocks.8.2.conv.bias']
new['diffusion']['unet.decoders.9.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.9.0.in_layers.0.weight']
new['diffusion']['unet.decoders.9.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.9.0.in_layers.0.bias']
new['diffusion']['unet.decoders.9.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.9.0.in_layers.2.weight']
new['diffusion']['unet.decoders.9.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.9.0.in_layers.2.bias']
new['diffusion']['unet.decoders.9.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.9.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.9.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.9.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.9.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.9.0.out_layers.0.weight']
new['diffusion']['unet.decoders.9.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.9.0.out_layers.0.bias']
new['diffusion']['unet.decoders.9.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.9.0.out_layers.3.weight']
new['diffusion']['unet.decoders.9.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.9.0.out_layers.3.bias']
new['diffusion']['unet.decoders.9.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.9.0.skip_connection.weight']
new['diffusion']['unet.decoders.9.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.9.0.skip_connection.bias']
new['diffusion']['unet.decoders.9.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.9.1.norm.weight']
new['diffusion']['unet.decoders.9.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.9.1.norm.bias']
new['diffusion']['unet.decoders.9.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.9.1.proj_in.weight']
new['diffusion']['unet.decoders.9.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.9.1.proj_in.bias']
new['diffusion']['unet.decoders.9.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.9.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.9.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.9.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.9.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.9.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.9.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.9.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.9.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.9.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.9.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.9.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.9.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.9.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.9.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.9.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.9.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.9.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.9.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.9.1.proj_out.weight']
new['diffusion']['unet.decoders.9.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.9.1.proj_out.bias']
new['diffusion']['unet.decoders.10.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.10.0.in_layers.0.weight']
new['diffusion']['unet.decoders.10.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.10.0.in_layers.0.bias']
new['diffusion']['unet.decoders.10.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.10.0.in_layers.2.weight']
new['diffusion']['unet.decoders.10.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.10.0.in_layers.2.bias']
new['diffusion']['unet.decoders.10.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.10.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.10.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.10.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.10.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.10.0.out_layers.0.weight']
new['diffusion']['unet.decoders.10.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.10.0.out_layers.0.bias']
new['diffusion']['unet.decoders.10.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.10.0.out_layers.3.weight']
new['diffusion']['unet.decoders.10.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.10.0.out_layers.3.bias']
new['diffusion']['unet.decoders.10.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.10.0.skip_connection.weight']
new['diffusion']['unet.decoders.10.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.10.0.skip_connection.bias']
new['diffusion']['unet.decoders.10.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.10.1.norm.weight']
new['diffusion']['unet.decoders.10.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.10.1.norm.bias']
new['diffusion']['unet.decoders.10.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.10.1.proj_in.weight']
new['diffusion']['unet.decoders.10.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.10.1.proj_in.bias']
new['diffusion']['unet.decoders.10.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.10.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.10.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.10.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.10.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.10.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.10.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.10.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.10.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.10.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.10.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.10.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.10.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.10.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.10.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.10.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.10.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.10.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.10.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.10.1.proj_out.weight']
new['diffusion']['unet.decoders.10.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.10.1.proj_out.bias']
new['diffusion']['unet.decoders.11.0.groupnorm_feature.weight'] = s['model.diffusion_model.output_blocks.11.0.in_layers.0.weight']
new['diffusion']['unet.decoders.11.0.groupnorm_feature.bias'] = s['model.diffusion_model.output_blocks.11.0.in_layers.0.bias']
new['diffusion']['unet.decoders.11.0.conv_feature.weight'] = s['model.diffusion_model.output_blocks.11.0.in_layers.2.weight']
new['diffusion']['unet.decoders.11.0.conv_feature.bias'] = s['model.diffusion_model.output_blocks.11.0.in_layers.2.bias']
new['diffusion']['unet.decoders.11.0.linear_time.weight'] = s['model.diffusion_model.output_blocks.11.0.emb_layers.1.weight']
new['diffusion']['unet.decoders.11.0.linear_time.bias'] = s['model.diffusion_model.output_blocks.11.0.emb_layers.1.bias']
new['diffusion']['unet.decoders.11.0.groupnorm_merged.weight'] = s['model.diffusion_model.output_blocks.11.0.out_layers.0.weight']
new['diffusion']['unet.decoders.11.0.groupnorm_merged.bias'] = s['model.diffusion_model.output_blocks.11.0.out_layers.0.bias']
new['diffusion']['unet.decoders.11.0.conv_merged.weight'] = s['model.diffusion_model.output_blocks.11.0.out_layers.3.weight']
new['diffusion']['unet.decoders.11.0.conv_merged.bias'] = s['model.diffusion_model.output_blocks.11.0.out_layers.3.bias']
new['diffusion']['unet.decoders.11.0.residual_layer.weight'] = s['model.diffusion_model.output_blocks.11.0.skip_connection.weight']
new['diffusion']['unet.decoders.11.0.residual_layer.bias'] = s['model.diffusion_model.output_blocks.11.0.skip_connection.bias']
new['diffusion']['unet.decoders.11.1.groupnorm.weight'] = s['model.diffusion_model.output_blocks.11.1.norm.weight']
new['diffusion']['unet.decoders.11.1.groupnorm.bias'] = s['model.diffusion_model.output_blocks.11.1.norm.bias']
new['diffusion']['unet.decoders.11.1.conv_input.weight'] = s['model.diffusion_model.output_blocks.11.1.proj_in.weight']
new['diffusion']['unet.decoders.11.1.conv_input.bias'] = s['model.diffusion_model.output_blocks.11.1.proj_in.bias']
new['diffusion']['unet.decoders.11.1.attention_1.out_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_out.0.weight']
new['diffusion']['unet.decoders.11.1.attention_1.out_proj.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn1.to_out.0.bias']
new['diffusion']['unet.decoders.11.1.linear_geglu_1.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.0.proj.weight']
new['diffusion']['unet.decoders.11.1.linear_geglu_1.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.0.proj.bias']
new['diffusion']['unet.decoders.11.1.linear_geglu_2.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.2.weight']
new['diffusion']['unet.decoders.11.1.linear_geglu_2.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.ff.net.2.bias']
new['diffusion']['unet.decoders.11.1.attention_2.q_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_q.weight']
new['diffusion']['unet.decoders.11.1.attention_2.k_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight']
new['diffusion']['unet.decoders.11.1.attention_2.v_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight']
new['diffusion']['unet.decoders.11.1.attention_2.out_proj.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_out.0.weight']
new['diffusion']['unet.decoders.11.1.attention_2.out_proj.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_out.0.bias']
new['diffusion']['unet.decoders.11.1.layernorm_1.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm1.weight']
new['diffusion']['unet.decoders.11.1.layernorm_1.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm1.bias']
new['diffusion']['unet.decoders.11.1.layernorm_2.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm2.weight']
new['diffusion']['unet.decoders.11.1.layernorm_2.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm2.bias']
new['diffusion']['unet.decoders.11.1.layernorm_3.weight'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm3.weight']
new['diffusion']['unet.decoders.11.1.layernorm_3.bias'] = s['model.diffusion_model.output_blocks.11.1.transformer_blocks.0.norm3.bias']
new['diffusion']['unet.decoders.11.1.conv_output.weight'] = s['model.diffusion_model.output_blocks.11.1.proj_out.weight']
new['diffusion']['unet.decoders.11.1.conv_output.bias'] = s['model.diffusion_model.output_blocks.11.1.proj_out.bias']
new['diffusion']['final.groupnorm.weight'] = s['model.diffusion_model.out.0.weight']
new['diffusion']['final.groupnorm.bias'] = s['model.diffusion_model.out.0.bias']
new['diffusion']['final.conv.weight'] = s['model.diffusion_model.out.2.weight']
new['diffusion']['final.conv.bias'] = s['model.diffusion_model.out.2.bias']
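
# --- VAE encoder (first_stage_model.encoder) ---
# The target keys are numeric indices into a Sequential-style module, so gaps in the
# numbering (e.g. no '16.*') presumably correspond to parameter-free layers such as activations.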
new['encoder']['0.weight'] = s['first_stage_model.encoder.conv_in.weight']
new['encoder']['0.bias'] = s['first_stage_model.encoder.conv_in.bias']
new['encoder']['1.groupnorm_1.weight'] = s['first_stage_model.encoder.down.0.block.0.norm1.weight']
new['encoder']['1.groupnorm_1.bias'] = s['first_stage_model.encoder.down.0.block.0.norm1.bias']
new['encoder']['1.conv_1.weight'] = s['first_stage_model.encoder.down.0.block.0.conv1.weight']
new['encoder']['1.conv_1.bias'] = s['first_stage_model.encoder.down.0.block.0.conv1.bias']
new['encoder']['1.groupnorm_2.weight'] = s['first_stage_model.encoder.down.0.block.0.norm2.weight']
new['encoder']['1.groupnorm_2.bias'] = s['first_stage_model.encoder.down.0.block.0.norm2.bias']
new['encoder']['1.conv_2.weight'] = s['first_stage_model.encoder.down.0.block.0.conv2.weight']
new['encoder']['1.conv_2.bias'] = s['first_stage_model.encoder.down.0.block.0.conv2.bias']
new['encoder']['2.groupnorm_1.weight'] = s['first_stage_model.encoder.down.0.block.1.norm1.weight']
new['encoder']['2.groupnorm_1.bias'] = s['first_stage_model.encoder.down.0.block.1.norm1.bias']
new['encoder']['2.conv_1.weight'] = s['first_stage_model.encoder.down.0.block.1.conv1.weight']
new['encoder']['2.conv_1.bias'] = s['first_stage_model.encoder.down.0.block.1.conv1.bias']
new['encoder']['2.groupnorm_2.weight'] = s['first_stage_model.encoder.down.0.block.1.norm2.weight']
new['encoder']['2.groupnorm_2.bias'] = s['first_stage_model.encoder.down.0.block.1.norm2.bias']
new['encoder']['2.conv_2.weight'] = s['first_stage_model.encoder.down.0.block.1.conv2.weight']
new['encoder']['2.conv_2.bias'] = s['first_stage_model.encoder.down.0.block.1.conv2.bias']
new['encoder']['3.weight'] = s['first_stage_model.encoder.down.0.downsample.conv.weight']
new['encoder']['3.bias'] = s['first_stage_model.encoder.down.0.downsample.conv.bias']
new['encoder']['4.groupnorm_1.weight'] = s['first_stage_model.encoder.down.1.block.0.norm1.weight']
new['encoder']['4.groupnorm_1.bias'] = s['first_stage_model.encoder.down.1.block.0.norm1.bias']
new['encoder']['4.conv_1.weight'] = s['first_stage_model.encoder.down.1.block.0.conv1.weight']
new['encoder']['4.conv_1.bias'] = s['first_stage_model.encoder.down.1.block.0.conv1.bias']
new['encoder']['4.groupnorm_2.weight'] = s['first_stage_model.encoder.down.1.block.0.norm2.weight']
new['encoder']['4.groupnorm_2.bias'] = s['first_stage_model.encoder.down.1.block.0.norm2.bias']
new['encoder']['4.conv_2.weight'] = s['first_stage_model.encoder.down.1.block.0.conv2.weight']
new['encoder']['4.conv_2.bias'] = s['first_stage_model.encoder.down.1.block.0.conv2.bias']
new['encoder']['4.residual_layer.weight'] = s['first_stage_model.encoder.down.1.block.0.nin_shortcut.weight']
new['encoder']['4.residual_layer.bias'] = s['first_stage_model.encoder.down.1.block.0.nin_shortcut.bias']
new['encoder']['5.groupnorm_1.weight'] = s['first_stage_model.encoder.down.1.block.1.norm1.weight']
new['encoder']['5.groupnorm_1.bias'] = s['first_stage_model.encoder.down.1.block.1.norm1.bias']
new['encoder']['5.conv_1.weight'] = s['first_stage_model.encoder.down.1.block.1.conv1.weight']
new['encoder']['5.conv_1.bias'] = s['first_stage_model.encoder.down.1.block.1.conv1.bias']
new['encoder']['5.groupnorm_2.weight'] = s['first_stage_model.encoder.down.1.block.1.norm2.weight']
new['encoder']['5.groupnorm_2.bias'] = s['first_stage_model.encoder.down.1.block.1.norm2.bias']
new['encoder']['5.conv_2.weight'] = s['first_stage_model.encoder.down.1.block.1.conv2.weight']
new['encoder']['5.conv_2.bias'] = s['first_stage_model.encoder.down.1.block.1.conv2.bias']
new['encoder']['6.weight'] = s['first_stage_model.encoder.down.1.downsample.conv.weight']
new['encoder']['6.bias'] = s['first_stage_model.encoder.down.1.downsample.conv.bias']
new['encoder']['7.groupnorm_1.weight'] = s['first_stage_model.encoder.down.2.block.0.norm1.weight']
new['encoder']['7.groupnorm_1.bias'] = s['first_stage_model.encoder.down.2.block.0.norm1.bias']
new['encoder']['7.conv_1.weight'] = s['first_stage_model.encoder.down.2.block.0.conv1.weight']
new['encoder']['7.conv_1.bias'] = s['first_stage_model.encoder.down.2.block.0.conv1.bias']
new['encoder']['7.groupnorm_2.weight'] = s['first_stage_model.encoder.down.2.block.0.norm2.weight']
new['encoder']['7.groupnorm_2.bias'] = s['first_stage_model.encoder.down.2.block.0.norm2.bias']
new['encoder']['7.conv_2.weight'] = s['first_stage_model.encoder.down.2.block.0.conv2.weight']
new['encoder']['7.conv_2.bias'] = s['first_stage_model.encoder.down.2.block.0.conv2.bias']
new['encoder']['7.residual_layer.weight'] = s['first_stage_model.encoder.down.2.block.0.nin_shortcut.weight']
new['encoder']['7.residual_layer.bias'] = s['first_stage_model.encoder.down.2.block.0.nin_shortcut.bias']
new['encoder']['8.groupnorm_1.weight'] = s['first_stage_model.encoder.down.2.block.1.norm1.weight']
new['encoder']['8.groupnorm_1.bias'] = s['first_stage_model.encoder.down.2.block.1.norm1.bias']
new['encoder']['8.conv_1.weight'] = s['first_stage_model.encoder.down.2.block.1.conv1.weight']
new['encoder']['8.conv_1.bias'] = s['first_stage_model.encoder.down.2.block.1.conv1.bias']
new['encoder']['8.groupnorm_2.weight'] = s['first_stage_model.encoder.down.2.block.1.norm2.weight']
new['encoder']['8.groupnorm_2.bias'] = s['first_stage_model.encoder.down.2.block.1.norm2.bias']
new['encoder']['8.conv_2.weight'] = s['first_stage_model.encoder.down.2.block.1.conv2.weight']
new['encoder']['8.conv_2.bias'] = s['first_stage_model.encoder.down.2.block.1.conv2.bias']
new['encoder']['9.weight'] = s['first_stage_model.encoder.down.2.downsample.conv.weight']
new['encoder']['9.bias'] = s['first_stage_model.encoder.down.2.downsample.conv.bias']
new['encoder']['10.groupnorm_1.weight'] = s['first_stage_model.encoder.down.3.block.0.norm1.weight']
new['encoder']['10.groupnorm_1.bias'] = s['first_stage_model.encoder.down.3.block.0.norm1.bias']
new['encoder']['10.conv_1.weight'] = s['first_stage_model.encoder.down.3.block.0.conv1.weight']
new['encoder']['10.conv_1.bias'] = s['first_stage_model.encoder.down.3.block.0.conv1.bias']
new['encoder']['10.groupnorm_2.weight'] = s['first_stage_model.encoder.down.3.block.0.norm2.weight']
new['encoder']['10.groupnorm_2.bias'] = s['first_stage_model.encoder.down.3.block.0.norm2.bias']
new['encoder']['10.conv_2.weight'] = s['first_stage_model.encoder.down.3.block.0.conv2.weight']
new['encoder']['10.conv_2.bias'] = s['first_stage_model.encoder.down.3.block.0.conv2.bias']
new['encoder']['11.groupnorm_1.weight'] = s['first_stage_model.encoder.down.3.block.1.norm1.weight']
new['encoder']['11.groupnorm_1.bias'] = s['first_stage_model.encoder.down.3.block.1.norm1.bias']
new['encoder']['11.conv_1.weight'] = s['first_stage_model.encoder.down.3.block.1.conv1.weight']
new['encoder']['11.conv_1.bias'] = s['first_stage_model.encoder.down.3.block.1.conv1.bias']
new['encoder']['11.groupnorm_2.weight'] = s['first_stage_model.encoder.down.3.block.1.norm2.weight']
new['encoder']['11.groupnorm_2.bias'] = s['first_stage_model.encoder.down.3.block.1.norm2.bias']
new['encoder']['11.conv_2.weight'] = s['first_stage_model.encoder.down.3.block.1.conv2.weight']
new['encoder']['11.conv_2.bias'] = s['first_stage_model.encoder.down.3.block.1.conv2.bias']
new['encoder']['12.groupnorm_1.weight'] = s['first_stage_model.encoder.mid.block_1.norm1.weight']
new['encoder']['12.groupnorm_1.bias'] = s['first_stage_model.encoder.mid.block_1.norm1.bias']
new['encoder']['12.conv_1.weight'] = s['first_stage_model.encoder.mid.block_1.conv1.weight']
new['encoder']['12.conv_1.bias'] = s['first_stage_model.encoder.mid.block_1.conv1.bias']
new['encoder']['12.groupnorm_2.weight'] = s['first_stage_model.encoder.mid.block_1.norm2.weight']
new['encoder']['12.groupnorm_2.bias'] = s['first_stage_model.encoder.mid.block_1.norm2.bias']
new['encoder']['12.conv_2.weight'] = s['first_stage_model.encoder.mid.block_1.conv2.weight']
new['encoder']['12.conv_2.bias'] = s['first_stage_model.encoder.mid.block_1.conv2.bias']
new['encoder']['13.groupnorm.weight'] = s['first_stage_model.encoder.mid.attn_1.norm.weight']
new['encoder']['13.groupnorm.bias'] = s['first_stage_model.encoder.mid.attn_1.norm.bias']
new['encoder']['13.attention.out_proj.bias'] = s['first_stage_model.encoder.mid.attn_1.proj_out.bias']
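# Note: only the bias of the mid-block attention is a plain 1:1 copy here. The checkpoint
# stores q/k/v and proj_out as 1x1 convolutions, so their weights presumably get reshaped
# (and q/k/v possibly concatenated) elsewhere in the full script. A hedged sketch, assuming
# the target 'attention.out_proj' is a Linear-style projection:
#   w = s['first_stage_model.encoder.mid.attn_1.proj_out.weight']
#   new['encoder']['13.attention.out_proj.weight'] = w.reshape(w.shape[0], w.shape[1])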
new['encoder']['14.groupnorm_1.weight'] = s['first_stage_model.encoder.mid.block_2.norm1.weight']
new['encoder']['14.groupnorm_1.bias'] = s['first_stage_model.encoder.mid.block_2.norm1.bias']
new['encoder']['14.conv_1.weight'] = s['first_stage_model.encoder.mid.block_2.conv1.weight']
new['encoder']['14.conv_1.bias'] = s['first_stage_model.encoder.mid.block_2.conv1.bias']
new['encoder']['14.groupnorm_2.weight'] = s['first_stage_model.encoder.mid.block_2.norm2.weight']
new['encoder']['14.groupnorm_2.bias'] = s['first_stage_model.encoder.mid.block_2.norm2.bias']
new['encoder']['14.conv_2.weight'] = s['first_stage_model.encoder.mid.block_2.conv2.weight']
new['encoder']['14.conv_2.bias'] = s['first_stage_model.encoder.mid.block_2.conv2.bias']
new['encoder']['15.weight'] = s['first_stage_model.encoder.norm_out.weight']
new['encoder']['15.bias'] = s['first_stage_model.encoder.norm_out.bias']
new['encoder']['17.weight'] = s['first_stage_model.encoder.conv_out.weight']
new['encoder']['17.bias'] = s['first_stage_model.encoder.conv_out.bias']
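
# --- VAE decoder (first_stage_model.decoder) ---
# The up.* blocks are mapped in reverse order: up.3 feeds the earliest decoder indices
# and up.0 the latest, since the decoder runs from the lowest resolution upward.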
new['decoder']['1.weight'] = s['first_stage_model.decoder.conv_in.weight']
new['decoder']['1.bias'] = s['first_stage_model.decoder.conv_in.bias']
new['decoder']['2.groupnorm_1.weight'] = s['first_stage_model.decoder.mid.block_1.norm1.weight']
new['decoder']['2.groupnorm_1.bias'] = s['first_stage_model.decoder.mid.block_1.norm1.bias']
new['decoder']['2.conv_1.weight'] = s['first_stage_model.decoder.mid.block_1.conv1.weight']
new['decoder']['2.conv_1.bias'] = s['first_stage_model.decoder.mid.block_1.conv1.bias']
new['decoder']['2.groupnorm_2.weight'] = s['first_stage_model.decoder.mid.block_1.norm2.weight']
new['decoder']['2.groupnorm_2.bias'] = s['first_stage_model.decoder.mid.block_1.norm2.bias']
new['decoder']['2.conv_2.weight'] = s['first_stage_model.decoder.mid.block_1.conv2.weight']
new['decoder']['2.conv_2.bias'] = s['first_stage_model.decoder.mid.block_1.conv2.bias']
new['decoder']['3.groupnorm.weight'] = s['first_stage_model.decoder.mid.attn_1.norm.weight']
new['decoder']['3.groupnorm.bias'] = s['first_stage_model.decoder.mid.attn_1.norm.bias']
new['decoder']['3.attention.out_proj.bias'] = s['first_stage_model.decoder.mid.attn_1.proj_out.bias']
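# As with the encoder's mid-block attention above, only the bias is a direct copy; the
# attention weights are 1x1 convs in the checkpoint and presumably get reshaped elsewhere.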
new['decoder']['4.groupnorm_1.weight'] = s['first_stage_model.decoder.mid.block_2.norm1.weight']
new['decoder']['4.groupnorm_1.bias'] = s['first_stage_model.decoder.mid.block_2.norm1.bias']
new['decoder']['4.conv_1.weight'] = s['first_stage_model.decoder.mid.block_2.conv1.weight']
new['decoder']['4.conv_1.bias'] = s['first_stage_model.decoder.mid.block_2.conv1.bias']
new['decoder']['4.groupnorm_2.weight'] = s['first_stage_model.decoder.mid.block_2.norm2.weight']
new['decoder']['4.groupnorm_2.bias'] = s['first_stage_model.decoder.mid.block_2.norm2.bias']
new['decoder']['4.conv_2.weight'] = s['first_stage_model.decoder.mid.block_2.conv2.weight']
new['decoder']['4.conv_2.bias'] = s['first_stage_model.decoder.mid.block_2.conv2.bias']
new['decoder']['20.groupnorm_1.weight'] = s['first_stage_model.decoder.up.0.block.0.norm1.weight']
new['decoder']['20.groupnorm_1.bias'] = s['first_stage_model.decoder.up.0.block.0.norm1.bias']
new['decoder']['20.conv_1.weight'] = s['first_stage_model.decoder.up.0.block.0.conv1.weight']
new['decoder']['20.conv_1.bias'] = s['first_stage_model.decoder.up.0.block.0.conv1.bias']
new['decoder']['20.groupnorm_2.weight'] = s['first_stage_model.decoder.up.0.block.0.norm2.weight']
new['decoder']['20.groupnorm_2.bias'] = s['first_stage_model.decoder.up.0.block.0.norm2.bias']
new['decoder']['20.conv_2.weight'] = s['first_stage_model.decoder.up.0.block.0.conv2.weight']
new['decoder']['20.conv_2.bias'] = s['first_stage_model.decoder.up.0.block.0.conv2.bias']
new['decoder']['20.residual_layer.weight'] = s['first_stage_model.decoder.up.0.block.0.nin_shortcut.weight']
new['decoder']['20.residual_layer.bias'] = s['first_stage_model.decoder.up.0.block.0.nin_shortcut.bias']
new['decoder']['21.groupnorm_1.weight'] = s['first_stage_model.decoder.up.0.block.1.norm1.weight']
new['decoder']['21.groupnorm_1.bias'] = s['first_stage_model.decoder.up.0.block.1.norm1.bias']
new['decoder']['21.conv_1.weight'] = s['first_stage_model.decoder.up.0.block.1.conv1.weight']
new['decoder']['21.conv_1.bias'] = s['first_stage_model.decoder.up.0.block.1.conv1.bias']
new['decoder']['21.groupnorm_2.weight'] = s['first_stage_model.decoder.up.0.block.1.norm2.weight']
new['decoder']['21.groupnorm_2.bias'] = s['first_stage_model.decoder.up.0.block.1.norm2.bias']
new['decoder']['21.conv_2.weight'] = s['first_stage_model.decoder.up.0.block.1.conv2.weight']
new['decoder']['21.conv_2.bias'] = s['first_stage_model.decoder.up.0.block.1.conv2.bias']
new['decoder']['22.groupnorm_1.weight'] = s['first_stage_model.decoder.up.0.block.2.norm1.weight']
new['decoder']['22.groupnorm_1.bias'] = s['first_stage_model.decoder.up.0.block.2.norm1.bias']
new['decoder']['22.conv_1.weight'] = s['first_stage_model.decoder.up.0.block.2.conv1.weight']
new['decoder']['22.conv_1.bias'] = s['first_stage_model.decoder.up.0.block.2.conv1.bias']
new['decoder']['22.groupnorm_2.weight'] = s['first_stage_model.decoder.up.0.block.2.norm2.weight']
new['decoder']['22.groupnorm_2.bias'] = s['first_stage_model.decoder.up.0.block.2.norm2.bias']
new['decoder']['22.conv_2.weight'] = s['first_stage_model.decoder.up.0.block.2.conv2.weight']
new['decoder']['22.conv_2.bias'] = s['first_stage_model.decoder.up.0.block.2.conv2.bias']
new['decoder']['15.groupnorm_1.weight'] = s['first_stage_model.decoder.up.1.block.0.norm1.weight']
new['decoder']['15.groupnorm_1.bias'] = s['first_stage_model.decoder.up.1.block.0.norm1.bias']
new['decoder']['15.conv_1.weight'] = s['first_stage_model.decoder.up.1.block.0.conv1.weight']
new['decoder']['15.conv_1.bias'] = s['first_stage_model.decoder.up.1.block.0.conv1.bias']
new['decoder']['15.groupnorm_2.weight'] = s['first_stage_model.decoder.up.1.block.0.norm2.weight']
new['decoder']['15.groupnorm_2.bias'] = s['first_stage_model.decoder.up.1.block.0.norm2.bias']
new['decoder']['15.conv_2.weight'] = s['first_stage_model.decoder.up.1.block.0.conv2.weight']
new['decoder']['15.conv_2.bias'] = s['first_stage_model.decoder.up.1.block.0.conv2.bias']
new['decoder']['15.residual_layer.weight'] = s['first_stage_model.decoder.up.1.block.0.nin_shortcut.weight']
new['decoder']['15.residual_layer.bias'] = s['first_stage_model.decoder.up.1.block.0.nin_shortcut.bias']
new['decoder']['16.groupnorm_1.weight'] = s['first_stage_model.decoder.up.1.block.1.norm1.weight']
new['decoder']['16.groupnorm_1.bias'] = s['first_stage_model.decoder.up.1.block.1.norm1.bias']
new['decoder']['16.conv_1.weight'] = s['first_stage_model.decoder.up.1.block.1.conv1.weight']
new['decoder']['16.conv_1.bias'] = s['first_stage_model.decoder.up.1.block.1.conv1.bias']
new['decoder']['16.groupnorm_2.weight'] = s['first_stage_model.decoder.up.1.block.1.norm2.weight']
new['decoder']['16.groupnorm_2.bias'] = s['first_stage_model.decoder.up.1.block.1.norm2.bias']
new['decoder']['16.conv_2.weight'] = s['first_stage_model.decoder.up.1.block.1.conv2.weight']
new['decoder']['16.conv_2.bias'] = s['first_stage_model.decoder.up.1.block.1.conv2.bias']
new['decoder']['17.groupnorm_1.weight'] = s['first_stage_model.decoder.up.1.block.2.norm1.weight']
new['decoder']['17.groupnorm_1.bias'] = s['first_stage_model.decoder.up.1.block.2.norm1.bias']
new['decoder']['17.conv_1.weight'] = s['first_stage_model.decoder.up.1.block.2.conv1.weight']
new['decoder']['17.conv_1.bias'] = s['first_stage_model.decoder.up.1.block.2.conv1.bias']
new['decoder']['17.groupnorm_2.weight'] = s['first_stage_model.decoder.up.1.block.2.norm2.weight']
new['decoder']['17.groupnorm_2.bias'] = s['first_stage_model.decoder.up.1.block.2.norm2.bias']
new['decoder']['17.conv_2.weight'] = s['first_stage_model.decoder.up.1.block.2.conv2.weight']
new['decoder']['17.conv_2.bias'] = s['first_stage_model.decoder.up.1.block.2.conv2.bias']
new['decoder']['19.weight'] = s['first_stage_model.decoder.up.1.upsample.conv.weight']
new['decoder']['19.bias'] = s['first_stage_model.decoder.up.1.upsample.conv.bias']
new['decoder']['10.groupnorm_1.weight'] = s['first_stage_model.decoder.up.2.block.0.norm1.weight']
new['decoder']['10.groupnorm_1.bias'] = s['first_stage_model.decoder.up.2.block.0.norm1.bias']
new['decoder']['10.conv_1.weight'] = s['first_stage_model.decoder.up.2.block.0.conv1.weight']
new['decoder']['10.conv_1.bias'] = s['first_stage_model.decoder.up.2.block.0.conv1.bias']
new['decoder']['10.groupnorm_2.weight'] = s['first_stage_model.decoder.up.2.block.0.norm2.weight']
new['decoder']['10.groupnorm_2.bias'] = s['first_stage_model.decoder.up.2.block.0.norm2.bias']
new['decoder']['10.conv_2.weight'] = s['first_stage_model.decoder.up.2.block.0.conv2.weight']
new['decoder']['10.conv_2.bias'] = s['first_stage_model.decoder.up.2.block.0.conv2.bias']
new['decoder']['11.groupnorm_1.weight'] = s['first_stage_model.decoder.up.2.block.1.norm1.weight']
new['decoder']['11.groupnorm_1.bias'] = s['first_stage_model.decoder.up.2.block.1.norm1.bias']
new['decoder']['11.conv_1.weight'] = s['first_stage_model.decoder.up.2.block.1.conv1.weight']
new['decoder']['11.conv_1.bias'] = s['first_stage_model.decoder.up.2.block.1.conv1.bias']
new['decoder']['11.groupnorm_2.weight'] = s['first_stage_model.decoder.up.2.block.1.norm2.weight']
new['decoder']['11.groupnorm_2.bias'] = s['first_stage_model.decoder.up.2.block.1.norm2.bias']
new['decoder']['11.conv_2.weight'] = s['first_stage_model.decoder.up.2.block.1.conv2.weight']
new['decoder']['11.conv_2.bias'] = s['first_stage_model.decoder.up.2.block.1.conv2.bias']
new['decoder']['12.groupnorm_1.weight'] = s['first_stage_model.decoder.up.2.block.2.norm1.weight']
new['decoder']['12.groupnorm_1.bias'] = s['first_stage_model.decoder.up.2.block.2.norm1.bias']
new['decoder']['12.conv_1.weight'] = s['first_stage_model.decoder.up.2.block.2.conv1.weight']
new['decoder']['12.conv_1.bias'] = s['first_stage_model.decoder.up.2.block.2.conv1.bias']
new['decoder']['12.groupnorm_2.weight'] = s['first_stage_model.decoder.up.2.block.2.norm2.weight']
new['decoder']['12.groupnorm_2.bias'] = s['first_stage_model.decoder.up.2.block.2.norm2.bias']
new['decoder']['12.conv_2.weight'] = s['first_stage_model.decoder.up.2.block.2.conv2.weight']
new['decoder']['12.conv_2.bias'] = s['first_stage_model.decoder.up.2.block.2.conv2.bias']
new['decoder']['14.weight'] = s['first_stage_model.decoder.up.2.upsample.conv.weight']
new['decoder']['14.bias'] = s['first_stage_model.decoder.up.2.upsample.conv.bias']
new['decoder']['5.groupnorm_1.weight'] = s['first_stage_model.decoder.up.3.block.0.norm1.weight']
new['decoder']['5.groupnorm_1.bias'] = s['first_stage_model.decoder.up.3.block.0.norm1.bias']
new['decoder']['5.conv_1.weight'] = s['first_stage_model.decoder.up.3.block.0.conv1.weight']
new['decoder']['5.conv_1.bias'] = s['first_stage_model.decoder.up.3.block.0.conv1.bias']
new['decoder']['5.groupnorm_2.weight'] = s['first_stage_model.decoder.up.3.block.0.norm2.weight']
new['decoder']['5.groupnorm_2.bias'] = s['first_stage_model.decoder.up.3.block.0.norm2.bias']
new['decoder']['5.conv_2.weight'] = s['first_stage_model.decoder.up.3.block.0.conv2.weight']
new['decoder']['5.conv_2.bias'] = s['first_stage_model.decoder.up.3.block.0.conv2.bias']
new['decoder']['6.groupnorm_1.weight'] = s['first_stage_model.decoder.up.3.block.1.norm1.weight']
new['decoder']['6.groupnorm_1.bias'] = s['first_stage_model.decoder.up.3.block.1.norm1.bias']
new['decoder']['6.conv_1.weight'] = s['first_stage_model.decoder.up.3.block.1.conv1.weight']
new['decoder']['6.conv_1.bias'] = s['first_stage_model.decoder.up.3.block.1.conv1.bias']
new['decoder']['6.groupnorm_2.weight'] = s['first_stage_model.decoder.up.3.block.1.norm2.weight']
new['decoder']['6.groupnorm_2.bias'] = s['first_stage_model.decoder.up.3.block.1.norm2.bias']
new['decoder']['6.conv_2.weight'] = s['first_stage_model.decoder.up.3.block.1.conv2.weight']
new['decoder']['6.conv_2.bias'] = s['first_stage_model.decoder.up.3.block.1.conv2.bias']
new['decoder']['7.groupnorm_1.weight'] = s['first_stage_model.decoder.up.3.block.2.norm1.weight']
new['decoder']['7.groupnorm_1.bias'] = s['first_stage_model.decoder.up.3.block.2.norm1.bias']
new['decoder']['7.conv_1.weight'] = s['first_stage_model.decoder.up.3.block.2.conv1.weight']
new['decoder']['7.conv_1.bias'] = s['first_stage_model.decoder.up.3.block.2.conv1.bias']
new['decoder']['7.groupnorm_2.weight'] = s['first_stage_model.decoder.up.3.block.2.norm2.weight']
new['decoder']['7.groupnorm_2.bias'] = s['first_stage_model.decoder.up.3.block.2.norm2.bias']
new['decoder']['7.conv_2.weight'] = s['first_stage_model.decoder.up.3.block.2.conv2.weight']
new['decoder']['7.conv_2.bias'] = s['first_stage_model.decoder.up.3.block.2.conv2.bias']
new['decoder']['9.weight'] = s['first_stage_model.decoder.up.3.upsample.conv.weight']
new['decoder']['9.bias'] = s['first_stage_model.decoder.up.3.upsample.conv.bias']
new['decoder']['23.weight'] = s['first_stage_model.decoder.norm_out.weight']
new['decoder']['23.bias'] = s['first_stage_model.decoder.norm_out.bias']
new['decoder']['25.weight'] = s['first_stage_model.decoder.conv_out.weight']
new['decoder']['25.bias'] = s['first_stage_model.decoder.conv_out.bias']
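
# VAE quant / post-quant 1x1 convolutions (last encoder stage, first decoder stage).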
new['encoder']['18.weight'] = s['first_stage_model.quant_conv.weight']
new['encoder']['18.bias'] = s['first_stage_model.quant_conv.bias']
new['decoder']['0.weight'] = s['first_stage_model.post_quant_conv.weight']
new['decoder']['0.bias'] = s['first_stage_model.post_quant_conv.bias']
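
# --- CLIP text encoder (cond_stage_model.transformer.text_model) ---
# Only each self-attention's out_proj is a 1:1 copy here; the q/k/v projections are
# presumably packed into a combined in_proj outside this excerpt.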
new['clip']['embedding.token_embedding.weight'] = s['cond_stage_model.transformer.text_model.embeddings.token_embedding.weight']
new['clip']['embedding.position_value'] = s['cond_stage_model.transformer.text_model.embeddings.position_embedding.weight']
new['clip']['layers.0.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.weight']
new['clip']['layers.0.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.self_attn.out_proj.bias']
new['clip']['layers.0.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm1.weight']
new['clip']['layers.0.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm1.bias']
new['clip']['layers.0.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.weight']
new['clip']['layers.0.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc1.bias']
new['clip']['layers.0.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.weight']
new['clip']['layers.0.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.mlp.fc2.bias']
new['clip']['layers.0.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.weight']
new['clip']['layers.0.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.0.layer_norm2.bias']
new['clip']['layers.1.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.weight']
new['clip']['layers.1.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.self_attn.out_proj.bias']
new['clip']['layers.1.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.weight']
new['clip']['layers.1.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm1.bias']
new['clip']['layers.1.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.weight']
new['clip']['layers.1.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc1.bias']
new['clip']['layers.1.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.weight']
new['clip']['layers.1.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.mlp.fc2.bias']
new['clip']['layers.1.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.weight']
new['clip']['layers.1.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.1.layer_norm2.bias']
new['clip']['layers.2.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.weight']
new['clip']['layers.2.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.self_attn.out_proj.bias']
new['clip']['layers.2.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.weight']
new['clip']['layers.2.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm1.bias']
new['clip']['layers.2.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.weight']
new['clip']['layers.2.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc1.bias']
new['clip']['layers.2.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.weight']
new['clip']['layers.2.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.mlp.fc2.bias']
new['clip']['layers.2.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.weight']
new['clip']['layers.2.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.2.layer_norm2.bias']
new['clip']['layers.3.attention.out_proj.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.weight']
new['clip']['layers.3.attention.out_proj.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.self_attn.out_proj.bias']
new['clip']['layers.3.layernorm_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.weight']
new['clip']['layers.3.layernorm_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm1.bias']
new['clip']['layers.3.linear_1.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.weight']
new['clip']['layers.3.linear_1.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc1.bias']
new['clip']['layers.3.linear_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.weight']
new['clip']['layers.3.linear_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.mlp.fc2.bias']
new['clip']['layers.3.layernorm_2.weight'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.weight']
new['clip']['layers.3.layernorm_2.bias'] = s['cond_stage_model.transformer.text_model.encoder.layers.3.layer_norm2.bias']
for i in range(4, 12):
    src = f'cond_stage_model.transformer.text_model.encoder.layers.{i}'
    for dstName, srcName in [
        ('attention.out_proj', 'self_attn.out_proj'),
        ('layernorm_1', 'layer_norm1'),
        ('linear_1', 'mlp.fc1'),
        ('linear_2', 'mlp.fc2'),
        ('layernorm_2', 'layer_norm2'),
    ]:
        for p in ('weight', 'bias'):
            new['clip'][f'layers.{i}.{dstName}.{p}'] = s[f'{src}.{srcName}.{p}']
new['clip']['layernorm.weight'] = s['cond_stage_model.transformer.text_model.final_layer_norm.weight']
new['clip']['layernorm.bias'] = s['cond_stage_model.transformer.text_model.final_layer_norm.bias']
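# The checkpoint stores the UNet self-attention query/key/value projections as
# separate attn1.to_q / to_k / to_v tensors; this codebase expects a single
# fused in_proj weight, so the three matrices are concatenated along dim 0: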
for dst, src in (
    [(f'unet.encoders.{i}.1', f'input_blocks.{i}.1') for i in (1, 2, 4, 5, 7, 8)]
    + [('unet.bottleneck.1', 'middle_block.1')]
    + [(f'unet.decoders.{i}.1', f'output_blocks.{i}.1') for i in range(3, 12)]
):
    attn = f'model.diffusion_model.{src}.transformer_blocks.0.attn1'
    new['diffusion'][f'{dst}.attention_1.in_proj.weight'] = torch.cat(
        (s[f'{attn}.to_q.weight'], s[f'{attn}.to_k.weight'], s[f'{attn}.to_v.weight']), 0)
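# The VAE mid-block attention uses 1x1 convolutions (weights of shape
# [512, 512, 1, 1]); concatenate q/k/v and reshape them to 2-D linear weights: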
new['encoder']['13.attention.in_proj.weight'] = torch.cat((s['first_stage_model.encoder.mid.attn_1.q.weight'], s['first_stage_model.encoder.mid.attn_1.k.weight'], s['first_stage_model.encoder.mid.attn_1.v.weight']), 0).reshape((1536, 512))
new['encoder']['13.attention.in_proj.bias'] = torch.cat((s['first_stage_model.encoder.mid.attn_1.q.bias'], s['first_stage_model.encoder.mid.attn_1.k.bias'], s['first_stage_model.encoder.mid.attn_1.v.bias']), 0)
new['encoder']['13.attention.out_proj.weight'] = s['first_stage_model.encoder.mid.attn_1.proj_out.weight'].reshape((512, 512))
new['decoder']['3.attention.in_proj.weight'] = torch.cat((s['first_stage_model.decoder.mid.attn_1.q.weight'], s['first_stage_model.decoder.mid.attn_1.k.weight'], s['first_stage_model.decoder.mid.attn_1.v.weight']), 0).reshape((1536, 512))
new['decoder']['3.attention.in_proj.bias'] = torch.cat((s['first_stage_model.decoder.mid.attn_1.q.bias'], s['first_stage_model.decoder.mid.attn_1.k.bias'], s['first_stage_model.decoder.mid.attn_1.v.bias']), 0)
new['decoder']['3.attention.out_proj.weight'] = s['first_stage_model.decoder.mid.attn_1.proj_out.weight'].reshape((512, 512))
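# The CLIP self-attention q/k/v projections are likewise fused into a single
# in_proj weight and bias per layer: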
for i in range(12):
    attn = f'cond_stage_model.transformer.text_model.encoder.layers.{i}.self_attn'
    for p in ('weight', 'bias'):
        new['clip'][f'layers.{i}.attention.in_proj.{p}'] = torch.cat(
            (s[f'{attn}.q_proj.{p}'], s[f'{attn}.k_proj.{p}'], s[f'{attn}.v_proj.{p}']), 0)
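
If it helps, here is one way the converted dictionaries could be written out at the end of the script, one file per sub-model; the output directory and the .pt naming are only illustrative, so adjust them to whatever your loader expects:

import os

outputDir = 'data'  # illustrative output directory
os.makedirs(outputDir, exist_ok=True)
for part, weights in new.items():
    # writes data/diffusion.pt, data/encoder.pt, data/decoder.pt, data/clip.pt
    torch.save(weights, os.path.join(outputDir, part + '.pt'))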

kjsman commented on August 23, 2024

Hello!

In short: the conversion scripts exist, but they are pure spaghetti and unpolished, so I don't want to publish them. (If you need them anyway, I can email them to you.) Recently I've been working on another project, so I can't guarantee when I'll polish and publish them...

data.zip was converted from the official SDv1.4 model.

The conversion scripts are incompatible with the SDv2 model and its variants; in fact, this repository as a whole is incompatible with SDv2 and its variants.

I believe this repository can be edited for SDv2.0 compatibility fairly easily; v1.4 and v2.0 differ only in hyperparameters (which are hardcoded and easy to change) and in the CLIP last-layer-skipping behavior (which is easy to implement — see the sketch below).

As long as you're familiar with PyTorch and willing to tackle these problems, I think it is fairly easy to use the v2.0 model with this codebase.
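
For reference, here is a minimal sketch of what "last-layer skipping" means, illustrated with the Hugging Face transformers CLIP text encoder rather than this repo's own CLIP class (SDv2 actually ships an OpenCLIP text encoder, so this only shows the mechanism): instead of the last layer's hidden state, the penultimate layer's hidden state is taken and the final layer norm is applied to it.

import torch
from transformers import CLIPTokenizer, CLIPTextModel

# Illustrative model name; SDv2 uses a different (OpenCLIP) text encoder.
name = 'openai/clip-vit-large-patch14'
tokenizer = CLIPTokenizer.from_pretrained(name)
text_model = CLIPTextModel.from_pretrained(name)

tokens = tokenizer(['a photograph of an astronaut riding a horse'],
                   padding='max_length', max_length=77, return_tensors='pt')
with torch.no_grad():
    out = text_model(**tokens, output_hidden_states=True)

context_v1 = out.last_hidden_state  # SDv1.x behaviour: last layer's output
# "Last-layer skipping": take the penultimate layer's hidden state and apply
# the final layer norm to it instead.
context_skip = text_model.text_model.final_layer_norm(out.hidden_states[-2])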

treeform commented on August 23, 2024

I would love to have them even if they are unpolished. You can email them to me at treeform a-t istrolid.com.
I find your repo the easiest to understand and the cleanest implementation of all the ones I have looked at.
Thanks!

treeform commented on August 23, 2024

Oops, I think I wrote my email wrong; treeform a-t istrolid.com is the correct one.

vgoklani commented on August 23, 2024

I'd like to see the conversion scripts too, and I'm offering to help clean them up! Could you please share the link? Thanks!

rnxyfvls commented on August 23, 2024

@treeform Would you mind giving permission to merge your code into this repository, under the MIT license, as part of pull request #16?

treeform commented on August 23, 2024

Sure, I don't mind.

Amna-pro commented on August 23, 2024

Can anyone guide me? If I train the model on my own dataset, can I then use this code to test it?
