
Comments (24)

hayoung-jeremy commented on June 13, 2024

It works perfectly fine!
Thank you so much for your help @kunalkathare, I really appreciate it!
I needed to generate a dataset of 100 GLB files, so I created a process_glb_files.sh file that automatically runs the Blender script command for each file!
Sharing it here for anyone who needs it:

#!/bin/bash
DIRECTORY="./data" # replace it with your data folder path containing glb files

for glb_file in "$DIRECTORY"/*.glb; do
  echo "Processing $glb_file"
  blender -b -P scripts/data/objaverse/blender_script.py -- --object_path "$glb_file"
done
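
If sequential rendering is too slow, something like the Python sketch below runs a few Blender processes in parallel. It assumes the same ./data folder, repo layout, and --object_path flag as the script above; max_workers and the failure summary are illustrative additions, not part of OpenLRM.

import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

DATA_DIR = Path("./data")  # replace with the folder containing your .glb files
SCRIPT = "scripts/data/objaverse/blender_script.py"

def render(glb_path: Path) -> int:
    # Same command as the bash loop above, one Blender process per file.
    print(f"Processing {glb_path}")
    proc = subprocess.run(
        ["blender", "-b", "-P", SCRIPT, "--", "--object_path", str(glb_path)],
        check=False,
    )
    return proc.returncode

glb_files = sorted(DATA_DIR.glob("*.glb"))
with ThreadPoolExecutor(max_workers=2) as pool:  # tune to your CPU/GPU budget
    codes = list(pool.map(render, glb_files))

failed = [str(p) for p, c in zip(glb_files, codes) if c != 0]
print(f"Done: {len(glb_files) - len(failed)} succeeded, {len(failed)} failed {failed}")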

kunalkathare commented on June 13, 2024

Hi,
Please follow the data preparation instructions here.
You may need to use other scripts and follow the instructions here for distributed rendering: https://github.com/allenai/objaverse-rendering. mathutils and bpy are built-in packages bundled with Blender.

Thank you

kunalkathare commented on June 13, 2024

Just to clarify, blender_script.py works only for one object, right? Can you please also tell me the format of the JSON file that I have to specify in the train sample config file?

Yes, blender_script.py is for one object only.
The JSON file just contains a list of object IDs, e.g. ["xxx", "yyy"].
You may refer to this comment #26 (comment) for the data structure.

Thanks a lot 😄

kunalkathare commented on June 13, 2024

One possible workaround here is to modify https://github.com/3DTopia/OpenLRM/blob/main/openlrm/runners/train/lrm.py#L418-L422.

self.log_images({
    f'Images_split{split}/rendered': renders.unsqueeze(0),
    f'Images_split{split}/gt': gts.unsqueeze(0),
    f'Images_merged{split}': merged.unsqueeze(0),
}, log_progress, {})

Just pass an explicit empty {} as the last argument to the self.log_images method.

It works now, thanks

kunalkathare commented on June 13, 2024

Hi,

I've updated some docs here https://github.com/3DTopia/OpenLRM/tree/main?tab=readme-ov-file#inference-on-trained-models.

Please try python scripts/convert_hf.py --config <YOUR_EXACT_TRAINING_CONFIG> convert.global_step=null, which will convert the last training checkpoint by setting convert.global_step to null.

Works now 😄

kunalkathare commented on June 13, 2024

Thank you for the kind reply!
You mean I have to run the blender_script.py from this repository, not the one in the Objaverse Rendering repository, right?
I'll try it again, thank you so much!

Yes

ZexinHe commented on June 13, 2024

Hi,
Please follow the data preparation instructions here.
You may need to use other scripts and follow the instructions here for distributed rendering: https://github.com/allenai/objaverse-rendering. mathutils and bpy are built-in packages bundled with Blender.

kunalkathare commented on June 13, 2024

Just to clarify, blender_script.py works only for one object, right?
Can you please also tell me the format of the JSON file that I have to specify in the train sample config file?

ZexinHe commented on June 13, 2024

Just to clarify, blender_script.py works only for one object, right? Can you please also tell me the format of the JSON file that I have to specify in the train sample config file?

Yes, blender_script.py is for one object only.
The JSON file just contains a list of object IDs, e.g. ["xxx", "yyy"].
You may refer to this comment #26 (comment) for the data structure.
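
For reference, here is a minimal, hedged sanity-check sketch based on the layout described in this thread (one views/<uid>/ folder per object holding rgba, pose, and intrinsics.npy, plus a JSON list of UIDs). The file names "views" and "uids.json" follow the thread's examples, not an official spec.

import json
from pathlib import Path

root = Path("views")                 # root dir of the rendered data
with open("uids.json") as f:         # the JSON list of object ids, e.g. ["xxx", "yyy"]
    uids = json.load(f)

for uid in uids:
    obj_dir = root / uid
    missing = [name for name in ("rgba", "pose", "intrinsics.npy")
               if not (obj_dir / name).exists()]
    if missing:
        print(f"{uid}: missing {missing}")
print(f"Checked {len(uids)} objects.")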

kunalkathare commented on June 13, 2024

@ZexinHe whenever I try to train, I get the following error during [TRAIN STEP] at the 100th iteration. I tried changing the values of accum_steps and epochs in the YAML file, but the error occurs at every 100th iteration.
File "/home/Kunal/OpenLRM/openlrm/runners/train/lrm.py", line 418, in log_image_monitor
    self.log_images({
TypeError: Trainer.log_images() missing 1 required positional argument: 'log_kwargs'

Also, what should I put for distributed_type in accelerate-train.yaml if I'm using a single GPU?

I'm using one NVIDIA A6000.

ZexinHe commented on June 13, 2024

Hi,

It looks like a usage problem with the self.log_images method. Could you please provide more details about your environment? I don't see this problem in mine.

kunalkathare commented on June 13, 2024

Hi,

It looks like a usage problem with the self.log_images method. Could you please provide more details about your environment? I don't see this problem in mine.

Ubuntu 22
Python 3.10
Using a Python virtual environment

ZexinHe commented on June 13, 2024

One possible workaround here is to modify https://github.com/3DTopia/OpenLRM/blob/main/openlrm/runners/train/lrm.py#L418-L422.

self.log_images({
    f'Images_split{split}/rendered': renders.unsqueeze(0),
    f'Images_split{split}/gt': gts.unsqueeze(0),
    f'Images_merged{split}': merged.unsqueeze(0),
}, log_progress, {})

Just pass an explicit empty {} as the last argument to the self.log_images method.
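
As a hedged illustration of why this works (the stub below is not the real OpenLRM code, it only mirrors the error message): the TypeError says log_images() is missing the positional argument log_kwargs, so the method expects a third argument after the image dict and log_progress, and passing an empty {} satisfies it.

class Trainer:
    # Stub with the shape implied by the error message, for illustration only.
    def log_images(self, images: dict, log_progress, log_kwargs: dict):
        print(f"logging {len(images)} image groups, progress={log_progress}, kwargs={log_kwargs}")

Trainer().log_images({"Images_split-train/rendered": None}, log_progress=100, log_kwargs={})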

kunalkathare commented on June 13, 2024

Hey @ZexinHe, I'm not able to run the convert_hf.py file, I'm getting the following error:

omegaconf.errors.ConfigAttributeError: Missing key convert
    full_key: convert
    object_type=dict

My command: python scripts/convert_hf.py --config "/home/Kunal/OpenLRM/configs/train-sample.yaml"

Also, can I download the Objaverse renderings used in training from anywhere? It takes a lot of time for me to produce them through Objaverse.
And also, how many objects was the small model on Hugging Face (Objaverse only) trained on, how long did it take to train, and on which and how many GPUs?

ZexinHe commented on June 13, 2024

Hi,

I've updated some docs here https://github.com/3DTopia/OpenLRM/tree/main?tab=readme-ov-file#inference-on-trained-models.

Please try python scripts/convert_hf.py --config <YOUR_EXACT_TRAINING_CONFIG> convert.global_step=null, which will convert the last training checkpoint by setting convert.global_step to null.

hayoung-jeremy commented on June 13, 2024

Hi @kunalkathare, thank you for the post, it was very helpful.
By the way, since I'm very new to AI, I don't know how to properly prepare the data and run training.
I've prepared images through Objaverse Rendering, but I'm not sure how to deal with the rest.
I've posted a question in this issue.
Could you please check it once when you have time?
Thank you in advance.

kunalkathare commented on June 13, 2024

Hi @hayoung-jeremy, blender_script.py renders the pose, rgba, and intrinsics for each object, so you have to run this script once per object. Everything is stored under the views/ folder, which is the root dir in the train sample YAML file. The JSON files should contain the list of object UIDs, ["xxx","yyy",...].

The Blender script stores these three things under views/<uid-of-object> for each of your objects.

First install Blender, then run this command for each object:
blender -b -P blender_script.py -- --object_path "your path to object"
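
Once the renders exist, a minimal sketch like the one below can build that JSON list of UIDs from the views/ folder (assuming the views/<uid>/ layout described above; the output filename uids.json is just an example, point your train config at whatever name you actually use).

import json
from pathlib import Path

views_dir = Path("views")  # root dir holding one sub-folder per rendered object
uids = sorted(p.name for p in views_dir.iterdir() if p.is_dir())

with open("uids.json", "w") as f:
    json.dump(uids, f)

print(f"Wrote {len(uids)} uids, e.g. {uids[:2]}")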

hayoung-jeremy commented on June 13, 2024

Thank you for the kind reply!
You mean I have to run the blender_script.py from this repository, not the one in the Objaverse Rendering repository, right?
I'll try it again, thank you so much!

hayoung-jeremy commented on June 13, 2024

Hi @kunalkathare, I was able to successfully run the training code.
However, I encountered an error after executing it, and I'm not sure what I did wrong, so I am leaving a question here.
I generated a dataset of 100 pairs, each containing a high-quality GLB file processed through OpenLRM's blender_script.py, and each pair includes rgba, pose, and intrinsics.npy information.
Afterwards, I ran the training code on Runpod's multi-GPU instance, which has four A100 SXM GPUs with 80 GB of VRAM each.

I needed to set up the config file, but it was difficult to adjust because I don't understand what each item means, so I only specified the path and the number of GPUs before proceeding with the training as is.

Would it be meaningless to proceed with training using a dataset of 100 pairs?
When I tried with only 15 pairs, there wasn’t enough data, and it failed.
It would be helpful to know the minimum amount of data required and how to adjust the config file accordingly to match the amount of data.
Here is the link to the issue; I would appreciate it if you could check it when you have time.
Thank you.

Mrguanglei commented on June 13, 2024

It works perfectly fine! Thank you so much for your help @kunalkathare, I really appreciate it! I needed to generate a dataset of 100 GLB files, so I created a process_glb_files.sh file that automatically runs the Blender script command for each file! Sharing it here for anyone who needs it:

#!/bin/bash
DIRECTORY="./data" # replace it with your data folder path containing glb files

for glb_file in "$DIRECTORY"/*.glb; do
  echo "Processing $glb_file"
  blender -b -P scripts/data/objaverse/blender_script.py -- --object_path "$glb_file"
done

Hello! I also encountered this problem: mathutils and bpy show up as unresolved (red) import errors for me. May I ask how you solved it? Your help will be greatly appreciated.

Mrguanglei commented on June 13, 2024

@hayoung-jeremy Hello! I also encountered this problem: mathutils and bpy show up as unresolved (red) import errors for me. May I ask how you solved it? Your help will be greatly appreciated.

kunalkathare commented on June 13, 2024

It works perfectly fine! Thank you so much for your help @kunalkathare, I really appreciate it! I needed to generate a dataset of 100 GLB files, so I created a process_glb_files.sh file that automatically runs the Blender script command for each file! Sharing it here for anyone who needs it:

#!/bin/bash
DIRECTORY="./data" # replace it with your data folder path containing glb files

for glb_file in "$DIRECTORY"/*.glb; do
  echo "Processing $glb_file"
  blender -b -P scripts/data/objaverse/blender_script.py -- --object_path "$glb_file"
done

Hello! I also encountered this problem: mathutils and bpy show up as unresolved (red) import errors for me. May I ask how you solved it? Your help will be greatly appreciated.

Hi,
Please follow the data preparation instructions here.
You may need to use other scripts and follow the instructions here for distributed rendering: https://github.com/allenai/objaverse-rendering. mathutils and bpy are built-in packages bundled with Blender.
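
In other words, the red/unresolved imports in a normal editor or virtual environment are expected, because bpy and mathutils only exist inside Blender's bundled Python. A hedged sanity check (the script name below is just an example):

# check_blender_modules.py - run it through Blender itself, e.g.:
#   blender -b -P check_blender_modules.py
import bpy
import mathutils

print("Blender", bpy.app.version_string)
print("mathutils OK:", mathutils.Vector((1.0, 0.0, 0.0)))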

Mrguanglei commented on June 13, 2024

It works perfectly fine! Thank you so much for your help @kunalkathare, I really appreciate it! I needed to generate a dataset of 100 GLB files, so I created a process_glb_files.sh file that automatically runs the Blender script command for each file! Sharing it here for anyone who needs it:

#!/bin/bash
DIRECTORY="./data" # replace it with your data folder path containing glb files

for glb_file in "$DIRECTORY"/*.glb; do
  echo "Processing $glb_file"
  blender -b -P scripts/data/objaverse/blender_script.py -- --object_path "$glb_file"
done

Hello! I also encountered this problem: mathutils and bpy show up as unresolved (red) import errors for me. May I ask how you solved it? Your help will be greatly appreciated.

Hi, please follow the data preparation instructions here. You may need to use other scripts and follow the instructions here for distributed rendering: https://github.com/allenai/objaverse-rendering. mathutils and bpy are built-in packages bundled with Blender.

Do you mean to use the blender_script from this URL instead of the one in OpenLRM? There is a lot I don't understand yet since I'm just getting started. Thank you for your help.

kunalkathare commented on June 13, 2024

It works perfectly fine! Thank you so much for your help @kunalkathare, I really appreciate it! I needed to generate a dataset of 100 GLB files, so I created a process_glb_files.sh file that automatically runs the Blender script command for each file! Sharing it here for anyone who needs it:

#!/bin/bash
DIRECTORY="./data" # replace it with your data folder path containing glb files

for glb_file in "$DIRECTORY"/*.glb; do
  echo "Processing $glb_file"
  blender -b -P scripts/data/objaverse/blender_script.py -- --object_path "$glb_file"
done

Hello! I also encountered this problem: mathutils and bpy show up as unresolved (red) import errors for me. May I ask how you solved it? Your help will be greatly appreciated.

Hi, please follow the data preparation instructions here. You may need to use other scripts and follow the instructions here for distributed rendering: https://github.com/allenai/objaverse-rendering. mathutils and bpy are built-in packages bundled with Blender.

Do you mean to use the blender_script from this URL instead of the one in OpenLRM? There is a lot I don't understand yet since I'm just getting started. Thank you for your help.

Use the one from OpenLRM.
