joinable's People

Contributors

karldd


joinable's Issues

Problems with multi-part assembly in the paper

Hello,

First of all, thanks for sharing such great work.

I am trying to reproduce the multi-part assembly part of the paper (Sec. 6 and Appendix A.3.1). I first used the Assembly Dataset - Joint Data and the joint2cad script to build the joint parts, and then tried to perform the joint axis prediction task.

To perform the joint axis prediction task, I referred to the code of the regraph script to extract features from the joint parts. However, I found that after I import a joint part into Fusion 360, it is no longer a single part, so I cannot use the regraph script to extract features such as the types of surfaces and curves. Specifically, after importing an assembly's STEP file, the object appears to have no solid bodies. My Fusion 360 script is as follows:

import adsk.core
import adsk.fusion

app = adsk.core.Application.get()
adsk.doEvents()

# Close any open documents without saving
for doc in app.documents:
    doc.close(False)
design = adsk.fusion.Design.cast(app.activeProduct)

# Create and activate a new empty component for the reconstruction
reconstruction = design.rootComponent.occurrences.addNewComponent(adsk.core.Matrix3D.create())
reconstruction.activate()
adsk.doEvents()

# Import the STEP file (file_path is defined elsewhere) into the root component
import_options = app.importManager.createSTEPImportOptions(file_path)
import_options.isViewFit = False
imported_designs = app.importManager.importToTarget2(import_options, design.rootComponent)
target = imported_designs[0]

With this code, there are no objects in target.bRepBodies, which prevents me from getting the features required by the JoinABLe model. I have uploaded the STEP file to this Google Drive link, and attached a screenshot of the debug information.


Since I am not familiar with the Fusion 360 API, I only lightly modified the existing script file. Is there something wrong with my script, or does assembly feature extraction require a different method? Could you also share some more details about the multi-part assembly setup?
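For reference, and continuing from the script above, here is a minimal sketch of what I would try next, assuming the imported bodies live in nested child occurrences rather than directly on the top-level occurrence (this is only my guess, not something from the regraph script):

# Collect solid bodies from every occurrence in the (possibly nested) assembly
bodies = []
for occurrence in design.rootComponent.allOccurrences:
    for body in occurrence.bRepBodies:
        bodies.append(body)
print(len(bodies))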

Thanks in advance

Labeling and loss function problems for joint axis prediction

Hi, I would like to ask why the label in the joint axis prediction task is an m × n all-ones matrix. Shouldn't the labels provided in the dataset be the entities of the joint that connects the two parts? My understanding is that joint axis prediction predicts paired B-Rep entities (faces or edges) between a pair of parts, so if a B-Rep entity i of part 1 and a B-Rep entity j of part 2 are paired to form the target joint, shouldn't only entry (i, j) of the m × n label matrix be set to a prominent value? How can an all-ones matrix be optimized for joint axis prediction? Why are the joint entities of the two parts not distinguished from all other entity pairs in the label?
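To illustrate my question, here is the kind of sparse label I would have expected instead (a hypothetical numpy sketch; the sizes and indices are made up):

import numpy as np

m, n = 5, 4                # B-Rep entity counts of part 1 and part 2
label = np.zeros((m, n))   # zeros everywhere ...
label[2, 1] = 1            # ... except at the designer-selected entity pair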

Sequence issues in future applications

This is really cool work; thank you for open-sourcing it.
In the future applications section of the paper you give some demonstrations of multi-part assembly. I was interested in this part, so I downloaded your open-source assembly dataset, but unfortunately I was not able to get the sequence information about the multi-part assembly from the dataset's assembly.json file. If it is convenient for you, could you give me some tips on how to extract the assembly sequence from the information in the assembly dataset?
Thank you!

Creating the `conda` environment from the yml file takes forever

Hello,

Creating the conda environment with the command conda env create -f environment.yml takes forever and hangs at "Solving environment", even with the setting conda config --set channel_priority flexible. Is there a way to create the conda environment more quickly? I also tried creating the environment the default way and installing the packages individually, but there are many conflicts due to "No matching distribution" errors.

Below is the Terminal output:

conda env create -f environment.yml
Collecting package metadata (repodata.json): \ WARNING conda.models.version:get_matcher(556): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.8.0.*, but conda is ignoring the .* and treating it as 1.8.0
WARNING conda.models.version:get_matcher(556): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.9.0.*, but conda is ignoring the .* and treating it as 1.9.0
WARNING conda.models.version:get_matcher(556): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.7.1.*, but conda is ignoring the .* and treating it as 1.7.1
WARNING conda.models.version:get_matcher(556): Using .* with relational operator is superfluous and deprecated and will be removed in a future version of conda. Your spec was 1.6.0.*, but conda is ignoring the .* and treating it as 1.6.0
done
Solving environment:
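One workaround I am considering, assuming the libmamba solver is acceptable here (I have not yet verified it against this exact environment.yml), is to switch conda to the faster solver:

conda install -n base conda-libmamba-solver
conda config --set solver libmamba
conda env create -f environment.yml

Installing mamba and running mamba env create -f environment.yml might help in the same way.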

Using JoinABLe with other datasets and other 3D formats

Hi @karldd,

thanks for this open-source work and code; it seems very promising. I am trying to reproduce your code and would be interested in checking whether I could adapt it to different datasets, such as the Breaking Bad dataset. As I understand it, the 3D models need to be represented in B-Rep format. So I have a couple of questions:

  1. Is there a way to obtain a B-Rep representation from .obj files, so that I can train the model on a new dataset or even run the pretrained models on the new dataset?
  2. Can you elaborate a bit more on the weak supervision part? I am not sure I understand what it means in practice.
  3. How different would it be to adapt the same approach to point clouds?

Thanks.

Dataset loading issues

Hi, thanks for the open-source code; it's great work. I am trying to reproduce your results, but when I load the training dataset, the system's 16 GB of RAM fills up, causing the computer to crash with "process finished with exit code 137 (interrupted by signal 9: SIGKILL)". I am not sure how to proceed.
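In case it helps frame the question, here is a minimal sketch of the kind of lazy loading I have in mind as a workaround (hypothetical; LazyJointDataset and the *.json file layout are placeholders of my own, not from the repository):

import json
from pathlib import Path
from torch.utils.data import Dataset

class LazyJointDataset(Dataset):
    """Parse one sample per __getitem__ call instead of caching everything in RAM."""

    def __init__(self, root):
        self.files = sorted(Path(root).glob("*.json"))  # assumed file layout

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        with open(self.files[idx]) as f:
            return json.load(f)  # load a single sample on demand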

Joint prediction problem

Hello, this project is really cool!
I am trying to reproduce your results, but I am having some problems understanding the joint axis prediction section from the paper and the code, and would appreciate your help.
The specific questions are as follows: the EdgeMLPMPN part of the model outputs a vector of size (G1 nodes) × (G2 nodes); does this vector represent a binary result of whether each node in G1 is adjacent to each node in G2? What does adjacency indicate here? Contact? Is a softmax operation applied to the output at test time to distinguish the pair of B-Rep face or edge entities chosen by the designer?
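To make my question concrete, here is a hypothetical sketch of what I imagine happens at test time (the names, shapes, and the flattened softmax are my assumptions, not taken from the repository):

import torch

m, n = 5, 4                              # node counts of graphs G1 and G2
logits = torch.randn(m, n)               # one score per (G1 node, G2 node) pair
probs = torch.softmax(logits.flatten(), dim=0).view(m, n)
i, j = divmod(probs.argmax().item(), n)  # most likely joint entity pair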
Thank you as always, and have a nice day!

Confusion regarding the meaning of "reversed" edges in the model features

Firstly, I would like to express my gratitude to the authors for their excellent research.

I am confused about the interpretation of the "reversed" attribute of edges in the model features. The JoinABLe paper mentions that "reversed" refers to the edge being opposite to the curve it is associated with. However, in my understanding, edges do not inherently possess a direction; their direction is determined by the curve they are associated with. Therefore, I am puzzled about what "reversed" means in this context. Could someone please clarify its significance?
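For context, here is how I currently picture the flag, using pythonocc-core as a stand-in B-Rep kernel (this is my own assumption for illustration; JoinABLe's feature extraction may define it differently):

from OCC.Core.BRepBuilderAPI import BRepBuilderAPI_MakeEdge
from OCC.Core.gp import gp_Pnt
from OCC.Core.TopAbs import TopAbs_REVERSED

# A B-Rep edge carries an orientation flag relative to the parameterization
# of its underlying curve; "reversed" flips the traversal direction of the
# edge without changing the curve itself.
edge = BRepBuilderAPI_MakeEdge(gp_Pnt(0, 0, 0), gp_Pnt(1, 0, 0)).Edge()
print(edge.Orientation() == TopAbs_REVERSED)             # False
print(edge.Reversed().Orientation() == TopAbs_REVERSED)  # True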

Any insights or explanations regarding the interpretation of "reversed" would be greatly appreciated. Thank you for your assistance!

Performance comparison with BRepNet encoder

Hello, and thank you for releasing this comprehensive code! 😄

I've read the JoinABLe supplementary material, and I was wondering why your baseline experiments do not include a model with the encoder from BRepNet (minus the layer for face classification used in the original architecture), in the same way that you use UV-Net as a baseline encoder.

Is that something you tried and decided not to include in the paper, or is there a specific reason why you haven't given it a try?

I may try it myself, but I noticed that BRepNet takes .step parts as input, and I'm afraid it won't preserve the critical topological entity order used in the part .json files and in the joint set annotations.

Thank you as always, and have a nice day!
