
Comments (4)

silviatti avatar silviatti commented on May 18, 2024 1

Thanks for opening a specific issue, because I had lost track of the question. Yes, I confirm that there's currently no way to load the unpreprocessed corpus.
As mentioned before, this would require rethinking how we pre-process the corpus and the format of the pre-processed corpus. This is currently a .tsv file with no header. The first two columns are mandatory: they contain the pre-processed text and the partition of the dataset to which the document belongs. An optional third column may contain the label associated with the document.

One possibility would be to add a column containing the unpreprocessed text. This column could be mandatory (although it isn't needed unless one uses CTM) or optional. If it's optional, it could create some confusion (how would we recognize that a given column holds the unpreprocessed text rather than, say, the labels?), unless we add a header to the .tsv file.
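To make the two formats concrete, here is a minimal sketch of the current header-less layout next to the proposed header variant. The column names in the header (`text`, `partition`, `label`, `unpreprocessed`) are hypothetical, not part of OCTIS:

```python
import csv
import io

# Current OCTIS format: header-less TSV.
# Column 0 = pre-processed text, column 1 = partition, optional column 2 = label.
current = "cat sat mat\ttrain\tanimals\ndog bark night\ttest\tanimals\n"
rows = list(csv.reader(io.StringIO(current), delimiter="\t"))
docs = [{"text": r[0], "partition": r[1], "label": r[2] if len(r) > 2 else None}
        for r in rows]

# Hypothetical extension: a header row names each column, so an optional
# "unpreprocessed" column can be recognized unambiguously.
proposed = ("text\tpartition\tlabel\tunpreprocessed\n"
            "cat sat mat\ttrain\tanimals\tThe cat sat on the mat.\n")
with_header = list(csv.DictReader(io.StringIO(proposed), delimiter="\t"))
```

With a header, a loader can check for the `unpreprocessed` key instead of guessing by column position.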

Happy to discuss if you want. Unfortunately, the time I can dedicate to this project has been reduced lately, so I may be slow to respond. However, I think OCTIS can be useful for the community and I'm trying to keep it alive :)

from octis.

silviatti avatar silviatti commented on May 18, 2024 1

Thanks Roberta! :)

Yes, that's correct.

My suggestion is to first try hyperparameter configurations that "usually" work well. You can find some reference values in these papers:

Moreover, make sure you select an appropriate pre-trained model for generating the contextualized representations of the documents. In this paper, we noticed that this choice has an impact on the results. Pre-processing is also quite important: CTM seems to work better with smaller vocabularies.
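The vocabulary-size point can be illustrated with a simple document-frequency cutoff, a common pre-processing step (the threshold here is arbitrary, just for illustration):

```python
from collections import Counter

# Toy pre-processed corpus as token lists.
docs = [["cat", "sat", "mat"], ["cat", "ran"], ["dog", "sat"]]
counts = Counter(tok for doc in docs for tok in doc)

# Keep only tokens appearing at least min_count times: raising the
# threshold yields the smaller vocabularies that CTM tends to prefer.
min_count = 2
vocab = {tok for tok, c in counts.items() if c >= min_count}
filtered = [[tok for tok in doc if tok in vocab] for doc in docs]
```

Raising `min_count` shrinks the bag-of-words dimensionality without touching the raw text fed to the transformer.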

Hope it helps :)

Silvia


rbroc avatar rbroc commented on May 18, 2024

Following up on this because I stumbled on the same issue (I think) and want to double-check that I understand correctly.

I need to do hyperparameter optimization + model comparison for multiple CTMs, and I want to pass the unpreprocessed text to the transformer part of the pipeline while passing the processed text to the neural topic model.

It seems that this is not supported here at the moment, so I have to stick to manually trying different HP combinations and computing metrics through https://github.com/MilaNLProc/contextualized-topic-models, correct?
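For context, the dual-input workflow being asked about boils down to keeping two parallel views of the corpus aligned index-by-index. A minimal stdlib sketch (variable names are illustrative; in the contextualized-topic-models library this roughly corresponds to the `text_for_contextual` / `text_for_bow` arguments of its data-preparation step, if I read its docs correctly):

```python
# Illustrative sketch, not OCTIS API: each document has a raw view
# (fed to the transformer) and a pre-processed view (fed to the
# bag-of-words topic model), and the two must stay aligned.
raw_docs = ["The cat sat on the mat!", "Dogs were barking all night."]
bow_docs = ["cat sat mat", "dog bark night"]

assert len(raw_docs) == len(bow_docs), "views must be parallel"

# Pair them so any shuffling or train/test splitting preserves alignment.
paired = list(zip(raw_docs, bow_docs))
```

Any partition-aware loader would need to carry both columns through the split together, which is what the header-based .tsv proposal above would enable.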

Amazing work, by the way 🙏


rbroc avatar rbroc commented on May 18, 2024

thanks for the super quick reply and the pointers! :)

