
chatgpt-sentiment-evaluation's People

Contributors

balancedzx, grayground, rxiacn, sinclaircoder


chatgpt-sentiment-evaluation's Issues

Experiment design

Since ChatGPT does not provide an API and GPT-3.5 requires paid access, how did you run the experiments with ChatGPT? Did you enter the data manually, one example at a time?
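For reference, one way to avoid entering data by hand is to script the queries. The sketch below assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment; the prompt wording is only illustrative and is not necessarily what the authors used.

```python
# Minimal sketch of scripting sentiment queries against the OpenAI API.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment;
# the prompt here is illustrative, not the authors' actual prompt.
from openai import OpenAI

client = OpenAI()

def classify_sentiment(sentence: str) -> str:
    """Ask the model for a single sentiment label for one sentence."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "user",
                "content": (
                    "Classify the sentiment of the following sentence as "
                    f"positive, negative, or neutral.\nSentence: {sentence}\nLabel:"
                ),
            }
        ],
        temperature=0,  # deterministic output for evaluation
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_sentiment("The battery life of this laptop is amazing."))
```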

Performance Record Documents

Hi there!

Your work is excellent!

To manually check and understand the prediction results of different models, I am eager to find the documents storing relevant information.

I went through your work and read the code on GitHub. I understand the raw data (true labels) stored in the standard data files. However, there are many files named "50_test", "100_test", "train", "test", and "dev" under different folders, and I do not understand how they relate to each other. To my understanding, "50_test" and "100_test" are generated by extracting 50 and 100 lines from "dev", respectively.

I am trying to understand these files in order to find the documents that record the detailed prediction results of the different models on the different tasks. Could you kindly help me get a clearer picture of this?

I would greatly appreciate it if you could explain the logic behind the filenames, or tell me where I can find the documents storing the prediction results of the different models!
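If "50_test" and "100_test" really are subsets of "dev", they could be produced with something as simple as the sketch below. The file names, the one-example-per-line format, and the head-of-file sampling are all assumptions based on the description above, not details confirmed by the repository.

```python
# Hypothetical sketch of how a "50_test" / "100_test" subset could be drawn
# from a "dev" file with one example per line; the sampling strategy
# (first n lines vs. random sample) is an assumption.
import random

def make_subset(src_path: str, dst_path: str, n: int, shuffle: bool = False) -> None:
    with open(src_path, encoding="utf-8") as f:
        lines = f.readlines()
    if shuffle:
        random.seed(42)           # fixed seed for reproducibility
        subset = random.sample(lines, n)
    else:
        subset = lines[:n]        # simply take the first n lines
    with open(dst_path, "w", encoding="utf-8") as f:
        f.writelines(subset)

make_subset("dev", "50_test", 50)
make_subset("dev", "100_test", 100)
```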

Question about the number of test examples in the E2E-ABSA task

Hi, I noticed the statistics in Table 2 of your paper and found that the number of test instances for the E2E-ABSA task is inconsistent with the Sem14 test dataset. In Pontiki et al. (2014), the test sets for the laptop and restaurant domains are stated to contain 800 sentences each. However, Table 2 lists 339 and 496, while your paper says the entire test set was used. I am curious about this discrepancy.
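One way to double-check such counts is to tally the sentences directly from the official SemEval-2014 Task 4 XML files. The sketch below uses placeholder file names and also counts how many sentences contain at least one aspect term, since that subset can differ from the full 800 sentences.

```python
# Sanity-check sketch: count test sentences in a SemEval-2014 Task 4 XML file,
# and how many of them contain at least one aspect term.
# File name is a placeholder; the XML layout assumed here is the standard
# <sentences><sentence><text/><aspectTerms/></sentence></sentences> format.
import xml.etree.ElementTree as ET

def count_sentences(xml_path: str) -> tuple[int, int]:
    root = ET.parse(xml_path).getroot()
    sentences = root.findall("sentence")
    with_aspects = [s for s in sentences if s.find("aspectTerms") is not None]
    return len(sentences), len(with_aspects)

total, with_aspects = count_sentences("Laptops_Test_Gold.xml")
print(f"total sentences: {total}, sentences with aspect terms: {with_aspects}")
```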

Prompt for few-shot learning

Thank you for your interesting work.
Could you please share the prompt you used for the few-shot setting?
Thank you very much.
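For illustration only, a few-shot sentiment prompt is often assembled by prepending a handful of labelled demonstrations to the query sentence. The sketch below is a hypothetical example of that pattern, not the authors' actual prompt; the demonstrations and wording are invented.

```python
# Hypothetical few-shot prompt for sentence-level sentiment classification;
# the demonstrations and instruction wording are illustrative only.
demonstrations = [
    ("The food was delicious and the staff were friendly.", "positive"),
    ("The laptop crashed twice within the first hour.", "negative"),
    ("I bought this phone last week.", "neutral"),
]

def build_few_shot_prompt(sentence: str) -> str:
    lines = [
        "Classify the sentiment of each sentence as positive, negative, or neutral.",
        "",
    ]
    for text, label in demonstrations:
        lines.append(f"Sentence: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Sentence: {sentence}")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(build_few_shot_prompt("The screen is bright but the speakers are terrible."))
```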
