neulab / code-bert-score

CodeBERTScore: an automatic metric for code generation, based on BERTScore

License: MIT License
The current code_bert_score handles the case where 'cands' or 'refs' contains empty strings, as long as no 'sources' argument is passed to the score method. See the example below:

    from code_bert_score import score
    score([''], ['a'], lang="python")

However, when 'sources' is provided, the method raises an IndexError:

    from code_bert_score import score
    score([''], ['a'], sources=["a"], lang="python")

It would be great if these cases could be handled.
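Until the library handles this internally, the empty-candidate case could be guarded on the caller's side. This is a hypothetical sketch, not part of the code_bert_score API: it takes any scoring callable, skips empty candidates (and their paired refs/sources), and fills in 0.0 for them.

```python
def score_with_empty_guard(score_fn, cands, refs, sources):
    """Call score_fn only on non-empty candidates; empty ones get 0.0.

    score_fn is any callable taking (cands, refs, sources) and returning
    one float per candidate -- e.g. a thin wrapper around
    code_bert_score.score that extracts the F-measure list.
    """
    keep = [i for i, c in enumerate(cands) if c.strip()]
    results = [0.0] * len(cands)
    if keep:  # only invoke the scorer when there is something to score
        sub = score_fn([cands[i] for i in keep],
                       [refs[i] for i in keep],
                       [sources[i] for i in keep])
        for i, s in zip(keep, sub):
            results[i] = s
    return results
```

Assigning 0.0 to an empty candidate matches the intuition that an empty generation earns no credit, but any other sentinel could be substituted.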
When I enter a long code snippet, I get an error like this: "IndexError: The shape of the mask [425] at index 0 does not match the shape of the indexed tensor [413, 768] at index 0". Does this mean that the maximum input length supported by the model configuration is 413 tokens? I was wondering if the API could be refined to support arbitrary lengths or to truncate automatically.
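As a stopgap while waiting for tokenizer-level truncation in the library, inputs could be capped before scoring. The sketch below is a crude approximation that counts whitespace-separated tokens; a proper fix would truncate with the model's own tokenizer (e.g. passing truncation=True and a max_length to a Hugging Face tokenizer), since model tokens do not align with whitespace tokens. The name and the default limit are illustrative.

```python
def truncate_code(code, max_tokens=400):
    """Crudely cap a snippet at max_tokens whitespace-separated tokens.

    This only approximates the model's subword token count; it is meant
    as a workaround sketch, not the library's behavior.
    """
    toks = code.split()
    if len(toks) <= max_tokens:
        return code  # short enough, leave untouched
    return " ".join(toks[:max_tokens])
```

Note that whitespace truncation can still overshoot the model limit when identifiers split into many subwords, so a conservative max_tokens is advisable.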
I came across the paper CodeBERTScore: Evaluating Code Generation with Pretrained Models of Code which mentions that CodeBERTScore is evaluated on CoNaLa.
Human Preference Experiments

We evaluate on CoNaLa (Yin et al., 2018a), a natural-language-context to Python code generation benchmark collected from StackOverflow. We use the human annotations released by Evtikhiev et al. (2022) to measure the correlation between each metric and human preference. For each example, Evtikhiev et al. (2022) asked experienced software developers to grade the generated code snippets from five different models. The grade scales from zero to four, with zero meaning that the generated code is irrelevant and unhelpful, and four meaning that the generated code solves the problem accurately. Overall, there are 2,860 annotated code snippets (5 generations × 472 examples), where each snippet is graded by 4.5 annotators on average.
I checked ./evaluation, but only found the HumanEval dataset. Would you be kind enough to share the code used for this evaluation?
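For reference while the original evaluation code is unavailable, the metric-vs-human correlation described above can be sketched with a plain Kendall rank correlation over paired score lists. This is an illustrative implementation of tau-a (ties count as neither concordant nor discordant); the paper's setup may differ, and a real evaluation would more likely use scipy.stats.kendalltau (tau-b).

```python
from itertools import combinations

def kendall_tau(metric_scores, human_grades):
    """Kendall tau-a between a metric's scores and human grades.

    Counts concordant vs. discordant pairs over all index pairs and
    normalizes by the total number of pairs n*(n-1)/2.
    """
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(metric_scores, human_grades), 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    n = len(metric_scores)
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A perfectly aligned metric yields 1.0, a perfectly reversed one -1.0.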
I get the following warning when using the code below:

    from code_bert_score import score

    def compute_code_bert_nl(candidates: list[str], references: list[list[str]]):
        return score(candidates, references, lang="en",
                     rescale_with_baseline=True)[-1].tolist()

Warning: Baseline not Found for microsoft/codebert-base-mlm on en at c:\Python39\lib\site-packages\code_bert_score\rescale_baseline/en/microsoft/codebert-base-mlm.tsv

Can you tell me why this happens and what I can do to fix it? I have been following your notebook example and use the same setup. Also, thanks: I enjoyed going through your paper and found it really cool that you have such a neat setup on GitHub! Really appreciate the effort, and thanks for looking at my question!
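One likely cause, based on the path in the warning, is that rescaling baselines are shipped as per-language TSV files inside the package, and no baseline file exists for this model/language pair; this is an assumption, not a confirmed diagnosis. A defensive caller could check for the file and fall back to un-rescaled scores. The helper name and directory layout below are illustrative, mirroring the path shown in the warning.

```python
import os

def has_rescale_baseline(model_type, lang, baseline_dir):
    """Return True if a rescaling baseline TSV exists for this
    model/language pair, assuming the <baseline_dir>/<lang>/<model>.tsv
    layout seen in the warning message."""
    path = os.path.join(baseline_dir, lang, f"{model_type}.tsv")
    return os.path.exists(path)

# usage sketch (BASELINE_DIR points at the package's rescale_baseline dir):
# rescale = has_rescale_baseline("microsoft/codebert-base-mlm", "en", BASELINE_DIR)
# score(candidates, references, lang="en", rescale_with_baseline=rescale)
```

When the file is missing, passing rescale_with_baseline=False at least silences the warning at the cost of reporting raw (un-rescaled) scores.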