
antlr4-c3's Introduction


antlr4-c3: The ANTLR4 Code Completion Core

This project contains a grammar agnostic code completion engine for ANTLR4 based parsers. The c3 engine is able to provide code completion candidates useful for editors with ANTLR generated parsers, independent of the actual language/grammar used for the generation.

The original implementation is provided as a node module (works in both Node.js and browsers), and is written in TypeScript. Ports to Java, C#, and C++ are available in the ports folder. These ports might not be up to date compared to the TypeScript version.

Abstract

This library provides a common infrastructure for code completion implementations. The c3 engine implementation is based on an idea presented a while ago under Universal Code Completion using ANTLR3. There, a grammar was loaded into a memory structure so that it could be walked with the current input to find a specific location (usually the caret position) and then collect all possible tokens and special rules, which together describe the set of code completion candidates for that position. With ANTLR4 we no longer need to load a grammar, because the grammar structure is now available as part of a parser (via the ATN, the augmented transition network). The ANTLR4 runtime even provides the LL1Analyzer class, which helps with retrieving follow sets for a given ATN state, but it has a few shortcomings and is in general not easy to use.

With the Code Completion Core implementation things become a lot easier. In the simplest setup you only give it a parser instance and a caret position and it will return the candidates for it. Still, a full code completion implementation requires some support code that we need to discuss first before we can come to the actual usage of the c3 engine.

A Code Completion Breakdown

For showing possible symbols in source code you obviously need a source for all symbols available at the given position. Providing them is usually the task of a symbol table. Its content can be derived from your current source code (with the help of a parser and a parse listener). More static parts (like runtime functions) can be loaded from disk, provided by a hard-coded list, etc. The symbol table can then answer the question which symbols of a given type are visible from a given position. The position usually corresponds to a specific symbol in the symbol table, and the table structure then makes it easy to collect the visible symbols. The c3 engine comes with a small symbol table implementation; it is not mandatory for using the library, but it provides an easy start if you don't already have your own symbol table class.
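If you want to try the bundled symbol table, a minimal sketch could look like the one below. The class names (SymbolTable, VariableSymbol, FunctionSymbol, addNewSymbolOfType, getSymbolsOfType) all appear elsewhere in this README; the exact constructor arguments are assumptions, so check the library typings before relying on them.

import * as c3 from "antlr4-c3";

// Create a global symbol table and register one variable and one function symbol.
// The options object and the extra constructor arguments are assumptions.
const symbolTable = new c3.SymbolTable("global", { allowDuplicateSymbols: false });
symbolTable.addNewSymbolOfType(c3.VariableSymbol, undefined, "a", undefined);
symbolTable.addNewSymbolOfType(c3.FunctionSymbol, undefined, "c");

// Later, ask for all visible symbols of a given type.
const variables = symbolTable.getSymbolsOfType(c3.VariableSymbol);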

While the symbol table provides symbols of a given type, we need to find out which type is actually required. This is the task of the c3 engine. In its simplest setup it will return only keywords (and other lexer symbols) that are allowed by the grammar for a given position (which is of course the same position used to find the context for a symbol lookup in your symbol table). Keywords are a fixed set of words (or word sequences) that usually don't live in a symbol table. You can get the actual text strings directly from the parser vocabulary. The c3 engine only returns the lexer tokens for them.

In order to also get other types like variables or class names you have to take two steps:

  • Identify the entities in your grammar which you are interested in and put them into their own rules. More about this below.
  • Tell the engine in which parser rules you are particularly interested. It will then return those to you instead of the lexer tokens they are made of.

Let's consider a grammar which can parse simple expressions like:

var a = b + c()

Such a grammar could look like:

grammar Expr;
expression: assignment | simpleExpression;

assignment: (VAR | LET) ID EQUAL simpleExpression;

simpleExpression
    : simpleExpression (PLUS | MINUS) simpleExpression
    | simpleExpression (MULTIPLY | DIVIDE) simpleExpression
    | variableRef
    | functionRef
;

variableRef: ID;
functionRef: ID OPEN_PAR CLOSE_PAR;

VAR: [vV] [aA] [rR];
LET: [lL] [eE] [tT];

PLUS: '+';
MINUS: '-';
MULTIPLY: '*';
DIVIDE: '/';
EQUAL: '=';
OPEN_PAR: '(';
CLOSE_PAR: ')';
ID: [a-zA-Z] [a-zA-Z0-9_]*;
WS: [ \n\r\t] -> channel(HIDDEN);

You can see the two special rules variableRef and functionRef, which mostly consist of the ID lexer rule. We could instead have used a single ID reference in the simpleExpression rule. However, this is where your domain knowledge about the language comes in. By making the two use cases explicit you can tell exactly what to query from your symbol table. As you can see, we are using parser rules to denote entity types, which is half of the magic here.

The code completion core can return parser rule indexes (as created by ANTLR4 when it generated your files). With a returned candidate ExprParser.RULE_variableRef you know that you have to ask your symbol table for all visible variables (or functions if you get back ExprParser.RULE_functionRef). It's easy to see how this applies to much more complex grammars. The principle is always the same: create a dedicated parser rule for your entity references. If you have an SQL grammar with a DROP TABLE statement, write your rules like this:

dropTable: DROP TABLE tableRef;
tableRef: ID;

instead of:

dropTable: DROP TABLE ID;

Then tell the c3 engine that you want to get back tableRef if it is a valid candidate at a given position.
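A minimal sketch of that configuration, assuming a generated SqlParser and a known caret token index, could look like this:

// Report tableRef candidates instead of the raw ID token.
core.preferredRules = new Set([SqlParser.RULE_tableRef]);
const candidates = core.collectCandidates(caretTokenIndex);
if (candidates.rules.has(SqlParser.RULE_tableRef)) {
    // The caret is at a table reference: query your symbol table for table names.
}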

Getting Started

With this knowledge we can now look at a simple code example that shows how to use the engine. For further details check the unit tests for this node module (under the test/ folder).

Since this library is made for ANTLR4 based parsers, it requires a JavaScript/TypeScript runtime, just like your parser (namely antlr4ng).

let inputStream = CharStream.fromString("var c = a + b()");
let lexer = new ExprLexer(inputStream);
let tokenStream = new CommonTokenStream(lexer);

let parser = new ExprParser(tokenStream);
let errorListener = new ErrorListener();
parser.addErrorListener(errorListener);
let tree = parser.expression();

let core = new c3.CodeCompletionCore(parser);
let candidates = core.collectCandidates(0);

This is a pretty standard parser setup. It's not even necessary to actually parse the input. But the c3 engine needs a few things for its work:

  • the ATN of your parser class
  • the tokens from your input stream
  • vocabulary and rule names for debug output

All of these could be passed in individually, but since your parser contains them anyway, and a parser is needed for predicate execution, the API has been designed to take a parser instead (predicates, however, only work if they are written for the JavaScript/TypeScript target). In real world applications you will have a parser anyway (e.g. for error checking), which is perfect as the ATN and input provider for the code completion core. But keep in mind: whatever parser you pass in must have a fully set up token stream. The parser is not required to have parsed anything before you call the code completion engine, and the current stream position doesn't matter either.

The returned candidate collection contains fields for lexer tokens (mostly keywords, but also other tokens if they are not on the ignore list) and parser rule indexes. This collection is defined as:

class CandidatesCollection {
    public tokens: Map<number, TokenList>;
    public rules: Map<number, CandidateRule>;
};

where the map keys are the lexer tokens and the rule indices, respectively. Both can come with additional values, which you may or may not use for your implementation.

For parser rules the value includes a startTokenIndex, which reflects the index of the starting token within the evaluated rule. This allows consumers to determine the range of tokens that should be replaced or matched against when resolving symbols for your rule. The value also contains a rule list which represents the call stack at which the given rule was found during evaluation. This allows consumers to determine a context for rules that are used in different places.
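For example, a small sketch (using the parser and candidates from the example above) that prints both pieces of information for each returned rule:

// Show where each rule candidate starts and through which rules it was reached.
for (const [ruleIndex, candidateRule] of candidates.rules) {
    const path = candidateRule.ruleList.map((index) => parser.ruleNames[index]).join(" -> ");
    console.log(`${parser.ruleNames[ruleIndex]} starts at token ${candidateRule.startTokenIndex}, call stack: ${path}`);
}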

For the lexer tokens the list consists of further token ids which directly follow the given token in the grammar (if any). This allows you to show token sequences if they are always used together. For example consider this SQL rule:

createTable: CREATE TABLE (IF NOT EXISTS)? ...;

Here, if a possible candidate is the IF keyword, you can also show the entire IF NOT EXISTS sequence to the user (and let them complete all three words in one go). The engine will return a candidate entry for IF with a token list containing NOT and EXISTS. This list will of course update properly when the user gets to NOT. Then you will get a candidate entry for NOT and an additional list consisting of just EXISTS.
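A sketch for turning a candidate token plus its follow list into a display string (again using the parser from the example above) might look like this:

// Build display strings like "IF NOT EXISTS" from a token candidate and its followers.
// Note: display names of literal tokens may include quotes; strip them as needed.
for (const [tokenType, followingTokens] of candidates.tokens) {
    const sequence = [tokenType, ...followingTokens]
        .map((type) => parser.vocabulary.getDisplayName(type))
        .join(" ");
    console.log(sequence);
}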

To get any rule indexes at all (which you can then use to query your symbol table), you must specify the rules you are interested in via the CodeCompletionCore.preferredRules field before running CodeCompletionCore.collectCandidates().
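For the expression grammar above that means something like:

// Must be assigned before calling collectCandidates().
core.preferredRules = new Set([
    ExprParser.RULE_variableRef,
    ExprParser.RULE_functionRef,
]);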

The final step to get your completion strings is usually something like this:

let keywords: string[] = [];
for (let candidate of candidates.tokens) {
    keywords.push(parser.vocabulary.getDisplayName(candidate[0]));
}

let symbol = ...; // Find the symbol that covers your caret position.
let functionNames: string[] = [];
let variableNames: string[] = [];
for (let candidate of candidates.rules) {
  switch (candidate[0]) {
    case ExprParser.RULE_functionRef: {
      let functions = symbol.getSymbolsOfType(c3.FunctionSymbol);
      for (const fn of functions)
        functionNames.push(fn.name);
      break;
    }

    case ExprParser.RULE_variableRef: {
      let variables = symbol.getSymbolsOfType(c3.VariableSymbol);
      for (const variable of variables)
        variableNames.push(variable.name);
      break;
    }
  }
}

// Finally combine all found lists into one for the UI.
// We do that in separate steps so that you can apply some ordering to each of your sub lists.
// Then you can also order symbol groups as a whole, depending on their importance.
let suggestions: string[] = [];
suggestions.push(...keywords);
suggestions.push(...functionNames);
suggestions.push(...variableNames);

Fine Tuning

Ignored Tokens

As mentioned above, in the base setup the engine will only return lexer tokens. This will include your keywords, but also many other tokens like operators, which you usually don't want in your completion list. In order to ease usage you can tell the engine which lexer tokens you are not interested in and which therefore should not appear in the result. This can easily be done by assigning a set of token ids to the ignoredTokens field before you invoke collectCandidates():

core.ignoredTokens = new Set([
  ExprLexer.ID,
  ExprLexer.PLUS, ExprLexer.MINUS,
  ExprLexer.MULTIPLY, ExprLexer.DIVIDE,
  ExprLexer.EQUAL,
  ExprLexer.OPEN_PAR, ExprLexer.CLOSE_PAR,
]);

Preferred Rules

As mentioned already, the preferredRules field is essential for getting more than just keywords. It lets you specify the parser rules that are interesting for you, and it should include the rule indexes for the entities we talked about in the code completion breakdown above. Whenever the c3 engine hits a lexer token while collecting candidates from a specific ATN state, it checks the call stack for that token and, if the stack contains any of the preferred rules, selects that rule instead of the lexer token. This transformation ensures that the engine returns contextual information which can actually be used to look up symbols.

Constraining the Search Space

Walking the ATN can at times be quite expensive, especially for complex grammars with many rules and perhaps (left) recursive expression rules. I have seen millions of visited ATN states for complex input, which will take very long to finish. In such cases it pays off to limit the engine to just a specific rule (and those called by it). For that there is an optional parser rule context parameter in the collectCandidates() method. If a context is given the engine will never look outside of this rule. It is necessary that the specified caret position lies within that rule (or any of those called by it) to properly finish the ATN walk.

You can determine a parser rule context from your symbol table if it stores the context together with its symbols. Another way is to search the parse tree for the most deeply nested context which contains the caret position. Picking the context that most closely covers the caret position makes the c3 engine very fast, but it can also have a negative side effect: candidates located outside of this context (or the rules called by it) will not appear in the returned candidates list. So this is a tradeoff between speed and completeness. You can select any parser rule context you wish between the top rule (or null) and the most deeply nested one, with execution time increasing (but results becoming more complete) the higher in the stack your chosen rule is.

In any case, when you want to limit the search space you have to parse your input first to get a parse tree.
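A minimal sketch, assuming the Expr grammar from above and a caret that is located somewhere inside the assignment rule (the assignment() accessor is generated from the expression rule):

// Parse first, then restrict the collection to the assignment subtree.
const tree = parser.expression();
const assignmentContext = tree.assignment();
if (assignmentContext) {
    const candidates = core.collectCandidates(caretTokenIndex, assignmentContext);
}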

Selecting the Right Caret Position

It might sound weird to talk about such a trivial thing as the caret position, but there is one detail that deserves consideration: the difference between the pure token index returned by the token stream and the visual appearance of the caret on screen. This image shows a typical scenario:

[Image: caret positions and their token indexes]

Each vertical line corresponds to a possible caret position. The first 3 lines clearly belong to token index 0, but the next line is no longer that clear. At that position we are already on token index 1, while visually the caret still belongs to index 0, because we might just be at the end of a word and want to add more letters to it, and hence have to provide candidates for that word. However, for token position 5 the situation is different. After the equal sign there are no further characters that could belong to it, so in this case position 5 really means 5. Similarly, token position 7 visually belongs to 6, while 8 is really 8. That means in order to find the correct candidates you have to adjust the token index based on the type of the token that immediately precedes the caret token.

Things get really tricky, however, when your grammar never stores whitespace (i.e. when using the skip lexer action). In that case you won't get token indexes for whitespace, as demonstrated by the second index line in the image. In such a scenario you cannot even tell (e.g. for token position 1) whether you still have to complete the var keyword or want candidates for the a. The position between the two whitespaces is also unclear, since there is no token index for it, and you have to use other indicators to decide whether that position should go to index 3 (b) or 4 (+). Given these problems it is usually better not to use the skip action for your whitespace rule, but simply to put whitespace on a hidden channel instead.
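A minimal sketch of such a mapping, with whitespace on a hidden channel as recommended. This helper is not part of the library, and the boundary handling (a caret touching the end of a token still maps to that token) is an assumption you will want to refine per token type:

import { CommonTokenStream, Token } from "antlr4ng";

// Map a character offset in the input to the token index passed to collectCandidates().
function computeCaretTokenIndex(tokenStream: CommonTokenStream, caretOffset: number): number {
    tokenStream.fill();
    for (const token of tokenStream.getTokens()) {
        if (token.type === Token.EOF) {
            break;
        }
        // Treat a caret directly after a token as still belonging to that token,
        // so the user can keep extending the word (e.g. "va|" completes to "var").
        if (caretOffset >= token.start && caretOffset <= token.stop + 1) {
            return token.tokenIndex;
        }
    }
    return 0;
}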

Debugging

Sometimes you are not getting what you actually expect and you need to take a closer look at what the c3 engine is doing. For this situation a few fields have been added which control debug output printed to the console:

  • showResult: Set this field to true to print a summary of what has been processed and collected. It will print the number of visited ATN states as well as all collected tokens and rules (along with their additional info).
  • showDebugOutput: This setting enables output of states and symbols (labels) seen for transitions as they are processed by the engine. There will also be lines showing when input is consumed and candidates are added to the result.
  • debugOutputWithTransitions: This setting only has an effect if showDebugOutput is enabled. It adds all transitions to the output which the engine encountered (not all of them are actually followed, however).
  • showRuleStack: Also this setting only has an effect if showDebugOutput is enabled. It will make the engine print the current rule stack whenever it enters a new rule during the walk.

The last two options potentially create a lot of output which can significantly slow down the collection process.
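For example, before a problematic run you can enable them like this:

// Enable the debug output described above for a single collection run.
core.showResult = true;
core.showDebugOutput = true;
core.showRuleStack = true;               // only has an effect with showDebugOutput
core.debugOutputWithTransitions = false; // very verbose, enable only when needed
const candidates = core.collectCandidates(caretTokenIndex);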

Release Notes

3.4.0

  • Switched to a new major version of the antlr4ng runtime (3.0.0).
  • Fixed issue #96 Add .cjs output to package

3.3.7

  • Stopped bundling 3rd party libraries in the library's own bundle. This is not only unnecessary (these dependencies are installed along with all the other dependencies in a target project), but it can cause trouble if a dependent project uses two different versions of such a bundled 3rd party library.

3.3.6

  • Fixed bug #93 Add command to esbuild (stop including 3rd party libs in bundle).
  • Updated dependencies.

3.3.5

Updated dependencies.

3.3.1 - 3.3.4

Updated dependencies.

3.3.0

Now using esbuild for building the package.

3.2.4 - 3.2.5

  • Last changes for the dependency switch (antlr-ts -> antlr4ng).
  • Updated Jest settings to run ESM + TS tests.

3.2.3

  • Completed switch away from antlr4ts.

3.2.0

  • A new TypeScript runtime powers this package now (antlr4ng).
  • The package is now published as ES module, which is supported by all modern browsers and Node.js.
  • The contributors list has been moved to a separate file, because now contributions are tracked via git's signed-off commits.

3.1.1

  • Renamed a few interfaces to follow the interface naming rules (a leading I).
  • Merged PR #81 from Aaron Braunstein.
  • Upgraded all dependencies to their latest version.

3.0.0

BREAKING CHANGES: With this major version release the API has been changed to make it more consistent and easier to use. The most important changes are:

  • All the classes in the SymbolTable.ts file have been split into separate files.
  • The main Symbol class has been renamed to BaseSymbol to avoid confusion and trouble with the Javascript Symbol class.
  • The package works now with Typescript 5.0 and above.
  • The tests have been organized into a separate sub project, which is no longer built with the main project. Instead tests files are transpiled on-the-fly (using ts-jest) when running the tests. These transpiled files are never written to disk.
  • Symbol creation functions (like SymbolTable.addNewSymbolOfType) now allow Typescript to check the given parameters for the class type. You will now have to provide the correct parameter list for the symbol type you want to create. This is a breaking change, because the old version allowed you to pass any parameter list to any symbol creation function.

2.2.3

Upgraded dependencies, which includes a new major version of Typescript (5.0). With this version the main field in package.json apparently became necessary, because of the package organization, and has been set in this release.

2.2.2

  • Some improvements in the symbol table implementation.
  • Updated dependencies.
  • PR #76 (fixes bug #23) Account for empty and fully-optional-body rules when collecting tokens, thanks to Aaron Braunstein.

2.2.1

Reverted changes from any to unknown for SymbolTable.addNewSymbolOfType. It works in the tests, but is not accepted by consumers of the node module.

2.2.0

  • Added InterfaceSymbol to SymbolTable and enhanced ClassSymbol for interface implementations.
  • Added a modifier and a visibility field to Symbol, so these are now available for all symbols. Removed the obsolete visibility field from method and field symbols.

2.1.0

  • It turned out that synchronous symbol retrieval methods have their value, so I brought them back by adding ...Sync() variants of all methods with an async behavior.
  • Brought back and extended project tests on Github.
  • Upgraded module dependencies.
  • Cleaned up the code again, now with latest eslint settings.

2.0.2

  • getAllSymbols<T> now returns symbols of type T (instead of Symbol), like all other enumeration methods.

2.0.1

  • Breaking change: some of the methods in the symbol table implementation that may require extra work (symbol collections and resolver methods) now return promises. This also allows overriding them to return asynchronous results constructed from external resources (like database symbols).

1.1.16

  • Fixed an issue where wrong tokens were collected for code completion.

1.1.15

  • Fixed a problem with seen states in the follow set determination.

1.1.13

  • Added a C# port of the library (thanks to Jonathan Philipps)
  • Optionally allow walking the rule stack on matching a preferred rule either top-down or bottom-up (which changes how preference is given when multiple preferred rules appear in a single stack).
  • Rule candidates now include the start token index of where they matched.

1.1.12

  • Updated modules with known vulnerabilities.
  • Better handling of recursive rules in code completion (via precedence).
  • Updated to latest antlr4ts.

1.1.8

  • Renamed a number of methods for improved consistency (next -> nextSibling etc.) and updated some tests.
  • Simple symbols can now also be used to resolve other symbols (by delegating this call to their parent, if there is one).
  • Added a method to find a symbol by its associated context + added a test for that.

1.1.6

  • Added Java port from Nick Stephen.
  • Added contributors.txt file.
  • A symbol can now store a ParseTree reference (which allows for terminal symbols in addition to parser rules).
  • Added navigation functions to Symbol and ScopedSymbol (first/last child, next/previous sibling) and added tests for that.
  • Fixed formatting and spelling in tests + SymbolTable.
  • Updated readme.

1.1.1

  • Travis-CI integration
  • Implemented completion optimizations

1.0.4

  • First public release
  • Added initial tests, SymbolTable and CodeCompletionCore classes

antlr4-c3's People

Contributors

alessiostalla, bbourbie, br0nstein, dependabot[bot], haydenorz, kaidjohnson, kpainter-atl, kyle-painter, mallman, mike-lischke, nchen63, nick-stephen, nuzelac, rovo98, swillits, xenoamess


antlr4-c3's Issues

ANTLR version?

Thanks for the library. I'm having trouble using it with ANTLR 4.7.0 (the latest to have published TypeScript types).
However, I'm finding API mismatches (can get more specific if you need). What ANTLR4 version is this targeting?

Non-optional rules with fully-optional children prevent additional rules from being collected

I am working on a grammar that has a handful of optional top-level rules. If I attempt to group a few of these optional rules together, for the convenience of listening/visiting, it changes the candidates collected by antlr4-c3.


Working Example:

grammar MyGrammar;

expression: GET FOO? BAR? BAZ? withQux? EOF ;

withQux: WITH QUX ; 

GET: 'get';
FOO: 'foo' ;
BAR: 'bar' ;
BAZ: 'baz' ;
WITH: 'with' ;
QUX: 'qux' ;

core.collectCandidates('get', 1); returns tokens ['foo', 'bar', 'baz', 'with qux']. This is the result I am expecting.


Non-working Example:

grammar MyGrammar;

expression: GET fooBarBaz withQux? EOF ;

fooBarBaz: FOO? BAR? BAZ? ;

withQux: WITH QUX ; 

GET: 'get';
FOO: 'foo' ;
BAR: 'bar' ;
BAZ: 'baz' ;
WITH: 'with' ;
QUX: 'qux' ;

core.collectCandidates('get', 1); returns ['foo', 'bar', 'baz'] but is unexpectedly missing with qux.


If I make fooBarBaz itself optional, fooBarBaz?, the compilation of the grammar throws a warning: rule 'expression' contains an optional block with at least one alternative that can match an empty string, which is expected given the creation of an optional rule with optional children.

As far as I can tell, the grammars are syntactically the same and I would expect them to return the same candidates.

Support [email protected]

When I try out antlr4-c3 in my project, I get the following error:

[screenshots of the error output]

My package.json is shown below:

{
  "scripts": {
    "build": "webpack --config webpack.config.js --watch",
    "antlr4ts": "antlr4ts -visitor src/parser/calculator.g4"
  },
  "dependencies": {
    "antlr4-c3": "^1.1.8",
    "antlr4ts": "^0.5.0-alpha.1"
  },
  "devDependencies": {
    "antlr4ts-cli": "^0.5.0-alpha.1",
    "ts-loader": "^5.3.1",
    "typescript": "^3.2.2",
    "webpack": "^4.28.0",
    "webpack-cli": "^3.1.2"
  }
}

Does c3 support [email protected]?

Thanks!

Some further usage examples

Not a bug, a request. This project looks very hopeful for our usage, thanks.

To be really useful it would help to have some more concrete use cases on how to retrieve the incomplete symbol and its context, for example (from my experiments) where I'm not sure what to do.

I'm writing a simple filter grammar (imagine something grepping through logfiles) and want to auto-complete on something like:

fu = "incomplete string

When finding possible completions, I'd like to be able to have the context that I'm comparing a field called 'fu' (and thus be able to query all known values of 'fu') and that the token I'm trying to complete on is "incomplete string

Similarly we might not have the double-quotes on the incomplete string if we want the user to be able to type in a string without quotes...

An example or test case tying together the grammar with the symbol table and extracting the parse context + the half-complete symbol would be really helpful, I'm a little stuck here...

As an aside, if interested, I've ported the completion core and tests to Java and may be able to release them back to this project. If so, please contact me.

TypeError on collectCandidates

Hi,

I am trying to start a project to try out the c3 engine, but I am running into a little obstacle early on. I am setting up the scaffold as outlined in the Getting Started section of the README, but I get the following error when I do the core.collectCandidates(0) call:

TypeError: Cannot read property 'index' of undefined
    at CodeCompletionCore.collectCandidates (/home/helgeg/proj/spotqa/virtuoso/standalone_parser/node_modules/antlr4-c3/src/CodeCompletionCore.ts:107:40)

I am assuming that there is some initialization that I haven't done, but I cannot figure it out. Below is the setup leading up to the failing call.

const antlr4ts = require('antlr4ts/index');
const c3 = require('antlr4-c3');
const ExprParser = require("./ExprParser").ExprParser
const ExprLexer = require("./ExprLexer").ExprLexer
...
let inputStream = new antlr4ts.ANTLRInputStream("var c = a + b()");
let lexer = new ExprLexer(inputStream);
let tokenStream = new antlr4ts.CommonTokenStream(lexer);
let parser = new ExprParser(tokenStream);
let errorListener = new VirtuosoErrorListener();
parser.addErrorListener(errorListener);
let tree = parser.expression();
let core = new c3.CodeCompletionCore(parser);
let candidates = core.collectCandidates(0);

This is using a parser generated by antlr 4.7.1 with a JavaScript target and antlr4-c3 1.1.8. Hopefully someone can glance at the code and tell me where I am going astray.

Stepping through the code, it looks like when I call c3.CodeCompletionCore(parser) the member variables get set (including this.parser = parser;), but when I call core.collectCandidates, the parser.inputStream variable is undefined:

let tokenStream = this.parser.inputStream;  // this.parser.inputStream is undefined
let currentIndex = tokenStream.index; // this fails, since there is no tokenStream

It looks to me like I am not setting the tokenStream correctly since there is no definition of the inputStream on the parser object.

No way to get starting candidates on empty input

Without supplying any input I would like to get valid candidates for completion. The current way, parser.collectCandidates(tokenIndex), requires a token index, which doesn't exist for the empty string.

Next release?

Hi @mike-lischke

I wasn't sure what process you follow to get a new release cut, but I'd like to be able to consume the fix for #13 via npm with a version bump.

Thanks!

NPM package install attempts to build tests when installing as dependency, generating errors

Transferred from issue tunnelvisionlabs/antlr4ts#304

Repro (from an empty directory):

npm install antlr4-graps

Errors generated:

> [email protected] postinstall C:\try\304\node_modules\antlr4-c3
> tsc
test/test.ts(10,40): error TS2307: Cannot find module 'chai'.
test/test.ts(87,1): error TS2304: Cannot find name 'describe'.
test/test.ts(89,3): error TS2304: Cannot find name 'describe'.
test/test.ts(90,5): error TS2304: Cannot find name 'it'.
test/test.ts(168,5): error TS2304: Cannot find name 'it'.
test/test.ts(208,5): error TS2304: Cannot find name 'it'.
test/test.ts(219,5): error TS2304: Cannot find name 'it'.
test/test.ts(233,5): error TS2304: Cannot find name 'it'.
test/test.ts(277,3): error TS2304: Cannot find name 'describe'.
test/test.ts(278,5): error TS2304: Cannot find name 'it'.
test/test.ts(338,5): error TS2304: Cannot find name 'it'.
test/test.ts(403,3): error TS2304: Cannot find name 'describe'.
test/test.ts(404,5): error TS2304: Cannot find name 'it'.
test/test.ts(412,5): error TS2304: Cannot find name 'it'.

Analysis:

This is happening because the postinstall step for antlr4-c3 in NPM is attempting to build the unit tests. I'm no expert in node development, but what I tried to do in building antlr4ts was exclude development-time dependencies (like the tests).

Do not cache follow sets if predicates are involved in a path

The method processRule uses cached follow sets to speed up the walk process. This doesn't work if there's a predicate in one of the transitions while doing the walk. The transition path might depend on the outcome of that predicate.

Solution: don't cache follow sets if either a predicate is in the path taken or, if that cannot be determined efficiently, do not cache the sets at all.

Port for Antlr4 runtime

Hello, since Antlr4 4.12.0 has recently been released and officially supports TypeScript as a target, I was wondering if there are any plans to rewrite/port this library for the official Antlr4 runtime instead of antlr4ts?

After all, the antlr4ts library (and its CLI) seems to have been abandoned at the moment.

How can I set the context for collecting candidates

From the docs there seems to be a way to limit the context in which candidates are collected. Nevertheless, my knowledge about this topic is too limited for it to make sense to me. In our case we need to limit the user to entering only specific parts of a query. So I know the name of the context, but have no idea how to create/find the corresponding context object. Could you give me some hints on how to solve this issue?

followSetsByATN is static and shouldn't be

The code has:
private static followSetsByATN: Map<string, FollowSetsPerState> = new Map();

However in the tests, depending upon the test order, the contents of this map differ. In particular, if the "Most simple setup" test is commented out and not invoked (this test initializes the above static map with data without any ignored tokens), then the "Typical setup" test fails, because the ignored tokens affect what the follow sets are initialized with.

I'm not sure of the correct fix.

HTH!

Return start token index for candidate rules

In certain grammars, C3 may return a collection of candidate rules which logically start at different tokens in the string.
For example the following grammar rule

clause: leftOperand operator rightOperand;

leftOperand: string;
operator: WAS | WAS IN;
rightOperand: string;

For the clause status WAS I| we could receive the following candidate rules:

{
  [Parser.rule_operator]: [...ruleList],
  [Parser.rule_rightOperand]: [...ruleList]
}

In my case I'm showing suggestions to the user based on these rules and replacing text in the string when a suggestion is selected. However currently there's no way to determine which text/tokens should be replaced for a given rule.

To get around this I've forked the library and updated the rule collection to include the token index at which a candidate rule starts, e.g.

{
  [Parser.rule_operator]: {
    startTokenIndex: 2,
    ruleList: [...ruleList]
  },
  [Parser.rule_rightOperand]: {
    startTokenIndex: 4,
    ruleList: [...ruleList]
  }
}

I think this makes sense to include for other consumers so they can solve similar use cases in a grammar agnostic way.

How can I use this project to create Language Server for VS Code?

I am developing extensions to support the Structured Text (IEC 61131-3) language. I have syntax highlighting, outline, snippets and basic formatting already. But now I want to create a Language Server. I came across this project, and after briefly reading the README I have a feeling that it could be the way to go.

Can you give me short instructions on the basic steps for that? Or would it be possible at all?

Get token values for a lexer rule

First of all, many thanks for making the auto-completion feature available. It saved me a lot of work.
I am able to use the CodeCompletionCore and get the list of token rules and token names (using the vocabulary). But is there a way to extract the actual values used in the grammar?
For example, I have the following lexer rule
AndorOr: '&' | '|' ;
From the CodeCompletionCore I can get AndorOr. How can I get the values & and |?

Preferred rules precedence

I have a grammar like this:

identifier: IDENTIFIER;
binary_operation: identifier 'op' operand;

I'm interested in identifier, then binary_operation. I expected that if I type just an identifier, CandidatesCollection.rulePositions should contain just the preferred identifier rule; if I type an identifier, then op, then an operand, CandidatesCollection.rulePositions should contain the binary_operation rule.

But in reality, the binary_operation rule always overrides the identifier rule. That is, if I type just an identifier, rulePositions contains the binary_operation rule.

I tried to use LinkedHashSet to rearrange the preferred rules but it didn't seem to work.

Argument of type 'SqlParser' is not assignable to parameter of type 'Parser'.

I got errors

Argument of type 'PrestoSqlParser' is not assignable to parameter of type 'Parser'.
  Type 'PrestoSqlParser' is missing the following properties from type 'Parser': _errHandler, _input, _precedenceStack, _ctx, and 68 more.

PrestoSqlParser is generated from antlr4/presto/PrestoSql.g4 by ANTLR 4.9.0-SNAPSHOT


deps:

  "dependencies": {
    "@types/antlr4": "4.7.0",
    "antlr4": "^4.11.0",
    "antlr4-c3": "^2.2.1",
    "antlr4ts": "^0.5.0-alpha.4",
    "assert": "1.5.0"
  },

Prioritise higher index preferred rules

As noted in #20

identifier: IDENTIFIER;
binary_operation: identifier 'op' operand;

If we have binary_operation and identifier as preferred rules, we'll only ever receive binary_operation as a candidate, as the rule stack is searched from lowest to highest index.

Was this a deliberate decision? I think it would make sense to traverse the stack from highest to lowest as it would allow consumers to target more specific rules, and if need be they can inspect the rule stack for parent rules.

How to correctly use collectCandidates()?

Say I have a MySQL grammar and want to get autocompletion after a SELECT query.
Simple grammar:

query: SELECT tableRef;
tableRef: ID;
ID
    : [a-zA-Z] [a-zA-Z0-9_]*
    ;

How should I correctly find caretTokenIndex?
const candidates = core.collectCandidates('select'.length-1); OR const candidates = core.collectCandidates(0);

How to achieve sql column name auto completion?

Example: select na from tb1 a left join tb2 b a.code=b.code.
After typing na, how can I get the table names tb1 and tb2? For more complex SQL, which may contain nesting, how can I get the corresponding table names and then query field information through them?

@mike-lischke

collectCandidates fails when seeking the tokenStream

Whenever I call collectCandidates for a relatively simple grammar, I get an assertion error:

[screenshot of the assertion error]

I tracked the call state:

[screenshot of the call stack]

Here we can see that collectCandidates calls seek on a BufferedTokenStream from the antlr4ts library. The seek method in turn calls sync:

sync(i) {
        assert(i >= 0);
        let n = i - this.tokens.length + 1; // how many more elements we need?
        //System.out.println("sync("+i+") needs "+n); [sic]
        if (n > 0) {
            let fetched = this.fetch(n);
            return fetched >= n;
        }
        return true;
    }

The assertion on the first row fails some time down the line, causing the processing to halt.

By uncommenting the seeks in collectCandidates:

collectCandidates(caretTokenIndex, context) {
        this.shortcutMap.clear();
        this.candidates.rules.clear();
        this.candidates.tokens.clear();
        this.statesProcessed = 0;
        this.precedenceStack = [];
        this.tokenStartIndex = context ? context.start.tokenIndex : 0;
        let tokenStream = this.parser.inputStream;
        let currentIndex = tokenStream.index;
        // tokenStream.seek(this.tokenStartIndex); <---
        tokenStream.seek(this.tokenStartIndex);
        this.tokens = [];
        let offset = 1;
        while (true) {
            let token = tokenStream.LT(offset++);
            this.tokens.push(token.type);
            if (token.tokenIndex >= caretTokenIndex || token.type == antlr4ts_1.Token.EOF)
                break;
        }
        // tokenStream.seek(currentIndex); <---
        tokenStream.seek(currentIndex);

It all works for me. The tests fail, which clearly means the lines are important.

What could be the reason for the seek to fail?

I'm using the library as shown in the README and tests. I cannot share my grammar file, but I am happy to assist in any bug tracing.

Not getting expected rule candidates

I've tried to cut down the grammar a bit for this but it's not particularly complex. So given the following grammar

/* ===== PARSER ===== */

statement: expression EOF ;

expression
    : lhs=expression OP_DOT rhs=binaryDotCompletion                                                    # binaryDotExpression
    | LPAREN expr=expression RPAREN                                                                             # parenExpression
    | funcCall                                                                                                  # funcCallExpression
    | op=(OP_MINUS|OP_NOT) expr=expression                                                                      # unaryExpression
    | lhs=expression op=(OP_MULTIPLY|OP_DIVIDE) rhs=expression                                                  # binaryExpression
    | lhs=expression op=(OP_PLUS|OP_MINUS) rhs=expression                                                       # binaryExpression
    | lhs=expression op=REL_OP rhs=expression                                                                   # binaryExpression
    | lhs=expression op=OP_AND rhs=expression                                                                   # binaryExpression
    | lhs=expression op=OP_OR rhs=expression                                                                    # binaryExpression
    | lhs=expression op=OP_IN rhs=array                                                                         # inConditionExpression
    | <assoc=right> lhs=expression OP_TERN_QUERY trueBranch=expression OP_TERN_COLON falseBranch=expression     # ternaryExpression
    | term                                                                                                      # termExpression
    ;

binaryDotCompletion: ID ;

funcCall: name=ID LPAREN (args+=funcArg (OP_COMMA args+=funcArg)*)? RPAREN ;

funcArg
    : lambdaExpression      # lambdaArg
    | expression            # expressionArg
    ;

lambdaExpression
    : LPAREN (args=ID (OP_COMMA args=ID)*)? RPAREN OP_ARROW body=expression
    | args=ID OP_ARROW body=expression
    ;

array: LBRACKET (expression (OP_COMMA expression)*) RBRACKET ;

term
    : BOOL
    | FLOAT
    | INT
    | LONG
    | identifier
    | STRING
    ;

  identifier: ID ;

/* ====== LEXER ===== */


REL_OP: OP_EQ | OP_GT | OP_GTE | OP_NEQ | OP_LT | OP_LTE;

OP_EQ: '==' ;
OP_GTE: '>=' ;
OP_GT: '>' ;
OP_NEQ: '!=' ;
OP_LTE: '<=' ;
OP_LT: '<' ;
OP_IN: '=IN=' ;

OP_DOT: '.' ;
OP_COMMA: ',' ;

OP_PLUS: '+' ;
OP_MINUS: '-' ;
OP_MULTIPLY: '*' ;
OP_DIVIDE: '/' ;

OP_ARROW: '=>' ;

OP_AND: '&&' ;
OP_OR: '||' ;
OP_NOT: '!' ;

OP_TERN_QUERY: '?' ;
OP_TERN_COLON: ':' ;

LPAREN: '(' ;
RPAREN: ')' ;

LBRACKET: '[' ;
RBRACKET: ']' ;

DOLLAR: '$' ;

APOSTROPHE: '\'' ;

STRING: APOSTROPHE (~(['] | '\\') | '\\' (APOSTROPHE | '\\'))* APOSTROPHE ;

fragment EXPONENT: 'e' (OP_PLUS | OP_MINUS)? DIGIT+ ;
fragment FLOAT_SUFFIX: 'f' | 'F' ;
fragment LONG_SUFFIX: 'l' | 'L' ;
FLOAT: (DIGIT+ '.' DIGIT+ EXPONENT? FLOAT_SUFFIX?)
     | (DIGIT+ EXPONENT FLOAT_SUFFIX?)
     | (DIGIT+ FLOAT_SUFFIX)
     | ('.' DIGIT+ EXPONENT? FLOAT_SUFFIX?) ;
LONG: DIGIT+ LONG_SUFFIX;
INT: DIGIT+ ;
BOOL: ('true')|('false') ;
ID: DOLLAR? '_'* ALPHA(ALPHA|DIGIT|'_')* ;

WHITESPACE : SPACE+ -> channel(HIDDEN) ;

Let's say I've got an example input like foo. and I'm setting my preferredRules with identifier and binaryDotCompletion.
As I type foo I'm getting parse tree output like (statement (expression (term (identifier foo))) <EOF>) and the collected rules contains identifier as I would expect.
But as soon as I add the . no rules are collected at all. The parse tree output becomes (statement (expression (expression (term (identifier foo))) . (binaryDotCompletion <missing ID>)) <EOF>) so the parser seems to know what should come next.

I verified that the token index being used for assessing foo. is 1

As soon as I add another letter after the . the rule candidates contain binaryDotCompletion, but not a single token candidate is collected anymore (I haven't set any to be ignored while debugging this, so I end up getting a lot of operator tokens). The missing tokens aren't particularly important for my purposes, but it seems weird.

The concept here looks really similar to the example grammar in the README with simpleExpression etc. so I'm baffled as to the results I'm getting.

Quadratic time to gather tokens

While profiling the performance of a port of this code, I found the following line dominated:

let token = tokenStream.LT(offset++);

(In the Java antlr library at least...) LT performs a linear search from the 'seek' location, making this O(N^2) in the number of tokens gathered.

Would it work to call LT(1) each time, with tokenStream.consume at the end of the loop (after the break)?

Infinite loop in processRule()

Hello, your library has been of great use to assist with implementing auto-completion for an old language called "UnrealScript".

However, in UnrealScript we will often stumble on pieces of C++ text blocks, which I solved with the following grammar:

cppText
	: 'cpptext' exportBlockText
	;

// Skips a C++ block of text: "{ ... | { ... }* }
exportBlockText
	: OPEN_BRACE (~(OPEN_BRACE | CLOSE_BRACE)+ | exportBlockText)* CLOSE_BRACE
	;

UCParser.g4#L598

This had worked flawlessly, except for c3. When c3 hits the first instance of 'exportBlockText' it runs into an infinite loop in 'processRule'. That may seem obvious given the grammar; any chance this could be fixed, or is there a possible workaround?

TypeError: Class extends value undefined is not a constructor or null when import CodeCompletionCore

  ● Test suite failed to run

    TypeError: Class extends value undefined is not a constructor or null

      3 | import { PrestoSqlParser } from '../libts/presto/PrestoSqlParser';
      4 | import { CustomErrorStrategy } from './CustomErrorStrategy';
    > 5 | import { CodeCompletionCore } from 'antlr4-c3';
        | ^
      6 | import {
      7 |   ANTLRErrorListener,
      8 |   ANTLRErrorStrategy,

      at Object.<anonymous> (../../../../../../.yarn/cache/antlr4ts-npm-0.5.0-dev-7e0fc8988a-640dae2229.zip/node_modules/src/tree/xpath/XPathLexer.ts:18:33)
      at Object.<anonymous> (../../../../../../.yarn/cache/antlr4ts-npm-0.5.0-dev-7e0fc8988a-640dae2229.zip/node_modules/src/tree/xpath/XPath.ts:16:1)
      at Object.<anonymous> (../../../../../../.yarn/cache/antlr4ts-npm-0.5.0-dev-7e0fc8988a-640dae2229.zip/node_modules/src/tree/xpath/index.ts:6:1)
      at Object.<anonymous> (../../../../../../.yarn/cache/antlr4ts-npm-0.5.0-dev-7e0fc8988a-640dae2229.zip/node_modules/src/tree/index.ts:18:1)
      at Object.<anonymous> (../../../../../../.yarn/cache/antlr4ts-npm-0.5.0-dev-7e0fc8988a-640dae2229.zip/node_modules/src/index.ts:9:1)
      at Object.<anonymous> (../../../../../../.yarn/cache/antlr4-c3-npm-2.2.1-db3ae1db96-8dd44825f3.zip/node_modules/antlr4-c3/src/CodeCompletionCore.ts:10:1)
      at Object.<anonymous> (../../../../../../.yarn/cache/antlr4-c3-npm-2.2.1-db3ae1db96-8dd44825f3.zip/node_modules/antlr4-c3/index.ts:8:1)
      at Object.<anonymous> (src/autocomplete/presto.ts:5:1)
      at Object.<anonymous> (__tests__/autocomplete/presto/suggest.test.ts:1:1)

Any idea why? thanks

translateToRuleIndex addNew doesn't really add, it replaces

translateToRuleIndex checks for a rule to already be present in order to avoid adding a duplicate entry.

However, entries are in a map, not a list, so no duplicates are possible, and what really happens is that each new found path overwrites the previous one. IMO this should be clarified:

  • if the intention is to return only one path, then it makes no sense to perform a linear search on a map, just store the latest path (another option could be to store only the first and cut the search early if possible)
  • if the intention is to return multiple paths, then it makes no sense to use a map from index to path; candidate.rules should then be a map from index to a list of paths (and then it makes sense to perform a linear search to avoid including the same path twice).

This could be controlled by a flag, as one user might want to stop searching as early as possible, while another might want to collect all the paths. But as it is now, all paths are examined but only the latest is returned, which is a waste.

Status of the project

Hi Mike,
first of all thank you for writing this amazing project.

I see that the build is failing, the Java port is three years old and the last version is unreleased.

I was wondering about the status of the project and if you would be interested in any help on this project.

Publish the ports?

Is the C# port published anywhere that is easily included in projects?

Semantic Predicate not working

Hello,

it seems that semantic predicates are not handled.

Given the following grammar with semantic predicates

grammar Expr;

@parser::header {
  export function isSummable(ID: string) {
    // symbol table checks here.. now using a mock logic
    return ID.startsWith('x');
  }
}

expression
  : summableIdentifier '+' NUM
  | subtractableIdentifier '-' NUM
  ;
  
summableIdentifier: {isSummable(this.currentToken.text)}? ID;
subtractableIdentifier: {!isSummable(this.currentToken.text)}? ID;

ID: [a-z];
NUM: [0-9]+;

WHITESPACE
	: [ \t\r\n] -> skip
	;

during prediction this.currentToken is <EOF>, so none of the checks work.

I think the prediction algorithm should walk the parse tree until caretTokenIndex is reached, and only then determine follow sets, but maybe I'm totally wrong.

Thanks so much for this awesome library.

request for collectCandidates work even if there is syntax error

I really appreciate the work! I came across a problem and believe the walker should be able to go further even if there is a syntax error before the caret position. Suppose I have defined a MySQL grammar and am editing the following sentence,

select from |

where '|' represents the caret position; calling collectCandidates returns nothing here. I can still do a little extra work to make it work, but it would also be reasonable for the walker to keep walking if there is a potential path to the requested position. I believe this is achievable with a search algorithm like A*. When the walker reaches select and finds no bridge to from, it would search all neighboring rules until it finally reaches from. This search should be constrained, because it may be very time-consuming.

Does that make sense? Thanks!

npm test failing

Hi Mike,

I downloaded the source. I executed npm install and then npm test, but the test cases failed.

npm test

[email protected] test /Users/pluthra/repo/antlr4-c3-master
tsc --version && tsc && mocha out/test

Version 2.3.4
test/CPP14Lexer.ts(8,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATN'.
test/CPP14Lexer.ts(9,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATNDeserializer'.
test/CPP14Lexer.ts(12,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'LexerATNSimulator'.
test/CPP14Lexer.ts(18,10): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'Utils'.
test/CPP14Parser.ts(7,2): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATN'.
test/CPP14Parser.ts(8,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATNDeserializer'.
test/CPP14Parser.ts(13,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ParserATNSimulator'.
test/CPP14Parser.ts(14,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ParseTreeListener'.
test/CPP14Parser.ts(15,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ParseTreeVisitor'.
test/CPP14Parser.ts(19,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'TerminalNode'.
test/CPP14Parser.ts(487,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(523,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(602,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(641,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(724,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(889,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(927,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(965,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1016,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1136,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1175,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1224,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1270,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1713,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1738,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1834,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(1973,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2009,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2117,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2146,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2181,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2326,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2379,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2447,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(2478,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3380,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3427,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3478,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3645,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3670,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3818,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3910,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(3946,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4046,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4121,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4199,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4317,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4356,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4394,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4486,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4574,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4661,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4756,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4843,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(4887,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5005,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5042,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5067,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5094,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5191,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5242,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5278,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5314,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5339,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5386,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5460,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5511,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5562,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5735,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5790,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5841,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5947,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(5972,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6034,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6128,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6178,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6229,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6320,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6363,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6388,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6427,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6452,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6491,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6530,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6574,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6618,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6660,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6694,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6719,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6752,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6787,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6849,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6901,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(6934,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7056,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7104,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7292,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7327,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7366,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7395,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7420,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7519,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7642,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7677,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7720,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7896,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(7965,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8002,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8120,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8155,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8191,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8227,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8263,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8298,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8357,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8561,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8710,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8836,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(8964,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9022,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9096,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9141,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9183,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9359,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9415,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9454,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9494,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9586,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9621,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9646,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9682,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9795,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(9969,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10131,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10167,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10195,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10306,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10408,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10457,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10482,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10518,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10545,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10580,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10615,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10642,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10706,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10762,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10801,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10828,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10873,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(10970,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11009,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11199,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11239,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11315,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11424,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11471,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11529,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11567,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11598,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11627,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11667,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11702,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11735,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11815,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11850,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(11890,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12014,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12059,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12086,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12115,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12486,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12565,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12601,3): error TS2346: Supplied parameters do not match any signature of call target.
test/CPP14Parser.ts(12626,3): error TS2346: Supplied parameters do not match any signature of call target.
test/ExprLexer.ts(5,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATN'.
test/ExprLexer.ts(6,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATNDeserializer'.
test/ExprLexer.ts(9,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'LexerATNSimulator'.
test/ExprLexer.ts(15,10): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'Utils'.
test/ExprParser.ts(4,2): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATN'.
test/ExprParser.ts(5,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ATNDeserializer'.
test/ExprParser.ts(10,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ParserATNSimulator'.
test/ExprParser.ts(11,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ParseTreeListener'.
test/ExprParser.ts(12,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'ParseTreeVisitor'.
test/ExprParser.ts(16,5): error TS2305: Module '"/Users/pluthra/repo/antlr4-c3-master/node_modules/antlr4ts/index"' has no exported member 'TerminalNode'.
test/ExprParser.ts(73,3): error TS2346: Supplied parameters do not match any signature of call target.
test/ExprParser.ts(114,3): error TS2346: Supplied parameters do not match any signature of call target.
test/ExprParser.ts(273,3): error TS2346: Supplied parameters do not match any signature of call target.
test/ExprParser.ts(298,3): error TS2346: Supplied parameters do not match any signature of call target.

Thanks,
Priyanka

Candidate sets for Antlr grammars

Mike,

Thanks for this library.

I am interested in computing the set of possible lookaheads in an Antlr grammar itself, specifically Expr.g4, defined at https://github.com/mike-lischke/antlr4-c3/blob/master/ports/c%23/test/Antlr4CodeCompletion.CoreUnitTest/Grammar/Expr.g4.

I've ported your library to use Antlr4.Runtime.Standard (v4.7.2) and updated the code to use the Java Antlr tool generator (https://github.com/kaby76/antlr4-c3). (Note: unless there is a compelling reason why people keep using Harwell's Antlr4cs generator, https://github.com/tunnelvisionlabs/antlr4cs, I prefer to use the standard Antlr Java tool and Antlr4.Runtime.Standard. As far as I know, no other target has a separate Antlr tool generator, and Harwell's tool is several versions behind the current tool version 4.7.2.) I then added a test (https://github.com/kaby76/antlr4-c3/blob/master/ports/c%23/XUnitTestProject1/UnitTest1.cs#L121) whose input is Expr.g4 with everything from line 9 to the end of the file erased:

grammar Expr;

expression: assignment | simpleExpression;

assignment
    : (VAR | LET) ID EQUAL simpleExpression
;

The input has no syntax errors, but it is, of course, incomplete. I want to simulate typing a new rule after the last semicolon, i.e., code completion. I'm expecting at least RULE_REF to appear in the candidate.tokens list, because after ";" I can start a new rule. When I call the library to compute the candidate set after the last semicolon, core.CollectCandidates(index, null), I instead get a token list of only CATCH, FINALLY, and -2. CATCH and FINALLY make sense, because the rule for parserRuleSpec is "parserRuleSpec : DOC_COMMENT? ruleModifiers? RULE_REF argActionBlock? ruleReturns? throwsSpec? localsSpec? rulePrequel* COLON ruleBlock SEMI exceptionGroup ;", and FOLLOW(SEMI) should contain FIRST(exceptionGroup), which in this case contains CATCH and FINALLY. However, exceptionGroup can derive the empty string, so it should also contain FIRST(parserRuleSpec). If I add "simpleExpression" to the input after the last semicolon, I would again expect RULE_REF with the caret positioned at "simpleExpression", but I only get CATCH, FINALLY, -2.

My question is: why isn't RULE_REF in the lookahead? I've only started to debug your code, and it will take a while for me to fully comprehend it.
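
For reference, here is roughly what the equivalent call looks like in the TypeScript version (a sketch only; it assumes a parser for the ANTLRv4 grammar generated with antlr4ts, and ANTLRv4Lexer/ANTLRv4Parser are assumed names, not part of this repository):

    import { CodeCompletionCore } from "antlr4-c3";

    // "parser" is assumed to be an ANTLRv4Parser instance fed with the truncated
    // Expr.g4 shown above; "index" is the token index right after the last ";".
    const core = new CodeCompletionCore(parser);
    const candidates = core.collectCandidates(index);

    // Expectation: RULE_REF should be among the returned tokens, since a new
    // parser rule may start after the semicolon. Observed: only CATCH, FINALLY, -2.
    console.log(candidates.tokens.has(ANTLRv4Lexer.RULE_REF));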

--Ken

Allow specifying a timeout or max call number for processRule

Our language makes antlr4-c3 take exponentially longer as we add clauses to a relatively simple expression. It appears that the processRule method enters a chain of recursive calls that goes on for a very long time.
Would it be possible to limit processRule with a timeout or a maximum number of calls?
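
A rough sketch of how such a cap could look from the caller's side; the two commented settings are hypothetical and do not exist in the current API, only the plain collectCandidates call is real:

    import { CodeCompletionCore } from "antlr4-c3";

    // "parser" and "caretTokenIndex" are assumed to come from the usual setup.
    const core = new CodeCompletionCore(parser);
    // Hypothetical options as requested in this issue:
    // core.timeoutMs = 500;              // abort collection after 500 ms
    // core.maxProcessRuleCalls = 100000; // or after this many processRule calls
    const candidates = core.collectCandidates(caretTokenIndex);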

Help Wanted: Not getting rule candidates

Hi
I've built a very simple lexer and parser that are supposed to match simple syntax such as the one below:

$.arg1.arg2.arg3

parser grammar MyTmplParser; 
options { tokenVocab=MyTmplLexer; }

contextVariable: DOT argumentRef;
argumentRef: IDENTIFIER;

contextVariablesRef: (contextVariable)+;
templateExpression: (JSONPATH_START contextVariablesRef)*;
root: templateExpression;

lexer grammar MyTmplLexer;

WS: [\t\r\n] -> channel(HIDDEN);
SPACE: ' ' -> channel(HIDDEN);
DOT: '.';
IDENTIFIER : (ALPHA | DIGIT)+;
JSONPATH_START: '$';

fragment
    ALPHA : [a-zA-Z] ;
fragment
    DIGIT : [0-9] ;

Parsing works fine (parse tree screenshot omitted).

I have some symbols predefined.
I would like to auto-complete argumentRef.
The JSONPATH_START prefix in templateExpression seems to break the auto-completion logic.
If I define it as templateExpression: (contextVariablesRef)*; instead, it works as expected. I don't understand what I'm doing wrong. Any pointers would be helpful.

Code looks like this:

    // Assumes a parser created from MyTmplLexer/MyTmplParser for the current input.
    const core = new CodeCompletionCore(parser);
    const preferred = [ MyTmplParser.RULE_argumentRef ];

    core.ignoredTokens = new Set([]);
    core.preferredRules = new Set(preferred);
    const candidates = core.collectCandidates(position.index);

This is the output that I get for input "$.a.b" for "token position" 2:

Error Count: 0
Token position: 2 - a

States processed: 0

Collected rules:

Collected tokens:
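
For reference, a minimal way to print what was collected (a sketch, reusing parser and core from the snippet above):

    const candidates = core.collectCandidates(position.index);
    for (const [ruleIndex] of candidates.rules) {
        console.log("rule:", parser.ruleNames[ruleIndex]);
    }
    for (const [tokenType] of candidates.tokens) {
        console.log("token:", parser.vocabulary.getSymbolicName(tokenType));
    }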

Cpp14.g4 and InputMismatchException

Hi Mike, I'm just wrapping my head around what you've published here, but it seems really cool.

Oh snap, trying to fix the build for Windows, I'm getting

test/CPP14Parser.ts(10179,78): error TS2304: Cannot find name 'InputMismatchException'.

The root cause seems to be CPP14.g4, line 1097, a line mentioned in the comments. But the checked-in version of CPP14Parser.ts seems to have the missing import, and I'm not clear why the checked-in version differs from what I've generated.

Adding the following to the .g4 seems to resolve it:

@header {
	import { InputMismatchException } from 'antlr4ts/InputMismatchException';
}

Did you do something different? If the above is good, assign this bug back to me; I've got a branch that includes it.

Demo?

It would be great to be able to test code completion online.

Release version 1.1.14

In our codebase, we're using antlr4-c3 from the master branch as it fixes some issues for us. However, this complicates our build. Would it be possible to release a 1.1.14 version? Is something missing for it (e.g. some critical bug fix)? I'm available to dedicate some time to this if it can be useful.

collectFollowSets works incorrectly when ...

I have a grammar where alternatives in one rule lead to the same rule in some branches, and in this case c3 fails to find the alternatives.
As a synthetic grammar fragment example:

...
operand: 
           varReference
         | Number
         | funcReference;

varReference: name;
funcReference: funcName LPar  arguments  RPar;
funcName: name;

name:  Id;
...

I think this is a perfectly legal grammar, but c3 fails to find the funcName rule if I ask for both varReference and funcName.

The problem is in collectFollowSets: it cuts recursion if it finds the same state on the stack (which corresponds one-to-one to the rule start, IMHO). A comment there says the algorithm is taken from ANTLR4, but the ANTLR4 version takes the rule context into account, so I don't think it cuts recursion in such a case.
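
A minimal reproduction sketch of what I mean, assuming a parser generated from the fragment above (ExampleParser, parser and caretTokenIndex are placeholder names):

    const core = new CodeCompletionCore(parser);
    core.preferredRules = new Set([
        ExampleParser.RULE_varReference,
        ExampleParser.RULE_funcName,
    ]);
    const candidates = core.collectCandidates(caretTokenIndex);
    // Expected: both RULE_varReference and RULE_funcName in candidates.rules when
    // the caret is on an operand; observed: funcName is missing.
    console.log([...candidates.rules.keys()]);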

Generating lexer and parser

Hi, I have run:

npm install --only=dev

To install dev dependencies, but when I run:

npm run-script antlr4ts

I get this error:

npm ERR! Darwin 16.6.0
npm ERR! argv "/usr/local/Cellar/node/7.7.4/bin/node" "/usr/local/bin/npm" "run-script" "antlr4ts"
npm ERR! node v7.7.4
npm ERR! npm  v4.1.2

npm ERR! missing script: antlr4ts
npm ERR!
npm ERR! If you need help, you may report this error at:
npm ERR!     <https://github.com/npm/npm/issues>

npm ERR! Please include the following file with any support request:
npm ERR!     /Users/federico/repos/antlr4-c3/test/npm-debug.log

Maybe I am missing something because of my ignorance of TS.

Simple expression parser tests fail when executing out-of-order

When executing the "Typical setup" test case of the simple expression parser without running the "Most simple setup" case first, it fails with the following error:

 FAIL  tests/CodeCompletionCore.spec.ts (78.342 s)
  ● Code Completion Tests › Simple expression parser: › Typical setup

    expect(received).toEqual(expected) // deep equality

    - Expected  - 4
    + Received  + 1

    - Array [
    -   10,
    -   7,
    - ]
    + Array []

      513 |             expect(candidates.tokens.has(ExprLexer.LET)).toEqual(true);
      514 |
    > 515 |             expect(candidates.tokens.get(ExprLexer.VAR)).toEqual([ExprLexer.ID, ExprLexer.EQUAL]);
          |                                                          ^
      516 |             expect(candidates.tokens.get(ExprLexer.LET)).toEqual([ExprLexer.ID, ExprLexer.EQUAL]);
      517 |
      518 |             // 2) On the variable name ('c').

      at Object.<anonymous> (tests/CodeCompletionCore.spec.ts:515:58)

This can be reproduced by simply deleting the "Most simple setup" case and running npm run test.

As far as I can see, the problem stems from two issues:

  • Is there a mistake in the "typical setup" case? If the ExprLexer.ID and ExprLexer.EQUAL tokens are in ignoredTokens of the code completion core, is it expected that they are included in the following lists of the candidate tokens?
  • The tests are order dependent because the follow sets are cached in the followSetsByATN field, which is static and therefore shared between different instances of the class. Maybe it makes sense to reset the field at the beginning of each test case to ensure a reproducible state (see the sketch after this list)? Also, the cache uses only the parser name as the key, which does not take configuration changes, e.g. different ignoredTokens, into account.
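
A small sketch of the reset idea mentioned above (it assumes followSetsByATN remains a static member; since it is private, a cast is needed):

    import { CodeCompletionCore } from "antlr4-c3";

    beforeEach(() => {
        // Clear the shared follow-set cache so every test starts from a clean state.
        (CodeCompletionCore as any).followSetsByATN = new Map();
    });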

In any case, thanks for the great work! I'm currently working on a python port of the library for my employer, and hope to contribute it once it is done. Please let me know if I can help with fixing this bug 🙂

Preferred rule being collected after caret position

I use a grammar very similar to SQL, and I am getting the tableRef rule as a candidate in a case where it should only collect the valid tokens.

E.g.

deleteTable: DELETE TABLE tableRef WHERE EXPR;
tableRef: ID;

with tableRef as a preferred rule.

When typing "delete table" with the caret at the last position, the engine collects the candidate tableRef as expected.

After typing the identifier for this rule, "delete table items", the engine now correctly collects the next token WHERE, but it still returns the tableRef rule as a candidate.

Is this working as intended?
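
To illustrate, this is roughly how I drive the engine (a sketch; SqlLexer/SqlParser are placeholder names for a parser generated from the grammar above):

    import { CharStreams, CommonTokenStream } from "antlr4ts";
    import { CodeCompletionCore } from "antlr4-c3";

    const lexer = new SqlLexer(CharStreams.fromString("delete table items "));
    const parser = new SqlParser(new CommonTokenStream(lexer));
    parser.deleteTable();

    const core = new CodeCompletionCore(parser);
    core.preferredRules = new Set([SqlParser.RULE_tableRef]);

    // Caret after "items" (token index 3 in this sketch): WHERE is returned
    // correctly, but RULE_tableRef still shows up in candidates.rules.
    const candidates = core.collectCandidates(3);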

Thanks

Unable to use antlr4-c3 with Angular

In one of my projects our team was previously using antlr4-c3 with Angular 6, but after a version update it doesn't seem to work anymore.
It says ERROR in ./node_modules/antlr4ts/misc/InterpreterDataReader.js Module not found: Error: Can't resolve 'fs'. I don't think it is a problem with antlr4ts, as we were able to use antlr4ts without any issue in our POC project.
Please also suggest whether there is a workaround, or whether we need to use an older version of the library.
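
A commonly suggested workaround (an assumption on my part, not verified against this project) is to have webpack stub out the Node-only fs module; with the Angular CLI this needs a custom webpack configuration, e.g. through @angular-builders/custom-webpack:

    // extra-webpack.config.ts (hypothetical file name, wired in via the custom
    // builder). For webpack 4, as used by Angular 6, this replaces "fs" with an
    // empty stub so the browser bundle can still be built.
    export default {
        node: {
            fs: "empty",
        },
    };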
