parsita's People

Contributors

arseniiv, drhagen, johnthagen, lostinplace

parsita's Issues

Make `lit` do the same thing for `GeneralParsers` and `TextParsers`

lit("abc") in TextParsers matches the sequence of the three elements of the input. lit("abc") in the GeneralParsers matches one element in the input. This is a rather weird inconsistency that Parsita actually spends a lot of effort maintaining. It is far too late to change anything about TextParsers, but introducing tok that matches a single element and making lit match a sequence of elements in GeneralParsers may make the whole situation cleaner. Obviously, this can only be introduced in version 2.0 as it is a pretty breaking change for GeneralParsers.
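The proposed v2.0 split can be sketched over plain sequences. The names tok and lit come from the proposal itself; these helper functions are hypothetical illustrations, not Parsita's implementation:

```python
# Sketch of the proposed v2.0 semantics for token streams (hypothetical helpers).

def tok_match(tokens, expected):
    """tok("abc"): match exactly one element equal to `expected`."""
    return len(tokens) >= 1 and tokens[0] == expected

def lit_match(tokens, expected_seq):
    """lit("abc"): match a prefix equal to the whole sequence `expected_seq`."""
    return list(tokens[: len(expected_seq)]) == list(expected_seq)
```

With this split, lit("abc") over a token stream would require the three elements 'a', 'b', 'c', while tok("abc") would require the single token "abc", matching how lit already behaves in TextParsers.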

parsing failure with alternative `opt()`s

This is a silly example, but I would have expected the following to parse "week", "year", or "":

class SimpleParser(TextParsers):
    weeks = opt(lit("week", "wk", "w")) > constant("week")
    years = opt(lit("year", "yr", "y")) > constant("year")

    value = weeks | years

SimpleParser.value.parse("")
# => Success('week')

SimpleParser.value.parse("week")
# => Success('week')

SimpleParser.value.parse("year")
# Failure("Expected 'week' or 'wk' or 'w' or end of source but found 'year'\nLine 1, character 1\n\nyear\n^   ")

A slight mismatch in `Parser` doc

Hi again! Parser's doc states:

2. Implement the ``__str__`` method

but what is actually needed and implemented in subclasses is the __repr__ method. 🙂

Failure in opt(parser) example regex

This isn't an error in the code, but the example assertion for the opt(parser) regex actually fails. Because the regex ends in a '+' instead of a '*', id has to be at least two characters, so it doesn't actually match 'x', the example id used in the assertion.
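The failure is easy to reproduce with the standard re module. The pattern here is a hypothetical reconstruction of the example's id regex, not a quote of it:

```python
import re

# Hypothetical reconstruction of the example's id regex: the trailing '+'
# requires at least one character *after* the first, so a one-character
# identifier like 'x' cannot match.
id_plus = re.compile(r"[A-Za-z][A-Za-z0-9]+")
id_star = re.compile(r"[A-Za-z][A-Za-z0-9]*")

assert id_plus.fullmatch("x") is None       # '+' demands two or more characters
assert id_star.fullmatch("x") is not None   # '*' allows a single character
```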

Reversibility

If I use this to parse text into structured data, how hard would it be to come up with the reverse direction? Imagine a configuration file like:

parent
    child
        bad-parameter
    other-child bad-parameter
!

should become something like

{"parent": {"child": {"bad-parameter": True}, "other-child": {"bad-parameter": True}}}

which then is fixed to

{"parent": {"child": {"bad-parameter": False, "good-parameter": True}, "other-child": {"bad-parameter": False}}}

which should result in

parent
    child
        good-parameter
    other-child
!

The whitespace in this scenario is syntax-relevant but should follow easy patterns.
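The reverse direction (unparsing) is usually hand-written as a recursive serializer over the structured data rather than derived from the parser. A minimal sketch for the nested-dict shape above, assuming true-valued leaves render as bare parameter names and false-valued leaves are omitted:

```python
def unparse(tree, indent=0):
    """Serialize the nested-dict config back to indented text (sketch)."""
    lines = []
    for key, value in tree.items():
        if value is True:
            lines.append("    " * indent + key)   # bare parameter
        elif value is False:
            continue                              # omit disabled parameters
        elif isinstance(value, dict):
            lines.append("    " * indent + key)
            lines.extend(unparse(value, indent + 1))
    return lines

config = {"parent": {"child": {"good-parameter": True}, "other-child": {}}}
print("\n".join(unparse(config) + ["!"]))
```

The hard part in practice is not the serializer itself but deciding a canonical whitespace policy, since the parser accepts more layouts than the serializer will ever emit.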

zero width negative lookahead

Could we have a way to do a zero width negative lookahead?

The first thing I tried to do was define a set of reserved words, and then define identifiers as "words but not keywords".

In terms of syntax you could use unary minus, as it binds tightly. Something like:

keyword = lit("def", "class", "in", "out")
ident = -keyword & reg(r'[a-zA-Z_][a-zA-Z0-9_]*')

EDIT: I realise that I could do it using just regexes, but then word boundaries get a bit clunky, as I'd need to embed the whitespace handling in the regex, which isn't ideal.
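For reference, the zero-width negative lookahead itself is the regex (?! ... ) construct: it fails the match if its inner pattern would match at the current position, without consuming input. A plain re sketch of "word but not keyword", with the keyword set taken from the example above:

```python
import re

# (?!...) is a zero-width negative lookahead; the \b stops it from
# rejecting identifiers that merely start with a keyword, like 'input'.
ident = re.compile(r"(?!(?:def|class|in|out)\b)[a-zA-Z_][a-zA-Z0-9_]*")

assert ident.fullmatch("foo") is not None
assert ident.fullmatch("input") is not None  # 'in' alone is excluded, 'input' is fine
assert ident.fullmatch("def") is None
```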

Documentation doesn't cover how to use the results

The tutorial is great, but I can't find any other documentation, and it seems the tutorial neglects to mention how to actually get the result from parsing something. If I have an object like Success("hello"), how can I get the string "hello" out of that object in my code? The examples only ever print the success object or assert that it has the correct value.
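For context, the access patterns in Parsita 1 are the Success.value attribute, the Failure.message attribute, and Result.or_die() (all referenced elsewhere in these issues). A minimal stand-in sketch, not Parsita's actual implementation, showing how the value comes out:

```python
# Minimal stand-in for Parsita 1's Result type; attribute names `value`,
# `message`, and `or_die` match the ones discussed in these issues.
class Success:
    def __init__(self, value):
        self.value = value
    def or_die(self):
        return self.value

class Failure:
    def __init__(self, message):
        self.message = message
    def or_die(self):
        raise RuntimeError(self.message)

result = Success("hello")
if isinstance(result, Success):
    print(result.value)   # the parsed value
print(result.or_die())    # same value; raises on a Failure
```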

eof does not handle whitespace

Given this parser:

class EofParser(TextParsers):
    a = lit('a') << eof

This succeeds as expected:

>>> EofParser.a.parse(' a')
Success('a')

However, this should probably not fail:

>>> EofParser.a.parse('a ').or_die()
ParseError: Expected end of source but found ' '
Line 1, character 2

a
 ^

It fails because eof merely looks to see if the parser is at the end of the source. An extra space before the end causes a failure. Other parsers in the TextParsers context chew up any leading whitespace before trying to match, which eof does not do. I see three solutions:

  • Make eof context sensitive. This would probably mean turning eof into a function eof() because some code would have to run at definition time in order to grab the context. Then eof would know what whitespace was and chew it up before testing for the end of the input.
  • Make TextParsers chew whitespace from both ends. I have not done this because I have suspected that this would cause performance problems by adding an extra regex comparison to every step. But it would almost always stop on the first character, so I should get some actual performance numbers. This would also make combining parsers with different whitespace constraints (like the JSON parser) smoother because it would eliminate the need for manually chewing the whitespace.
  • Swap which side whitespace gets chewed from. I think that only eof is affected by this whitespace issue, so ensuring that whitespace is always consumed from the end rather than the beginning should fix it without introducing new problems. Of course, the problem would reemerge if a "start of input" parser was added.
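The first option can be sketched with the standard re module. The function name and signature here are hypothetical, not Parsita internals:

```python
import re

def at_eof_ignoring_whitespace(source: str, position: int,
                               whitespace: str = r"\s*") -> bool:
    """Chew the context's whitespace pattern, then test for end of input."""
    end = re.compile(whitespace).match(source, position).end()
    return end == len(source)

assert at_eof_ignoring_whitespace("a ", 1)       # trailing space is chewed
assert not at_eof_ignoring_whitespace("a b", 1)  # 'b' remains after the space
```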

`test_examples.py` seem to fail due to import error

I'm probably doing something wrong, but when I run pytest . in the main directory, all tests pass except the ones in test_examples.py, which fail due to

ModuleNotFoundError: No module named 'examples'

(the same error in all three cases). I'm a bit confused; maybe I should run the tests in some other way.

Improve error messages for fallible conversion

Currently, error messages always display the farthest point that parsing reached, with the next token as the actual value that stopped parsing. This worked fine until the fallible conversion parser, which can succeed in parsing input and then fail to accept the parsed value. A message claiming that parsing failed on the token after the successful parse is not particularly accurate; the real culprit is the parsed token itself, as can be seen in this example:

from dataclasses import dataclass

from parsita import *

@dataclass
class Percent:
    number: int

def to_percent(number: int):
    if not 0 <= number <= 100:
        return failure("a number between 0 and 100")
    else:
        return success(Percent(number))

class TestParsers(TextParsers):
    percent = (reg(r"[0-9]+") > int) >= to_percent

TestParsers.percent.parse("150").or_die()

This displays the error Expected a number between 0 and 100 but found end of source, when Expected a number between 0 and 100 but found 150 would be more sensible.

To fix this would require that the actual value and farthest point be configurable separately by the parser.

Try later alternatives when earlier success can't complete parsing

Under no circumstances will Parsita attempt later alternatives if an early alternative succeeds. This behavior, which is typical in parser combinators, can be unexpected.

from parsita import *

class FunctionParsers(TextParsers):
    id = reg('[a-z]+')
    function = id & '(' >> repsep(id, ',') << ')'
    expression_good = function | id
    expression_bad = id | function

FunctionParsers.expression_good.parse('a(b)').or_die()
# ['a', ['b']]

FunctionParsers.expression_bad.parse('a(b)').or_die()
# parsita.state.ParseError: Expected end of source but found '('
# Line 1, character 2
# a(b)
#  ^  

That is because consume returns a single result: in the case of the AlternativeParser, the first result found among the alternatives. The typical workaround is to always put the longest alternative first if multiple may match. This is annoying, counterintuitive, and for some complex grammars, hard to achieve. It would be better if Parsita kept trying alternatives when an earlier success could not complete the parse.

The consume method would have to be rewritten into a generator so that each alternative is tried before failing. This is probably doable, but it would be a pretty big change to the internals of Parsita.

Implementing this could lead to very bad performance in cases where the text could not be parsed. Parsita would have to explore every avenue before giving up, which may take a while. It is possible that this feature would require a packrat parser to be usable.

Replace Result with version from Returns library

Parsita currently has its own implementation of the classic Result type. The Result type is essential for functional programming, but Python comes with no implementation of its own, and to my memory, there was no community standard at the time of Parsita's v1 release. Since then, the Returns library has become quite popular; it contains a full-featured implementation of Result that is similar to Parsita's. It is unlikely that such a class will be added to Python in the near future, and the Returns implementation is quite mature. If Parsita just used Returns, Result objects would gain a lot of functionality, and Parsita would integrate into the Returns ecosystem.

Returns also encourages using Failure[Exception]. The new Failure should wrap ParseError instead of a str. The return type of Parser.parse will be returns.Result[T, ParseError]. It is unclear if Parsita should add a parsita.Result[T] type (or maybe ParseResult or ResultE) as an alias for this return type. Even today, Parsita only uses this type in four places.

This change introduces some pretty serious compatibility concerns. So serious that such a change would certainly need to be made as part of a v2 release.

  • Result, Success, and Failure are no longer exported by Parsita, but would have the same names
  • The Result.or_die method would be renamed to Result.unwrap
  • The Success.value attribute would be renamed to Success._inner_value
  • The Failure.message attribute would be renamed to str(Failure._inner_value)

I propose that Parsita 2.0 be the first version to use returns.Result. However, it will actually use subclasses of the Returns Result that keep deprecated versions of or_die and value; importing these subclasses will itself be deprecated. In Parsita 2.1, these deprecated items will all be removed.
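The deprecation shim could look roughly like this. This is a sketch using a minimal stand-in base class rather than the real returns.Result, since the wrapping behavior is the point here:

```python
import warnings

class BaseSuccess:
    """Minimal stand-in for returns.result.Success (not the real class)."""
    def __init__(self, inner_value):
        self._inner_value = inner_value
    def unwrap(self):
        return self._inner_value

class Success(BaseSuccess):
    """Hypothetical Parsita 2.0 subclass keeping deprecated aliases until 2.1."""
    def or_die(self):
        warnings.warn("or_die is deprecated; use unwrap", DeprecationWarning)
        return self.unwrap()

    @property
    def value(self):
        warnings.warn("value is deprecated; use unwrap", DeprecationWarning)
        return self._inner_value
```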

Retain parser state in ParseError

Parser.parse destroys all the interesting state in Backtrack (i.e. farthest and expected) when it packages it all up into the message for Failure. This state should be retained unadulterated all the way to ParseError and only turned into a message by ParseError.__str__.

For backwards compatibility, a deprecated property for message would have to be retained on Failure and ParseError until v2.

Combine GeneralParsers and TextParsers

After #79, the only difference between GeneralParsers and TextParsers will be that in GeneralParsers the default whitespace is None and in TextParsers the default whitespace is r'\s*'. This is a pretty small difference that may not justify keeping both around. If combined, the default would have to be None. I would probably call the combined context class ParserContext to further emphasize that this class should never be instantiated.

This will require backporting the whitespace argument to GeneralParsers and adding a stub in version 1.9 where ParserContext = GeneralParsers for forwards compatibility.

Mark private methods as private

I neglected to make a bunch of methods private that should not be accessed externally. I did not consider any of the items below part of the public interface, but maybe some of them should be stabilized.

  • StringReader.current_line should be renamed to _current_line
    • It could be __current_line, but StringReader should be considered final anyway
  • Parser.consume should be renamed to _consume
    • Parser.cached_consume should be renamed to consume now that it is available
  • Parser.name_or_repr should be renamed to _name_or_repr
  • Parser.name_or_nothing should be renamed to _name_or_nothing
  • Parser.handle_other should be renamed to _handle_other
    • It could be __handle_other, but that would be the only use of this questionable Python feature
  • Parser.name and Parser.protected are funky in that they should only be accessed by Parser or the context. They are more like __name__ and __protected__, but it is unclear whether claiming dunder names is appropriate here.

Predicate parser drops failures of inner parser when predicate is false

In Parsita 1, the pred parser discards all errors from a successful run of the inner parser if the predicate turns out to be false. (The predicate failure is the only failure.) This is not how Parsita promises to work. It promises to always report the furthest error, which is usually what the user wants. This matters when one parser parses a prefix of another parser (basically all parsers of mathematical expressions). The short parser succeeds, the long parser fails. This is normally fine because, when the overall parser fails to consume the full input, the long parser's message is displayed because it made more progress before the ultimate failure.

Below is a parser (clearly a subset of a full expression parser) that demonstrates this unexpected behavior.

from parsita import *


def is_static(expression):
    if isinstance(expression, str):
        return False
    if isinstance(expression, int):
        return True
    elif isinstance(expression, list):
        return all(is_static(e) for e in expression)


class PredParsers(ParserContext):
    name = reg(r"[a-zA-Z]+")
    integer = reg(r"[0-9]+") > int
    function = name >> "(" >> repsep(expression, ",") << ")"
    expression = function | name | integer
    static_expression = pred(expression, is_static, "static expression")

print(PredParsers.static_expression.parse("1"))
# Success(1)
print(PredParsers.static_expression.parse("sin(0)"))
# Success([0])
print(PredParsers.static_expression.parse("a"))
# Expected static expression but found 'a'
# Line 1, character 1
# a
# ^
print(PredParsers.static_expression.parse("f(a,1)"))
# Expected static expression but found 'f'
# Line 1, character 1
# f(a,1)
# ^
print(PredParsers.static_expression.parse("f(2,1)"))
# Success([2, 1])
print(PredParsers.static_expression.parse("f(2,1"))
# Expected static expression but found 'f'
# Line 1, character 1
# f(2,1
# ^

This last error message shows the problem. The farther error is clearly the issue, but the early success takes precedence. This is easy to fix in Parsita 1; it's just a missing .merge(status):

class PredicateParser(Generic[Input, Output], Parser[Input, Input]):
    def __init__(self, parser: Parser[Input, Output], predicate: Callable[[Output], bool], description: str):
        super().__init__()
        self.parser = parser
        self.predicate = predicate
        self.description = description

    def consume(self, reader: Reader[Input]):
        remainder = reader
        status = self.parser.consume(remainder)
        if isinstance(status, Continue):
            if self.predicate(status.value):
                return status
            else:
                return Backtrack(remainder, lambda: self.description)#.merge(status)
        else:
            return status

Sure enough, this makes pred report the farthest error.

print(PredParsers.static_expression.parse("f(2,1"))
# Expected ',' or ')' but found end of source

However, this now breaks the pred parser's own error reporting. Because the pred parser fails at the start of the input, the inner parser always makes more progress than pred, so any way that the parser could be extended becomes the farther error. The pred parser error message is actually hard to trigger and, in fact, cannot be triggered by static_expression in the example at all.

print(PredParsers.static_expression.parse("a"))
# Expected '(' but found end of source

This was discovered because Parsita 2 fixes the original bug (because merges no longer need to be manually applied) and I was relying on pred reporting an error at least sometimes. The expected behavior of pred could be this:

  • If the inner parser fails, it backtracks like normal.
  • If the predicate passes, it continues like normal.
  • If the predicate fails, the failure is reported at the end of the consumed input, not the beginning. Note that this completely messes up the concept of "actual" for the predicate parser unless we add the ability to customize the actual message on a per-parser basis.

There is a question here on whether or not failures that tie the predicate parser should be superseded by the predicate parser. For example,

print(PredParsers.static_expression.parse("a"))
# Expected '(' or static expression but found end of source  # allowing ties
# Expected static expression but found end of source  # superseding ties

For this particular parser, superseding ties is clearly better, but it is difficult to convince myself that superseding ties is always the correct behavior for pred. This would be the only parser that supersedes other failures. While unusual, it kind of makes sense given that pred applies a post-processing step and therefore acts as a kind of checkpoint on the parser.

In the same vein, there is also a question of whether or not the predicate parser should clear any errors generated by the inner parser at the end of the input it consumed.

Ultimately, the pred parser may not be completely sane because it is not really a parser, but a post-processor of parsed input. I have only used the pred parser where I should probably be using a type checker on top of the AST, but that was too much work.

Implement a packrat parser

Parser combinators go into infinite recursion if any left-recursive rules are encountered. Parsita converts such a situation into a good error message, but it still errors. Errors on left recursion are not user-friendly because left-recursive rules are prevalent in mathematical expressions and are the only context-free grammar rules disallowed by parser combinators that a typical user will attempt to define.

The solution to allowing left recursion in parser combinators is to implement the engine as a packrat parser. This is a major undertaking, but would eliminate the left-recursion problem. Unlike other user-friendly initiatives in Parsita, this one would probably have a positive impact on performance.

Add fallible conversion parser

Right now, the conversion parser (>) transforms parsed text into Python objects. The main limitation of this is that it cannot handle failure. This came up in #23 in the rejection of reserved keywords.

class Percent:
  def __init__(self, number: int):
    if not 0 <= number <= 100:
      raise ValueError(f'Not a percentage between 0 and 100: {number}')
    self.number = number

percent = (reg(r'[0-9]+') > int) > Percent

This example raises an exception rather than returning a Failure, which is bad. Now this particular example is not great because it can be fixed with pred pretty easily.

class Percent:
  def __init__(self, number: int):
    self.number = number

percent = pred(reg(r'[0-9]+') > int, lambda x: 0 <= x <= 100, 'number between 0 and 100') > Percent

However, if the logic is inherent to the construction of the object, this gets awkward because the logic for validating the object and the logic for constructing the object must be separated or even duplicated. What we need is a conversion parser that is fallible: instead of having to return a value to be wrapped in Success, it returns a Result that can be either Success or Failure.

Scala's parser combinators have the into[U](f: T => Parser[U]) method, also known as the flatMap method or >> operator. The function takes the successful value of the previous parser and then returns a Parser of some other result value. This is actually quite a bit more powerful than what I described in the previous paragraph, but by returning the success or failure parsers, we get the desired outcome. This example uses the >= operator to represent fallible conversion.

class Percent:
  def __init__(self, number: int):
    self.number = number

def to_percent(number: int):
  if not 0 <= number <= 100:
    return failure('number between 0 and 100')
  else:
    return success(Percent(number))
  
percent = reg(r'[0-9]+') >= to_percent

I may want to make this simply less powerful in the interest of usability but, boy, is the less powerful version a lot like the more powerful version:

class Percent:
  def __init__(self, number: int):
    self.number = number

def to_percent(number: int):
  if not 0 <= number <= 100:
    return Failure('number between 0 and 100')
  else:
    return Success(Percent(number))
  
percent = reg(r'[0-9]+') >= to_percent

Support Mypy Type Checking

When adding parsita to a project that is type checked with mypy, we had to add the following configuration to our mypy configuration:

[tool.mypy]
strict = true
disallow_subclassing_any = false

Note that disallow_subclassing_any is not enabled by default, but we use strict = true above to provide better type checking in our Python projects.

Otherwise we get a lot of errors of the form:

 error: Class cannot subclass "TextParsers" (has type "Any")  [misc]

I'm not sure if this is something that can be fixed upstream in parsita, but at a minimum having a type checking page in the documentation with known issues like this could help future users.
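For reference, the usual upstream fix is for the library to ship inline type hints with a PEP 561 py.typed marker file, so that mypy stops treating the package's symbols as Any. A sketch of the packaging change, assuming a setuptools build (the exact layout is an assumption about Parsita's packaging, not a quote of it):

```toml
# pyproject.toml: ship an empty parsita/py.typed marker file (PEP 561)
# so type checkers consume the package's inline annotations.
[tool.setuptools.package-data]
parsita = ["py.typed"]
```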

Environment

  • Python 3.9
  • parsita 1.7.1
  • mypy 0.982

Repeated parsers should detect infinite looping

It is fairly easy to write a parser that loops infinitely by accident. For example, say someone wishes to parse a list of numbers that can be in any of these three formats: no decimal (123), leading decimal (.456), and middle decimal (6.28). Someone might write the following parser to accomplish this.

from parsita import *

class NumbersParsers(TextParsers):
    whole_part = reg(r'\d+')
    decimal_part = reg(r'\.\d+')
    number = opt(whole_part) & opt(decimal_part)
    numbers = rep(number)

Ignoring the conceptual problems with this parser, the main problem is that this parser always hangs.

print(NumbersParsers.numbers.parse('1 .1 2.2'))  # spins forever

The reason is that opt() & opt() can match an empty string. And because matching an empty string consumes no input, rep will never terminate. This can happen in rep1 also, as well as repsep and rep1sep if the separator can be empty.

If no input was consumed in an iteration, then all future iterations will also consume no input. It should be possible to detect this infinite recursion because the reader before an iteration will be equal to a reader after an iteration. Parsita should raise an exception telling the user that this happened rather than quietly spinning forever.

It should be noted that this could also happen in recursive user-defined parsers. It would be harder to detect this case.
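The detection itself is straightforward to sketch: compare the reader's position before and after each iteration and raise instead of looping. Names and signatures here are hypothetical, not Parsita's internals:

```python
def rep_with_loop_check(parse_once, source, position=0):
    """Repeat `parse_once` until it fails, raising if an iteration
    succeeds without consuming any input.

    `parse_once(source, position)` returns (value, new_position) or None.
    """
    values = []
    while True:
        result = parse_once(source, position)
        if result is None:
            return values, position
        value, new_position = result
        if new_position == position:
            raise RuntimeError(
                "Infinite loop detected: parser succeeded without consuming input"
            )
        values.append(value)
        position = new_position

# A parser that always matches the empty string triggers the check:
empty = lambda source, position: ("", position)
```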

AlternativeParser failure should list all possibilities

When all alternatives of an AlternativeParser fail, Parsita reports only the first parser as being expected at the failing position. At least Parsita reports the first one rather than the last one like many parsing libraries. It would be even nicer if the failure message reported all the possibilities that failed at the same position. This could be done by having Status.message be a list rather than a single message.
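Once Status.message is a list, assembling the combined message is simple. A hedged sketch of the formatting step (not Parsita's code; the function name is made up):

```python
def format_expected(expected, actual):
    """Join all alternatives that failed at the same farthest position."""
    return f"Expected {' or '.join(expected)} but found {actual}"

print(format_expected(["'week'", "'wk'", "'w'"], "'year'"))
# -> Expected 'week' or 'wk' or 'w' but found 'year'
```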

Annotate Result.or_die()

Result.or_die() needs to be annotated with Output in order to properly support autocomplete in IDEs.

Alternate Parser example does not work

Copying from the examples:

from parsita import *
class NumberParsers(TextParsers):
    integer = reg(r'[-+]?[0-9]+') > int
    real = reg(r'[+-]?\d+\.\d+(e[+-]?\d+)?') | 'nan' | 'inf' > float
    number = integer | real
    
assert NumberParsers.number.parse('4.0000') == Success(4.0)

Fails with error:

AssertionError                            Traceback (most recent call last)
<ipython-input-3-e8cd6909f576> in <module>
      6     number = integer | real
      7 
----> 8 assert NumberParsers.number.parse('4.0000') == Success(4.0)

Switching to number = real | integer works as advertised.

I'm running python 3.7.4 and parsita 1.3.2 on Mac OS 10.13.6.

Contribution question

Hi there!

Unbeknownst to you, I've been using your library for what feels like a long time now. (Congratulations, your code is part of the enormous system that drives Alexa!)

I was talking to a colleague about your library, and part of the discussion includes, "I've got a bunch of parsers that are important, but aren't part of the library." The colleague asked, "why haven't you submitted them back?" and my answer was pretty unsatisfying (to me at least): "I don't know if he's looking for contributions like this."

So I'm explicitly asking the question: I've got some additional parsers that are pretty useful when working with the library, would you like me to submit a full package? (including documentation)

Here's a brief rundown of what I'm talking about:

def excluding(parser: Parser[Input, Output]) -> ExcludingParser:
    """Match anything unless it matches the provided parser

    This matches all text until text that is matched by the provided parser is encountered.

    Args:
        :param parser: a parser that parses terms that you don't want to capture
    """
    if isinstance(parser, str):
        parser = lit(parser)
    return ExcludingParser(parser)


def at_least(n: int, parser: Parser[Input, Output]) -> RepeatedAtLeastNParser:
    """Match a parser at least n times

    This matches ``parser`` multiple times in a row. If it matches at least
    n times, it returns a list of values that represents each time ``parser`` matched. If it
    matches ``parser`` fewer than n times it fails.

    Args:
        :param parser: Parser or literal
        :param n: count of minimum matches
    """
    if isinstance(parser, str):
        parser = lit(parser)
    return RepeatedAtLeastNParser(n, parser)


def check(parser: Parser[Input, Output]) -> ExcludingParser:
    """Evaluates to see if you're on the right track without consuming input

    This will match text against the provided parser, and continue if that parser can move forward, else it will backtrack

    Args:
        :param parser: a parser that parses terms that you want to make sure are present
    """
    if isinstance(parser, str):
        parser = lit(parser)
    return CheckParser(parser)

def debug(
        parser: Parser[Input, Output], verbose: bool = False,
        callback: Optional[Callable[[Parser[Input, Input], Reader[Input]], None]] = None
) -> DebugParser:
    """Lets you set breakpoints and print parser progress

    You can use the verbose flag to print messages as the parser is being evaluated

    You can use the callback method to insert a callback that will execute before the parser is evaluated, the call will include the reader

    Args:
        :param parser: a parser that parses terms that you want to make sure are present
        :param verbose: write progress messages to stdout
        :param callback: calls this function before evaluating the provided parser
    """
    if isinstance(parser, str):
        parser = lit(parser)
    return DebugParser(parser, verbose, callback)


# note, I think you've already produced a "longest" parser that replicates this, in the latest version
def best(*parsers: Parser[Input, Output]) -> BestAlternativeParser:
    """Will return the furthest progress from any list of parsers

    This will try each parser provided, and return the result of the one that made it the farthest

    Args:
        :param parsers: a list of parsers or a single AlternativeParser

    """
    processed_parsers = []
    for parser in parsers:
        if isinstance(parser, str):
            processed_parsers.append(lit(parser))
        else:
            processed_parsers.append(parser)
    first_parser = processed_parsers[0]
    if len(processed_parsers) == 1 and isinstance(first_parser, AlternativeParser):
        processed_parsers = first_parser.parsers
    return BestAlternativeParser(*processed_parsers)


def track(parser: Parser[Input, Output]) -> ExcludingParser:
    """Tracks the current depth of the start of the successful parser

    This will match text against the provided parser, and accept if that parser can move forward, else it will backtrack.
    The result it returns will include an integer that specifies the depth at which the match began.

    Args:
        :param parser: a parser that parses terms that you want to make sure are present
    """
    if isinstance(parser, str):
        parser = lit(parser)
    return TrackParser(parser)

# Note: This might be better implemented as a flag on repsep
def repwksep(
    parser: Union[Parser, Sequence[Input]], separator: Union[Parser, Sequence[Input]]
) -> RepeatWithKeptSeparatorsParser:
    """Match a parser zero or more times separated by another parser, and keeps the separators.

    This matches repeated sequences of ``parser`` separated by ``separator``. A
    list is returned containing tuples of matched values and separators. If there are no matches, an empty
    list is returned.

    Args:
        parser: Parser or literal
        separator: Parser or literal
    """
    if isinstance(parser, str):
        parser = lit(parser)
    if isinstance(separator, str):
        separator = lit(separator)
    return RepeatWithKeptSeparatorsParser(parser, separator)


def tag(
    parser: Parser, tag_contents: str
) -> TagParser:
    """
    Prepends the output of the provided parser with static content
    
    Useful for identifying which parser was successful.
    
    :param parser: 
    :param tag_contents: the result that should be prepended to the output stream 
    :return: 
    """

def msg(
    parser: Parser, message_contents: str
) -> MessageParser:
    """
    Replace the name of the provided parser

    Useful for controlling the output messages of failed parsers

    :param parser:
    :param message_contents:
    :return:
    """
