
questionary's Introduction

Questionary


✨ Questionary is a Python library for effortlessly building pretty command line interfaces ✨

Example

import questionary

questionary.text("What's your first name").ask()
questionary.password("What's your secret?").ask()
questionary.confirm("Are you amazed?").ask()

questionary.select(
    "What do you want to do?",
    choices=["Order a pizza", "Make a reservation", "Ask for opening hours"],
).ask()

questionary.rawselect(
    "What do you want to do?",
    choices=["Order a pizza", "Make a reservation", "Ask for opening hours"],
).ask()

questionary.checkbox(
    "Select toppings", choices=["foo", "bar", "bazz"]
).ask()

questionary.path("Path to the projects version file").ask()


Features

Questionary supports the following input prompts: text, password, confirm, select, rawselect, checkbox, and path.

There is also a helper to print formatted text for when you want to spice up your printed messages a bit.

Installation

Use the package manager pip to install Questionary:

pip install questionary

✨🎂✨

Usage

import questionary

questionary.select(
    "What do you want to do?",
    choices=[
        'Order a pizza',
        'Make a reservation',
        'Ask for opening hours'
    ]).ask()  # returns value of selection

That's all it takes to create a prompt! Have a look at the documentation for some more examples.

Documentation

Documentation for Questionary is available here.

Support

Please open an issue with enough information for us to reproduce your problem. A minimal, reproducible example would be very helpful.

Contributing

Contributions are very much welcomed and appreciated. Head over to the documentation on how to contribute.

Authors and Acknowledgment

Questionary is written and maintained by Tom Bocklisch and Kian Cross.

It is based on the great work by Oyetoke Toby and Mark Fink.

License

Licensed under the MIT License. Copyright 2021 Tom Bocklisch.


questionary's Issues

Hidden and Blink prompt style flags have no effect

Hi,

I use questionary in several of my projects (thanks for that, it's a great prompt library!) and have found an issue when styling the prompts.

It seems that passing the hidden or blink flags to prompt_toolkit.styles.Style has no effect. (See the docs for the flags I'm referring to here.)

See this example code:

python: 3.7.3
questionary: 1.1.1
prompt_toolkit: 2.0.9

import questionary as prompt
from prompt_toolkit.styles import Style

custom_style_fancy = Style([
    ('qmark', 'fg:#673ab7 reverse'),     # 'reverse' appears to work
    ('question', 'hidden'),              # 'hidden' not working
    ('answer', 'fg:#f44336 bold'),
    ('pointer', 'fg:#673ab7 hidden'),    # not working combined with a color
    ('selected', 'fg:#cc5454 blink'),    # 'blink' not working
    ('separator', 'fg:#cc5454'),
    ('instruction', 'hidden')            # not working
])

check_box = prompt.checkbox(
    "Title", ["Choice 1", "Choice 2", "Choice 3"], style=custom_style_fancy)
c_result = check_box.ask()

text = prompt.text("A Question", style=custom_style_fancy)
t_result = text.ask()

print("Results:", c_result, t_result)


I'm happy to provide any other info/help you need.

How to use rawselect without CPR?

I love questionary! It is certainly the right way forward in the command line dialogs space.

I tried out the rawselect.py example and have problems on simple terminals (TERM=dumb or TERM=unknown) which do not support CPRs (cursor position requests). In this case questionary prints out

WARNING: your terminal doesn't support cursor position requests (CPR).

The warning comes from prompt_toolkit, I suspect, but is there a way to tell questionary to simplify the prompting so that no CPRs are required? After all, the user will just type a number to select an option!

Pre-selected value in list

Hi. I'm sorry if this has already been answered before... And thank you for questionary, btw.

Is it possible in a select to have a preselected option? I have found the default setting, but it doesn't behave the way I would expect.

Here review is the default, but the pointer stays on test, so if the user hits Enter they will select test and not review.

In my scenario I will cache the last environment the user worked on and let him hit enter next time, instead of having to select from the list each time.
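The behaviour being asked for can be sketched outside the library: given the list of choices and a cached default, compute the index the pointer should start on. This is a hypothetical helper, not questionary API:

```python
def initial_pointer_index(choices, default):
    """Index the pointer should start on: the cached default if present,
    otherwise the first choice (hypothetical helper, not questionary API)."""
    try:
        return choices.index(default)
    except ValueError:
        return 0
```

With choices ["dev", "test", "review"] and a cached default of "review", the pointer would start at index 2, so hitting Enter selects review.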

A long question only displays as many lines as the term height

Just write a long question in a short terminal:

prompt({'type': 'text',  'name':'q1', 'message': '''
this is a long question
line 2
3
4
5
65
6
7
8
9
9
we
asdf
a
sdf
12132
12
1231
231
231
23
123
123
12
31
23
123
'''})

You have to resize the terminal to see the full question!


User Cancellation

I see that KeyboardInterrupt is caught and None is returned by all of the prompts. Is it safe to assume that if None is returned by prompt, that the user canceled, or are there cases where None is returned as valid input?
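questionary also exposes unsafe_ask(), which propagates KeyboardInterrupt instead of swallowing it, so a cancellation can be told apart from a legitimate None answer. A sketch of a wrapper built on that (the wrapper itself is my own, not part of the library):

```python
def ask_or_exit(question):
    # `question` is a questionary Question object; unsafe_ask() raises
    # KeyboardInterrupt on Ctrl-C instead of returning None, so a None
    # return can never be confused with a cancellation.
    try:
        return question.unsafe_ask()
    except KeyboardInterrupt:
        raise SystemExit("Cancelled by user")
```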

Hiding cursor on select

Is it possible to hide the cursor for a select? If not, could it be possible?


Having the cursor there creates a visual distraction and I think also that it indicates that it is possible to type in text where the cursor is located.

Adopting a type checker

I wonder if there would be any interest in adopting a type checker. Questionary appears to be extensively annotated so this shouldn't prove too onerous. The four established type checkers are mypy, Pyright, pytype and pyre - would you have a preference for one of those?

Customize class:instruction on prompt creation

Just the other day it occurred to me, "why not navigate with j and k in questionary.select?", and it actually worked; then I found #2.
The default hint text doesn't mention this, so I looked into the library and found that get_prompt_tokens() literally returns "(Use arrow keys)".
Could this text be made customizable, please?

RuntimeWarning: 'Application.run_async' was never awaited

Hello Questionary,
I have been using questionary for a while now, but when I started linking an additional library, a runtime warning/error gets triggered. (Although it is a warning, it leads to immediate failure.)
The traceback mentions the unsafe_ask function. Can one switch between the multiple ask functions available in question.py from the Python API?

A short test script:

#!/usr/bin/python3
import subprocess
from questionary import prompt, Separator
from datalad import api as datalad

question = [{'type': 'list', 'name': 'choice', 'message': 'Menu', 'choices': ['Red pill','Blue pill']}]
print("You have to take four pills")
for i in range(4):
  answer = prompt(question)
  name = answer['choice'].replace(' ', '_') + str(i)
  print("Create path: "+name)
  ds = datalad.create(name, cfg_proc='text2git')

The trace is

Traceback (most recent call last):
  File "short.py", line 9, in <module>
    answer = prompt(question)
  File "/usr/local/lib/python3.8/dist-packages/questionary/prompt.py", line 97, in prompt
    answer = question.unsafe_ask(patch_stdout)
  File "/usr/local/lib/python3.8/dist-packages/questionary/question.py", line 59, in unsafe_ask
    return self.application.run()
  File "/usr/local/lib/python3.8/dist-packages/prompt_toolkit/application/application.py", line 816, in run
    return loop.run_until_complete(
  File "/usr/lib/python3.8/asyncio/base_events.py", line 591, in run_until_complete
    self._check_closed()
  File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
sys:1: RuntimeWarning: coroutine 'Application.run_async' was never awaited

unable to run examples / causing exceptions

I seem to be unable to run any of the examples.

I don't think it's related to my environment: I'm using a brand-new Python 3.8.0 virtual environment, created for this sole purpose.

Installation steps:

git clone https://github.com/tmbo/questionary
cd questionary
# Below is identical to pip install -e .
python setup.py develop 

python examples/confirm.py

environment:

$ pip freeze
prompt-toolkit==3.0.3
-e git@github.com:tmbo/questionary.git@e66adfcc1d2b56e8d8d559a4e1bf9dcff23eb3b7#egg=questionary
wcwidth==0.1.8

Error received:

# Same happens with other examples, too
$ python examples/confirm.py

Traceback (most recent call last):
  File "examples/confirm.py", line 7, in <module>
    import questionary
  File "/home/matt/github/questionary/questionary/__init__.py", line 2, in <module>
    from prompt_toolkit.validation import Validator, ValidationError
  File "/home/matt/.pyenv/versions/questionary/lib/python3.8/site-packages/prompt_toolkit-3.0.3-py3.8.egg/prompt_toolkit/__init__.py", line 16, in <module>
    from .application import Application
  File "/home/matt/.pyenv/versions/questionary/lib/python3.8/site-packages/prompt_toolkit-3.0.3-py3.8.egg/prompt_toolkit/application/__init__.py", line 1, in <module>
    from .application import Application
  File "/home/matt/.pyenv/versions/questionary/lib/python3.8/site-packages/prompt_toolkit-3.0.3-py3.8.egg/prompt_toolkit/application/application.py", line 1, in <module>
    import asyncio
  File "/home/matt/.pyenv/versions/3.8.0/lib/python3.8/asyncio/__init__.py", line 8, in <module>
    from .base_events import *
  File "/home/matt/.pyenv/versions/3.8.0/lib/python3.8/asyncio/base_events.py", line 23, in <module>
    import socket
  File "/home/matt/.pyenv/versions/3.8.0/lib/python3.8/socket.py", line 52, in <module>
    import os, sys, io, selectors
  File "/home/matt/.pyenv/versions/3.8.0/lib/python3.8/selectors.py", line 12, in <module>
    import select
  File "/home/matt/github/questionary/examples/select.py", line 9, in <module>
    from questionary import Separator, Choice, prompt
  File "/home/matt/github/questionary/questionary/prompt.py", line 3, in <module>
    from prompt_toolkit.output import ColorDepth
  File "/home/matt/.pyenv/versions/questionary/lib/python3.8/site-packages/prompt_toolkit-3.0.3-py3.8.egg/prompt_toolkit/output/__init__.py", line 3, in <module>
    from .defaults import create_output
  File "/home/matt/.pyenv/versions/questionary/lib/python3.8/site-packages/prompt_toolkit-3.0.3-py3.8.egg/prompt_toolkit/output/defaults.py", line 4, in <module>
    from prompt_toolkit.patch_stdout import StdoutProxy
  File "/home/matt/.pyenv/versions/questionary/lib/python3.8/site-packages/prompt_toolkit-3.0.3-py3.8.egg/prompt_toolkit/patch_stdout.py", line 22, in <module>
    from asyncio import get_event_loop
ImportError: cannot import name 'get_event_loop' from partially initialized module 'asyncio' (most likely due to a circular import) (/home/matt/.pyenv/versions/3.8.0/lib/python3.8/asyncio/__init__.py)

I'm not certain what is causing this: running a Python shell and importing questionary there does work, and it also works from within the project where I've integrated questionary (however, I import only prompt in that case).

Strangely, it works when running the scripts as a module:

python -m examples.confirm

Feature Request: Custom Keybinding

Would it be possible to use j and k for list navigation instead? The JavaScript Inquirer seems to support this.

My dev time is quite limited but would otherwise love to contribute.

Enhancement - let choices be a dictionary of {key:value}, shows value but answer is key

It would be cool if this could work like a <select> box, with an id and a label. Right now it just returns the selected text; using the trick from #53 you can get the index you need, but that means setting up two lists, one with the labels and another with the ids, and then going through the motions of getting the index of the choice and finding the id in the id list using that index.

While this IS possible, I think a lot of use cases would be improved if it had an id and a label. Maybe pass it a choices dictionary:

"What toppings do you want?"
{
    "cheese": "Super creamy nice cheese",
    "meat": "our mystery meat package",
    "vegetables": "only the freshest veggies"
}

shows as:

What toppings do you want?
> Super creamy nice cheese
   our mystery meat package
   only the freshest veggies

and answer is

"cheese"

this could be a rather cool enhancement


Potentially, adding a flag to use the key as the shortcut key as well would be amazing.


Basically, being able to specify our own "shortcut" values instead of 0-9, then a-z, etc.
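The id/label split can already be approximated with questionary's Choice(title, value=...) objects, which display the title but return the value from .ask(). The helper below is a sketch; the tuple-producing default factory just keeps it self-contained, and the questionary-specific factory is shown in a comment:

```python
def choices_from_dict(mapping, make_choice=lambda label, key: (label, key)):
    # Build one choice per dict entry: the label is shown, the key is the
    # answer value. With questionary installed you would pass
    #   make_choice=lambda label, key: questionary.Choice(label, value=key)
    # so that select(...).ask() returns the key, e.g. "cheese".
    return [make_choice(label, key) for key, label in mapping.items()]
```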

Synchronous questionary calls in an async function

Hello,
Is there a way to run synchronous questionary calls in an async function?
I need to make something like this work:

import asyncio
import questionary

async def main() -> None:
    questionary.select(
        "What do you want to do?",
        choices=[
            "Order a pizza",
            "Make a reservation",
            "Ask for opening hours",
        ],
    ).ask()

asyncio.run(main())

In my application I'm not calling questionary directly (the test above is an oversimplified example): I need to support different toolkits, so I created a wrapper around questionary.
I know questionary has an async API, but I'd need to expose it in the wrapper and add await to every call, without any real need for async, just to make it work.

Thank you, cheers
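A common workaround (my suggestion, not an official questionary recipe) is to run the blocking .ask() call in a worker thread so it does not collide with the already-running event loop; here blocking_prompt is a stand-in for the real call:

```python
import asyncio

def blocking_prompt():
    # Stand-in for a blocking call such as questionary.select(...).ask();
    # replaced with a constant here so the sketch is self-contained.
    return "Order a pizza"

async def main() -> str:
    # asyncio.to_thread (Python 3.9+) runs the blocking call in a worker
    # thread, keeping the main event loop free.
    return await asyncio.to_thread(blocking_prompt)
```

Whether prompt_toolkit behaves correctly off the main thread depends on the terminal backend, so treat this as a starting point rather than a guarantee.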

TypeError: descriptor '__subclasses__' of 'type' object needs an argument when imported in python 3.5.2

This is a known bug in Python 3.5.2 as discussed in https://stackoverflow.com/questions/42942867/optionaltypefoo-raises-typeerror-in-python-3-5-2

~/.local/lib/python3.5/site-packages/questionary/prompts/common.py in <module>
    246 def build_validator(validate: Union[Type[Validator],
    247                                     Callable[[Text], bool],
--> 248                                     None]
    249                     ) -> Optional[Validator]:
    250     if validate:

/usr/lib/python3.5/typing.py in __getitem__(self, parameters)
    550             parameters = (parameters,)
    551         return self.__class__(self.__name__, self.__bases__,
--> 552                               dict(self.__dict__), parameters, _root=True)
    553 
    554     def __eq__(self, other):

/usr/lib/python3.5/typing.py in __new__(cls, name, bases, namespace, parameters, _root)
    510                 continue
    511             if any(isinstance(t2, type) and issubclass(t1, t2)
--> 512                    for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
    513                 all_params.remove(t1)
    514         # It's not a union if there's only one type left.

/usr/lib/python3.5/typing.py in <genexpr>(.0)
    510                 continue
    511             if any(isinstance(t2, type) and issubclass(t1, t2)
--> 512                    for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
    513                 all_params.remove(t1)
    514         # It's not a union if there's only one type left.

/usr/lib/python3.5/typing.py in __subclasscheck__(self, cls)
   1075                     return True
   1076                 # If we break out of the loop, the superclass gets a chance.
-> 1077         if super().__subclasscheck__(cls):
   1078             return True
   1079         if self.__extra__ is None or isinstance(cls, GenericMeta):

/usr/lib/python3.5/abc.py in __subclasscheck__(cls, subclass)
    223                 return True
    224         # Check if it's a subclass of a subclass (recursive)
--> 225         for scls in cls.__subclasses__():
    226             if issubclass(subclass, scls):
    227                 cls._abc_cache.add(subclass)

TypeError: descriptor '__subclasses__' of 'type' object needs an argument

Would you consider commenting out the type hint -> Optional[Validator] for Python 3.5.2 compatibility, so that more people can use this wonderful library? Thanks a lot.

Possible CPR (cursor position requests) issue

First off, I'm not sure if this is a bug/issue in this project or not.

Whenever I use the select, rawselect or checkbox question types, the list of selections is displayed half-a-screen down from the question text.

I used PDB to step through the code and it didn't reproduce the issue, but instead I got the log message, "WARNING: your terminal doesn't support cursor position requests (CPR)". The log message comes from prompt_toolkit.application.application (line 724). From briefly looking through that code, I noticed that it wasn't reproducing the issue because the CPR request timed out, due to how long it took me to step through the code. If I run the same test through PDB, but quickly step, it reproduces.

Do you have any experience with this type of issue? Or have any idea why the cursor position request would return a position so far down from where the question text is being displayed?

I tested in xterm/bash and a regular virtual terminal/bash on the following platform.
Linux mjk 4.20.0-arch1-1-ARCH #1 SMP PREEMPT Mon Dec 24 03:00:40 UTC 2018 x86_64 GNU/Linux

Thanks!

Testing questionary flows

It would be great if questionary had an easy way to test input flows.

For example (reusing my example flow from #34):

from questionary import Separator, test_prompt 
questions = [
        {
            "type": "confirm",
            "name": "conditional_step",
            "message": "Would you like the next question?",
            "default": True,
        },
       {
            "type": "text",
            "name": "next_question",
            "message": "Name this library?",
            "when": lambda x: x['conditional_step'],
            "validate": lambda val: val == "questionary"
        },
       {
            "type": "select",
            "name": "second_question",
            "message": "Select item",
            "choices": [
                "item1",
                "item2",
                Separator(),
                "other",
            ],
        },
        {
            "type": "text",
            "name": "second_question",
            "message": "Insert free text",
            "when": lambda x: x["second_question"] == "other"
        },
]
inputs = ["Yes", "questionary", "other", "free text something"]
vals = test_prompt(questions, inputs)

assert vals['second_question'] == "free text something"
. . .

Now, by calling test_prompt() with an input string (or a list, which is probably easier to compile), we can run through the whole workflow and verify that all keys are populated as expected.
This would allow proper CI for more complex flows which base one input on top of another, as in the question flow above.
I suspect it would be possible by mocking some internals of questionary, but I see this as a dangerous approach, as every minor change in those mocked functions would probably break my tests.

Most of the code/logic should already be available as part of questionary's own tests; however, those aren't available when installing from PyPI...

Is it possible to have checkbox choices spaced over multiple columns on screen?

Hi.

I just found your lovely lib. Looks great! I want to upgrade the CLI of an old app of mine and offload a bunch of wacky code to a more polished 3rd party lib.

Now, I want to use the checkbox() command and have two questions:

  1. Is it possible to distribute the checkbox items over multiple columns on the screen? I have potentially about 50 options and could place them nicely in a table with say 4 columns...
  2. Can I chain checkbox selections (think tabs or pages) so I can define a backwards key within questionary or do I do this logic outside of it?

Cheers,
Christian

Custom validation error message

I'd like to be able to add a custom error message to the "validation" field, to replace the glaring "invalid input" on the bottom of the prompt.

j/k to navigate up/down in select?

This being a command line tool, after all, I was surprised j/k weren't already the primary means of navigating up and down; I would appreciate it if they were supported :)

Return to previous question in a form using a special key

Hi.

I have a sequence of 5 checkbox queries the user needs to answer. Is there a way to tell questionary to go back to the previous question (i.e. using ESC or some other key) and pose the previous question again, ideally while preselecting the previous user selections?

Cheers
C
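questionary has no built-in back key, so this is usually handled in application code: offer a sentinel "Back" choice in each question and drive the sequence from a loop. A sketch under that assumption (run_flow, ask, and the BACK sentinel are all hypothetical, not questionary API):

```python
BACK = "<back>"

def run_flow(questions, ask):
    """Ask `questions` in order; an answer equal to BACK steps one question
    back. Answers already given are kept in `answers`, so they could be used
    to preselect the previous choices when re-asking."""
    answers = {}
    i = 0
    while i < len(questions):
        answer = ask(questions[i])
        if answer == BACK:
            i = max(i - 1, 0)  # step back, but never before the first question
        else:
            answers[questions[i]] = answer
            i += 1
    return answers
```

Here ask would be whatever function displays one checkbox prompt and returns its answer; the loop only manages navigation.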

prompt-toolkit v3 compatibility

The autocomplete tests are failing with prompt-toolkit==3.0.2, ostensibly because it's not registering any of the tab escapes. I did some bisecting and this absolutely massive (!) commit is the one that broke it. If you'd like to get support for v3 in, I can try digging a bit deeper.

support Python 2.0

We are currently using PyInquirer and would like to switch to questionary, but we need Python 2.0 support. In your opinion, what needs to be done to support Python 2.0? We would be happy to contribute.
Thanks

New question type: Autocomplete

I would like to add a new question type (or maybe it is an enhancement to the text question type). Basically I want to add the option to autocomplete text, which in my opinion has a lot of advantages:

  • For a list with more than 15+ items, select or another question type would be cumbersome.
  • If the prompt is used continuously by the same person, I think autocompletion would be their best option, etc.

I was learning about prompt_toolkit and it seems quite possible (https://python-prompt-toolkit.readthedocs.io/en/master/pages/asking_for_input.html#autocompletion). What do you think @tmbo? If you think it is a good idea, I'm ready to contribute :)
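At its core, such a prompt is prefix filtering over a candidate list; a minimal sketch of just the completion step (a real implementation would build on prompt_toolkit's completers):

```python
def complete(prefix, options):
    # Case-insensitive prefix match, the simplest completion strategy;
    # prompt_toolkit also offers fuzzy and word-based completers.
    prefix = prefix.lower()
    return [o for o in options if o.lower().startswith(prefix)]
```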

validate_while_typing not working when a select is included in the questions

When running a prompt with a series of questions, I normally use the validate_while_typing=False parameter to avoid validation errors appearing immediately when people start typing. As you know, this comes from prompt toolkit itself and is not implemented by Questionary.

But as soon as I introduce a select, things fail like this:

questions = [
        {
            'type': 'text',
            'name': 'role_name',
            'message': 'Role name',
        },
        {
            'type': 'text',
            'name': 'role_description',
            'message': 'Role description',
        },
        {
            "type": "select",
            "name": "theme",
            "message": "What do you want to do?",
            "choices": [
                "Order a pizza",
                "Make a reservation",
                "Ask for opening hours",
                {"name": "Contact support", "disabled": "Unavailable at this time"},
                "Talk to the receptionist",
            ],
        }
]
answers = prompt(questions, validate_while_typing=False)

Giving the error:

    answers = prompt(questions, validate_while_typing=False)
  File "python3.8/site-packages/questionary/prompt.py", line 95, in prompt
    question = create_question_func(**_kwargs)
  File "python3.8/site-packages/questionary/prompts/select.py", line 169, in select
    Application(layout=layout, key_bindings=bindings, style=merged_style, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'validate_while_typing'

It would be nice if validate_while_typing could also be turned on and off inside prompt. In fact, I think validate_while_typing should be False by default: it makes no sense to show an error to users before they have been given a first chance to get it right.
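One workaround on the caller's side is to filter the extra keyword arguments per question type before dispatching, since only text-style prompts accept them. A sketch (the helper name and the type sets are my own assumptions):

```python
# prompt_toolkit options that only make sense for free-text input
TEXT_INPUT_ONLY = {"validate_while_typing"}

# question types built on a full-screen Application rather than a
# PromptSession, which reject the options above (see the TypeError in
# the traceback)
LIST_STYLE_TYPES = {"select", "rawselect", "checkbox"}

def kwargs_for(question_type, extra_kwargs):
    """Drop text-input-only options for list-style question types."""
    if question_type in LIST_STYLE_TYPES:
        return {k: v for k, v in extra_kwargs.items()
                if k not in TEXT_INPUT_ONLY}
    return dict(extra_kwargs)
```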

Print an output line with question styling

I tried going through the README, issues, examples and code, but couldn't really find a way.
It's a bit weird to have a question and input that are nicely styled, and then as soon as I need to print something (like a confirmation, or just extra information), it falls out of style.

One could argue it's not in the scope of this project, but I believe it's related enough (and should be easy enough to implement with what's already here).

Feature Request: Validation

I understand questionary as a maturing Python equivalent of inquirer.js. Since the landscape of available CLI modules seems somewhat scattered across unresolved dependency updates (e.g. whaaaaat, PyInquirer, ...), I pretty much like what questionary aims at.
However, one of the features I currently miss is input validation, since I would like to prevent the user from proceeding to the next question if the input of the last question is invalid.

List[] arguments in API

Hello,

TL;DR: Please consider using Sequence[] types in place of List[] types in the arguments of functions in the public API.

All questionary functions have argument types like List[Union[str, Choice, Dict[str, Any]]]. The problem is that List is invariant, so mypy complains if I pass e.g. a List[str]. The same happens with something like a_dictionary.keys(), because its type is something like KeysView[...].

see: https://mypy.readthedocs.io/en/stable/common_issues.html#invariance-vs-covariance

Thanks, cheers

Recent Breaking Change to Choice Api

A breaking change was made to the Choice object api at some point between questionary versions 1.6.0 and 1.8.0 when specifying the value parameter.

Previously, in v1.6.0, the expected behavior was for the selected choice to return the provided value= "as is" with no modifications.

However, in v1.8.0, anything passed to value is now type cast as a string, leading to breaking changes for applications expecting the returned value to be of the same type originally passed in to the value parameter. (See BradenM/micropy-cli#184, the issue that led to finding this.)

Quick search through the commit history points to this commit: 604c112 from #14 as a likely culprit.

I'm not sure whether Questionary follows semantic versioning, but if so, that PR (if it is actually the culprit) probably should have bumped the project up by a major version, given that the change is not backwards compatible with existing projects using v1.6.0.

Questionary v1.6.0

Python 3.8.3 (default, May 22 2020, 23:45:08)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: from questionary import Choice

In [2]: import questionary as prompt

In [3]: choices = [Choice("one", value=1)]

In [4]: prompt_ch = prompt.checkbox("Choose a value", choices=choices).ask()
? Choose a value  [one]

In [5]: prompt_ch
Out[5]: [1]

In [6]: type(prompt_ch[0])
Out[6]: int

In [7]: prompt.__version__
Out[7]: '1.6.0'

Questionary v1.8.0

Python 3.8.3 (default, May 22 2020, 23:45:08)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.19.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: from questionary import Choice

In [2]: import questionary as prompt

In [3]: choices = [Choice("one", value=1)]

In [4]: prompt_ch = prompt.checkbox("Choose a value", choices=choices).ask()
? Choose a value  [one]

In [5]: prompt_ch
Out[5]: ['1']

In [6]: type(prompt_ch[0])
Out[6]: str

In [7]: prompt.__version__
Out[7]: '1.8.0'

Color of answer for text prompt doesn't change

I've just switched over from PyInquirer to questionary and it was a seamless transition. Kudos to you for creating questionary. One thing I noticed with PyInquirer's text prompt is that when I get the answer, it is highlighted in a different color. Under questionary, that color doesn't change. Am I missing something? I would think the answer should be stylable like the other prompts, but I can't get it working no matter what I do.

Thanks.

Type hinting not working

Hi, thanks for this library, it's very nice.

In my project mypy keeps complaining:

import questionary

questionary.select(etc..)
$ poetry run mypy -p mypackage
error: Module has no attribute "select"

I've only been able to get mypy working by adding the __all__ attribute to questionary/__init__.py, with all the needed exports.

Maybe I am missing something?

Thank you, cheers

Improved documentation

I think this is a great library; however, the documentation is somewhat lacking.

I've been able to build up a rather nice workflow now, but most of it I had to piece together from the source code.

Mainly, it's about the configuration dictionary way:

from questionary import Separator, prompt
questions = [
        {
            "type": "confirm",
            "name": "conditional_step",
            "message": "Would you like the next question?",
            "default": True,
        },
       {
            "type": "text",
            "name": "next_question",
            "message": "Name this library?",
            "when": lambda x: x['conditional_step'],
            "validate": lambda val: val == "questionary"
        },
       {
            "type": "select",
            "name": "second_question",
            "message": "Select item",
            "choices": [
                "item1",
                "item2",
                Separator(),
                "other",
            ],
        },
        {
            "type": "text",
            "name": "second_question",
            "message": "Insert free text",
            "when": lambda x: x["second_question"] == "other"
        },
]
prompt(questions)

For the above dictionary, the following points are missing / non-obvious in the docs:

  • "validate": the key name when used in a dictionary
  • "validate": can be a function; it does not have to be a Validator class
  • "when": conditional; the readme.md only points this out as "skipif"
  • "second_question": names can be reused in this way to offer "choices" or an "other" free-text form, both of which end up in the same dictionary key ("second_question" in the above example)

While the examples cover some of this, I think it would be better to have these very cool and important features documented properly.

Feature request: file selection dialog

It would be great to have a file selection prompt for questionary.

Here's something I put together in a hurry that uses the built-in select prompt, but it's nowhere near perfect. For instance, it's recursive, which I would not do if implementing it seriously.

from typing import Optional, List, Union
from pathlib import Path

from questionary import Choice, Separator, select


def questionary_file_select(
    message: str = 'Select a file', starting_dir: Path = Path(), glob: str = '*.*'
) -> Optional[Path]:
    """Prompt the user to select a file from the filesystem using CLI.

    The function recursively asks for a file until one is selected, so it's possible to traverse the directory
    structure by selecting a folder (or the parent folder) instead. A custom glob pattern can be set to filter
    the file list, for instance by file extension.

    Args:
        message (str, optional): Instruction message for the user. Defaults to 'Select a file'.
        starting_dir (Path, optional): Starting location for the file browsing. Defaults to
            ``Path()``.
        glob (str, optional): Glob pattern for files listing, e.g. to filter by extension. Defaults to ``'*.*'``.

    Returns:
        Optional[Path]: the user-selected file as a Path object, or None if the user cancels
    """
    choices: List[Union[Choice, Separator]] = [
        Choice('Cancel'),
        Choice('..', value=starting_dir.parent),
    ]

    dirs = filter(lambda x: x.is_dir(), sorted(starting_dir.glob('*')))
    for directory in dirs:
        # Use .name rather than .stem: .stem would truncate directory
        # names containing a dot.
        choices.append(Choice(directory.name, value=directory))

    choices.append(Separator())

    files = filter(lambda x: x.is_file(), sorted(starting_dir.glob(glob)))
    for item in files:
        choices.append(Choice(str(item.relative_to(starting_dir)), value=item))

    answer: Union[str, Path, None] = select(message, choices=choices).ask()
    if answer is None or isinstance(answer, str):  # prompt aborted, or user chose Cancel
        return None
    if answer.is_file():
        return answer
    # A directory was chosen: recurse into it.
    return questionary_file_select(message, answer, glob)

Programmatically define pointed-at option for select

Let me preface this by saying that I love questionary and I'm thankful to you guys for making it :)

I would like to suggest adding a feature to the select prompt. Currently, I think (by looking at the code) that there is no way to change which item in the select is pointed at (has the pointer symbol) when it gets initialized. It's always the first selectable choice that gets pointed at.

I would like to be able to start the select in a state where the pointer is on another choice (for instance the first one that has a value matching the default argument of the select call).
Another option would be to have an additional argument for Choice objects where we can say we want it to be pointed at initially.

Passing the default argument highlights the corresponding Choice, but the pointer still starts at the first position.

Thanks!
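Until such an option exists, one workaround (a sketch, not part of questionary's API) is to reorder the choices so the desired item comes first, since the pointer always starts on the first selectable choice. The cost is that the display order changes, which is exactly why a real feature would be nicer:

```python
def reorder_for_pointer(choices, pointed):
    """Move `pointed` to the front so select() starts with the pointer
    on it. Operates on plain values; adapt for Choice objects by
    comparing on choice.value instead."""
    if pointed in choices:
        i = choices.index(pointed)
        return [choices[i]] + choices[:i] + choices[i + 1:]
    return choices
```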

Allow "confirmatory" enter to confirm dialog

When using the prompt-type confirm, it seems odd to me that pressing Y / n immediately confirms the action and moves forward.

I think the current auto-confirm approach works fine for single, confirmatory questions, but it can be a problem for workflows.

For that, I'd propose an option like "autoenter", defaulting to true (to keep the existing behaviour), which would let the user press Y / n, think about it, and then press Enter explicitly to submit.

The current behaviour is a problem because I find myself pressing "y" followed by Enter out of habit.

Since I'm building a workflow of multiple consecutive actions (all with defaults), I find myself constantly skipping or confirming the question right after the confirm question, and I suspect our users will face the same issue if they're used to bash-like CLI interfaces, which usually require "n\r", not just "n".

Hope this makes sense.
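To spell out the proposed semantics, here is a toy model (illustrative only, not questionary code, and "auto_enter" is a hypothetical parameter name):

```python
def resolve_confirm(keys, auto_enter=True):
    """Toy model of the proposed confirm behaviour.

    keys: a sequence of keypresses such as ['y', '\r'].
    With auto_enter=True (current behaviour) the first y/n answers
    immediately; with auto_enter=False it only records a pending
    answer, and Enter is required to submit it.
    """
    answer = None
    for key in keys:
        if key in ("y", "n"):
            answer = key == "y"
            if auto_enter:
                return answer
        elif key == "\r" and answer is not None:
            return answer
    return None  # nothing submitted yet
```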

How to write tests for applications that use questionary?

Dear @tmbo ,

I am trying to test a Python application of mine which uses questionary.
However, I am lost when trying to test it.

from click.testing import CliRunner
runner = CliRunner()
result = runner.invoke(myapplicationfunction, input='\n')

The code above should call myapplicationfunction, which opens questionary prompts, and then simply press Enter once, which is equivalent to \n.
The same thing works when running echo -e "\n" | python filewhichcallsmyapplicationfunction.py

However, when running it via the runner it does not work, because it just passes stdin.

The error message is:

E               io.UnsupportedOperation: Stdin is not a terminal.

bla/site-packages/prompt_toolkit/input/vt100.py:57: UnsupportedOperation

How do I test such scripts, which simply open questionary prompts?
Do you have a pytest example?

Error importing on Python 3.5.2

I am using Questionary 1.2.0 with Python 3.5.2, and during the import I am getting this error:

Traceback (most recent call last):
  File "/home/undermon/Documentos/Desenvolvimento/Python/cli_tests/cli.py", line 1, in <module>
    import questionary
  File "/home/undermon/Documentos/Desenvolvimento/Python/cli_tests/venv_test/lib/python3.5/site-packages/questionary/__init__.py", line 7, in <module>
    from questionary.prompt import prompt
  File "/home/undermon/Documentos/Desenvolvimento/Python/cli_tests/venv_test/lib/python3.5/site-packages/questionary/prompt.py", line 8, in <module>
    from questionary.prompts import AVAILABLE_PROMPTS, prompt_by_name
  File "/home/undermon/Documentos/Desenvolvimento/Python/cli_tests/venv_test/lib/python3.5/site-packages/questionary/prompts/__init__.py", line 2, in <module>
    from questionary.prompts import text
  File "/home/undermon/Documentos/Desenvolvimento/Python/cli_tests/venv_test/lib/python3.5/site-packages/questionary/prompts/text.py", line 21, in <module>
    style: Optional[Style] = None,
  File "/usr/lib/python3.5/typing.py", line 552, in __getitem__
    dict(self.__dict__), parameters, _root=True)
  File "/usr/lib/python3.5/typing.py", line 512, in __new__
    for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
  File "/usr/lib/python3.5/typing.py", line 512, in <genexpr>
    for t2 in all_params - {t1} if not isinstance(t2, TypeVar)):
  File "/usr/lib/python3.5/typing.py", line 1077, in __subclasscheck__
    if super().__subclasscheck__(cls):
  File "/usr/lib/python3.5/abc.py", line 225, in __subclasscheck__
    for scls in cls.__subclasses__():
TypeError: descriptor '__subclasses__' of 'type' object needs an argument

Process returned 1 (0x1)	execution time : 0.265 s
