microsoft / promptflow

Build high-quality LLM apps - from prototyping, testing to production deployment and monitoring.

Home Page: https://microsoft.github.io/promptflow/

License: MIT License

Python 99.12% Jinja 0.60% Shell 0.04% PowerShell 0.06% HTML 0.02% Dockerfile 0.01% Rich Text Format 0.11% Batchfile 0.01% VBScript 0.01% Jupyter Notebook 0.02%
ai llm chatgpt gpt prompt prompt-engineering ai-application-development ai-applications

promptflow's Introduction

Prompt flow


We welcome you to join us in making prompt flow better by participating in discussions, opening issues, and submitting PRs.

Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

With prompt flow, you will be able to:

  • Create and iteratively develop flows
    • Create executable flows that link LLMs, prompts, Python code and other tools together.
    • Debug and iterate your flows, especially the interaction with LLMs, with ease.
  • Evaluate flow quality and performance
    • Evaluate your flow's quality and performance with larger datasets.
    • Integrate the testing and evaluation into your CI/CD system to ensure the quality of your flow.
  • Streamlined development cycle for production
    • Deploy your flow to the serving platform you choose or integrate into your app's code base easily.
    • (Optional but highly recommended) Collaborate with your team by leveraging the cloud version of Prompt flow in Azure AI.
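A flow, as described above, is essentially a DAG of named nodes whose inputs reference flow inputs or the outputs of upstream nodes. As a rough mental model, the linking can be sketched in plain Python (this is an illustrative sketch, not the promptflow API; `run_flow` and the node dictionaries are made up for illustration):

```python
# Illustrative sketch, NOT the promptflow API: a "flow" as a DAG of named
# nodes whose inputs reference flow inputs or upstream outputs using the
# same ${source.output} syntax that flow.dag.yaml uses.

def run_flow(nodes, flow_inputs):
    """Run nodes in order, resolving '${source.key}' references."""
    results = {"inputs": flow_inputs}

    def resolve(value):
        if isinstance(value, str) and value.startswith("${") and value.endswith("}"):
            source, _, key = value[2:-1].partition(".")
            return results[source][key]
        return value

    for node in nodes:
        kwargs = {name: resolve(ref) for name, ref in node["inputs"].items()}
        results[node["name"]] = {"output": node["run"](**kwargs)}
    return results

# Two toy nodes standing in for a prompt template and an LLM call.
nodes = [
    {"name": "prompt", "inputs": {"question": "${inputs.question}"},
     "run": lambda question: f"Answer briefly: {question}"},
    {"name": "chat", "inputs": {"prompt": "${prompt.output}"},
     "run": lambda prompt: f"[LLM reply to: {prompt}]"},
]

results = run_flow(nodes, {"question": "What is prompt flow?"})
print(results["chat"]["output"])
```

Real flows declare this DAG in a flow.dag.yaml file, with real tools (LLM calls, Python functions, prompt templates) as nodes.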

Installation

To get started quickly, you can use a pre-built development environment. Click the button below to open the repo in GitHub Codespaces, and then continue with the README!

Open in GitHub Codespaces

If you want to get started in your local environment, first install the packages:

Ensure you have a Python environment; Python 3.9 is recommended.

pip install promptflow promptflow-tools

Quick Start ⚡

Create a chatbot with prompt flow

Run the following command to initialize a prompt flow from the chat template. It creates a folder named my_chatbot and generates the required files within it:

pf flow init --flow ./my_chatbot --type chat

Setup a connection for your API key

For an OpenAI key, establish a connection by running the following command, which uses the openai.yaml file in the my_chatbot folder to store your OpenAI key (override keys and name with --set to avoid changing the YAML file):

pf connection create --file ./my_chatbot/openai.yaml --set api_key=<your_api_key> --name open_ai_connection

For an Azure OpenAI key, establish the connection by running the following command, which uses the azure_openai.yaml file:

pf connection create --file ./my_chatbot/azure_openai.yaml --set api_key=<your_api_key> api_base=<your_api_base> --name open_ai_connection
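For reference, the connection YAML files generated by the chat template look roughly like the sketch below (the field names match what the commands above rely on, but the exact schema may vary by promptflow version; treat this as an illustration, not the canonical file):

```yaml
# Sketch of an openai.yaml-style connection file (fields may vary by version).
# The api_key placeholder is overridden at creation time with --set api_key=...
name: open_ai_connection
type: open_ai
api_key: "<to-be-replaced>"
```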

Chat with your flow

In the my_chatbot folder, there's a flow.dag.yaml file that outlines the flow, including its inputs/outputs, nodes, connections, and the LLM model.

Note that in the chat node, we're using a connection named open_ai_connection (specified in the connection field) and the gpt-35-turbo model (specified in the deployment_name field). The deployment_name field specifies the OpenAI model, or the Azure OpenAI deployment resource.
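The chat template's flow.dag.yaml ties these pieces together. An abridged sketch follows (the `connection` and `deployment_name` fields are from the text above; the rest is approximate and may differ across promptflow versions):

```yaml
# Abridged, approximate sketch of a chat-template flow.dag.yaml.
inputs:
  question:
    type: string
    is_chat_input: true
outputs:
  answer:
    type: string
    reference: ${chat.output}
    is_chat_output: true
nodes:
- name: chat
  type: llm
  source:
    type: code
    path: chat.jinja2
  inputs:
    deployment_name: gpt-35-turbo
    question: ${inputs.question}
  connection: open_ai_connection
  api: chat
```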

Interact with your chatbot by running the following (press Ctrl + C to end the session):

pf flow test --flow ./my_chatbot --interactive

Core value: ensuring "High Quality" from prototype to production

Explore our 15-minute tutorial that guides you through prompt tuning ➡ batch testing ➡ evaluation, all designed to ensure high quality ready for production.

Next Step! Continue with the Tutorial 👇 section to delve deeper into prompt flow.

Tutorial 🏃‍♂️

Prompt flow is a tool designed to build high-quality LLM apps. The development process in prompt flow follows these steps: develop a flow, improve the flow's quality, and deploy the flow to production.

Develop your own LLM apps

VS Code Extension

We also offer a VS Code extension (a flow designer) for an interactive flow development experience with UI.


You can install it from the Visual Studio Marketplace.

Delve deeper into flow development

Getting started with prompt flow: A step-by-step guide to invoking your first flow run.

Learn from use cases

Tutorial: Chat with PDF: An end-to-end tutorial on how to build a high-quality chat application with prompt flow, including flow development and evaluation with metrics.

More examples can be found here. We welcome contributions of new use cases!

Setup for contributors

If you're interested in contributing, please start with our dev setup guide: dev_setup.md.

Next Step! Continue with the Contributing 👇 section to contribute to prompt flow.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.


Data Collection

The software may collect information about you and your use of the software and send it to Microsoft if configured to enable telemetry. Microsoft may use this information to provide services and improve our products and services. You may turn off the telemetry as described in the repository. There are also some features in the software that may enable you and Microsoft to collect data from users of your applications. If you use these features, you must comply with applicable law, including providing appropriate notices to users of your applications together with a copy of Microsoft's privacy statement. Our privacy statement is located at https://go.microsoft.com/fwlink/?LinkID=824704. You can learn more about data collection and use in the help documentation and our privacy statement. Your use of the software operates as your consent to these practices.

Telemetry Configuration

Telemetry collection is on by default.

To opt out, please run pf config set telemetry.enabled=false to turn it off.

License

Copyright (c) Microsoft Corporation. All rights reserved.

Licensed under the MIT license.

promptflow's People

Contributors

0mza987, 16oeahr, brynn-code, chenslucky, chjinche, chw-microsoft, codingrabbitt1, crazygao, d-w-, dorisjoy, elliotzh, gjwoods, guming-learning, hhhilulu, huaiyan, jasmin3q, jiazengcindy, lalala123123, liucheng-ms, liuyuhang13, lumoslnt, melionel, peiwengaoms, riddlexu, stephen1993, thy09, wangchao1230, wxpjimmy, yingchen1996, zhengfeiwang


promptflow's Issues

Deploy flow to Azure managed online endpoint

Is your feature request related to a problem? Please describe.
Is it possible to use the SDK/CLI to deploy a flow to an Azure managed online endpoint? This would be very useful for CI/CD processes.

Describe the solution you'd like
If possible, it would be great to have an example in the docs (similar to the Azure App Service deployment example).

[BUG] unable to run basic-chat sample

Describe the bug
Installed 0.1.0b5. Ran the basic chat test and it failed with the error message Exception: OpenAI API hits InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. follow the quick start guide
  2. create new environment
  3. install pf packages
  4. create pf connections
  5. clone promptflow repo
    run basic chat: python.exe -m promptflow._cli._pf.entry flow test --flow d:\code\pftest\promptflow\examples\flows\chat\basic-chat --interactive
    ===================================
    Welcome to chat flow, default_flow.
    Press Enter to send your message.
    You can quit with ctrl+C.
    ===================================
    User: test

Flow test failed with UserErrorException: Exception: OpenAI API hits InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'> [Error reference: https://platform.openai.com/docs/guides/error-codes/api-errors]

Expected behavior
Able to run the flow


Running Information:

  • Promptflow Package Version (pf -v): 0.1.0b5
  • Operating System: Windows 10
  • Python Version: 3.11.4


[BUG] [VSCode Extension] pf connection create fails from the CLI with keyring.alt installed, but works from the UI, which uses pfutil.py connection upsert

Describe the bug
The pf connection create CLI command fails with keyring.alt installed, but the UI-triggered command, which runs the following, works:

/home/cicorias/g/cse/oneweek/2023/llmops/prompt-flow-llmops/venv/bin/python /home/cicorias/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.7/pfutil/pfutil.py connection upsert -f /home/cicorias/.promptflow/temp/new_AzureOpenAI_connection.yaml -o /tmp/pf-upsertConnection-f416349a-5a60-45a2-ace6-1c5e8b5774cc.json

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. use ubuntu 20.04 in WSL
  2. using the CLI, try to create a connection, i.e. pf connection create --file azure_openai.yaml...


Environment Information

  • Promptflow Package Version: 0.1.0b5
  • Operating System: Ubuntu 22.04.3 LTS
  • Python Version: 3.10.9
  • VS Code version: 1.82.0
  • Prompt Flow extension version: v1.0.7
  • Relevant error from the VS Code "prompt flow" output channel:
2023-09-12 17:29:51.808 Command failed: /home/cicorias/g/cse/oneweek/2023/llmops/prompt-flow-llmops/venv/bin/python /home/cicorias/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.7/pfutil/pfutil.py connection get -n "open_ai_connection" -o /tmp/pf-getConnectionWithoutSecret-ca3699c5-6e66-4aaa-a60d-7201b3989084.json
Traceback (most recent call last):
  File "/home/cicorias/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.7/pfutil/pfutil.py", line 4, in <module>
    main()
  File "/home/cicorias/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.7/pfutil/_pf_vsc_main.py", line 32, in main
    operate_connection(args)
  File "/home/cicorias/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.7/pfutil/_pf_vsc_connection.py", line 31, in operate_connection
    get_connection(args)
  File "/home/cicorias/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.7/pfutil/_pf_vsc_connection.py", line 77, in get_connection
    connection_with_secret = convert_orm_object_to_connection(ORMConnection.get(connection_name), include_secret=include_secret)
  File "/home/cicorias/g/cse/oneweek/2023/llmops/prompt-flow-llmops/venv/lib/python3.10/site-packages/promptflow/_sdk/_orm/retry.py", line 43, in f_retry
    return f(*args, **kwargs)
  File "/home/cicorias/g/cse/oneweek/2023/llmops/prompt-flow-llmops/venv/lib/python3.10/site-packages/promptflow/_sdk/_orm/connection.py", line 52, in get
    raise ConnectionNotFoundError(f"Connection {name!r} is not found.")
promptflow._sdk._errors.ConnectionNotFoundError: Connection 'open_ai_connection' is not found.


Additional context

Running with debug (screenshot attached).

[BUG] vector_db_lookup should not require the vector field to be returned by the search

Describe the bug
The vector_db_lookup tool for Azure Cognitive Search will throw a UserErrorException: KeyError if the vector_field is not included as part of the "select" search_params. There are scenarios where you don't want to include the vectors with your search results because the vectors are so large compared to the rest of the response object. The vector_db_lookup tool should support this option and not throw a KeyError.

How To Reproduce the bug

  1. follow https://microsoft.github.io/promptflow/reference/tools-reference/vector_db_lookup_tool.html to set up a vector_db_lookup tool in your flow
  2. Add the object {"select":"field_name_1, field_name_2, etc,"} to the search_params field. Replace the list of field names with the names of retrievable fields in your Azure Cognitive Search Index.
  3. Do NOT include the field name you entered for vector_field. (In my screenshot, you will see that content_vector is not included in search_params. If I include it, then the vector_search node will pass.)
  4. attempt to run a vector search using the tool.

Expected behavior
The vector_db_lookup tool should return the search response without throwing a KeyError

Screenshots
Screenshot 2023-09-12 at 4 27 32 PM

Screenshot 2023-09-12 at 4 30 17 PM

Running Information:

  • Promptflow Package Version: 0.1.0b5
  • promptflow-vectordb: 0.1.1
  • Operating System: macOS
  • Python Version: 3.10.4
  • VS Code version: 1.80.1
  • Prompt Flow extension version: 1.0.7

logs:

2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.tool INFO     [VectorDBLookup] EmbeddingStore.Tool.Init started
2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.service INFO     [VectorDBLookup] EmbeddingStore.Client.Init started
2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.service INFO     [VectorDBLookup] Agent instance: CogSearchClient initialized
2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.service INFO     [VectorDBLookup] EmbeddingStore.Client.Init completed
2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.tool INFO     [VectorDBLookup] Adapter instance: ConnectionBasedAdapter initialized
2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.service INFO     [VectorDBLookup] EmbeddingStore.Client.Load started
2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.service INFO     [VectorDBLookup] EmbeddingStore.Client.Load completed
2023-09-12 16:26:38 -0500   79043 promptflow_vectordb.tool INFO     [VectorDBLookup] EmbeddingStore.Tool.Init completed
2023-09-12 16:26:39 -0500   79043 promptflow_vectordb.tool INFO     [VectorDBLookup] EmbeddingStore.Tool.Search started
2023-09-12 16:26:39 -0500   79043 promptflow_vectordb.service INFO     [VectorDBLookup] EmbeddingStore.Client.SearchByEmbedding started
2023-09-12 16:26:39 -0500   79043 promptflow_vectordb.service INFO     [VectorDBLookup] EmbeddingStore.Client.SearchByEmbedding failed
2023-09-12 16:26:39 -0500   79043 promptflow_vectordb.tool INFO     [VectorDBLookup] EmbeddingStore.Tool.Search failed
Traceback (most recent call last):
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/core/utils/retry_utils.py", line 19, in wrapper
    return func(*args, **kwargs)
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/tool/vector_db_lookup.py", line 106, in search
    return _do_search()
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/core/logging/utils.py", line 97, in wrapper
    res = func(*args, **kwargs)
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/tool/vector_db_lookup.py", line 93, in _do_search
    return self.__adapter.search(
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/tool/adapter/connection_based_adapter.py", line 60, in search
    obj_list = self.__store.search_by_embedding(
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/core/logging/utils.py", line 97, in wrapper
    res = func(*args, **kwargs)
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/service/client/embeddingstore_client.py", line 103, in search_by_embedding
    return self.__agent.search_by_embedding(
  File "/Users/jessicaern/.pyenv/versions/3.10.4/lib/python3.10/site-packages/promptflow_vectordb/service/client/agent/cog_search_client.py", line 64, in search_by_embedding
    item.vector = item.original_entity[vector_field]
KeyError: 'content_vector'

Flow test failed with UserErrorException: KeyError: Execution failure in 'vector_search': (KeyError) 'content_vector'

[Feature Request] Expanded Language Support for Enhanced Flexibility

While we acknowledge the widespread popularity of Python, particularly in data science and machine learning domains, it's important to recognize that the demand for building LLM-based applications extends to various programming languages and scenarios, encompassing both server-side and client-side development. Therefore, it is highly recommended that Prompt flow be enhanced to support multiple programming languages such as JavaScript, .NET, and other popular languages, allowing developers the flexibility to leverage its capabilities across diverse technical environments.

[Feature Request] During a batch run, logs.txt should show how many records are in the batch run dataset and the current progress, for example 3/10

Is your feature request related to a problem? Please describe.
During a batch run, logs.txt should tell me how many records are in my batch run dataset and what the current progress is.

Describe the solution you'd like
I want something like the tqdm package: show the progress (as a percentage and/or a/b), elapsed time, estimated remaining time, and the average time to complete one record of data.

Describe alternatives you've considered
No. Currently in my flow, I print text to the logs; the log is long, so it is hard to know how long I need to wait for the batch run to finish. I have to scan the long log by eye to find the output of one step to get the path, then search my data.jsonl to figure out the progress. It is exhausting.
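The requested progress line can be derived from just a record count and elapsed time. A minimal sketch (the function name and output format are illustrative, not part of promptflow):

```python
import time

def progress_line(done, total, start, now=None):
    """Format a tqdm-style line: done/total, percent, elapsed, ETA, avg per record."""
    now = time.monotonic() if now is None else now
    elapsed = now - start
    avg = elapsed / done if done else 0.0  # average seconds per record
    eta = avg * (total - done)             # estimated remaining seconds
    pct = 100 * done // total
    return f"{done}/{total} {pct}% elapsed={elapsed:.1f}s eta={eta:.1f}s avg={avg:.1f}s/rec"

print(progress_line(3, 10, start=0.0, now=6.0))
```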


[Feature Request] Multi-Modality Support for More Comprehensive Applications

While Prompt flow already offers impressive flexibility regarding data types within the overall stack, encompassing flow input/output and node intermediate data, support for visualization and tracing is limited, revolving primarily around text-oriented capabilities.

To further empower developers in creating comprehensive multi-modality applications, the following enhancements are highly recommended, although not exclusively limited to:

  • Media Type Specification: Extend the functionality of Prompt Flow to enable users to specify the media type (e.g., image, video, audio) for flow input/output, facilitating appropriate visualization and interpretation.

  • Expanded Testing/Chat Interface: Enhance the test/chat interface to accommodate inputting varying media types, ensuring seamless interaction with diverse modalities throughout the development process.

  • Media Type Specification for Node Intermediate Data: Provide developers with the ability to specify the media type for node intermediate data and ensure proper visualization within the Visual Studio Code (VSC) extension and portal, promoting a more integrated and holistic development experience.

  • Streaming Output for Multimedia Types: Particularly for media types such as video and audio, enable the streaming output capability, allowing for real-time and dynamic visualization that caters to the specific demands of multimedia applications.

[BUG] Not able to select embedding api in llm node

Describe the bug
Not able to select embedding api in llm node. Can repro with both AOAI and OAI connection.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Add an llm node in AML Studio.
  2. Select any AOAI or OAI connection with embedding deployment.
  3. Click on the Api dropdown. Only see "chat" and "completion"

Expected behavior
See embedding option in the dropdown.


Running Information:
You don't need the PF CLI or the code-first experience to repro the issue.

Additional context

  • Updating or recreating the connection doesn't solve the issue.

[Feature Request] Achieve dynamic flow connections with logic conditions

Currently, node construction operates on a rudimentary fixed DAG. Would it be possible to incorporate functionalities such as "if-else" and "for-loop" logic and branches (or even directed graphs with cycles), to introduce dynamism into the workflow, thereby opening the door to boundless creative possibilities?

[BUG] Don't see "Visual editor" option when opening flow.dag.yaml in VSCode

Describe the bug
According to the documentation, there should be a "Visual editor" option when opening a flow.dag.yaml file in VSCode. However, I don't see "Visual editor" in my VSCode.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Follow this documentation to create an init flow via pf.
  2. Once the project is created, open the flow.dag.yaml file in VSCode.
  3. Didn't see any "Visual editor" option in the editor.

Expected behavior
See "Visual editor" on the top-left corner of the file editor.


Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b5
  • Operating System: Windows 11
  • Python Version using python --version: Python 3.11.4
  • VSCode version: 1.82.0

Additional context
Wonder if it's virtual env related.

  • Prompt flow VSCode extension is installed.
  • I use a virtual environment with VSCode workspace. promptflow and promptflow-tools are installed in the virtual env.
  • I can see the Prompt flow VSCode extension complains about missing promptflow and promptflow-tools. Maybe the extension does not support virtual env?!

[BUG] [VSCode Extension] - Web VScode - Compute Instance

Describe the bug
Unable to create AzureOpenAI connection in VSCode Web

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Install PromptFlow extension
  2. Create Connection


Environment Information

  • Operating System: Ubuntu

Additional context

Attempted to pip install keyring, but the problem still persists.
Error:
/anaconda/envs/azureml_py310_sdkv2/bin/python /home/azureuser/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.5/pfutil/pfutil.py connection upsert -f /home/azureuser/.promptflow/temp/new_AzureOpenAI_connection.yaml -o /tmp/pf-upsertConnection-f4aa11eb-a780-42f1-8663-2da7e0f386bc.json
Traceback (most recent call last):
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 155, in _get_from_keyring
return keyring.get_password(KEYRING_SYSTEM, KEYRING_ENCRYPTION_KEY_NAME)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/keyring/core.py", line 56, in get_password
return get_keyring().get_password(service_name, username)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/keyring/backends/fail.py", line 28, in get_password
raise NoKeyringError(msg)
keyring.errors.NoKeyringError: No recommended backend was available. Install a recommended 3rd party backend package; or, install the keyrings.alt package if you want to use the non-recommended backends. See https://pypi.org/project/keyring for details.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/azureuser/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.5/pfutil/_pf_vsc_connection.py", line 52, in upsert_connection
upsert_connection_new(args)
File "/home/azureuser/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.5/pfutil/_pf_vsc_connection.py", line 65, in upsert_connection_new
connection = _upsert_connection_from_file(args.file)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_cli/_pf/_connection.py", line 210, in _upsert_connection_from_file
connection = _client.connections.create_or_update(connection)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/operations/_connection_operations.py", line 68, in create_or_update
orm_object = connection._to_orm_object()
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/entities/_connection.py", line 256, in _to_orm_object
secrets = self._validate_and_encrypt_secrets()
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/entities/_connection.py", line 137, in _validate_and_encrypt_secrets
encrypt_secrets[k] = encrypt_secret_value(v)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 181, in encrypt_secret_value
encryption_key = get_encryption_key(generate_if_not_found=True)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 164, in get_encryption_key
ENCRYPTION_KEY_IN_KEY_RING = _get_from_keyring()
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 157, in _get_from_keyring
raise StoreConnectionEncryptionKeyError(
promptflow._sdk._errors.StoreConnectionEncryptionKeyError: System keyring backend service not found in your operating system. See https://pypi.org/project/keyring/ to install requirement for different operating system, or 'pip install keyrings.alt' to use the third-party backend. Reach more detail about this error at https://microsoft.github.io/promptflow/how-to-guides/faq.html#connection-creation-failed-with-storeconnectionencryptionkeyerror

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 155, in _get_from_keyring
return keyring.get_password(KEYRING_SYSTEM, KEYRING_ENCRYPTION_KEY_NAME)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/keyring/core.py", line 56, in get_password
return get_keyring().get_password(service_name, username)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/keyring/backends/fail.py", line 28, in get_password
raise NoKeyringError(msg)
keyring.errors.NoKeyringError: No recommended backend was available. Install a recommended 3rd party backend package; or, install the keyrings.alt package if you want to use the non-recommended backends. See https://pypi.org/project/keyring for details.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/home/azureuser/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.5/pfutil/pfutil.py", line 3, in
main()
File "/home/azureuser/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.5/pfutil/_pf_vsc_main.py", line 32, in main
operate_connection(args)
File "/home/azureuser/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.5/pfutil/_pf_vsc_connection.py", line 29, in operate_connection
upsert_connection(args)
File "/home/azureuser/.vscode-server/extensions/prompt-flow.prompt-flow-1.0.5/pfutil/_pf_vsc_connection.py", line 58, in upsert_connection
connection = ConnectionOperations().create_or_update(connection)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/operations/_connection_operations.py", line 68, in create_or_update
orm_object = connection._to_orm_object()
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/entities/_connection.py", line 256, in _to_orm_object
secrets = self._validate_and_encrypt_secrets()
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/entities/_connection.py", line 137, in _validate_and_encrypt_secrets
encrypt_secrets[k] = encrypt_secret_value(v)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 181, in encrypt_secret_value
encryption_key = get_encryption_key(generate_if_not_found=True)
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 164, in get_encryption_key
ENCRYPTION_KEY_IN_KEY_RING = _get_from_keyring()
File "/anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/promptflow/_sdk/_utils.py", line 157, in _get_from_keyring
raise StoreConnectionEncryptionKeyError(
promptflow._sdk._errors.StoreConnectionEncryptionKeyError: System keyring backend service not found in your operating system. See https://pypi.org/project/keyring/ to install requirement for different operating system, or 'pip install keyrings.alt' to use the third-party backend. Reach more detail about this error at https://microsoft.github.io/promptflow/how-to-guides/faq.html#connection-creation-failed-with-storeconnectionencryptionkeyerror
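The FAQ's suggested workaround can be verified programmatically. A minimal diagnostic sketch (the service and key names here are illustrative, not promptflow's actual ones):

```python
def diagnose_keyring():
    """Return a short hint about the state of the system keyring backend."""
    try:
        import keyring
        from keyring.errors import NoKeyringError
    except ImportError:
        return "keyring is not installed: pip install keyring"
    try:
        # Illustrative service/key names; promptflow uses its own constants.
        keyring.get_password("promptflow", "encryption-key")
    except NoKeyringError:
        return "no backend found: pip install keyrings.alt"
    except Exception as exc:
        return f"keyring error: {exc}"
    return "keyring backend is available"
```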


[BUG] The instructions in "Link LLM node with multi-output upstream node" did not work

Describe the bug
Here is the link to the instructions. I followed the instructions to link one key of a dictionary output to a node input but it did not link properly.

How To Reproduce the bug
Here is my flow:

  1. The node "get_entity" produces a dictionary output with three keys: compounds, count, has_entity. The output looks correct to me.

  2. The node "extract_property" is configured to take ${get_entity.output.compounds} as the input "entities". Unfortunately, after the run, I saw that the entire output from "get_entity" (instead of the value of "compounds") was copied into the input "entities".

Expected behavior
The input "entities" should be the value of the key "compounds" in ${get_entity.output}, not the entire ${get_entity.output}

Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b6
  • Operating System: Ubuntu 20.04
  • Python Version using python --version: 3.9.17
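For reference, the expected selection behavior can be illustrated with a toy resolver (this is not promptflow's actual implementation; resolve_reference and the sample outputs are hypothetical):

```python
def resolve_reference(ref, node_outputs):
    """Resolve a ${node.output.key} reference against collected node outputs.

    Illustrative only: mimics the selection behavior the docs describe.
    """
    inner = ref.strip("${}")            # e.g. "get_entity.output.compounds"
    node, _output, *keys = inner.split(".")
    value = node_outputs[node]
    for key in keys:                    # walk into the dict, key by key
        value = value[key]
    return value
```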

[Feature Request] Support flow parameters

Is your feature request related to a problem? Please describe.
When running flows locally or in the cloud, I might want to set some values differently, such as a parameter that points to a different model. At the moment, this can only be done, to some extent, with variants, which can be extremely verbose for tiny changes. Additionally, they require the YAML to be heavily modified to test new parameter combinations.

Describe the solution you'd like
I would like flows to accept input parameters that can be referenced throughout the DAG YAML, with defaults that can be overridden at run time. That way, I could quickly test different things without having to heavily change the YAML. For transparency, these could be logged at the beginning of an execution or set as tags on the flow when running in the cloud.

Describe alternatives you've considered
To some extent, this can be solved with variants, but if a flow is complex, the YAML becomes huge for maybe just changing one little number on one step.
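A rough sketch of how such parameters could behave; both resolve_parameters and the ${params.name} placeholder syntax are hypothetical, not an existing promptflow feature:

```python
import re

def resolve_parameters(defaults, overrides=None):
    """Merge run-time overrides over the flow's declared defaults."""
    params = dict(defaults)
    params.update(overrides or {})
    return params

def render(dag_text, params):
    """Substitute hypothetical ${params.name} placeholders in DAG YAML text."""
    return re.sub(r"\$\{params\.(\w+)\}",
                  lambda m: str(params[m.group(1)]), dag_text)
```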

[BUG] [VSCode Extension] Running bulk test vs code interactive window errors not show

Describe the bug
I was running the code below to do a bulk test using the VS Code Python interactive window (shift-enter to execute the cells).
The flow has some issues, but in the VS Code interactive window the cell execution never stops.

How To Reproduce the Bug
Steps to reproduce the behavior, how frequently can you experience the bug: every time.


Environment Information

  • Operating System: Windows 11
  • Python Version using python --version: 3.9.17


Action required: migrate or opt-out of migration to GitHub inside Microsoft

Migrate non-Open Source or non-External Collaboration repositories to GitHub inside Microsoft

In order to protect and secure Microsoft, private or internal repositories in GitHub for Open Source which are not related to open source projects or require collaboration with 3rd parties (customer, partners, etc.) must be migrated to GitHub inside Microsoft a.k.a GitHub Enterprise Cloud with Enterprise Managed User (GHEC EMU).

Action

✍️ Please RSVP to opt-in or opt-out of the migration to GitHub inside Microsoft.

❗Only users with admin permission in the repository are allowed to respond. Failure to provide a response will result in your repository being automatically archived.🔒

Instructions

Reply with a comment on this issue containing one of the opt-in or opt-out commands below.

✅ Opt-in to migrate

@gimsvc optin --date <target_migration_date in mm-dd-yyyy format>

Example: @gimsvc optin --date 03-15-2023

OR

❌ Opt-out of migration

@gimsvc optout --reason <staging|collaboration|delete|other>

Example: @gimsvc optout --reason staging

Options:

  • staging : This repository will ship as Open Source or go public
  • collaboration : Used for external or 3rd party collaboration with customers, partners, suppliers, etc.
  • delete : This repository will be deleted because it is no longer needed.
  • other : Other reasons not specified

Need more help? 🖐️

[BUG] Unexpected Generator From a Test Result - Request for Docs Update? (SDK vs CLI)

Describe the bug
Using a node that uses the SDK for the AOAI tool returns what I expect, similar to the examples/tutorials. Using the built-in package/tool in flow.dag.yaml returns a raw generator object instead.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Try to mimic what's shown in this example Basic Flow with Builtin LLM:
    inputs:
      question:
        type: string
    outputs:
      answer:
        type: string
        reference: ${answer_the_question_with_context.output}
    nodes:
      ...
      - name: answer_the_question_with_context
        type: llm
        source:
          type: code
          path: step05_answer_question_with_context.jinja2
        inputs:
          deployment_name: gpt-35-turbo
          temperature: 0
          max_tokens: 1000
          prompt_text: ${prompt_variants.output}
        connection: some-aoai-connection
        api: chat
  2. Test the flow in a jupyter notebook:
    test_result = pf.test(flow="some-flow-name", inputs={"question": "What are bugs?"})
    print(f"Flow outputs: {test_result}")
    Flow outputs: {'answer': <generator object generate_from_proxy at 0x177971740>}
    
  3. Test the flow using the CLI:
    $ pf flow test --flow some-flow-name --inputs question="What are bugs?"
    {
        "answer": "Blah blah blah I'm a relevant answer"
    }
    

Expected behavior

I'd expect both to return the same thing. When the result is printed -- which many of the example notebooks do with the test result -- there's no indication that some formatting of the generator output will be required.

Workaround ended up being:

test_result = [i for i in pf.test(flow=FLOW_NAME, inputs=inputs)["answer"]]
print(f"Flow outputs: {''.join(test_result)}")
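A slightly more generic helper for notebooks (my own sketch, not part of the SDK) that joins streamed chunks only when the value actually is a generator:

```python
import types

def materialize(value):
    """Join streamed generator output into a plain string; pass other values through."""
    if isinstance(value, types.GeneratorType):
        return "".join(str(chunk) for chunk in value)
    return value
```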

If one doesn't use the built-in LLM package and instead uses the SDK in a Python code node, this error doesn't come up.

inputs:
  question:
    type: string
outputs:
  answer:
    type: string
    reference: ${answer_the_question_with_context.output}
nodes:
...
- name: answer_the_question_with_context
  type: python
  source:
    type: code
    path: step05_answer_question_with_context.py
  inputs:
    prompt_text: ${STEP_FOUR.output}
    aoai_connection: some-aoai-connection

step05_answer_question_with_context.py: Connects to AOAI

from promptflow import tool
from promptflow.connections import AzureOpenAIConnection
from promptflow.tools import AzureOpenAI
import os
from jinja2 import Template as jinja_template

@tool
def answer_question_with_context(
    prompt_text: jinja_template, 
    aoai_connection: AzureOpenAIConnection
) -> str:

    aoai_gpt_deployment = os.getenv("AZURE_OPENAI_GPT_DEPLOYMENT")

    response = AzureOpenAI(connection=aoai_connection).chat(
        prompt_text, 
        deployment_name=aoai_gpt_deployment, 
        temperature=0, 
        max_tokens=1000
    )

    return response

I think this all comes from being bleeding/cutting edge.


Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b6
  • Operating System: Mac OS 13.5.2
  • Python Version using python --version: Python 3.9.17

Additional context
Branching from #561.

[BUG] Missing WEBSITES_PORT setting for App Service deployment

Describe the bug
Deploying to App Service as described in the documentation results in long startup times.

The Dockerfile created by pf flow build exposes port 8080. By default Azure App Service assumes containers are listening to port 80, otherwise an extra WEBSITES_PORT setting is required according to the documentation. Not having it configured results in Azure App Services waiting for the container to be reachable at the wrong port until a timeout expires, before trying the correct port (see log below). Adding the WEBSITES_PORT=8080 setting in deploy.sh would prevent this situation.
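A possible fix (untested sketch; the resource group and app name are placeholders) is to set the port via the Azure CLI, e.g. as an extra step in deploy.sh:

```shell
# Tell App Service which port the container listens on, so warmup
# probes hit 8080 immediately instead of timing out on port 80 first.
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings WEBSITES_PORT=8080
```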

2023-09-14T19:25:43.960Z INFO - Starting container for site
2023-09-14T19:25:43.961Z INFO - docker run -d --expose=80 --name myapp_0_e72dfc5f -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=mysite -e WEBSITE_AUTH_ENABLED=False -e PORT=8080 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=mysite.azurewebsites.net -e WEBSITE_INSTANCE_ID=xxx -e WEBSITE_USE_DIAGNOSTIC_SERVER=False xxx.azurecr.io/myimage:1.3 bash start.sh

2023-09-14T19:25:43.961Z INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2023-09-14T19:25:45.658Z INFO - Initiating warmup request to container myapp_0_e72dfc5f for site mysite
2023-09-14T19:26:02.569Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 16.9110536 sec
2023-09-14T19:26:20.122Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 34.463459 sec
2023-09-14T19:26:38.662Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 53.0033335 sec
2023-09-14T19:26:54.717Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 69.0580589 sec
2023-09-14T19:27:10.089Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 84.4306526 sec
2023-09-14T19:27:25.342Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 99.6837565 sec
2023-09-14T19:27:40.408Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 114.7497201 sec
2023-09-14T19:27:55.484Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 129.8255082 sec
2023-09-14T19:28:10.550Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 144.8916428 sec
2023-09-14T19:28:25.612Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 159.9534253 sec
2023-09-14T19:28:40.677Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 175.0181097 sec
2023-09-14T19:28:56.754Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 191.0958969 sec
2023-09-14T19:29:11.820Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 206.1612047 sec
2023-09-14T19:29:27.793Z INFO - Waiting for response to warmup request for container myapp_0_e72dfc5f. Elapsed time = 222.1347732 sec
2023-09-14T19:29:35.861Z ERROR - Container myapp_0_e72dfc5f for site mysite did not start within expected time limit. Elapsed time = 230.2019503 sec
2023-09-14T19:29:36.102Z ERROR - Container myapp_0_e72dfc5f didn't respond to HTTP pings on port: 8080, failing site start. See container logs for debugging.
2023-09-14T19:29:36.133Z INFO - Stopping site mysite because it failed during startup.
2023-09-14T19:29:50.829Z INFO - Pulling image: xxx.azurecr.io/myimage:1.3
2023-09-14T19:29:51.179Z INFO - 1.3 Pulling from myimage
2023-09-14T19:29:51.182Z INFO - Digest: sha256:b768583310fb3530bad5427b6df9a304893329eb02c5ad5bd0a6b533ed9b5954
2023-09-14T19:29:51.183Z INFO - Status: Image is up to date for xxx.azurecr.io/myimage:1.3
2023-09-14T19:29:51.185Z INFO - Pull Image successful, Time taken: 0 Minutes and 0 Seconds
2023-09-14T19:29:51.202Z INFO - Starting container for site
2023-09-14T19:29:51.203Z INFO - docker run -d --expose=8080 --name mysite_0_cc56b549 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=mysite -e WEBSITE_AUTH_ENABLED=False -e PORT=8080 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=mysite.azurewebsites.net -e WEBSITE_INSTANCE_ID=xxx -e WEBSITE_USE_DIAGNOSTIC_SERVER=False xxx.azurecr.io/myimage:1.3 bash start.sh

2023-09-14T19:29:51.204Z INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2023-09-14T19:29:51.948Z INFO - Initiating warmup request to container mysite_0_cc56b549 for site mysite
2023-09-14T19:29:55.976Z INFO - Container mysite_0_cc56b549 for site mysite initialized successfully and is ready to serve requests.

How To Reproduce the bug
Steps to reproduce the behavior:

  1. Create a flow
  2. Follow the documentation to deploy to App Service.
  3. The app takes a long time to become reachable.

Expected behavior
The App Service deployment becomes reachable without waiting for the warmup timeout.

Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b5
  • Operating System: MacOS 13.5.2
  • Python Version using python --version: 3.11.5

[Feature Request] Remove errors when they are fixed

Is your feature request related to a problem? Please describe.
While trying to learn and use the code-first experience, I was confused by the Prompt flow errors in the IDE. After checking the details of each error, they appear to be point-in-time errors from earlier in my flow development. However, even after my flow is fixed and runs successfully, the accumulated PF errors don't go away, and it's annoying to dismiss them one by one.

Describe the solution you'd like
Don't show errors when they are no longer valid (or have been fixed).

Describe alternatives you've considered
It might be more natural to show PF errors in the VS Code Problems panel instead of a pop-up.

Additional context
N/A

[BUG] CLI build issues when connection contains spaces

Describe the bug
A few things do not work when running pf flow build with a connection that has a space in its name (e.g. "Azure OpenAI"):

  • The generated start.sh script does not escape the connection file path correctly:
    pf connection create --file /connections/Azure OpenAI.yaml
  • The api_key for connection file (e.g. connections/Azure OpenAI.yaml) is invalid as it contains spaces:
    api_key: ${env:AZURE OPENAI_API_KEY}
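A sketch of the escaping/sanitization the generator could apply; connection_env_var and connection_create_command are hypothetical helpers, not promptflow code:

```python
import re
import shlex

def connection_env_var(connection_name):
    """Sanitize a connection name into a valid environment variable key."""
    return re.sub(r"[^A-Za-z0-9]+", "_", connection_name).upper() + "_API_KEY"

def connection_create_command(yaml_path):
    """Shell-quote the connection file path for the generated start.sh."""
    return "pf connection create --file " + shlex.quote(yaml_path)
```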

How To Reproduce the bug
Steps to reproduce the behavior:

  1. Create a simple flow
  2. Configure a connection with a space in the name
  3. Run pf flow build --source <flow_dir> -f docker --output out
  4. Observe the aforementioned files

Expected behavior
The aforementioned files should be generated correctly

Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b5
  • Operating System: MacOS 13.5.2
  • Python Version using python --version: 3.11.5

The extension connection panel is “always loading” [BUG] [VSCode extension]

Describe the bug
I cannot create connections through the extension's connection panel; the panel is stuck waiting to fetch connections, and the progress message in the bottom status bar keeps saying "Fetching connections" or "Validating connection configurations".
Does anybody know how to solve this issue?

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:
1. Open the Prompt flow VS Code extension without doing anything; the connection panel shows the state below.
2. Click "+", select OpenAI or Azure, then click "Create connection". Nothing happens.



Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b5
  • Operating System: [e.g. WSL: Ubuntu 20.04]
  • Python Version using python --version: python==3.10.11

[BUG] [VSCode Extension] python interpreter selection not persisted to flow editor

Describe the bug
When I go to "install dependencies" and select a Python interpreter, I choose the Python env.

Then when I open a flow DAG YAML, the runtime is not selected.


Environment Information

  • Promptflow Package Version using pf -v: 0.0.102528530
  • Operating System: Windows 11
  • Python Version using python --version: Python 3.9.17
  • VS Code version: 1.81.1
  • Prompt Flow extension version: 1.0.4


[BUG] Code-first with VSCode Web: Missing instructions on dev env setup

Describe the bug
Cannot create a folder or virtual env by following the instructions. Suspect we might be missing guidance for developers who are not familiar with the AML environment (according to the doc, AML knowledge isn't required 😄).

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Create a new project in the new portal.
  2. Click on the "Work in VS Code (Web)" button.
  3. Select compute and click on "Launch".
  4. The portal will open VSCode web in a new window. Sign-in with my credential.
    5-1. Try to create a src/ folder via UI or CLI. => permission denied.
    5-2. Try to create a virtual env. => conda isn't installed

Expected behavior

  1. Have permission to add folders/files.
  2. conda is pre-installed.


Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: N/A (Not install PF yet)
  • Operating System: Windows 11 + VSCode Web. The VSCode Web env seems run on Linux.
  • Python Version using python --version: the VSCode Web env has Python 3.10.12 (main, Jul 28 2023, 05:35:08) [GCC 12.2.0] on linux.

Additional context
It's not clear to developers what the next step is after opening an empty PF project in VS Code Web. Recommend adding a PF welcome page to guide users through the necessary steps.

Does not support uploading datastore with PDFs [BUG]

Describe the bug
PromptFlow will throw an error if you point to an Azure DataStore that contains structured data (such as PDFs). For such data, you instead have to upload the files manually rather than pointing to a DataStore.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Go to Data in the AML workspace
  2. Create a new Data connection
  3. Create from Azure Storage
  4. Create a new Datastore
  5. Create the datastore from uploaded PDFs in Azure Blob Storage
  6. Store Path leave as default (/)
  7. An error will be thrown

Expected behavior
A data asset should be able to be created from structured data such as PDFs


[BUG] Run/debug single node doesn't work with the chat-with-pdf example

Describe the bug
Run/debug single node doesn't work with the chat-with-pdf example, since it depends on environment variables that were set up in the setup_env step.

How To Reproduce the bug
It's a consistent repro:

  1. Open the chat-with-pdf example mentioned in the e2e tutorial with the vscode extension
  2. Follow the tutorial to run the flow successfully
  3. Then try to run a single node, which fails complaining about a missing environment variable, etc.

Expected behavior
Single node run/debug should work. I believe this requires some redesign of the sample, but it would be nice to leverage the single-node debug capability.


Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b5
  • Operating System: macOS 13.5
  • Python Version using python --version: Python 3.9.17
  • VS code extension: 1.0.7

[BUG] Missing apiType error when creating Cognitive Search Prompt Flow connection using the UI

Describe the bug
When creating a new "Cognitive search connection" in the prompt flow UI, we get a 400 Bad Request with this message:
"Required metadata property ApiType is missing".
The OpenAI connection has an "apiType" text box, but Cognitive Search does not. Not sure why it would be required anyway.
I manually modified the payload to include "api_type":"azure" but I still get the exact same error message.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. In the prompt flow UI, try to create a new Cognitive Search connection. It fails every time.

Expected behavior
Successfully Add a Cognitive Search connection in prompt flow without having to specify apiType.


[Feature Request] Dynamic list capability support in tool contract

I noticed that the LLM has the capability to dynamically list the deployment names based on the AOAI connection provided. It would be advantageous to incorporate this capability into the tool contract, allowing other tools, including custom tools, to benefit from it. For instance, the "Vector index lookup" tool could have the "path" input dynamically list the available ML Indexes in the workspace.

In summary, any tool should be able to define the listing experience and the corresponding request to enable the dynamic listing feature.
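A toy sketch of what such a contract could look like; all names here are hypothetical, and list_aoai_deployments just returns canned values in place of a real listing request:

```python
def list_aoai_deployments(connection):
    """Stand-in for a real call that enumerates deployments on a connection."""
    return ["gpt-35-turbo", "gpt-4"]

# Hypothetical contract: a tool maps an input name to a provider function
# that computes the choices to show in the dropdown.
DYNAMIC_LISTS = {"deployment_name": list_aoai_deployments}

def choices_for(input_name, connection=None):
    """Return dynamic choices for an input, or None if the input is static."""
    provider = DYNAMIC_LISTS.get(input_name)
    return provider(connection) if provider else None
```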


[BUG] [VSCode Extension] No action prompt in the newly opened window to let me know I need switch the conda env

Describe the bug
No action prompt in the newly opened window to let me know I need switch the conda env.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. click Create new flow
  2. select empty flow
  3. create

This opens a new window for the flow folder, but there the pf conda env needs to be activated manually. Otherwise I don't know what to do in the dag.yaml file: the button is missing, and there is no message prompting me to switch kernels. This is especially confusing for the empty flow, where only input and output are shown.


Environment Information

  • Promptflow Package Version: 0.1.0b5
  • Operating System: Windows 11
  • Python Version: python==3.9
  • Prompt Flow extension version: v1.0.7


[BUG] AML Studio Web version of Prompt Flow does not log or trace to Azure Application Insights

During the Microsoft Global Hackathon 2023 it was discovered that when using the Web version of PromptFlow, logging and tracing do not reach App Insights.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Use the following python packages: opencensus-ext-azure, opencensus-ext-logging. Consider the "chat_flow_with_vectordb_tool" flow on the branch https://github.com/microsoft/prompt-flow-llmops/tree/feature/logging_and_tracing

  2. Engage the flow on local VSCode or web version of VSCode using cloud compute, verify a trace appears on AppInsights. Check for traces by using the search query "traces" in LogAnalytics.

  3. Engage the flow on Azure Machine Learning Studio, observe no trace appears on AppInsights

Expected behavior

When flow is engaged using Azure Machine Learning Studio's version of PromptFlow, simple traces appear in AppInsights.

Screenshots

Evidence showing working version on VSCode https://github.com/microsoft/prompt-flow-llmops/issues/3

Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.0.105586882
  • Operating System: MacOS 13.6 using DevContainer with $ uname -a returning : Linux d3f8f1336671 6.3.13-linuxkit #1 SMP PREEMPT Thu Sep 7 07:48:47 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  • Python Version using python --version: Python 3.9.16

[BUG] Bug report for Prompt flow VS Code extension

Describe the bug
I'm experiencing an issue with integrating Semantic Kernel plugins into Prompt Flow. I followed the Semantic Kernel tutorial and created some plugins using the VSCode extension (example: 'FunSkill' > 'Excuses'). However, when I attempt to add these plugins to a prompt flow in Prompt Flow, the interface does not allow me to select the folder, skprompt.txt or config.json files associated with my Semantic Function.

How To Reproduce the bug

  1. Create a new node in the prompt flow by clicking the "+" button.
  2. Select "Prompt Tool" from the options.
  3. Name the node.
  4. Try to select an existing file for the node.
  5. Notice that only .jinja2 files can be selected, and not .txt or .json files related to VSCode-extension-generated Semantic functions.

Expected behavior
I expect to be able to select the .txt prompt file of my 'Excuses' semantic function or the relevant .json configuration file. The interface should support integration with Semantic Kernel plugins as per the tutorial.


Additional context
The discrepancy between the templating in .jinja2 files ({{text}}) and skprompt.txt files ({{$input}}) further complicates the issue. It's not clear whether Semantic Kernel plugins can be integrated into Prompt Flow; if not, that raises the question of why Prompt Flow was advertised in the Semantic Kernel tutorial as a solution to generate and evaluate chains of prompts.

Screenshots
Screenshots showing how I can't add existing Semantic Functions to the Prompt Flow interface:

Screenshots showing the discrepancy between templating for Prompt Flow .jinja2 files and skprompt.txt files:

[BUG] Streaming requires Generator where Iterator should suffice

Describe the bug
For user code to provide streaming content, Promptflow relies on the specific GeneratorType instead of the more generic Iterator protocol. That requires the user to unnecessarily wrap their iterators in generators to stream content in more generic scenarios (such as streaming langchain results). Is there a reason PF depends on GeneratorType instead of the iterator protocol? Also, see this note in the Python source:

# Iterators in Python aren't a matter of type but of protocol.  A large
# and changing number of builtin types implement *some* flavor of
# iterator.  Don't check the type!  Use hasattr to check for both
# "__iter__" and "__next__" attributes instead.
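A protocol-based check along the lines that note suggests could look like this (a sketch of the proposed behavior, not promptflow's code):

```python
from collections.abc import Iterator

def is_streaming_output(value):
    """Accept any iterator as streamable, not only generator objects.

    collections.abc.Iterator matches anything with __iter__ and __next__,
    so custom iterators like WordIterator pass without a generator wrapper.
    """
    return isinstance(value, Iterator)
```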

Concrete Issue:
When returning an iterator from my tool to enable streaming, I get the following error when running pf flow test

Flow test failed with UserErrorException: Exception: The output 'answer' for flow is incorrect. The output value is 
not JSON serializable. JSON dump failed: (TypeError) Object of type WordIterator is not JSON serializable. Please 
verify your flow output and make sure the value serializable.

How To Reproduce the bug
Here is my flow.dag.yaml:

id: template_chat_flow
name: Template Chat Flow
inputs:
  chat_history:
    type: list
    default: []
  question:
    type: string
    default: What is the meaning of life?
outputs:
  answer:
    type: string
    is_chat_output: true
    reference: ${stream.output}
nodes:
- name: stream
  type: python
  source:
    type: code
    path: stream.py
  inputs:
    input: ${inputs.question}

And here is my stream.py

from promptflow import tool

class WordIterator:
    def __init__(self, input: str):
        self.input = input

    def __iter__(self):
        return self

    def __next__(self):
        if self.input:
            word, *rest = self.input.split(" ")
            self.input = " ".join(rest)
            return f"{word} "
        else:
            raise StopIteration  

@tool
def my_python_tool(input: str):
    iterator = WordIterator(input)
    assert hasattr(iterator, "__iter__")
    assert hasattr(iterator, "__next__")
    return iterator

With the above, running pf flow test --flow . gives this error:

2023-09-19 11:01:17 +0200   42558 execution          INFO     Start to run 1 nodes with concurrency level 16.
2023-09-19 11:01:17 +0200   42558 execution.flow     INFO     Executing node stream. node run id: c4ddaddd-6a38-44fd-9ab1-3258fb88bb37_stream_0
2023-09-19 11:01:17 +0200   42558 execution.flow     WARNING  Output of stream is not json serializable, use str to store it.
2023-09-19 11:01:17 +0200   42558 execution.flow     INFO     Node stream completes.
Flow test failed with UserErrorException: Exception: The output 'answer' for flow is incorrect. The output value is not JSON serializable. JSON dump failed: (TypeError) Object of type WordIterator is not JSON serializable. Please verify your flow output and make sure the value serializable.

Expected behavior
It should read out the iterator and stream the chunks to the caller -- pf flow test --flow . should look like this:

2023-09-19 10:34:05 +0200   40375 execution          INFO     Start to run 1 nodes with concurrency level 16.
2023-09-19 10:34:05 +0200   40375 execution.flow     INFO     Executing node stream. node run id: 24e60c4d-606a-4fc5-8e4c-cc4a5c41d6c8_stream_0
2023-09-19 10:34:05 +0200   40375 execution.flow     WARNING  Output of stream is not json serializable, use str to store it.
2023-09-19 10:34:05 +0200   40375 execution.flow     INFO     Node stream completes.
{
    "answer": "What is the meaning of life? "
}

Running Information(please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b6
  • Operating System:
ProductName:            macOS
ProductVersion:         13.5.2
BuildVersion:           22G91
  • Python Version using python --version: Python 3.10.12

Additional context
If I wrap the iterator in a generator, everything works as expected:

from promptflow import tool

class WordIterator:
    def __init__(self, input: str):
        self.input = input

    def __iter__(self):
        return self

    def __next__(self):
        if self.input:
            word, *rest = self.input.split(" ")
            self.input = " ".join(rest)
            return f"{word} "
        else:
            raise StopIteration  
    
    def to_generator(self):
        try:
            while True:
                yield next(self)
        except StopIteration:
            pass

@tool
def my_python_tool(input: str):
    iterator = WordIterator(input).to_generator()
    assert hasattr(iterator, "__iter__")
    assert hasattr(iterator, "__next__")
    return iterator
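
A likely explanation for this difference (an assumption based on the generator_proxy module visible in the installed package, not confirmed from promptflow's source) is that the executor special-cases genuine generator objects, while a hand-rolled iterator class fails an isinstance check against types.GeneratorType even though it implements the same protocol. A self-contained sketch:

```python
import types

class WordIterator:
    """Iterator that yields one word (plus a trailing space) at a time."""
    def __init__(self, input: str):
        self.input = input

    def __iter__(self):
        return self

    def __next__(self):
        if self.input:
            word, *rest = self.input.split(" ")
            self.input = " ".join(rest)
            return f"{word} "
        raise StopIteration

it = WordIterator("What is the meaning of life?")
gen = (chunk for chunk in WordIterator("What is the meaning of life?"))

# Both satisfy the iterator protocol the asserts in the tool check for...
assert hasattr(it, "__iter__") and hasattr(it, "__next__")
assert hasattr(gen, "__iter__") and hasattr(gen, "__next__")

# ...but only the generator expression passes an isinstance check,
# which is presumably what the executor's generator handling keys on.
assert not isinstance(it, types.GeneratorType)
assert isinstance(gen, types.GeneratorType)

# Fully consumed, both yield the streamed answer shown in the expected output.
assert "".join(gen) == "What is the meaning of life? "
```

This is consistent with the observed behavior: the to_generator() wrapper turns the custom iterator into a real generator, which the executor then streams as expected.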

[Feature Request] Allow AzureML data assets as non-data inputs

Is your feature request related to a problem? Please describe.
When flows grow more complex than the provided examples, there are many cases where one might want to inject data and/or files into some nodes of a flow that are not input records. One example is injecting a custom vector index store; I was not able to do that.

Describe the solution you'd like
I would like to be able to mount AzureML registered data assets that are not input data but some other artifacts that are required by my flow.

[Feature Request] Point local run outputs to flow folder

Is your feature request related to a problem? Please describe.
At the moment, when running flows locally, the outputs are saved under Path().home(), which puts them in a hard-to-access place. While they can be found via the paths printed to stdout, they are hard to recover later on, which hinders the local experience of inspecting results.

Describe the solution you'd like
Always save the outputs in <flow_directory>/.promptflow, like the other artifacts that appear there, so they are always easily recoverable.
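
The proposal (including the override parameter described under alternatives below) could be sketched as a small path-resolution helper; resolve_output_dir is a hypothetical name for illustration, not an existing promptflow API:

```python
from pathlib import Path
from typing import Optional

def resolve_output_dir(flow_directory: str, output_dir: Optional[str] = None) -> Path:
    """Hypothetical helper: default local run outputs to <flow_directory>/.promptflow
    (next to the other artifacts there) instead of a directory under Path.home(),
    with an optional parameter to save them wherever the user wants."""
    if output_dir is not None:
        return Path(output_dir)
    return Path(flow_directory) / ".promptflow"

# Default: outputs stay next to the flow, so they are easy to find later.
assert resolve_output_dir("my_flow") == Path("my_flow") / ".promptflow"
# Override: the alternative of passing an explicit location.
assert resolve_output_dir("my_flow", "/tmp/pf_runs") == Path("/tmp/pf_runs")
```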

Describe alternatives you've considered
Make it a parameter that can be passed to the flow, so outputs can be saved wherever you want.

Additional context
I created a PR for this (#506), but it was ignored without any feedback, so maybe the process should start with an issue.

[BUG] Missing `promptflow.core` module when running a `promptflow.tool`

Describe the bug
When trying to run a python node, any import from the promptflow.tools package fails.

How To Reproduce the bug
Steps to reproduce the behavior, how frequent can you experience the bug:

  1. Have a flow step that eventually requires something from promptflow.tools. In my case, I was using a prompt step in a node variant:
    node_variants:
      prompt_variants:
        default_variant_id: variant_0
        variants:
          variant_0:
            node:
              type: prompt ## THIS STEP
              source:
                type: code
                path: 04_prompt_variant_0.jinja2
              inputs:
                contexts: ${generate_prompt_context.output}
                question: ${inputs.question}
  2. See:
    [Stack trace that goes through tool resolving]
    
    File ~/[...]env-3dot9/lib/python3.9/site-packages/promptflow/executor/_tool_resolver.py:117, in ToolResolver.resolve_tool_by_node(self, node, convert_input_types)
        115     raise NotImplementedError(f"Tool source type {node.source.type} for python tool is not supported yet.")
        116 elif node.type is ToolType.PROMPT:
    --> 117     return self._resolve_prompt_node(node)
        118 elif node.type is ToolType.LLM:
        119     return self._resolve_llm_node(node, convert_input_types=convert_input_types)
    
    File ~/[...]/env-3dot9/lib/python3.9/site-packages/promptflow/executor/_tool_resolver.py:151, in ToolResolver._resolve_prompt_node(self, node)
        149 prompt_tpl = self._load_source_content(node)
        150 prompt_tpl_inputs = get_inputs_for_prompt_template(prompt_tpl)
    --> 151 from promptflow.tools.template_rendering import render_template_jinja2
        153 params = inspect.signature(render_template_jinja2).parameters
        154 param_names = [name for name, param in params.items() if param.kind != inspect.Parameter.VAR_KEYWORD]
    
    File ~/[...]/env-3dot9/lib/python3.9/site-packages/promptflow/tools/__init__.py:1
    ----> 1 from .aoai import AzureOpenAI  # noqa: F401
          2 from .azure_content_safety import AzureContentSafety  # noqa: F401
          3 from .azure_detect import AzureDetect  # noqa: F401
    
    File ~/[...]/env-3dot9/lib/python3.9/site-packages/promptflow/tools/aoai.py:8
          6 from promptflow.connections import AzureOpenAIConnection
          7 from promptflow.contracts.types import PromptTemplate
    ----> 8 from promptflow.core.cache_manager import enable_cache
          9 from promptflow.core.tool import ToolProvider, tool
         10 from promptflow.core.tools_manager import register_api_method, register_apis
    
    ModuleNotFoundError: No module named 'promptflow.core'
  3. Inspect the promptflow package in the environment:
    env-3dot9/lib/python3.9/site-packages/promptflow/
    ├── [... ## OTHER FILES AND DIRECTORIES ##]
    ├── _core
    │   ├── __init__.py
    │   ├── __pycache__
    │   ├── _errors.py
    │   ├── cache_manager.py
    │   ├── connection_manager.py
    │   ├── flow_execution_context.py
    │   ├── generator_proxy.py
    │   ├── log_manager.py
    │   ├── metric_logger.py
    │   ├── openai_injector.py
    │   ├── operation_context.py
    │   ├── run_tracker.py
    │   ├── thread_local_singleton.py
    │   ├── tool.py
    │   ├── tool_meta_generator.py
    │   ├── tools_manager.py
    │   └── tracer.py
    ├── [... ## OTHER DIRECTORIES ##]
    └── tools
        ├── __init__.py
        ├── __pycache__
        ├── aoai.py
        ├── azure_content_safety.py
        ├── azure_detect.py
        ├── azure_form_recognizer.py
        ├── azure_language_detector.py
        ├── azure_translator.py
        ├── common.py
        ├── embedding.py
        ├── exception.py
        ├── list.py
        ├── openai.py
        ├── serpapi.py
        ├── template_rendering.py
        └── yamls
  4. See that the core package appears to have been renamed to _core, but the tools have not been updated accordingly.

Expected behavior
I'd expect the tools that use anything from the core module to work, because they should reference the correct import paths.

Screenshots
N/A, see the text above.

Running Information (please complete the following information):

  • Promptflow Package Version using pf -v: 0.1.0b3
  • Operating System: Mac 13.5.1
  • Python Version using python --version: 3.9.17

Additional context
I think I fixed it locally by fixing import paths. I'll be making a PR soon.
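
Until the import paths are fixed in the package, one stopgap is to alias the old module path in sys.modules before anything imports it. The sketch below demonstrates the technique on a hypothetical stand-in package (demo_pkg) rather than promptflow itself, since the real fix is updating the imports in promptflow/tools:

```python
import importlib
import sys
import types

# Stand-in package simulating the rename: `core` became `_core`, so old
# import paths like `<pkg>.core.cache_manager` break with ModuleNotFoundError.
pkg = types.ModuleType("demo_pkg")
_core = types.ModuleType("demo_pkg._core")
_core.enable_cache = lambda f: f  # stand-in attribute for illustration
sys.modules["demo_pkg"] = pkg
sys.modules["demo_pkg._core"] = _core
pkg._core = _core

# Workaround sketch: register the old name as an alias of the renamed module
# before any code imports it.
sys.modules["demo_pkg.core"] = sys.modules["demo_pkg._core"]

# Old-style imports now resolve to the renamed module.
core = importlib.import_module("demo_pkg.core")
assert core is _core
assert core.enable_cache(print) is print
```

This kind of shim is fragile (it must run before the first failing import), so patching the import statements, as the report suggests, is the proper fix.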

[BUG] [VSCode Extension] Bulk test with existing run auto-generate yaml missing data entry

Describe the bug
When the user clicks bulk test in the VS Code extension and selects an existing run, a yaml file is created automatically. However, this auto-generated yaml file is missing one entry:
image

The data entry is missing. The user is likely to forget to add it, and clicking "run" then produces an error saying no data is included.
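
For reference, a run yaml with the data entry present might look like the sketch below; the field names follow my understanding of the pf run yaml schema, so treat the exact keys as an assumption:

```yaml
flow: .
data: ./data.jsonl          # the entry missing from the auto-generated file
column_mapping:
  question: ${data.question}
```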

Screenshots

  1. On the VSCode primary side bar > the Prompt flow pane > quick access section, find the "install dependencies" action. Please click it and attach the screenshots there.
    image

  2. Please provide other snapshots about the key steps to repro the issue.

Environment Information

  • Operating System: [e.g. Ubuntu 20.04, Windows 11]
  • Python Version using python --version: [e.g. python==3.9]

Additional context
Add any other context about the problem here.
