ml-tooling / opyrator
🪄 Turns your machine learning code into microservices with web API, interactive GUI, and more.
Home Page: https://opyrator-playground.mltooling.org
License: MIT License
Describe the bug:
Error parsing Markdown or HTML in this string:
The above error occurs when trying to show the SVG result image; the SVG image is neither shown nor downloadable.
I'm not experienced in web programming or pydantic models, so if I did something wrong,
you can just say "study more!" and ignore this.
Thank you
Expected behaviour:
The SVG image should be shown.
Steps to reproduce the issue:
Here is my code
```python
from pydantic import BaseModel, Field
from opyrator.components.types import FileContent
import time
import os, subprocess, shutil
from Bio import SeqIO
from ete3 import Tree, TreeStyle, NodeStyle, TextFace
import pandas as pd
import copy


class Input(BaseModel):
    tree: str = Field(
        ...,
        title="Tree Input",
        description="Copy your newick tree format"
    )
    percent_identity: float = Field(
        order=0,
        description="Percent identity, should be lower than 100, 99 ~ 99.6 recommended"
    )


class Output(BaseModel):
    image_file: FileContent = Field(..., mime_type="image/svg+xml")


def flatten(tree, accession, percent_identity=99):
    cutoff = float((100 - percent_identity) / 2)
    t = Tree(tree)
    ts = TreeStyle()
    # ts.scale = 100
    # ts.show_branch_length = True
    ts.allow_face_overlap = True
    name_list = []
    dict_trees = {}
    for leaf in t:
        name_list.append(leaf.name)
        dict_trees[leaf.name] = 0
    cnt = 1
    colors = [
        "#fbb4ae",
        "#b3cde3",
        "#ccebc5",
        "#decbe4",
        "#fed9a6",
        "#ffffcc",
        "#e5d8bd",
    ]
    for node in t.traverse():
        node.img_style["size"] = 2

    def go_down(node, dist):
        color = colors[cnt % len(colors)]
        for leaf in node:
            branch_sum = 0
            new_leaf = leaf
            while new_leaf != node:
                try:
                    branch_sum += new_leaf.dist
                    new_leaf = new_leaf.up
                except Exception:
                    break
            if branch_sum < cutoff:
                dict_trees[leaf.name] = cnt

    def go_up(node, dist):
        if not node.is_root():
            if dist + node.up.dist < cutoff:  # more way to go up
                go_up(node.up, dist + node.up.dist)
            else:
                go_down(node.up, 0)

    for name in name_list:
        color = colors[cnt % len(colors)]
        if dict_trees[name] == 0:  # if node has not been assigned a group yet
            node = t.search_nodes(name=name)[0]
            dict_trees[name] = cnt
            go_up(node, node.dist)
            cnt += 1

    for name in name_list:
        # print(name)
        node = t.search_nodes(name=name)[0]
        cnt = dict_trees[name]
        color = colors[cnt % len(colors)]
        node.img_style["bgcolor"] = color
        back = TextFace(f"group {cnt}")
        blank = TextFace(" ")
        node.add_face(blank, column=1, position="branch-right")
        node.add_face(back, column=2, position="branch-right")

    t.render(f"{accession}.svg", tree_style=ts)
    dict_new = {"leaf": [], "group": []}
    for key in dict_trees.keys():
        dict_new["leaf"].append(key)
        dict_new["group"].append(dict_trees[key])
    df = pd.DataFrame(dict_new)
    df.to_excel(f"{accession}.xlsx")


def Tree_grouper(input: Input,) -> Output:
    accession = time.time()
    flatten(input.tree, accession, input.percent_identity)
    print(f"{accession}.svg")
    with open(f"{accession}.svg") as f:
        svg = f.read()
    return Output(image_file=svg)
```
(For the libraries, install with `pip install biopython`, `pip install ete3`, and `pip install PyQt5`.)
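One thing that may be worth checking (an assumption on my part, not a confirmed fix): binary file-content fields usually expect raw bytes, while the repro above reads the rendered SVG in text mode. A minimal sketch of reading it as bytes:

```python
# Hypothetical tweak: read the rendered SVG in binary mode ("rb"), since
# file-content fields typically carry raw bytes rather than decoded text.
def load_svg_bytes(path):
    with open(path, "rb") as f:
        return f.read()
```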
Input
Tree
(TM327_Metarhizium:0.15152639840759060674,(TM369_Metapochonia:0.04593148843963373862,TM330_Pochonia:0.02745325322091625442)45:0.00107340303595512832,TM328_Metarhizium:0.09865295544240419712);
Percent identity
99
Expected output
A phylogenetic tree image
Technical details:
Possible Fix:
Additional context:
After the problem happens, a QObject::startTimer error also appears on the server side.
In the fresh environment, using Python 3.9.7 on macOS Monterey 12.2.1 (x86) (tested on M1 chip, same issues)
No extra packages were installed, only Opyrator and its dependencies
The code I'm trying to run:
```python
from pydantic import BaseModel

class Input(BaseModel):
    message: str

class Output(BaseModel):
    message: str

def hello_world(input: Input) -> Output:
    """Returns the `message` of the input data."""
    return Output(message=input.message)
```
$ opyrator launch-ui my_opyrator:hello_world
ModuleNotFoundError: No module named 'streamlit.report_thread'
$ opyrator launch-api my_opyrator:hello_world
ImportError: cannot import name 'graphql' from 'starlette' (/Users/sean/python_envs/opyrator-env/lib/python3.9/site-packages/starlette/__init__.py)
Some folks on the internet suggest downgrading Streamlit, but the issue seems to persist.
I've looked into the package itself and found neither a report_thread.py nor any reference to "report_thread" in the Python files.
I want to use microservices to deploy my deep learning applications; is this a good idea?
I'm currently using Flask to deploy a server, and I have a hunch that as more models arrive, code complexity and environment compatibility will become a big problem.
Can Opyrator help me solve these problems?
Feature description:
Dear People from opyrator,
Absolutely awesome framework !
At PyTorch Lightning, we are working on a task-based framework called LightningFlash: https://github.com/PytorchLightning/lightning-flash.
It would be great to collaborate and make those tasks available through Opyrator,
so people can train, finetune, and predict from the UI, play with the transforms and visualise them, etc.
Best,
T.C
Problem and motivation:
Is this something you're interested in working on?
Feature description:
Finalize capabilities to package and export a compatible function into a self-contained zip-file.
The export can be executed via command line:
opyrator export my_opyrator:hello_world my-opyrator.zip
This exported zip-file packages relevant source code and data artifacts into a single file which can be shared, stored, and used for launching the API or UI.
External requirements are automatically discovered from the working directory based on the following files: Pipfile (Pipenv environment), environment.yml (Conda environment), pyproject.toml (Poetry dependencies), requirements.txt (pip requirements), setup.py (Python project requirements), or packages.txt (apt-get packages), or discovered via pipreqs as a fallback. However, external requirements are only included as instructions and are not packaged into the ZIP file. If you want to export your Opyrator fully self-contained, including all requirements or even the Python interpreter itself, please refer to the Docker or PEX export options.
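The discovery order above can be sketched with plain standard-library code (an illustration of the described behavior, not Opyrator's actual implementation):

```python
import os

# File names taken from the description above; the lookup logic itself is
# an illustrative sketch, not Opyrator's actual discovery code.
REQUIREMENT_FILES = [
    "Pipfile",           # Pipenv environment
    "environment.yml",   # Conda environment
    "pyproject.toml",    # Poetry dependencies
    "requirements.txt",  # pip requirements
    "setup.py",          # Python project requirements
    "packages.txt",      # apt-get packages
]

def discover_requirement_files(workdir):
    """Return the requirement files present in workdir, in discovery order."""
    return [name for name in REQUIREMENT_FILES
            if os.path.isfile(os.path.join(workdir, name))]
```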
As a side note, Opyrators exported as ZIP files are (mini) Python libraries that can be pip-installed, imported, and used from other Python code:
pip install my-opyrator.zip
This looks great! My first thought was: wow, this looks like a great alternative to Flask! But how would I implement things like authentication and connections to a database, which would likely be needed for any real "production" implementation?
Feature description:
Opyrator inserts its name into every project title and does not allow changing it. However, this is undesirable, and there should be an alternative place to give credit, such as the footer, as Streamlit does.
Problem and motivation:
The cause of this behavior is in the src/opyrator/ui/streamlit_ui.py file, line 27:
```python
st.set_page_config(page_title="Opyrator", page_icon=":arrow_forward:")
```
and lines 828-830:
```python
title = opyrator.name
if "opyrator" not in opyrator.name.lower():
    title += " - Opyrator"
```
Since Streamlit allows the page config to be set only once, there is no way around altering the library code manually.
An example:
This over-branding behavior should change. A footer reference text along with a URI is enough, as people are quite used to seeing credits at the end of the page.
Is this something you're interested in working on?
Yes
Thanks.
Can I run multiple models?
opyrator launch-ui test:mul_models
Hello,
I tried to run the basic function,
```python
from pydantic import BaseModel

class Input(BaseModel):
    message: str

class Output(BaseModel):
    message: str

def hello_world(input: Input) -> Output:
    """Returns the `message` of the input data."""
    return Output(message=input.message)
```
and got this error: 'PYTHONPATH' is not recognized as an internal or external command, operable program or batch file.
Hi, I played a bit with the project and noticed one potential issue. In this function, the MIME type can be manipulated by a remote user, so they could upload any file with a manipulated MIME header. The description of such a potential vulnerability is here. One could use magic bytes to check the uploaded file type rather than relying on the MIME type or extension.
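To illustrate the suggestion, here is a minimal stdlib-only sketch of signature ("magic byte") sniffing; a real fix would likely use a library such as python-magic, and the signature table below is only a small sample:

```python
from typing import Optional

# A few well-known file signatures (magic bytes). Checking these is more
# trustworthy than the client-supplied MIME header or the file extension.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"%PDF-": "application/pdf",
    b"PK\x03\x04": "application/zip",  # also the container for xlsx/docx
}

def sniff_mime(data: bytes) -> Optional[str]:
    """Guess a MIME type from leading magic bytes; None if unrecognized."""
    for signature, mime in SIGNATURES.items():
        if data.startswith(signature):
            return mime
    return None
```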
https://play.mltooling.com/opyrator/demos/playground/ isn't returning anything for me
Feature description:
Finalize auto-generation of a Python client for Opyrator.
Every deployed Opyrator provides a Python client library via an endpoint, which can be installed with pip:
pip install http://my-opyrator:8080/client
And used in your code, as shown below:
```python
from my_opyrator import Client, Input

opyrator_client = Client("http://my-opyrator:8080")
result = opyrator_client.call(Input(text="hello", wait=1))
```
Please fix the compatibility with latest streamlit upgrade.
Thank you,
I've tested launch-ui; everything works fine.
But when I tried to use the API with Postman, I encountered this problem:
```json
{
    "detail": [
        {
            "loc": ["body"],
            "msg": "value is not a valid dict",
            "type": "type_error.dict"
        }
    ]
}
```
I think the definition is not compatible with FastAPI? Should I change it into some Form format?
```python
class Input(BaseModel):
    file: FileContent = Field(..., mime_type="application/x-sqlite3")
    calc_type: str = 'PF'
```
Addendum:
When I change the API URL to http://127.0.0.1:8080/call/, the response becomes:
```
WARNING:  Invalid HTTP request received.
Traceback (most recent call last):
  File "/Users/dragonszy/miniconda3/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 132, in data_received
    self.parser.feed_data(data)
  File "httptools/parser/parser.pyx", line 212, in httptools.parser.parser.HttpParser.feed_data
httptools.parser.errors.HttpParserInvalidMethodError: Invalid method encountered
INFO:     127.0.0.1:54894 - "POST /call HTTP/1.1" 422 Unprocessable Entity
INFO:     127.0.0.1:54993 - "POST /call HTTP/1.1" 422 Unprocessable Entity
```
Great framework, really user friendly! I have some suggestions from usage.
Currently the UI and API need to run on different ports:
opyrator launch-ui conversion:convert
opyrator launch-api conversion:convert
Is there a way to run them together, e.g.
opyrator launch conversion:convert
so that I can access GET / for the UI, GET /docs for the docs, and POST /call for the API?
Feature description:
Finalize capabilities to export an Opyrator to a PEX file.
PEX is a tool to create self-contained executable Python environments that contain all relevant python dependencies.
The export can be executed via command line:
opyrator export my_opyrator:hello_world --format=pex my-opyrator.pex
Feature description: Add functionality to take a picture/record a video from the browser by accessing the webcam.
Problem and motivation: Most AI services I work on depend on taking a picture/recording a video as input.
Is this something you're interested in working on? Yes
Hello World no go:
Technical details:
I have followed the instructions on the Getting Started page, with no luck:
https://github.com/ml-tooling/opyrator#getting-started
I created the file and ran it as instructed, but I get this...
2021-05-01 10:16:31.675 An update to the [server] config option section was detected. To have these changes be reflected, please restart streamlit.
I ran `streamlit hello` and that is working fine.
I wonder if it is the very new version of python?
I am open to being stupid, that's OK, but this looks pretty cool and I want it to work.
Feature description:
Finalize capabilities to deploy an Opyrator to a cloud platform.
Rolling out your Opyrators for production usage might require additional features such as SSL, authentication, API tokens, unlimited scalability, load balancing, and monitoring. Therefore, we provide capabilities to easily deploy your Opyrators directly on scalable and secure cloud platforms without any major overhead.
The deployment can be executed via command line:
opyrator deploy my_opyrator:hello_world <deployment-provider> <deployment-provider-options>
Feature description:
Finalize capabilities to export an Opyrator to a Docker image.
The export can be executed via command line:
opyrator export my_opyrator:hello_world --format=docker my-opyrator-image:latest
💡 The Docker export requires that Docker is installed on your machine.
After the successful export, the Docker image can be run as shown below:
docker run -p 8080:8080 my-opyrator-image:latest
Running your Opyrator within this Docker image has the advantage that only a single port is required to be exposed. The separation between UI and API is done via URL paths: http://localhost:8080/api (API) and http://localhost:8080/ui (UI). The UI is automatically configured to use the API for all function calls.
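The path-based separation can be illustrated with a toy dispatcher (purely illustrative; Opyrator's actual routing lives in its server code):

```python
def route(path: str) -> str:
    """Toy illustration of single-port routing by URL path prefix."""
    if path == "/api" or path.startswith("/api/"):
        return "api"      # forwarded to the web API
    if path in ("/", "/ui") or path.startswith("/ui/"):
        return "ui"       # served by the interactive GUI
    return "not-found"
```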
Describe the issue:
I need to generate FastAPI code from a simple Python function. It looks like Opyrator does that.
But I do not know whether we can access the source code generated with the FastAPI annotations.
Is the underlying dynamically generated code usable, and where is it?
Thank you
Hi,
I have been doodling with Opyrator; I think it is awesome.
I got stuck trying to make the output a downloadable Excel file for the user, following the model you have in the image-upscaling and Spleeter examples:
```python
class HelixPredictOutput(BaseModel):
    upscaled_image_file: FileContent = Field(
        ...,
        mime_type="application/excel",
        description="excel file download",
    )

def image_super_resolution(
    input: HelixPredictInput,
) -> HelixPredictOutput:
    with open("aa.xlsx", "rb") as f:
        excel_outfile = f.read()
    return HelixPredictOutput(upscaled_image_file=excel_outfile)
```
But that gave me a weird, unsupported zip file when I pressed the download button. Is there any way to make it download the Excel file?
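One possible cause (an assumption, not a confirmed diagnosis): "application/excel" is not a standard MIME type, and since .xlsx files are ZIP containers, the browser may fall back to treating the download as a generic zip. The registered type for .xlsx can be looked up or hard-coded:

```python
import mimetypes

# Registered MIME type for .xlsx workbooks; "application/excel" is
# non-standard and may be why the download shows up as an odd zip file.
XLSX_MIME = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"

def excel_mime(filename: str) -> str:
    """Prefer the system's registered guess; fall back to the xlsx constant."""
    guessed, _ = mimetypes.guess_type(filename)
    return guessed or XLSX_MIME
```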
Feature description:
Opyrator provides a growing collection of pre-defined components (input and output models) for common tasks. Some of these components also provide more advanced UIs and visualizations. You can reuse these components to speed up your development and, thereby, keep your Opyrators compatible with other functionality improvements or other Opyrators.
Dear guys,
I am interested in using Opyrator. In our use case we would like to upload time-series data in CSV or Excel files.
Is this supported?
And if not, can you recommend tools for reading generic CSV files that could be added to Opyrator?
Thanks a lot for discussing
Daniel
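Not an official answer, but generic CSV input can often be handled with just the standard library before wiring it into an input model; a small sketch:

```python
import csv
import io

def read_timeseries_csv(csv_text: str):
    """Parse CSV text into a dict mapping column name -> list of cell values."""
    reader = csv.DictReader(io.StringIO(csv_text))
    columns = {name: [] for name in reader.fieldnames}
    for row in reader:
        for name, value in row.items():
            columns[name].append(value)
    return columns
```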
Describe your request:
The README mentions "Audio Seperation" (it also appears in the screenshot below), but as far as I understand it should be "Audio Separation".
Anyway, just a tiny little issue for an otherwise great project :)
Hi.
It is very useful to access the Swagger UI via opyrator launch-api.
But I need to use the Swagger API while running opyrator launch-ui.
I edited /openapi.json to indicate the server URL.
I tried a lot by adding /call behind the server URL, but it doesn't work; a
403 Forbidden
error just appears.
Could you tell me how to get a response to a POST request while running launch-ui?