gradio-app / gradio
Build and share delightful machine learning apps, all in Python. ⭐ Star to support our work!
Home Page: http://www.gradio.app
License: Apache License 2.0
The latest version is 0.7.8, but https://gradio.app/api/pkg-version returns 0.5.0.
Also, for some reason, the endpoint is unreliable to access from China.
I suggest using a more stable API, or skipping the version check entirely if the endpoint is not kept up to date automatically.
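If the check is kept, it could at least be made best-effort, so an unreachable or stale endpoint never affects startup. A sketch, not Gradio's actual code; the helper name and the endpoint's JSON shape (a "version" key) are assumptions:

```python
import json
import urllib.request

def newer_version_available(current, url="https://gradio.app/api/pkg-version"):
    """Best-effort version check.

    Returns True/False when the endpoint answers, or None when it is slow,
    unreachable, or malformed, so the caller can simply skip the check.
    """
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            latest = json.load(resp)["version"]  # assumed response key
    except Exception:
        return None
    return latest != current
```

The `except Exception: return None` branch is what makes the check safe from flaky networks: the worst case is that no update notice is shown.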
I just upgraded from gradio 1.0.7 to 1.1.0, and now I can't launch my demo, which was working before.
When I call:
gr.Interface(fn=ask, inputs=["textbox", "text"], outputs="text").launch()
I get the following error:
File "/Users/elhutton/opt/anaconda3/envs/agent-assist/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-27-5dcf77e302d7>", line 1, in <module>
gr.Interface(fn=ask, inputs=["textbox", "text"], outputs="text").launch()
File "/Users/elhutton/opt/anaconda3/envs/agent-assist/lib/python3.7/site-packages/gradio/interface.py", line 265, in launch
server_port=self.server_port)
ValueError: too many values to unpack (expected 2)
I am running macOS Catalina 10.15.6, working in a conda virtual env with Python 3.7.7. I upgraded via pip install gradio --upgrade.
Please help! Thanks
In the Image input interface, can you provide support for TIFF files?
Right now, every time a Gradio Interface object is launched, a new port is used: it starts at 7860, then moves to 7861, 7862, and so on. This happens because each Interface object keeps its port open, which is wasteful.
Close the port when the Interface object is deleted or overwritten, so that ports are released and can be reused.
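For context, the "walk up from 7860" behavior comes down to probing ports with a plain socket bind until one succeeds. A sketch of that approach (the function name is illustrative, not Gradio's actual API):

```python
import socket

def first_available_port(start, stop):
    """Return the first port in [start, stop) that can be bound on localhost."""
    for port in range(start, stop):
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.bind(("localhost", port))
            # Bind succeeded, so nothing else holds this port; release and reuse it.
            return port
        except OSError:
            continue  # port already in use; try the next one
    raise OSError("no free port in range")
```

A lingering server on 7860 makes the bind fail, so the probe moves on to 7861; closing servers on shutdown is what would let the numbering restart.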
When I set the output type to JSON or HTML, the submit button doesn't appear. How can I solve this?
I installed gradio, but I can't import it. Any solution?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\gradio\__init__.py", line 1, in <module>
    from gradio.interface import *  # This makes it possible to import Interface as gradio.Interface.
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\gradio\interface.py", line 11, in <module>
    from gradio import networking, strings, utils
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\gradio\networking.py", line 14, in <module>
    from gradio.tunneling import create_tunnel
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\gradio\tunneling.py", line 12, in <module>
    import paramiko
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\paramiko\__init__.py", line 22, in <module>
    from paramiko.transport import SecurityOptions, Transport
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\paramiko\transport.py", line 129, in <module>
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\paramiko\transport.py", line 190, in Transport
    if KexCurve25519.is_available():
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\paramiko\kex_curve25519.py", line 30, in is_available
    X25519PrivateKey.generate()
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\cryptography\hazmat\primitives\asymmetric\x25519.py", line 38, in generate
    from cryptography.hazmat.backends.openssl.backend import backend
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\cryptography\hazmat\backends\openssl\__init__.py", line 7, in <module>
    from cryptography.hazmat.backends.openssl.backend import backend
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\cryptography\hazmat\backends\openssl\backend.py", line 74, in <module>
    from cryptography.hazmat.bindings.openssl import binding
  File "C:\Users\Moughees\AppData\Local\Continuum\anaconda3\lib\site-packages\cryptography\hazmat\bindings\openssl\binding.py", line 15, in <module>
    from cryptography.hazmat.bindings._openssl import ffi, lib
ImportError: DLL load failed: The specified procedure could not be found.
When run from a Jupyter notebook, Flask prints a status line for every HTTP request, which clutters the notebook. Please suppress these.
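As a user-side workaround, those per-request lines come from Werkzeug's logger (Flask's development server), so raising that logger's threshold silences them. A minimal sketch:

```python
import logging

# Werkzeug logs each HTTP request at INFO level; after this, only
# warnings and errors from the development server get through.
logging.getLogger("werkzeug").setLevel(logging.ERROR)
```

Run this once before launching the interface; it affects only the "werkzeug" logger, so the app's own logging is untouched.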
When multiple output Images are present, Flag saves all of the images under the same name.
Test code to reproduce the bug:
import gradio as gr
import numpy as np

def show(x):
    return np.zeros((20, 20, 3)), np.ones((40, 40, 3))

io = gr.Interface(show, gr.inputs.Slider(4, 6, step=1), [gr.outputs.Image(label='o1'), gr.outputs.Image(label='o2')])
io.launch()
Expected behavior:
Each saved image should be prefixed with its label name.
Could there be an option to stop the input and output panels from automatically being the same size? For example, when the output panel contains many UI components and the input panel has far fewer, the input panel is still forced to the same height as the output panel. Could this be disabled if the user wants?
When I run the demo notebook (https://colab.research.google.com/drive/1BAw8QGFNqeKf1V0E3TLCzWyV6NM_qv_i?authuser=1#scrollTo=od6vQzqDGKoJ) in Colab, I often get this issue:
Hi all,
First of all, very nice work. I would like to mention that the image input component doesn't seem to allow transmitting large images (larger than roughly 1 MB) to the machine running the demo. It simply displays "error" in red at the top right of the image input on the web demo, and I don't get any data or messages on the machine hosting the demo. Can you please look into this?
Thanks,
Bhavan
My model is a video-based classification model; is that currently supported by Gradio?
One more small question: how do I set up a POST endpoint?
Hi Gradio team, this is fantastic work.
I am noticing a huge latency difference. My ML model takes around 5 seconds to run when called in Python without the demo, but when I measure the latency of the same function called through the hosted demo, it takes around 15 seconds for the exact same input, and that doesn't include the demo's I/O latency. This is very strange. Any thoughts on why this could be happening?
Thanks,
Bhavan
How can I take multiple inputs from the user, like age, salary, gender, etc.?
Simplest demo
import gradio as gr
def greet(name):
    return "Hello " + name + "!"

gr.Interface(fn=greet, inputs="text", outputs="text").launch()

It responds to the Submit button much faster if no network connection is available to the app. Why?
For example:
python demo.py
response time: 1.26s
sandbox-exec -f nonetwork.sb python demo.py
response time: 0.014s
Previously, I could quit a server running from the terminal with CTRL-C. I no longer can.
Hi, thanks for the great work.
I wonder if there is any way to select a TXT/CSV file as input?
When run in a Jupyter notebook, launching the interface floods the cell with output like:
"{'title': None, 'description': None, 'thumbnail': None, 'input_interface': [<gradio.inputs.Sketchpad object at 0x0000025F74CC0688>], 'output_interface': [<gradio.outputs.Label object at 0x0000025F74CC07C8>]}"
It looks like you implement image augmentations manually.
You might want to look at https://github.com/albu/albumentations
First of all, thank you very much for building such an awesome and easy-to-use app.
I'm trying to use gradio for our Object Detection Framework. Everything works smoothly except the output image is not shown in the output placeholder. I can see it in the debug output in the Colab notebook. Here is my notebook, and some screenshots.
Thank you in advance for your help.
NB: https://colab.research.google.com/drive/1Xi5F5ddJIcgilHIG4bnk3tcGcx10FYP0?usp=sharing
As you might notice, my output is PIL Image:
Output Image: (432, 288) <class 'PIL.Image.Image'>
<Figure size 432x288 with 0 Axes>
For TF 1.x models, interpretation doesn't set up the correct session
I am trying to resize the output image but can't figure it out.
I find that the size of the received base64-encoded image is much smaller than the original image.
Furthermore, I found the following code in the image_upload.js file:
resizeImage.call(this, data, 600, 600, function(image_data) {
    io.image_data = image_data;
    io.target.find(".image_preview").attr('src', image_data);
    if (update_editor) {
        io.tui_editor.loadImageFromURL(io.image_data, 'input').then(function (sizeValue) {
            io.tui_editor.clearUndoStack();
            io.tui_editor.ui.activeMenuEvent();
            io.tui_editor.ui.resizeEditor({ imageSize: sizeValue });
        });
    }
});
So the maximum size of the received image is 600 × 600, right? Is it possible to disable this compression?
When trying things out on my end tonight, I noticed there wasn't an elegant way to stop the server from the command line for an existing application. I started peeking around inside Interface.launch, but I'm not sure where the best place to catch a Ctrl+C would be; there's a lot going on in that method, to say the least.
Thank you for building this useful tool.
For the points below from the README:
(1) identifying and correcting mislabelled data;
(2) valuing each datapoint to automatically identify which data points should be labeled.
Have you open-sourced this code? If so, could you please provide the details?
A sample notebook and documentation around it would also be helpful.
Thanks again,
Hari
Checking the docs: are audio data types supported for model inputs and outputs?
When I run the latest version (1.2.2) of gradio on Windows, I get the following error:
ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
Pretty sure it's because gradio switched from the threading module to the multiprocessing module.
I wanted to run a real-time classification task on webcam video, rather than on selected frames. The API seems like it should allow this by just setting live=True in the interface with a webcam input, but the resulting UI still uses the "click to snap" interface.
Code snippet:
def classify_image(inp):
    return model.predict(inp)
image = gr.inputs.Image(label="Input Image", source="webcam")
label = gr.outputs.Label(num_top_classes=k)
gr.Interface(fn=classify_image, live=True, inputs=image, outputs=label).launch()
Using gradio 1.1.9 on Python 3.7.
I just tried running the hello demo (which is in the docs) with gradio, but I can't see anything at the URL.
It just shows: Running locally at: http://127.0.0.1:7868/
But if I open the link, nothing is there. Help me!
Received a bug request on: https://www.gradio.app/hub/hub-emotion-recognition
Line 194 in 04e3036
How can I embed the interface like you did on your website?
I want to create a Gradio app within another web framework, such as React, Flask, or Django.
Can I package it in a Docker container?
Hi, thanks for this fantastic work.
I noticed that there is an error in the output if my wrapped function takes too much time (more than about 2 minutes).
Is there any way to loosen the response-time limit?
My wrapped function looks like this:

def POC_input(input1):
    time.sleep(135)
    check = True
    return [{"UnitTest": str(check)}, {"UnitTest_msg": 'test'}]
Much thanks.
I have a classifier that takes text and returns either 0 or 1.
But when I run Gradio as described below, there is no button to submit the text.
def predict(txt):
    vec = text_prepare(txt)
    return clf(vec)  # 0 or 1
inputs = gr.inputs.Textbox(placeholder="Your text", label = "Text", lines=10)
label = gr.outputs.Label(label="Acceptability")
gr.Interface(fn = predict, inputs = inputs, outputs = label).launch()
Dear Abubakar Abid,
thank you for your great work.
Two questions:
The application would be an XML export of volumetric image data (i.e., multiple *.tiff files and an XML file with the metadata).
Many thanks,
Maximilian
import gradio as gr
import pandas as pd

def predict(file_obj, Question):
    df = pd.read_csv(file_obj.name, dtype=str)
    return df

def main():
    io = gr.Interface(predict, ["file", gr.inputs.Textbox(placeholder="Enter Question here...")], "dataframe")
    io.launch()

if __name__ == "__main__":
    main()
The file is not uploaded properly when the file size is small. I checked the temp file; it is empty.
You can use the code above to reproduce the issue.
The sample file is attached as a zip; unzip it before uploading:
tapas.zip
Hello, I want to know the deployment process for a Gradio app on a cloud server like Heroku.
Is it the same as for Flask and Streamlit, or does it need a different approach?
Hi,
I'm trying out gradio for the first time and loving it!
I often set up services and applications meant to be accessible inside my network. When it's a Flask app, I do this by setting Flask's host to "0.0.0.0" so that it listens on all IP addresses, and I'd love to be able to do that with gradio too.
The Interface argument server_name looks like it should accomplish this, but it no longer works. Patching networking.py to pass the host argument to Flask does work:
process = threading.Thread(target=app.run, kwargs={"port": port, "host": "0.0.0.0"})
It seems like this could be accomplished by either:
(1) adding a new Interface argument like all_ips or flask_host that gets passed to networking.start_server. This would make the function of the new argument very clear and easy to use, but it adds another argument, which is not always ideal; or
(2) reusing the existing server_name argument and passing it to networking.start_server. This might be a good use of that argument; in fact, server_name never gets passed to the networking module, so it seems to be an argument without a meaningful use at the moment.
Happy to try a PR if you like either of these ideas.
Something like:
import gradio as gr
r = gr.inputs.Slider(163, 255, label="R")
g = gr.inputs.Slider(119, 255, label="G")
b = gr.inputs.Slider(88, 255, label="B")
low_thresh = gr.inputs.Slider(175/255, label="Lower Threshold")
high_thresh = gr.inputs.Slider(0, 1, label="Upper Threshold")
gr.Interface(fn=[display_image, display_label], inputs=[r, g, b, low_thresh, high_thresh], outputs="plot", live=False).launch(inline=False, share=True)
where display_image and display_label each returned a plot, didn't work. I got blank outputs...
Under https://www.gradio.app/docs there is no mention of interpretation or of how this feature can be used. Will these docs be updated soon?
Hi,
There is an error in the output when uploading a zip larger than about 100 MB. Is this an upload size limit? May I modify this restriction?
Many thanks!
Is there a way to use Gradio without hosting the application on an https://xxxxx.gradio.app URL? I would like to use the Gradio GUI/UI inside my own application.