kubeflow / batch-predict
Repository for batch predict
License: Apache License 2.0
I am getting a "No module named" error when trying to submit a job on Google Cloud. Any idea what I am missing here? It works fine when I submit from my local machine with DirectRunner.
ImportError: No module named kubeflow_batch_predict.dataflow.io.multifiles_source
I am trying to run this example: https://github.com/kubeflow/examples/blob/master/object_detection/submit_batch_predict.md
with these parameters, from the batch_predict folder:
python -m kubeflow_batch_predict.dataflow.batch_prediction_main \
  --input_file_format tfrecord \
  --output_file_format json \
  --input_file_patterns gs://XYZ/batch_predict/object-detection-images.small.tfrecord \
  --output_result_prefix gs://XYZ/batch_predict/batch_predict_out- \
  --output_error_prefix gs://XYZ/batch_predict/batch_predict_error_out- \
  --model_dir gs://XYZ/batch_predict/model/ \
  --project \
  --temp_location gs://XYZ/temp/ \
  --staging_location gs://XYZ/staging/ \
  --runner=DataflowRunner
We need an E2E test for batch predict.
Unit tests would also be nice.
/priority p1
/area inference
/area 0.4.0
See PR #2; it doesn't look like the tide plugin is automatically merging PRs.
/priority p1
Starting to use Dataflow for batch prediction jobs, and this repo has the best example code I have found! Really amazing, and it extends to sklearn models, which I love.
I'm curious whether development is still planned or whether it has moved to another repo. It wasn't clear, other than that I haven't seen any activity from y'all this year.
We should come up with a list of issues that need to be addressed in order to ship an initial release of batch predict as part of our 0.2 release.
All such issues should be P1; anything not needed for the initial release should be lower priority.
We should also come up with a set of exit criteria for declaring batch predict 1.0.
/area 0.4.0
/priority p1
/area inference
The module in question is kubeflow_batch_predict.dataflow.batch_prediction.py, and the DoFn is PredictionDoFn.
This issue highlights a shortcoming of the current state and will serve as the primary discussion venue for the problem.
In its current state, this DoFn accepts serialized JSON in the following format for the examples:
{'instances': [ {'input': TFRecord}, ... ] }
where each item in the list is a dictionary containing only the 'input' key and a TFRecord (possibly a base64 encoding of the TFRecord). This input format does not allow any extraneous top-level keys, and it yields back only a list of inputs and outputs.
The need for extraneous keys exists because there may be extra metadata accompanying each element that is needed to identify the input. For instance, consider a prediction task that embeds "movies" as high-dimensional vectors. If we want to write the final results to a CSV file, we would want each row to carry extra metadata such as "name", and we might want that metadata to be passed around as a dict through the Dataflow pipeline (as PCollections).
This is not possible currently, because it would violate the input format of PredictionDoFn; instead, we would have to morph these values into something acceptable. That transformation step is expected; however, any downstream DoFns that derive PCollections from PredictionDoFn will not be able to access the pre-transformed data. All we would have is a list of high-dimensional vectors with no way to relate them back to human-readable information such as "name".
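The constraint above can be illustrated with plain dicts. Here `accepted` is the shape PredictionDoFn takes today, `desired` carries the extra metadata this issue asks for, and the last step shows the lossy stripping currently required (the "name" key and its value are illustrative, not from the repo):

```python
import json

# Current accepted payload: each instance carries only the 'input' key.
accepted = {"instances": [{"input": "base64-encoded TFRecord bytes"}]}

# Desired payload (NOT currently supported): extra metadata such as "name"
# travels alongside each instance, so downstream DoFns can join predictions
# back to human-readable identifiers.
desired = {"instances": [{"input": "base64-encoded TFRecord bytes",
                          "name": "The Matrix"}]}

# Today the extra keys must be stripped before PredictionDoFn, which severs
# the link between each prediction and its metadata:
stripped = {"instances": [{k: v for k, v in inst.items() if k == "input"}
                          for inst in desired["instances"]]}
print(json.dumps(stripped))
```

After stripping, the payload is identical to `accepted`, so nothing downstream of PredictionDoFn can recover "The Matrix".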
We need to converge on a design spec for this, so as to accommodate the most generic use cases around batch prediction.
How will users install and use the batch predict package? Will they just check out the code and add it to their Python path?
Will we provide a pip package?
Should we publish a pip package to make it easy to install the Kubeflow batch predict library?
/priority p1
/area 0.4.0
/area inference
The batch predict package was added in #2, but there are no unit tests.
We should add unit tests.
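A minimal sketch of the shape such tests could take, using the standard unittest module. `validate_instances` is a hypothetical helper standing in for real input-validation logic around PredictionDoFn; the actual API in kubeflow_batch_predict may differ:

```python
import unittest


def validate_instances(payload):
    """Hypothetical validator: every instance must carry an 'input' key."""
    instances = payload.get("instances", [])
    return bool(instances) and all("input" in inst for inst in instances)


class ValidateInstancesTest(unittest.TestCase):
    def test_accepts_well_formed_payload(self):
        self.assertTrue(validate_instances({"instances": [{"input": "abc"}]}))

    def test_rejects_missing_input_key(self):
        self.assertFalse(validate_instances({"instances": [{"name": "x"}]}))

    def test_rejects_empty_payload(self):
        self.assertFalse(validate_instances({}))
```

Tests in this style can be collected with `python -m unittest discover` and would be a natural fit for an E2E/presubmit job once one exists.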