Comments (2)
The logic assumes that the output tensor name is consistent with the feature name in the output spec. This assumption is not correct.
The output tensor names must be consistent with the feature names in the output spec. This is how Neuropod knows which tensor is which. It isn't an incorrect assumption; it's a requirement.
In the Neuropod TensorFlow backend, we actually map the tensors to the feature names based on the order.
When you say "based on the order", I assume you mean the order of things in the output spec. Please clarify if this isn't the ordering you're referring to.
The Neuropod TF backend does not depend on the order of items in the input/output spec.
As we've spoken about offline, we avoid depending on the order of items in the input/output spec because that introduces a brittle dependency (the same reason we don't allow returning a List[Tensor] or Tuple[Tensor] from TorchScript models). This can easily break, especially if there are several models using a centralized spec.
From what I can tell, the check you referenced in the issue is valid and necessary and we don't depend on the spec ordering in the TF backend. Feel free to comment if you were actually referring to something else.
Here's a quick walkthrough of the relevant parts of the code; it should help clarify how inference with saved models works:
- The saved model is loaded and we set up a node name mapping that maps from the name of a Neuropod output to the corresponding node in the TF graph. For SavedModels, this is based on the signature of the saved model (for frozen graphs, it's explicitly specified).
  neuropod/source/neuropod/backends/tensorflow/tf_backend.hh (lines 48 to 49 in b18e281)
  neuropod/source/neuropod/backends/tensorflow/tf_backend.cc (lines 157 to 170 in b18e281)
- In infer, we set up our tensor_feeds and tensor_fetches. These are the inputs/outputs we want to use with a TF callable. More details about callables are in the code snippet:
  neuropod/source/neuropod/backends/tensorflow/tf_backend.cc (lines 320 to 331 in b18e281)
- We populate tensor_fetches (the outputs we want). Note that this is the step that would fail without the check you referenced in the issue.
  neuropod/source/neuropod/backends/tensorflow/tf_backend.cc (lines 336 to 350 in b18e281)
- We get a callable with the feeds and fetches we populated. One thing to note here is that a callable is a way of running a TF subgraph given a set of inputs and outputs. What I think you may be referring to is that it takes an ordered list of feeds and an ordered list of fetches and accepts/produces tensors in the same order. This order is based on tensor_feeds and tensor_fetches (which have consistent orderings because they're std::maps).
- We loop over the outputs (which are in the same order as tensor_fetches) and return them.
  neuropod/source/neuropod/backends/tensorflow/tf_backend.cc (lines 387 to 396 in b18e281)
So the "ordering" that it depends on is just an artifact of the way TF callables are run; it isn't based on anything outside of infer. We still need the names in the SavedModel output signature to match the names in the Neuropod output spec.
Feel free to reopen if I missed something, but I'll close this for now.