TensorFlow Serving - ResNet
TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. TensorFlow Serving makes it easy to deploy new algorithms and experiments, while keeping the same server architecture and APIs. Learn more about TensorFlow Serving on the official site: https://www.tensorflow.org/tfx/guide/serving
Quick Start Guide
Running TensorFlow Serving to serve the TensorFlow ResNet model is, as usual, a one-line command.
CPU support:
curl -sfL https://get.k3ai.in | bash -s -- --cpu --plugin_tfs-resnet
GPU support:
curl -sfL https://get.k3ai.in | bash -s -- --gpu --plugin_tfs-resnet
Test the installation
For a full explanation of how to use TensorFlow Serving, please take a look at the documentation site linked above.
Step 1 - Prepare your client environment
To run any experiment against a remote inference server, you need the tensorflow-serving-api package installed on your machine, as per the official documentation: https://www.tensorflow.org/tfx/serving/setup#tensorflow_serving_python_api_pip_package
For reference:
pip install tensorflow-serving-api
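To confirm the client API installed correctly, a quick import check; the module paths below are the ones shipped by the tensorflow-serving-api package:
# Verify the gRPC client stubs are importable
python -c "from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc; print('tensorflow-serving-api OK')"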
Step 2
Clone the TensorFlow Serving repository, where we will find the test scripts (see the sketch below).
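A minimal sketch, assuming the example clients live in the tensorflow/serving GitHub repository, where they are maintained upstream:
# Clone the repo that contains the ResNet example clients and move into it
git clone https://github.com/tensorflow/serving
cd serving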
Step 3
Find the cluster IP where the TensorFlow Serving service is exposed.
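One way to locate it with kubectl; the service name tfs-resnet below is a placeholder, so substitute whatever name kubectl get svc reports for your installation:
# List all services and spot the TensorFlow Serving one
kubectl get svc --all-namespaces
# Describe it to reveal the external address
# NOTE: "tfs-resnet" is a placeholder service name
kubectl describe svc tfs-resnet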
You should have a similar output:
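An illustrative shape of the kubectl describe output, with placeholder values; ports 8500 (gRPC) and 8501 (REST) are TensorFlow Serving's defaults:
Name:                 tfs-resnet
Type:                 LoadBalancer
LoadBalancer Ingress: 192.168.1.240
Port:                 grpc  8500/TCP
Port:                 rest  8501/TCP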
Take note of the LoadBalancer Ingress IP.
Step 4
We can now query the service at its external address from our local machine.
Using gRPC:
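A hedged sketch using the gRPC example client from the repository cloned in Step 2; replace 192.168.1.240 with your LoadBalancer Ingress IP (8500 is TensorFlow Serving's default gRPC port, and --server is the flag the upstream script defines):
# Query the served ResNet model over gRPC
python tensorflow_serving/example/resnet_client_grpc.py --server=192.168.1.240:8500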
Using REST API:
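A sketch against TensorFlow Serving's REST API on its default port 8501; the model name resnet and the image file cat.jpg are assumptions, so adjust them to your deployment:
# Check that the model is up and being served
curl http://192.168.1.240:8501/v1/models/resnet
# Send a prediction request: the ResNet SavedModel expects a base64-encoded JPEG
# NOTE: cat.jpg is a placeholder input image
curl -d "{\"instances\": [{\"b64\": \"$(base64 -w0 cat.jpg)\"}]}" -X POST http://192.168.1.240:8501/v1/models/resnet:predict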
You should have an output similar to this:
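An illustrative REST response with made-up numbers; the actual class index and probabilities depend on the input image:
{ "predictions": [ { "classes": 286, "probabilities": [ ... ] } ] }
The class index maps to an ImageNet label, since this ResNet model is trained on the ImageNet classes.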