TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It makes it easy to deploy new algorithms and experiments while keeping the same server architecture and APIs.
The TensorFlow Serving site offers three tutorials:

- The basic tutorial shows how to export a trained TensorFlow model and build a server to serve the exported model.
- The advanced tutorial shows how to build a server that dynamically discovers and serves new versions of a trained TensorFlow model.
- The Serving Inception tutorial shows how to serve the Inception model with TensorFlow Serving and Kubernetes.
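Once a model is served, clients send it inference requests. As a minimal sketch only: recent versions of TensorFlow Serving expose a REST predict endpoint of the form `/v1/models/<model_name>:predict` (the gRPC API is the original interface). The model name `my_model`, the host `localhost:8501`, and the input values below are assumptions for illustration, not details from this document.

```python
import json

# Assumed model name and host for this sketch; a real deployment would
# use whatever name the model was exported and served under.
MODEL_NAME = "my_model"
url = "http://localhost:8501/v1/models/{}:predict".format(MODEL_NAME)

# The REST request body lists input rows under the "instances" key;
# each row must match the shape the exported model expects.
payload = json.dumps({"instances": [[1.0, 2.0], [3.0, 4.0]]})

print(url)
print(payload)
```

In practice the payload would be POSTed to the URL (for example with `curl` or the `requests` library), and the server would reply with a JSON body containing a `predictions` key.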
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.