This document provides answers to some of the frequently asked questions about TensorFlow. If you have a question that is not covered here, you might find an answer on one of the TensorFlow community resources.
Can TensorFlow run on multiple computers?
Yes! TensorFlow gained support for distributed computation in version 0.8. TensorFlow now supports multiple devices (CPUs and GPUs) in one or more computers.
As of the 0.6.0 release timeframe (early December 2015), we support Python 3.3+.
See also the API documentation on building graphs.
Why does c = tf.matmul(a, b) not execute the matrix multiplication immediately?
In the TensorFlow Python API, a, b, and c are tf.Tensor objects. A Tensor object is a symbolic handle to the result of an operation, but does not actually hold the values of the operation's output. Instead, TensorFlow encourages users to build up complicated expressions (such as entire neural networks and their gradients) as a dataflow graph. You then offload the computation of the entire dataflow graph (or a subgraph of it) to a TensorFlow tf.Session, which is able to execute the whole computation much more efficiently than executing the operations one-by-one.
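As a minimal sketch of this behavior (using the TF 1.x graph-and-session API described above; the matrices are made up for illustration):

import tensorflow as tf

# Building the graph: no multiplication happens at this point.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
c = tf.matmul(a, b)  # `c` is a symbolic handle, not a value

# The multiplication runs only when the graph executes in a session.
with tf.Session() as sess:
    print(sess.run(c))  # [[1. 3.] [3. 7.]]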
The supported device names are "/cpu:0" for the CPU device, and "/gpu:i" for the ith GPU device.
To place a group of operations on a device, create them within a
with tf.device(name): context. See the how-to documentation on using GPUs with TensorFlow for details of how TensorFlow assigns operations to devices, and the CIFAR-10 tutorial for an example model that uses multiple GPUs.
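For example, a sketch of pinning operations to devices (the device strings follow the naming above; the variable and placeholder shapes are illustrative):

import tensorflow as tf

# Keep the variable on the CPU, and run the matmul on the first GPU.
with tf.device("/cpu:0"):
    weights = tf.Variable(tf.random_normal([784, 10]))

with tf.device("/gpu:0"):
    inputs = tf.placeholder(tf.float32, shape=[None, 784])
    logits = tf.matmul(inputs, weights)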
See also the API documentation on running graphs.
Feeding is a mechanism in the TensorFlow Session API that allows you to substitute different values for one or more tensors at run time. The
feed_dict argument to
tf.Session.run is a dictionary that maps
tf.Tensor objects to numpy arrays (and some other types), which will be used as the values of those tensors in the execution of a step.
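A minimal illustration of feeding (the placeholder and values are made up for the example):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None])
y = x * 2.0

with tf.Session() as sess:
    # Each step substitutes a different value for the tensor `x`.
    print(sess.run(y, feed_dict={x: [1.0, 2.0]}))  # [2. 4.]
    print(sess.run(y, feed_dict={x: [10.0]}))      # [20.]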
# Using `Session.run()`.
sess = tf.Session()
c = tf.constant(5.0)
print(sess.run(c))

# Using `Tensor.eval()`.
c = tf.constant(5.0)
with tf.Session():
  print(c.eval())
In the second example, the session acts as a context manager, which has the effect of installing it as the default session for the lifetime of the with block. The context manager approach can lead to more concise code for simple use cases (like unit tests); if your code deals with multiple graphs and sessions, it may be more straightforward to make explicit calls to Session.run().
Sessions can own resources, such as tf.Variable, tf.QueueBase, and tf.ReaderBase objects, and these resources can use a significant amount of memory. These resources (and the associated memory) are released when the session is closed, by calling tf.Session.close.
The intermediate tensors that are created as part of a call to
Session.run() will be freed at or before the end of the call.
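As a sketch, both of the following release a session's resources:

import tensorflow as tf

# Explicitly close the session when you are done with it.
sess = tf.Session()
# ... run some steps ...
sess.close()  # releases the session's resources

# Or let a `with` block close it automatically.
with tf.Session() as sess:
    pass  # resources are released when the block exits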
The TensorFlow runtime parallelizes graph execution across many different dimensions:
- The individual ops have parallel implementations, using multiple cores in a CPU, or multiple threads in a GPU.
- Independent nodes in a TensorFlow graph can run in parallel on multiple devices.
- The Session API allows multiple concurrent steps, i.e. calls to tf.Session.run in parallel, as sketched below. This enables the runtime to get higher throughput, if a single step does not use all of the resources in your computer.
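A minimal sketch of concurrent steps (tf.Session objects are thread-safe, so several threads can call run at once; the computation here is arbitrary):

import threading
import tensorflow as tf

x = tf.placeholder(tf.float32)
y = x * x

sess = tf.Session()

def worker(value):
    # Each thread runs its own step against the shared session.
    print(sess.run(y, feed_dict={x: value}))

threads = [threading.Thread(target=worker, args=(float(i),)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
sess.close()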
TensorFlow is designed to support multiple client languages. Currently, the best-supported client language is Python. Experimental interfaces for executing and constructing graphs are also available for C++, Java and Go.
TensorFlow also has a C-based client API to help build support for more client languages. We invite contributions of new language bindings.
TensorFlow supports multiple GPUs and CPUs. See the how-to documentation on using GPUs with TensorFlow for details of how TensorFlow assigns operations to devices, and the CIFAR-10 tutorial for an example model that uses multiple GPUs.
Note that TensorFlow only uses GPU devices with a compute capability greater than 3.5.
Why does Session.run() hang when using a reader or a queue?
The tf.ReaderBase and tf.QueueBase classes provide special operations that can block until input (or free space in a bounded queue) becomes available. These operations allow you to build sophisticated input pipelines, at the cost of making the TensorFlow computation somewhat more complicated. See the how-to documentation for using
QueueRunner objects to drive queues and readers for more information on how to use them.
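A sketch of driving a reader with queue runners (the file names are hypothetical); without start_queue_runners, the sess.run call below would block forever:

import tensorflow as tf

filename_queue = tf.train.string_input_producer(["file0.csv", "file1.csv"])
reader = tf.TextLineReader()
key, value = reader.read(filename_queue)

with tf.Session() as sess:
    coord = tf.train.Coordinator()
    # Start the threads that fill the filename queue.
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for _ in range(10):
        print(sess.run(value))
    coord.request_stop()
    coord.join(threads)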
Variables allow concurrent read and write operations. The value read from a variable may change if it is concurrently updated. By default, concurrent assignment operations to a variable are allowed to run with no mutual exclusion. To acquire a lock when assigning to a variable, pass use_locking=True to tf.Variable.assign.
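For instance, a minimal sketch:

import tensorflow as tf

v = tf.Variable(0.0)

# Serialize this update with other locking updates to `v`.
update = tf.assign(v, 1.0, use_locking=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(update)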
See also the tf.TensorShape API documentation.
In TensorFlow, a tensor has both a static (inferred) shape and a dynamic (true) shape. The static shape can be read using the tf.Tensor.get_shape method: this shape is inferred from the operations that were used to create the tensor, and may be partially complete. If the static shape is not fully defined, the dynamic shape of a tensor t can be determined by evaluating tf.shape(t).
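A sketch of the difference (the placeholder shape is illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 128])
print(x.get_shape())  # static shape: (?, 128)

dynamic_shape = tf.shape(x)  # a Tensor, evaluated at run time
with tf.Session() as sess:
    batch = [[0.0] * 128] * 32
    print(sess.run(dynamic_shape, feed_dict={x: batch}))  # [ 32 128]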
What is the difference between x.set_shape() and x = tf.reshape(x)?
The tf.Tensor.set_shape method updates the static shape of a Tensor object, and it is typically used to provide additional shape information when this cannot be inferred directly. It does not change the dynamic shape of the tensor. The tf.reshape operation creates a new tensor with a different dynamic shape.
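A short sketch contrasting the two (shapes are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, None, None])

# set_shape() only refines the static shape; no new tensor is created.
x.set_shape([None, 28, 28])

# tf.reshape() creates a new tensor with a different dynamic shape.
y = tf.reshape(x, [-1, 784])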
It is often useful to build a graph that works with variable batch sizes, for example so that the same code can be used for (mini-)batch training and single-instance inference. The resulting graph can be saved as a protocol buffer and imported into another program.
When building a variable-size graph, the most important thing to remember is not to encode the batch size as a Python constant, but instead to use a symbolic
Tensor to represent it. The following tips may be useful:
- Use batch_size = tf.shape(input)[0] to extract the batch dimension from a Tensor called input, and store it in a Tensor called batch_size.
- Use tf.reduce_mean instead of tf.reduce_sum(...) / batch_size, as sketched below.
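A minimal sketch of these tips (the placeholder and loss are made up for illustration):

import tensorflow as tf

input = tf.placeholder(tf.float32, shape=[None, 10])  # unknown batch size

# The batch dimension as a symbolic Tensor, not a Python constant.
batch_size = tf.shape(input)[0]

# reduce_mean divides by the true batch size at run time.
loss = tf.reduce_mean(tf.square(input))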
See the graph visualization tutorial.
Add summary ops to your TensorFlow graph, and write these summaries to a log directory. Then, start TensorBoard using
python tensorflow/tensorboard/tensorboard.py --logdir=path/to/log-directory
For more details, see the Summaries and TensorBoard tutorial.
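As a sketch of the summary-writing side (the scalar and log directory are illustrative, using the TF 1.x tf.summary API):

import tensorflow as tf

x = tf.placeholder(tf.float32)
loss = tf.square(x)
tf.summary.scalar("loss", loss)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("path/to/log-directory", sess.graph)
    for step in range(100):
        summary = sess.run(merged, feed_dict={x: float(step)})
        writer.add_summary(summary, step)
    writer.close()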
You can make TensorBoard serve on localhost rather than '0.0.0.0' by passing the --host=localhost flag. This should quiet any security warnings.
See the how-to documentation for adding a new operation to TensorFlow.
There are three main options for dealing with data in a custom format.
The easiest option is to write parsing code in Python that transforms the data into a numpy array. Then use
tf.data.Dataset.from_tensor_slices to create an input pipeline from the in-memory data.
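For example, a sketch with made-up in-memory data:

import numpy as np
import tensorflow as tf

# Pretend this came from Python parsing code for the custom format.
features = np.random.rand(100, 16).astype(np.float32)
labels = np.random.randint(0, 2, size=100).astype(np.int64)

dataset = tf.data.Dataset.from_tensor_slices((features, labels))
dataset = dataset.shuffle(100).batch(32)
iterator = dataset.make_one_shot_iterator()
next_features, next_labels = iterator.get_next()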
If your data doesn't fit in memory, try doing the parsing in the Dataset pipeline. Start with an appropriate file reader, like tf.data.TextLineDataset. Then convert the dataset by mapping appropriate operations over it. Prefer predefined TensorFlow operations such as tf.decode_raw, tf.decode_csv, tf.parse_example, or tf.image.decode_image.
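A sketch of in-pipeline parsing (the file names and CSV layout are hypothetical):

import tensorflow as tf

dataset = tf.data.TextLineDataset(["data0.csv", "data1.csv"])

def parse_line(line):
    # Parse each CSV line with a predefined TensorFlow op.
    fields = tf.decode_csv(line, record_defaults=[[0.0], [0.0], [0]])
    features = tf.stack(fields[:2])
    label = fields[2]
    return features, label

dataset = dataset.map(parse_line)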
If your data is not easily parsable with the built-in TensorFlow operations, consider converting it, offline, to a format that is easily parsable, such as the TFRecord format.
The most efficient option for customizing the parsing behavior is to add a new op written in C++ that parses your data format. The guide to handling new data formats has more information about the steps for doing this.
The TensorFlow Python API adheres to the PEP8 conventions.* In particular, we use
CamelCase names for classes, and
snake_case names for functions, methods, and properties. We also adhere to the Google Python style guide.
The TensorFlow C++ code base adheres to the Google C++ style guide.
(* With one exception: we use 2-space indentation instead of 4-space indentation.)
© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.