Just your regular densely-connected NN layer.
```python
tf.keras.layers.Dense(
    units, activation=None, use_bias=True,
    kernel_initializer='glorot_uniform', bias_initializer='zeros',
    kernel_regularizer=None, bias_regularizer=None,
    activity_regularizer=None, kernel_constraint=None,
    bias_constraint=None, **kwargs
)
```
`Dense` implements the operation: `output = activation(dot(input, kernel) + bias)`, where `activation` is the element-wise activation function passed as the `activation` argument, `kernel` is a weights matrix created by the layer, and `bias` is a bias vector created by the layer (only applicable if `use_bias` is `True`).
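The operation can be sketched in plain NumPy (a minimal stand-in for the layer's forward pass, not the actual Keras implementation):

```python
import numpy as np

def dense_forward(inputs, kernel, bias, activation=None):
    """Sketch of output = activation(dot(input, kernel) + bias)."""
    out = inputs @ kernel + bias
    return activation(out) if activation is not None else out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16))        # (batch_size, input_dim)
kernel = rng.standard_normal((16, 32))  # (input_dim, units)
bias = np.zeros(32)                     # (units,)

relu = lambda t: np.maximum(t, 0.0)     # element-wise activation
y = dense_forward(x, kernel, bias, activation=relu)
print(y.shape)  # (4, 32)
```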
Note: If the input to the layer has a rank greater than 2, then `Dense` computes the dot product between the `inputs` and the `kernel` along the last axis of the `inputs` and axis 0 of the `kernel` (using `tf.tensordot`). For example, if the input has dimensions `(batch_size, d0, d1)`, then we create a `kernel` with shape `(d1, units)`, and the `kernel` operates along axis 2 of the `input`, on every sub-tensor of shape `(1, 1, d1)` (there are `batch_size * d0` such sub-tensors). The output in this case will have shape `(batch_size, d0, units)`.
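The rank-3 case can be reproduced with NumPy's `tensordot` (a sketch of the contraction described above, not the Keras code itself):

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, d0, d1, units = 2, 3, 5, 7
x = rng.standard_normal((batch_size, d0, d1))
kernel = rng.standard_normal((d1, units))

# Contract the last axis of the input with axis 0 of the kernel,
# as Dense does for inputs of rank > 2.
y = np.tensordot(x, kernel, axes=[[2], [0]])
print(y.shape)  # (2, 3, 7)

# Equivalent view: the kernel is applied to every (d1,)-shaped
# sub-vector of the input.
y_alt = np.einsum('bij,jk->bik', x, kernel)
assert np.allclose(y, y_alt)
```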
Besides, layer attributes cannot be modified after the layer has been called once (except the `trainable` attribute).
```python
# Create a `Sequential` model and add a Dense layer as the first layer.
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(32, activation='relu'))
# Now the model will take as input arrays of shape (None, 16)
# and output arrays of shape (None, 32).
# Note that after the first layer, you don't need to specify
# the size of the input anymore:
model.add(tf.keras.layers.Dense(32))
model.output_shape
# (None, 32)
```
| Args | Description |
|---|---|
| `units` | Positive integer, dimensionality of the output space. |
| `activation` | Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: `a(x) = x`). |
| `use_bias` | Boolean, whether the layer uses a bias vector. |
| `kernel_initializer` | Initializer for the `kernel` weights matrix. |
| `bias_initializer` | Initializer for the bias vector. |
| `kernel_regularizer` | Regularizer function applied to the `kernel` weights matrix. |
| `bias_regularizer` | Regularizer function applied to the bias vector. |
| `activity_regularizer` | Regularizer function applied to the output of the layer (its "activation"). |
| `kernel_constraint` | Constraint function applied to the `kernel` weights matrix. |
| `bias_constraint` | Constraint function applied to the bias vector. |
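The default `kernel_initializer='glorot_uniform'` draws weights uniformly from `[-limit, limit]` with `limit = sqrt(6 / (fan_in + fan_out))` (Glorot & Bengio, 2010). A NumPy sketch of that scheme (an illustrative stand-in, not the Keras initializer class):

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    """Sketch of Glorot (Xavier) uniform initialization:
    uniform in [-limit, limit], limit = sqrt(6 / (fan_in + fan_out))."""
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

kernel = glorot_uniform(16, 32, rng=np.random.default_rng(0))
print(kernel.shape)  # (16, 32)
```

Scaling by the fan-in and fan-out keeps activation variance roughly constant across layers at initialization.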
Input shape:
N-D tensor with shape: `(batch_size, ..., input_dim)`. The most common situation would be a 2D input with shape `(batch_size, input_dim)`.
Output shape:
N-D tensor with shape: `(batch_size, ..., units)`. For instance, for a 2D input with shape `(batch_size, input_dim)`, the output would have shape `(batch_size, units)`.
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.