Just your regular densely-connected NN layer.
```python
tf.keras.layers.Dense(
    units,
    activation=None,
    use_bias=True,
    kernel_initializer='glorot_uniform',
    bias_initializer='zeros',
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    **kwargs
)
```
`Dense` implements the operation: `output = activation(dot(input, kernel) + bias)`, where `activation` is the element-wise activation function passed as the `activation` argument, `kernel` is a weights matrix created by the layer, and `bias` is a bias vector created by the layer (only applicable if `use_bias` is `True`). These are all attributes of `Dense`.
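To make the formula concrete, here is a minimal sketch that reproduces the layer's output from its `kernel` and `bias` attributes; the shapes and the `relu` activation are arbitrary illustrative choices, not part of the API:

```python
import tensorflow as tf

# Build and call a Dense layer so its weights are created.
layer = tf.keras.layers.Dense(4, activation='relu')
x = tf.random.normal((2, 3))  # batch of 2 samples, 3 features each
y = layer(x)

# Reproduce the layer's computation from its `kernel` and `bias` attributes.
manual = tf.nn.relu(tf.matmul(x, layer.kernel) + layer.bias)
print(tf.reduce_max(tf.abs(y - manual)).numpy())  # ~0.0
```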
Note: If the input to the layer has a rank greater than 2, then `Dense` computes the dot product between the `inputs` and the `kernel` along the last axis of the `inputs` and axis 0 of the `kernel` (using `tf.tensordot`). For example, if the input has dimensions `(batch_size, d0, d1)`, then we create a `kernel` with shape `(d1, units)`, and the `kernel` operates along axis 2 of the `input`, on every sub-tensor of shape `(1, 1, d1)` (there are `batch_size * d0` such sub-tensors). The output in this case will have shape `(batch_size, d0, units)`.
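The note above can be verified directly. In this sketch, the input shape `(4, 10, 16)` and `units=8` are arbitrary illustrative values:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(8)
x = tf.random.normal((4, 10, 16))  # (batch_size=4, d0=10, d1=16)
y = layer(x)
print(y.shape)             # (4, 10, 8)
print(layer.kernel.shape)  # (16, 8): built against the last input axis

# Equivalent contraction: last axis of the input against axis 0 of the kernel.
manual = tf.tensordot(x, layer.kernel, axes=[[2], [0]]) + layer.bias
print(tf.reduce_max(tf.abs(y - manual)).numpy())  # ~0.0
```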
Also, layer attributes cannot be modified after the layer has been called once (with the exception of the `trainable` attribute). When the popular kwarg `input_shape` is passed, Keras will create an input layer to insert before the current layer. This can be treated as equivalent to explicitly defining an `InputLayer`.
```python
# Create a `Sequential` model and add a Dense layer as the first layer.
model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(16,)))
model.add(tf.keras.layers.Dense(32, activation='relu'))
# Now the model will take as input arrays of shape (None, 16)
# and output arrays of shape (None, 32).
# Note that after the first layer, you don't need to specify
# the size of the input anymore:
model.add(tf.keras.layers.Dense(32))
model.output_shape
# (None, 32)
```
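As a complement, here is a minimal sketch of the `input_shape` equivalence described in the note above; the layer sizes are illustrative:

```python
import tensorflow as tf

# Passing `input_shape` to the first layer implicitly creates an input layer...
model_a = tf.keras.models.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(16,)),
])

# ...which is equivalent to defining the input explicitly.
model_b = tf.keras.models.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation='relu'),
])

print(model_a.input_shape == model_b.input_shape)  # True: both (None, 16)
```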
| Args | Description |
|---|---|
| units | Positive integer, dimensionality of the output space. | 
| activation | Activation function to use. If you don't specify anything, no activation is applied (i.e. "linear" activation: `a(x) = x`). | 
| use_bias | Boolean, whether the layer uses a bias vector. | 
| kernel_initializer | Initializer for the `kernel` weights matrix. | 
| bias_initializer | Initializer for the bias vector. | 
| kernel_regularizer | Regularizer function applied to the `kernel` weights matrix. | 
| bias_regularizer | Regularizer function applied to the bias vector. | 
| activity_regularizer | Regularizer function applied to the output of the layer (its "activation"). | 
| kernel_constraint | Constraint function applied to the `kernel` weights matrix. | 
| bias_constraint | Constraint function applied to the bias vector. | 
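Putting several of these arguments together, here is a hedged sketch; the particular initializer, regularizer strength, and constraint value are illustrative choices, not recommendations:

```python
import tensorflow as tf

# A Dense layer exercising several of the arguments above.
layer = tf.keras.layers.Dense(
    64,
    activation='tanh',
    kernel_initializer='he_normal',
    bias_initializer='zeros',
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    kernel_constraint=tf.keras.constraints.MaxNorm(3.0),
)

x = tf.random.normal((8, 32))
_ = layer(x)
# Regularization penalties are collected in `layer.losses`.
print([float(l) for l in layer.losses])
```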
Input shape: N-D tensor with shape `(batch_size, ..., input_dim)`. The most common situation would be a 2D input with shape `(batch_size, input_dim)`.
Output shape: N-D tensor with shape `(batch_size, ..., units)`. For instance, for a 2D input with shape `(batch_size, input_dim)`, the output would have shape `(batch_size, units)`.
© 2022 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 4.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.9/api_docs/python/tf/keras/layers/Dense