
## Class `Adamax`

Inherits From: `Optimizer`

Adamax is a variant of Adam based on the infinity norm. Default parameters follow those provided in the paper.

#### Arguments:

• `lr`: float >= 0. Learning rate.
• `beta_1`: float, 0 < beta < 1. Generally close to 1.
• `beta_2`: float, 0 < beta < 1. Generally close to 1.
• `epsilon`: float >= 0. Fuzz factor. If `None`, defaults to `K.epsilon()`.
• `decay`: float >= 0. Learning rate decay over each update.
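
For illustration, a single Adamax update step can be sketched in plain Python, following the infinity-norm variant described in the Adam paper. This is a hypothetical standalone sketch with assumed variable names (`theta`, `m`, `u`), not the class's actual implementation:

```python
# Hypothetical sketch of one Adamax update for a scalar parameter.
# Not the actual implementation of this class.

def adamax_step(theta, grad, m, u, t, lr=0.002,
                beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    """Return (new_theta, new_m, new_u) after one update at step t >= 1."""
    m = beta_1 * m + (1.0 - beta_1) * grad   # first-moment estimate
    u = max(beta_2 * u, abs(grad))           # infinity-norm (max) estimate
    lr_t = lr / (1.0 - beta_1 ** t)          # bias correction for m
    theta = theta - lr_t * m / (u + epsilon) # parameter update
    return theta, m, u

# Minimise f(x) = x**2 (gradient 2*x) for a few steps.
x, m, u = 5.0, 0.0, 0.0
for t in range(1, 201):
    x, m, u = adamax_step(x, 2.0 * x, m, u, t)
```

Because the second moment is tracked with a running maximum rather than a running average, no bias correction is needed for `u`; only the first moment `m` is corrected.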

## Methods

### `__init__`

```
__init__(
    lr=0.002,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=None,
    decay=0.0,
    **kwargs
)
```

Initialize self. See help(type(self)) for accurate signature.

### `from_config`

```
from_config(
    cls,
    config
)
```

Creates an optimizer from its config. This method is the reverse of `get_config`.

### `get_config`

```
get_config()
```

Returns the configuration of the optimizer as a Python dictionary of serializable values.
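
The `get_config` / `from_config` pair is designed to round-trip: the dictionary returned by one reconstructs an equivalent optimizer via the other. A minimal sketch of the pattern, using a hypothetical stand-in class rather than the real Keras implementation:

```python
# Hypothetical stand-in illustrating the get_config/from_config
# round-trip pattern used by Keras optimizers. Not the real class.

class ToyAdamax:
    def __init__(self, lr=0.002, beta_1=0.9, beta_2=0.999,
                 epsilon=None, decay=0.0):
        self.lr = lr
        self.beta_1 = beta_1
        self.beta_2 = beta_2
        self.epsilon = epsilon
        self.decay = decay

    def get_config(self):
        # Only plain (serializable) values go into the config.
        return {'lr': self.lr, 'beta_1': self.beta_1,
                'beta_2': self.beta_2, 'epsilon': self.epsilon,
                'decay': self.decay}

    @classmethod
    def from_config(cls, config):
        # The config dict maps directly onto constructor arguments.
        return cls(**config)

opt = ToyAdamax(lr=0.01)
clone = ToyAdamax.from_config(opt.get_config())
```

Because the config contains only plain values, it can be written to disk (e.g. as JSON) and used later to rebuild an optimizer with the same hyperparameters.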

### `get_gradients`

```
get_gradients(
    loss,
    params
)
```

Returns gradients of `loss` with respect to `params`.

#### Arguments:

• `loss`: Loss tensor.
• `params`: List of variables.

#### Raises:

• `ValueError`: In case any gradient cannot be computed (e.g., if the gradient function is not implemented).

### `get_updates`

```
get_updates(
    loss,
    params
)
```

### `get_weights`

```
get_weights()
```

Returns the current value of the weights of the optimizer.

#### Returns:

A list of numpy arrays.

### `set_weights`

```
set_weights(weights)
```

Sets the weights of the optimizer from Numpy arrays.

Should only be called after computing the gradients (otherwise the optimizer has no weights).

#### Arguments:

• `weights`: A list of Numpy arrays. The number of arrays and their shapes must match those of the weights of the optimizer (i.e. it should match the output of `get_weights`).

#### Raises:

• `ValueError`: in case of incompatible weight shapes.