All Implemented Interfaces: Op

@Operator(group="train")
public final class ResourceSparseApplyAdagradDa extends PrimitiveOp implements Op
Nested Class Summary

| Modifier and Type | Class | Description |
|---|---|---|
| `static class` | `ResourceSparseApplyAdagradDa.Options` | Optional attributes for the `ResourceSparseApplyAdagradDa` operation |

Method Summary

| Modifier and Type | Method | Description |
|---|---|---|
| `static <T, U extends java.lang.Number> ResourceSparseApplyAdagradDa` | `create(Scope scope, Operand<?> var, Operand<?> gradientAccumulator, Operand<?> gradientSquaredAccumulator, Operand<T> grad, Operand<U> indices, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<java.lang.Long> globalStep, ResourceSparseApplyAdagradDa.Options... options)` | Factory method to create a class wrapping a new ResourceSparseApplyAdagradDa operation to the graph. |
| `static ResourceSparseApplyAdagradDa.Options` | `useLocking(java.lang.Boolean useLocking)` | If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention. |
Methods inherited from class org.tensorflow.op.PrimitiveOp: equals, hashCode, toString

Methods inherited from class java.lang.Object: clone, finalize, getClass, notify, notifyAll, wait, wait, wait

Method Detail

public static <T, U extends java.lang.Number> ResourceSparseApplyAdagradDa create(Scope scope, Operand<?> var, Operand<?> gradientAccumulator, Operand<?> gradientSquaredAccumulator, Operand<T> grad, Operand<U> indices, Operand<T> lr, Operand<T> l1, Operand<T> l2, Operand<java.lang.Long> globalStep, ResourceSparseApplyAdagradDa.Options... options)

Factory method to create a class wrapping a new ResourceSparseApplyAdagradDa operation to the graph.
Parameters:
- scope - current graph scope
- var - Should be from a Variable().
- gradientAccumulator - Should be from a Variable().
- gradientSquaredAccumulator - Should be from a Variable().
- grad - The gradient.
- indices - A vector of indices into the first dimension of var and accum.
- lr - Learning rate. Must be a scalar.
- l1 - L1 regularization. Must be a scalar.
- l2 - L2 regularization. Must be a scalar.
- globalStep - Training step number. Must be a scalar.
- options - carries optional attributes values

Returns: a new instance of ResourceSparseApplyAdagradDa

public static ResourceSparseApplyAdagradDa.Options useLocking(java.lang.Boolean useLocking)
Parameters:
- useLocking - If True, updating of the var and accum tensors will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
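As a usage illustration, here is a minimal sketch of wiring this op into a graph with the classic `org.tensorflow` (TF 1.x) Java API. The class name `SparseAdagradDaSketch`, the helper name `addUpdate`, and all hyperparameter values are illustrative assumptions, not part of this API; the three resource handles are assumed to have been created and initialized elsewhere (for example with `VarHandleOp` and `AssignVariableOp`).

```java
import org.tensorflow.Operand;
import org.tensorflow.op.Scope;
import org.tensorflow.op.core.Constant;
import org.tensorflow.op.train.ResourceSparseApplyAdagradDa;

public final class SparseAdagradDaSketch {

  // Sketch: adds a ResourceSparseApplyAdagradDa update under `scope`.
  // var, gradientAccumulator, and gradientSquaredAccumulator are resource
  // handles to float variables created and initialized elsewhere.
  static ResourceSparseApplyAdagradDa addUpdate(
      Scope scope,
      Operand<?> var,
      Operand<?> gradientAccumulator,
      Operand<?> gradientSquaredAccumulator,
      Operand<Float> grad,
      Operand<Integer> indices) {
    Operand<Float> lr = Constant.create(scope, 0.01f);     // learning rate, scalar (illustrative value)
    Operand<Float> l1 = Constant.create(scope, 0.001f);    // L1 regularization, scalar (illustrative value)
    Operand<Float> l2 = Constant.create(scope, 0.0001f);   // L2 regularization, scalar (illustrative value)
    Operand<Long> globalStep = Constant.create(scope, 1L); // training step, scalar (illustrative value)
    return ResourceSparseApplyAdagradDa.create(
        scope,
        var, gradientAccumulator, gradientSquaredAccumulator,
        grad, indices,
        lr, l1, l2, globalStep,
        ResourceSparseApplyAdagradDa.useLocking(true));    // optional attribute
  }
}
```

Here `T` is inferred as `Float` from `grad`, `lr`, `l1`, and `l2`, and `U` as `Integer` from `indices`; the returned op can be passed as a target to a session run to apply the update.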