public class L1Regularization$ extends Object implements RegularizationPenalty
The regularization function is the L1 norm ||w||_1, with w being the weight vector.
The L1 penalty can be used to drive a number of the solution coefficients to 0, thereby
producing sparse solutions.
|Modifier and Type||Field and Description|
|static L1Regularization$||MODULE$ Static reference to the singleton instance of this Scala object.|
|Constructor and Description|
|Modifier and Type||Method and Description|
|double||regLoss(double oldLoss, Vector weightVector, double regularizationConstant) Adds the regularization penalty to the loss value.|
|Vector||takeStep(Vector weightVector, Vector gradient, double regularizationConstant, double learningRate) Calculates the new weights based on the gradient and the L1 regularization penalty.|
public static final L1Regularization$ MODULE$
public Vector takeStep(Vector weightVector, Vector gradient, double regularizationConstant, double learningRate)
Uses the proximal gradient method with L1 regularization to update weights.
The updated weight w - learningRate * gradient is shrunk towards zero
by applying the proximal operator signum(w) * max(0.0, abs(w) - shrinkageVal),
where w is the weight vector, lambda is the regularization parameter,
and shrinkageVal is lambda * learningRate.
weightVector - The weights to be updated
gradient - The gradient according to which we will update the weights
regularizationConstant - The regularization parameter to be applied
learningRate - The effective step size for this iteration
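The proximal update described above amounts to a plain gradient step followed by element-wise soft-thresholding. A minimal standalone sketch over double arrays (a hypothetical helper, not Flink's actual Vector-based implementation):

```java
// Sketch of the proximal gradient step for L1 regularization.
// Hypothetical standalone helper; Flink's actual takeStep operates on its Vector type.
public class L1ProximalStep {
    public static double[] takeStep(double[] weightVector, double[] gradient,
                                    double regularizationConstant, double learningRate) {
        // shrinkageVal = lambda * learningRate
        double shrinkageVal = regularizationConstant * learningRate;
        double[] updated = new double[weightVector.length];
        for (int i = 0; i < weightVector.length; i++) {
            // Plain gradient step first: w - learningRate * gradient
            double wi = weightVector[i] - learningRate * gradient[i];
            // Proximal operator: signum(w) * max(0.0, abs(w) - shrinkageVal)
            updated[i] = Math.signum(wi) * Math.max(0.0, Math.abs(wi) - shrinkageVal);
        }
        return updated;
    }
}
```

Note how any coordinate whose magnitude after the gradient step falls below shrinkageVal is set exactly to 0, which is what produces the sparse solutions mentioned above.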
public double regLoss(double oldLoss, Vector weightVector, double regularizationConstant)
The updated loss is oldLoss + lambda * ||w||_1, where
w is the weight vector and
lambda is the regularization parameter.
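The loss update is a one-line computation over the L1 norm of the weights. A minimal sketch over a double array (a hypothetical helper, not Flink's actual Vector-based implementation):

```java
// Sketch of regLoss: oldLoss + lambda * ||w||_1.
// Hypothetical standalone helper; Flink's actual regLoss operates on its Vector type.
public class L1RegLoss {
    public static double regLoss(double oldLoss, double[] weightVector,
                                 double regularizationConstant) {
        double l1Norm = 0.0;
        for (double wi : weightVector) {
            l1Norm += Math.abs(wi); // ||w||_1 is the sum of absolute values
        }
        return oldLoss + regularizationConstant * l1Norm;
    }
}
```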
Copyright © 2014–2018 The Apache Software Foundation. All rights reserved.