Loss functions
loss_binary_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  label_smoothing = 0,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "binary_crossentropy"
)

loss_categorical_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  label_smoothing = 0L,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "categorical_crossentropy"
)

loss_categorical_hinge(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "categorical_hinge"
)

loss_cosine_similarity(
  y_true,
  y_pred,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "cosine_similarity"
)

loss_hinge(y_true, y_pred, ..., reduction = "auto", name = "hinge")

loss_huber(
  y_true,
  y_pred,
  delta = 1,
  ...,
  reduction = "auto",
  name = "huber_loss"
)

loss_kullback_leibler_divergence(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "kl_divergence"
)

loss_kl_divergence(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "kl_divergence"
)

loss_logcosh(y_true, y_pred, ..., reduction = "auto", name = "log_cosh")

loss_mean_absolute_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_absolute_error"
)

loss_mean_absolute_percentage_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_absolute_percentage_error"
)

loss_mean_squared_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_squared_error"
)

loss_mean_squared_logarithmic_error(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "mean_squared_logarithmic_error"
)

loss_poisson(y_true, y_pred, ..., reduction = "auto", name = "poisson")

loss_sparse_categorical_crossentropy(
  y_true,
  y_pred,
  from_logits = FALSE,
  axis = -1L,
  ...,
  reduction = "auto",
  name = "sparse_categorical_crossentropy"
)

loss_squared_hinge(
  y_true,
  y_pred,
  ...,
  reduction = "auto",
  name = "squared_hinge"
)
y_true: Ground truth values, of shape [batch_size, d0, .. dN].

y_pred: The predicted values, of shape [batch_size, d0, .. dN].

from_logits: Whether y_pred is expected to be a logits tensor. By default, y_pred is assumed to encode a probability distribution.

label_smoothing: Float in [0, 1]. If > 0, smooth the labels.

axis: The axis along which to compute crossentropy (the features axis). Axis is 1-based (e.g., the first axis is axis = 1); it defaults to -1 (the last axis).

...: Additional arguments passed on to the Python callable (for forward and backwards compatibility).

reduction: Only applicable if y_true and y_pred are missing. Type of keras$losses$Reduction to apply to the loss. The default, "auto", lets the usage context determine the reduction; for almost all cases this is sum over batch size.

name: Only applicable if y_true and y_pred are missing. Optional name for the Loss instance.

delta: A float, the point where the Huber loss function changes from quadratic to linear.
Loss functions for model training. These are typically supplied in the loss parameter of the compile.keras.engine.training.Model() function.

If called with y_true and y_pred, the corresponding loss is evaluated and the result returned (as a tensor). Alternatively, if y_true and y_pred are missing, a callable is returned that will compute the loss function and, by default, reduce the loss to a scalar tensor; see the reduction parameter for details. (The callable is typically a class instance that inherits from keras$losses$Loss.)
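A short sketch of both calling conventions (the toy tensors and the one-layer model here are illustrative, not part of the original documentation):

library(keras)

# Called with y_true and y_pred: the loss is evaluated and a tensor returned
y_true <- k_constant(c(0, 1, 1))
y_pred <- k_constant(c(0.1, 0.8, 0.6))
loss_binary_crossentropy(y_true, y_pred)

# Called without y_true and y_pred: a callable Loss instance is returned,
# suitable for the `loss` parameter of compile()
model <- keras_model_sequential() %>%
  layer_dense(units = 1, activation = "sigmoid", input_shape = 4)
model %>% compile(optimizer = "adam", loss = loss_binary_crossentropy())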
Computes the binary crossentropy loss.

label_smoothing details: Float in [0, 1]. If > 0, smooth the labels by squeezing them towards 0.5; that is, use 1. - 0.5 * label_smoothing for the target class and 0.5 * label_smoothing for the non-target class.
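For example, with label_smoothing = 0.1, hard 0/1 targets become 0.05 and 0.95 (plain arithmetic from the formula above):

label_smoothing <- 0.1
1 - 0.5 * label_smoothing  # smoothed target class label: 0.95
0.5 * label_smoothing      # smoothed non-target class label: 0.05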
Computes the categorical crossentropy loss.

When using the categorical_crossentropy loss, your targets should be in categorical format (e.g. if you have 10 classes, the target for each sample should be a 10-dimensional vector that is all zeros except for a 1 at the index corresponding to the class of the sample). To convert integer targets into categorical targets, you can use the Keras utility function to_categorical():
categorical_labels <- to_categorical(int_labels, num_classes = NULL)
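For example, assuming three classes and a handful of illustrative integer labels (passing num_classes = NULL instead would infer the class count from the data):

int_labels <- c(0L, 2L, 1L)  # integer class ids are 0-based
to_categorical(int_labels, num_classes = 3)
#      [,1] [,2] [,3]
# [1,]    1    0    0
# [2,]    0    0    1
# [3,]    0    1    0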
Computes the Huber loss value.

For each value x in error = y_true - y_pred:

loss = 0.5 * x^2              if |x| <= d
loss = d * |x| - 0.5 * d^2    if |x| > d

where d is delta. See: https://en.wikipedia.org/wiki/Huber_loss
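A minimal hand-rolled sketch of this piecewise definition, assuming a simple mean over the errors (for illustration only, not the library implementation, which handles reduction and axes itself):

huber <- function(y_true, y_pred, delta = 1) {
  x <- y_true - y_pred
  mean(ifelse(abs(x) <= delta, 0.5 * x^2, delta * abs(x) - 0.5 * delta^2))
}
huber(c(0, 1), c(0.6, 0.4))  # both errors lie in the quadratic region: 0.18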
Logarithm of the hyperbolic cosine of the prediction error.

log(cosh(x)) is approximately equal to (x^2) / 2 for small x and to abs(x) - log(2) for large x. This means that 'logcosh' works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction. However, it may return NaNs if the intermediate value cosh(y_pred - y_true) is too large to be represented in the chosen precision.
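Both approximations are easy to verify numerically in plain R:

x <- 0.01
log(cosh(x))     # ~ 5e-05
x^2 / 2          # ~ 5e-05 (quadratic regime for small x)

x <- 20
log(cosh(x))     # ~ 19.3069
abs(x) - log(2)  # ~ 19.3069 (linear regime for large x)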
See also: compile.keras.engine.training.Model(), loss_binary_crossentropy()