In zero_inflated_lognormal.py, line 76 reads:
regression_loss = -tf.keras.backend.mean(
positive * tfd.LogNormal(loc=loc, scale=scale).log_prob(safe_labels),
axis=-1)
return classification_loss + regression_loss
In the paper, the loss equals CrossEntropyLoss + LogNormalLoss, so why is there a minus sign in front of the LogNormalLoss term here?
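
For context, here is a minimal self-contained sketch (mine, not from the repository) of how I currently read that term, assuming the paper's LogNormalLoss denotes the negative log-likelihood of a LogNormal distribution; x, mu, and sigma below are arbitrary example values:

import numpy as np
import tensorflow_probability as tfp

tfd = tfp.distributions

# Arbitrary example values, just to compare the two expressions.
x, mu, sigma = 3.0, 0.5, 1.2

# What the quoted code computes per positive sample: minus the LogNormal log-density.
nll = -tfd.LogNormal(loc=mu, scale=sigma).log_prob(x)

# The same quantity written out explicitly:
# log(x * sigma * sqrt(2*pi)) + (log(x) - mu)^2 / (2 * sigma^2)
explicit = (np.log(x * sigma * np.sqrt(2.0 * np.pi))
            + (np.log(x) - mu) ** 2 / (2.0 * sigma ** 2))

print(float(nll), explicit)  # both print the same value (~2.32)

If that reading is right, the minus sign just turns the log-likelihood into a quantity to minimize, but I would like to confirm whether this matches what the paper calls LogNormalLoss.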