Python code seems to me easier to understand than a mathematical formula, especially when I can run and change it. Like this (using PyTorch)?

-1 * log(0.60) = 0.51
-1 * log(1 - 0.20) = 0.22
-1 * log(0.70) = 0.36
-----
total BCE = 1.09
mean BCE = 1.09 / 3 = 0.3633

In words, for an item, if the target is 1, the binary cross entropy is minus the log of the computed output; if the target is 0, it is minus the log of one minus the computed output (the 1 - 0.20 term above).

Cross entropy is an objective function used very often in multi-class classification. Look closely: it is exactly the same as the two steps log_softmax and nll_loss, so F.cross_entropy in PyTorch automatically calls log_softmax and nll_loss to compute the cross entropy. The input to NLLLoss is a vector of log probabilities and a target label, which does not need to be one-hot encoded.

Other losses are described in the same element-wise style: for y = 1, the loss is as high as the value of x; if x > 0, the loss will be x itself (a higher value).

When accumulating the loss during training, it is also changed from GPU to CPU and the 0th index is specified: sum_loss += loss.

To help myself understand, I wrote all of PyTorch's loss functions in plain Python and NumPy while confirming the results are the same. For this implementation, I'll use PyTorch Lightning, which will keep the code short but still scalable.
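As a minimal sketch of that plain-Python check for the binary cross entropy numbers above: the predictions 0.60, 0.20, 0.70 and the targets 1, 0, 1 are assumed inputs reconstructed from the three terms, and the variable names are mine.

```python
import math

import torch
import torch.nn as nn

# Assumed inputs that reproduce the three terms above:
# -log(0.60) for target 1, -log(1 - 0.20) for target 0, -log(0.70) for target 1.
preds = [0.60, 0.20, 0.70]
targets = [1.0, 0.0, 1.0]

# Plain-Python binary cross entropy, one term per item.
terms = [-(t * math.log(p) + (1 - t) * math.log(1 - p))
         for p, t in zip(preds, targets)]
total_bce = sum(terms)             # ~1.09
mean_bce = total_bce / len(terms)  # ~0.36

# nn.BCELoss averages over the items by default, so it should match mean_bce.
criterion = nn.BCELoss()
torch_bce = criterion(torch.tensor(preds), torch.tensor(targets))

print(terms)                # [0.5108..., 0.2231..., 0.3567...]
print(total_bce, mean_bce)  # 1.0906..., 0.3635...
print(torch_bce.item())     # 0.3635...
```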
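Similarly, a small sketch of the log_softmax + nll_loss equivalence described above; the toy logits, batch size, and class count are arbitrary assumptions for illustration.

```python
import torch
import torch.nn.functional as F

# Assumed toy batch: 3 items, 5 classes, random logits and integer class targets
# (the targets are plain class indices, not one-hot vectors).
torch.manual_seed(0)
logits = torch.randn(3, 5)
target = torch.tensor([1, 0, 4])

# Step 1: log_softmax turns logits into log probabilities.
log_probs = F.log_softmax(logits, dim=1)

# Step 2: nll_loss picks out the log probability of the target class
# for each item and averages the negatives.
loss_two_steps = F.nll_loss(log_probs, target)

# F.cross_entropy performs both steps internally, so the results should match.
loss_one_step = F.cross_entropy(logits, target)

print(loss_two_steps.item(), loss_one_step.item())
```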
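For the loss-accumulation note, a rough sketch of that pattern might look like the following; the 1-element loss tensor and the variable names are placeholders, and in current PyTorch loss.item() is the usual shorthand for the same thing.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

sum_loss = 0.0
for _ in range(3):  # stand-in for a few training batches
    loss = torch.rand(1, device=device)  # assume a 1-element loss tensor on the GPU
    # Change the loss from GPU to CPU and take the 0th index before accumulating,
    # so only a plain Python float is kept around.
    sum_loss += float(loss.detach().cpu()[0])
    # Equivalent in current PyTorch: sum_loss += loss.item()

print(sum_loss)
```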