[UNet medical/TF] Loss update
parent 8eca836a2e · commit 32e921cd99
@@ -634,6 +634,5 @@ February 2020
 
 ### Known issues
 
 * Some set-ups suffer from a `ncclCommInitRank failed: unhandled system error`. This is a known issue with NCCL 2.7.5. The issue is solved in NCCL 2.7.8, which can be applied by changing the first line of the Dockerfile from `ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:20.06-tf2-py3` to `ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:20.08-tf2-py3` and rebuilding the Docker image.
 
-* For TensorFlow 2.0 the training performance using AMP and XLA is around 30% lower than reported here. The issue was solved in TensorFlow 2.1.
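The NCCL workaround described in the bullet above is a one-line edit of the Dockerfile's base-image tag, after which the image is rebuilt. A minimal sketch of that edit using `sed` (the `Dockerfile.example` file here is illustrative, not part of the repo; the real edit targets the repo's `Dockerfile` followed by `docker build`):

```shell
# Illustrative stand-in for the repo's Dockerfile (assumption: only the tag changes).
printf 'ARG FROM_IMAGE_NAME=nvcr.io/nvidia/tensorflow:20.06-tf2-py3\n' > Dockerfile.example

# Swap the 20.06 container (NCCL 2.7.5) for the 20.08 container (NCCL >= 2.7.8).
sed -i 's/tensorflow:20.06-tf2-py3/tensorflow:20.08-tf2-py3/' Dockerfile.example

cat Dockerfile.example
```

After applying the same substitution to the actual Dockerfile, rebuild the image as usual and the `ncclCommInitRank` error should no longer occur.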
@@ -1,3 +1,3 @@
 Pillow
 tf2onnx
-munch
+munch
@@ -32,8 +32,8 @@ def partial_losses(predict, target):
     flat_labels = tf.reshape(target,
                              [tf.shape(input=predict)[0], -1, n_classes])
 
-    crossentropy_loss = tf.reduce_mean(input_tensor=tf.keras.backend.binary_crossentropy(output=flat_logits,
-                                                                                         target=flat_labels),
+    crossentropy_loss = tf.reduce_mean(input_tensor=tf.nn.softmax_cross_entropy_with_logits(logits=flat_logits,
+                                                                                            labels=flat_labels),
                                        name='cross_loss_ref')
 
     dice_loss = tf.reduce_mean(input_tensor=1 - dice_coef(tf.keras.activations.softmax(flat_logits, axis=-1),
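The hunk above replaces an element-wise binary cross-entropy on softmax outputs with `tf.nn.softmax_cross_entropy_with_logits`, which treats the last axis of the one-hot `flat_labels` as mutually exclusive classes and computes the loss directly from logits (numerically stabler than `log(softmax(...))`). A minimal NumPy sketch of what the two loss terms compute; the shapes, the `dice_coef` smoothing term, and the random data are assumptions for illustration, not the repo's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the class axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softmax_cross_entropy_with_logits(logits, labels, axis=-1):
    # Per-pixel cross-entropy of one-hot labels vs. softmax(logits),
    # computed from logits via log-sum-exp -- this mirrors what
    # tf.nn.softmax_cross_entropy_with_logits returns per element.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=axis, keepdims=True))
    return -(labels * log_probs).sum(axis=axis)

def dice_coef(predict, target, eps=1.0):
    # Soft Dice overlap between predicted probabilities and one-hot targets.
    # The smoothing term `eps` is an assumption; the repo's dice_coef may differ.
    intersection = (predict * target).sum()
    return (2.0 * intersection + eps) / (predict.sum() + target.sum() + eps)

# Toy data shaped like the flattened tensors in the diff: (batch, pixels, classes).
rng = np.random.default_rng(0)
n_classes = 2
flat_logits = rng.normal(size=(4, 16, n_classes))
flat_labels = np.eye(n_classes)[rng.integers(0, n_classes, size=(4, 16))]  # one-hot

crossentropy_loss = softmax_cross_entropy_with_logits(flat_logits, flat_labels).mean()
dice_loss = 1.0 - dice_coef(softmax(flat_logits), flat_labels)
print(crossentropy_loss, dice_loss)
```

For a strictly one-hot segmentation target, the softmax formulation couples the class logits per pixel, whereas the previous `binary_crossentropy` scored each channel independently, which is why the switch changes the gradient signal rather than just the API call.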