PyTorch Implementation of AnimeGANv2

Weight Conversion from the Original Repo (Requires TensorFlow 1.x)

git clone https://github.com/TachibanaYoshino/AnimeGANv2
python convert_weights.py
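
Once the conversion finishes, the resulting checkpoint can be loaded into the PyTorch generator. A minimal sketch, assuming model.py exposes a Generator class and that the converted state dict was saved as paprika.pt (the output filename is an assumption; check what convert_weights.py actually writes):

```python
# Hedged sketch: load a converted checkpoint into the PyTorch generator.
# Assumes model.py's Generator and a state dict file named "paprika.pt"
# (the filename is an assumption, not taken from the repo).
import torch
from model import Generator

net = Generator()
net.load_state_dict(torch.load("paprika.pt", map_location="cpu"))
net.eval()
```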

Inference

python test.py --input_dir [image_folder_path] --device [cpu/cuda]
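
For reference, here is a rough single-image version of what the inference step amounts to, assuming model.py's Generator and a converted checkpoint named paprika.pt (a hypothetical filename); see test.py for the actual implementation:

```python
# Hedged sketch of single-image inference; details may differ from test.py.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image
from model import Generator

device = "cuda" if torch.cuda.is_available() else "cpu"

net = Generator()
net.load_state_dict(torch.load("paprika.pt", map_location=device))  # hypothetical filename
net.to(device).eval()

img = Image.open("input.jpg").convert("RGB")
x = to_tensor(img).unsqueeze(0) * 2 - 1  # scale pixels to [-1, 1]

with torch.no_grad():
    out = net(x.to(device)).cpu().squeeze(0)

to_pil_image((out * 0.5 + 0.5).clamp(0, 1)).save("output.jpg")
```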

Results from converted [Paprika] style model

(Left to right: input image, original TensorFlow result, PyTorch result)

     

Note: The training code is not included. Tested on an RTX 3090 with PyTorch 1.7.1. Results from the converted weights differ slightly from the original TensorFlow outputs due to a bilinear upsampling discrepancy.
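
To illustrate the upsampling point (this snippet is not from the repo): PyTorch's bilinear interpolation depends on the align_corners flag, and TensorFlow 1.x's default bilinear resize uses yet another sampling convention, so small pixel-level differences between the two implementations are expected:

```python
# Illustration only: the two PyTorch bilinear conventions already disagree
# with each other; TF 1.x's default resize_bilinear samples differently again.
import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

a = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
b = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)

print((a - b).abs().max())  # non-zero: different interpolation conventions
```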

Additional Models

Webtoon Face [ckpt]

samples

Works best on 256x256 face images. Distilled from the Webtoon Face model with an L2 + VGG + GAN loss on CelebA-HQ images. See test_faces.ipynb for details.
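
Since the training code is not released, the following is only a rough, hypothetical sketch of how an L2 + VGG + GAN distillation objective like the one described could be assembled; the feature layer, discriminator input, and loss weights are placeholders, not the author's settings:

```python
# Hypothetical distillation loss sketch (NOT the released training code).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Truncated VGG feature extractor for the perceptual term
# (layer cutoff is arbitrary here; ImageNet normalization omitted for brevity).
vgg_features = vgg19(pretrained=True).features[:26].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def distillation_loss(student_out, teacher_out, disc_logits,
                      w_l2=1.0, w_vgg=1.0, w_gan=0.1):
    l2 = F.mse_loss(student_out, teacher_out)                # L2 term
    perceptual = F.l1_loss(vgg_features(student_out),
                           vgg_features(teacher_out))        # VGG term
    gan = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))           # generator-side GAN term
    return w_l2 * l2 + w_vgg * perceptual + w_gan * gan
```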

 

Face Portrait v1 [ckpt]

samples

Works best on 512x512 face images. (WIP)

Colab

samples

📺 sample video