## PyTorch Implementation of [AnimeGANv2](https://github.com/TachibanaYoshino/AnimeGANv2)
**Updates**
* `2021-10-17` Add weights for [FacePortraitV2](#additional-model-weights)
* `2021-11-07` Thanks to [ak92501](https://twitter.com/ak92501), a web demo is integrated into [Hugging Face Spaces](https://huggingface.co/spaces) with [Gradio](https://github.com/gradio-app/gradio).
See demo: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/akhaliq/AnimeGANv2)
* `2021-11-07` Thanks to [xhlulu](https://github.com/xhlulu), the `torch.hub` model is now available. See [Torch Hub Usage](#torch-hub-usage).
* `2021-11-07` Add FacePortraitV2 style demo to a Telegram bot. See [@face2stickerbot](https://t.me/face2stickerbot) by [sxela](https://github.com/sxela)
## Basic Usage
**Weight Conversion from the Original Repo (Requires TensorFlow 1.x)**
```
git clone https://github.com/TachibanaYoshino/AnimeGANv2
python convert_weights.py
```
**Inference**
```
python test.py --input_dir [image_folder_path] --device [cpu/cuda]
```
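If you'd rather call the generator from Python than through `test.py`, a minimal sketch is below. It assumes the repo's `model.Generator` class, a converted checkpoint under `weights/`, and inputs scaled to `[-1, 1]` the way `test.py` appears to do; treat the paths and scaling as assumptions, not a fixed API.

```python
# Hedged sketch: programmatic inference with a converted checkpoint.
# Assumes model.Generator from this repo and a checkpoint at weights/paprika.pt.
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor, to_pil_image

from model import Generator  # generator architecture defined in this repo

device = "cuda" if torch.cuda.is_available() else "cpu"

net = Generator().eval().to(device)
net.load_state_dict(torch.load("weights/paprika.pt", map_location=device))

img = Image.open("your_image.jpg").convert("RGB")  # placeholder path
x = to_tensor(img).unsqueeze(0) * 2 - 1            # scale to [-1, 1] (assumed convention)

with torch.no_grad():
    y = net(x.to(device)).cpu()[0].clamp(-1, 1) * 0.5 + 0.5  # back to [0, 1]

to_pil_image(y).save("out.jpg")
```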
**Results from converted [[Paprika]](https://drive.google.com/file/d/1K_xN32uoQKI8XmNYNLTX5gDn1UnQVe5I/view?usp=sharing) style model**
(input image, original TensorFlow result, PyTorch result, from left to right)
<img src="./samples/compare/1.jpg" width="960"> &nbsp;
<img src="./samples/compare/2.jpg" width="960"> &nbsp;
<img src="./samples/compare/3.jpg" width="960"> &nbsp;
**Note:** Training code is not included / results from the converted weights differ slightly due to the [bilinear upsample issue](https://github.com/pytorch/pytorch/issues/10604)
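For context, the difference comes from bilinear upsampling conventions: TensorFlow and PyTorch place the sampling grid differently, so upsampled feature maps (and therefore the outputs) don't match exactly. A minimal, self-contained illustration of the two PyTorch corner-alignment conventions (the tensor and sizes here are arbitrary):

```python
import torch
import torch.nn.functional as F

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

# The two corner-alignment conventions interpolate on slightly different grids,
# which is the kind of mismatch that makes converted weights drift from the TF outputs.
a = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
b = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
print((a - b).abs().max())  # non-zero
```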
## Additional Model Weights
**Webtoon Face** [[ckpt]](https://drive.google.com/file/d/10T6F3-_RFOCJn6lMb-6mRmcISuYWJXGc)
<details>
<summary>samples</summary>

Trained on <b>256x256</b> face images. Distilled from the [webtoon face model](https://github.com/bryandlee/naver-webtoon-faces/blob/master/README.md#face2webtoon) with L2 + VGG + GAN loss on CelebA-HQ images. See `test_faces.ipynb` for details.

<img src="./samples/face_results.jpg" width="512"> &nbsp;
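
A rough sketch of that distillation objective, for orientation only. The names below (`student`, `teacher`, `disc`, `vgg_features`) are placeholders and the loss weights are made up; the actual training code is not included in this repo.

```python
# Hypothetical sketch of an L2 + VGG (perceptual) + GAN distillation step.
# All names and weights are placeholders; this is not the training code used here.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, disc, vgg_features, x,
                 w_l2=1.0, w_vgg=1.0, w_gan=0.1):
    with torch.no_grad():
        target = teacher(x)          # teacher (webtoon face model) output
    pred = student(x)                # AnimeGAN-style student generator

    l2 = F.mse_loss(pred, target)                               # pixel-level L2
    vgg = F.l1_loss(vgg_features(pred), vgg_features(target))   # perceptual / VGG term
    gan = -disc(pred).mean()                                    # adversarial generator term
    return w_l2 * l2 + w_vgg * vgg + w_gan * gan
```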
</details>
**Face Portrait v1** [[ckpt]](https://drive.google.com/file/d/1WK5Mdt6mwlcsqCZMHkCUSDJxN1UyFi0-)
<details>
<summary>samples</summary>

Trained on <b>512x512</b> face images.

[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jCqcKekdtKzW7cxiw_bjbbfLsPh-dEds?usp=sharing)

![samples](https://user-images.githubusercontent.com/26464535/127134790-93595da2-4f8b-4aca-a9d7-98699c5e6914.jpg)

[📺](https://youtu.be/CbMfI-HNCzw?t=317)

![sample](https://user-images.githubusercontent.com/26464535/129888683-98bb6283-7bb8-4d1a-a04a-e795f5858dcf.gif)
</details>
**Face Portrait v2** [[ckpt]](https://drive.google.com/uc?id=18H3iK09_d54qEDoWIc82SyWB2xun4gjU)
<details>
<summary>samples</summary>

Trained on <b>512x512</b> face images. Compared to v1, `🔻beautify` `🔺robustness`

[![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1jCqcKekdtKzW7cxiw_bjbbfLsPh-dEds?usp=sharing)

![face_portrait_v2_0](https://user-images.githubusercontent.com/26464535/137619176-59620b59-4e20-4d98-9559-a424f86b7f24.jpg)
![face_portrait_v2_1](https://user-images.githubusercontent.com/26464535/137619181-a45c9230-f5e7-4f3c-8002-7c266f89de45.jpg)
🦑 🎮 🔥
![face_portrait_v2_squid_game](https://user-images.githubusercontent.com/26464535/137619183-20e94f11-7a8e-4c3e-9b45-378ab63827ca.jpg)
</details>
## Torch Hub Usage
You can load AnimeGAN v2 via `torch.hub`:
```python
import torch
model = torch.hub.load('bryandlee/animegan2-pytorch', 'generator').eval()
# convert your image into tensor here
out = model(img_tensor)
```
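One way to produce `img_tensor` from an image file, as a sketch; scaling inputs to `[-1, 1]` mirrors the other examples in this README and should be treated as an assumption:

```python
from PIL import Image
from torchvision.transforms.functional import to_tensor

img = Image.open("your_image.jpg").convert("RGB")   # placeholder path
img_tensor = to_tensor(img).unsqueeze(0) * 2 - 1    # NCHW tensor scaled to [-1, 1] (assumed)
```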
You can load it with various configs (more details in [the torch docs](https://pytorch.org/docs/stable/hub.html)):
```python
model = torch.hub.load(
    "bryandlee/animegan2-pytorch:main",
    "generator",
    pretrained=True,  # or give URL to a pretrained model
    device="cuda",    # or "cpu" if you don't have a GPU
    progress=True,    # show progress
)
```
Currently, the following `pretrained` shorthands are available:
```python
model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="celeba_distill")
model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v1")
model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="face_paint_512_v2")
model = torch.hub.load("bryandlee/animegan2-pytorch:main", "generator", pretrained="paprika")
```
You can also load the `face2paint` util function. First, install dependencies:
```
pip install torchvision Pillow numpy
```
Then, import the function using `torch.hub`:
```python
from PIL import Image

face2paint = torch.hub.load(
    'bryandlee/animegan2-pytorch:main', 'face2paint',
    size=512, device="cpu"
)

img = Image.open(...).convert("RGB")
out = face2paint(model, img)
```