
Steps to reproduce the datasets from the web

  1. Build the container
  • docker build -t bert_tf .
  2. Run the container interactively (a combined example command is shown after this list)
  • nvidia-docker run -it --ipc=host bert_tf
  • Optional: Mount data volumes
    • -v yourpath:/workspace/bert/data/wikipedia_corpus/download
    • -v yourpath:/workspace/bert/data/wikipedia_corpus/extracted_articles
    • -v yourpath:/workspace/bert/data/wikipedia_corpus/raw_data
    • -v yourpath:/workspace/bert/data/wikipedia_corpus/intermediate_files
    • -v yourpath:/workspace/bert/data/wikipedia_corpus/final_text_file_single
    • -v yourpath:/workspace/bert/data/wikipedia_corpus/final_text_files_sharded
    • -v yourpath:/workspace/bert/data/wikipedia_corpus/final_tfrecords_sharded
    • -v yourpath:/workspace/bert/data/bookcorpus/download
    • -v yourpath:/workspace/bert/data/bookcorpus/final_text_file_single
    • -v yourpath:/workspace/bert/data/bookcorpus/final_text_files_sharded
    • -v yourpath:/workspace/bert/data/bookcorpus/final_tfrecords_sharded
  • Optional: Select visible GPUs
    • -e CUDA_VISIBLE_DEVICES=0
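
For reference, steps 1 and 2 can be combined into a single build-and-run invocation. This is only a sketch: the host paths on the left of each `-v` are placeholders to replace with your own directories, and only a subset of the mounts listed above is shown.

```bash
# Build the image from the directory containing the Dockerfile.
docker build -t bert_tf .

# Run interactively, mounting one host directory per corpus stage and exposing GPU 0 only.
# Replace /my/host/... with real paths; add or drop -v lines for the other stages listed above.
nvidia-docker run -it --ipc=host \
  -e CUDA_VISIBLE_DEVICES=0 \
  -v /my/host/wiki_download:/workspace/bert/data/wikipedia_corpus/download \
  -v /my/host/wiki_tfrecords:/workspace/bert/data/wikipedia_corpus/final_tfrecords_sharded \
  -v /my/host/books_download:/workspace/bert/data/bookcorpus/download \
  -v /my/host/books_tfrecords:/workspace/bert/data/bookcorpus/final_tfrecords_sharded \
  bert_tf
```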

**Inside of the container starting here**

  3. Download pretrained weights (they contain vocab files for preprocessing)
  • cd data/pretrained_models_google && python3 download_models.py
  1. "One-click" SQuAD download
  • cd /workspace/bert/data/squad && . squad_download.sh
  1. "One-click" Wikipedia data download and prep (provides tfrecords)
  • Set your configuration in data/wikipedia_corpus/config.sh
  • cd /workspace/bert/data/wikipedia_corpus && ./run_preprocessing.sh
  1. "One-click" BookCorpus data download and prep (provided tfrecords)
  • Set your configuration in data/wikipedia_corpus/config.sh
  • cd /data/bookcorpus && ./run_preprocessing.sh
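
Taken together, the in-container steps above amount to the sequence below. This is a sketch: it assumes the repository lives at /workspace/bert inside the container and that the two config.sh files have already been edited; the final ls commands simply point at the output directories named in the volume-mount list above.

```bash
# 3) Pretrained weights (include the vocab files used by preprocessing)
cd /workspace/bert/data/pretrained_models_google && python3 download_models.py

# 4) SQuAD
cd /workspace/bert/data/squad && . squad_download.sh

# 5) Wikipedia (edit data/wikipedia_corpus/config.sh first)
cd /workspace/bert/data/wikipedia_corpus && ./run_preprocessing.sh

# 6) BookCorpus (edit data/bookcorpus/config.sh first)
cd /workspace/bert/data/bookcorpus && ./run_preprocessing.sh

# The sharded TFRecords should end up in the final_tfrecords_sharded directories.
ls /workspace/bert/data/wikipedia_corpus/final_tfrecords_sharded
ls /workspace/bert/data/bookcorpus/final_tfrecords_sharded
```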