Traversing File Contents Read into an AttrDict
I recently tried reading some YAML data and wanted to iterate over it, so I'm sharing my code here. My file is named transformer.base.yaml, with the following contents:
# The frequency to save trained models when training.
save_step: 10000
# The frequency to fetch and print output when training.
print_step: 100
# Path of the checkpoint, to resume the previous training
init_from_checkpoint: ""
# Path of the pretrain model, to better solve the current task
init_from_pretrain_model: ""
# Path of trained parameter, to make prediction
init_from_params: ""
# The directory for saving model
save_model: "trained_models"
# The directory for saving inference model
inference_model_dir: ""
# Set seed for CE or debug
random_seed: None
# The pattern to match training data files.
training_file: "zh-en/train.ch.bpe,zh-en/train.en.bpe"
# The pattern to match validation data files.
validation_file: "zh-en/dev.ch.bpe,zh-en/dev.en.bpe"
# The pattern to match test data files.
predict_file: "zh-en/test.ch.bpe"
# The file to output the translation results of predict_file to.
output_file: "predict.txt"
# The path of vocabulary file of source language.
src_vocab_fpath: "zh-en/vocab.ch.src"
# The path of vocabulary file of target language.
trg_vocab_fpath: "zh-en/vocab.en.tgt"
# The <bos>, <eos> and <unk> tokens in the dictionary.
special_token: ["<s>", "<e>", "<unk>"]
# The directory to store data.
root: None
# Whether to use cuda
use_gpu: True

# Args for reader, see reader.py for details
pool_size: 200000
sort_type: "global"
batch_size: 4096
infer_batch_size: 32
shuffle_batch: True
# Data shuffle only works when sort_type is pool or none
shuffle: True
# shuffle_seed must be set when shuffle is True and using multi-cards to train.
# Otherwise, the number of batches cannot be guaranteed.
shuffle_seed: 128

# Hyperparams for training:
# The number of epochs for training
epoch: 10
# The hyper parameters for Adam optimizer.
# This static learning_rate will be applied to the LearningRateScheduler
# derived learning rate to get the final learning rate.
learning_rate: 2.0
beta1: 0.9
beta2: 0.997
eps: 1e-9
# The parameters for learning rate scheduling.
warmup_steps: 8000
# The weight used to mix up the ground-truth distribution and the fixed
# uniform distribution in label smoothing when training.
# Set this as zero if label smoothing is not wanted.
label_smooth_eps: 0.1

# Hyperparams for generation:
# The parameters for beam search.
beam_size: 5
max_out_len: 256
# The number of decoded sentences to output.
n_best: 1

# Hyperparams for model:
# These following five vocabulary-related configurations will be set
# automatically according to the passed vocabulary path and special tokens.
# Size of source word dictionary.
src_vocab_size: 10000
# Size of target word dictionary
trg_vocab_size: 10000
# Used to pad vocab size to be multiple of pad_factor.
pad_factor: 8
# Index for <bos> token
bos_idx: 0
# Index for <eos> token
eos_idx: 1
# Index for <unk> token
unk_idx: 2
# Max length of sequences deciding the size of position encoding table.
max_length: 256
# The dimension for word embeddings, which is also the last dimension of
# the input and output of multi-head attention, position-wise feed-forward
# networks, encoder and decoder.
d_model: 512
# Size of the hidden layer in position-wise feed-forward networks.
d_inner_hid: 2048
# Number of head used in multi-head attention.
n_head: 8
# Number of sub-layers to be stacked in the encoder and decoder.
n_layer: 6
# Dropout rates.
dropout: 0.1
# The flag indicating whether to share embedding and softmax weights.
# Vocabularies in source and target should be same for weight sharing.
weight_sharing: False
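One detail worth noting before loading this file: YAML's null literal is null (or ~), so the values written as None above (random_seed and root) are parsed by yaml.safe_load as the string 'None', not as Python's None. A quick sketch to verify:

import yaml

print(yaml.safe_load('random_seed: None'))  # {'random_seed': 'None'} -- the string 'None'
print(yaml.safe_load('random_seed: null'))  # {'random_seed': None}   -- Python's None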
The code to read it is:
import yaml
from pprint import pprint
from attrdict import AttrDict  # pip install attrdict

yaml_file = './transformer.base.yaml'
with open(yaml_file, 'rt') as f:
    # Parse the YAML file into a dict, then wrap it in an AttrDict
    args = AttrDict(yaml.safe_load(f))

# pprint(args)
print(args)
for k, v in args.items():
    # print(k, v)
    print('{}="{}"'.format(k, v))
Since it behaves just like a dictionary, you can iterate over it the same way.
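The point of wrapping the parsed dict in an AttrDict rather than using it directly is attribute-style access. A minimal sketch, assuming the attrdict package from PyPI and the config file shown above:

import yaml
from attrdict import AttrDict

with open('./transformer.base.yaml', 'rt') as f:
    args = AttrDict(yaml.safe_load(f))

# The same key can be read dict-style or attribute-style
print(args['batch_size'])  # 4096
print(args.batch_size)     # 4096, via attribute access
print(args.d_model)        # 512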