data_juicer.analysis.collector module

class data_juicer.analysis.collector.TextTokenDistCollector(tokenizer)[source]

Bases: object

Tokenize a given dataset with a specified tokenizer and collect the distribution of its tokens.

__init__(tokenizer)[source]

Initialization method.

Parameters:

tokenizer -- name of the tokenizer on HuggingFace.

collect(data_path, text_key, num_proc=1) → Categorical[source]

Tokenize and collect the token distribution of the input dataset.

Parameters:

data_path -- path to the input dataset.

text_key -- field key whose text is counted toward the token distribution.

num_proc -- number of processes used to count tokens.

返回:

the token distribution.
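For illustration, here is a minimal sketch of the token-counting idea behind this collector. It substitutes a plain whitespace tokenizer for a HuggingFace tokenizer and an in-memory list of samples for a dataset on disk; the function and parameter names below are hypothetical, not part of the data_juicer API:

```python
from collections import Counter

def collect_token_dist(samples, text_key="text", tokenize=str.split):
    # Hypothetical stand-in for TextTokenDistCollector.collect:
    # count every token found in the chosen text field of each sample.
    counts = Counter()
    for sample in samples:
        counts.update(tokenize(sample[text_key]))
    return counts

samples = [{"text": "a b a"}, {"text": "b c"}]
dist = collect_token_dist(samples)
# dist -> Counter({'a': 2, 'b': 2, 'c': 1})
```

The real collector additionally loads the dataset from data_path, runs the tokenizer across num_proc processes, and returns the result as a Categorical distribution rather than a plain counter.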