data_juicer.ops.mapper¶
- class data_juicer.ops.mapper.AudioAddGaussianNoiseMapper(min_amplitude: float = 0.001, max_amplitude: float = 0.015, p: float = 0.5, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to add Gaussian noise to audio samples.
This operator adds Gaussian noise to audio data with a specified probability. The amplitude of the noise is randomly chosen between min_amplitude and max_amplitude. If save_dir is provided, the modified audio files are saved in that directory; otherwise, they are saved in the same directory as the input files. The p parameter controls the probability of applying this transformation to each sample. If no audio is present in the sample, it is returned unchanged.
- __init__(min_amplitude: float = 0.001, max_amplitude: float = 0.015, p: float = 0.5, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
min_amplitude – Minimum noise amplitude, as a linear gain factor. Default: 0.001.
max_amplitude – Maximum noise amplitude, as a linear gain factor. Default: 0.015.
p – The probability of applying this transform, in the range [0.0, 1.0]. Default: 0.5.
save_dir – str. Default: None. The directory where generated audio files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
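A minimal usage sketch follows. The constructor arguments come from the signature above; the process_batched call and the 'audios' sample key are assumptions about the standard Data Juicer Mapper interface, not part of this class's documentation.

```python
from data_juicer.ops.mapper import AudioAddGaussianNoiseMapper

# Add Gaussian noise to every audio sample (p=1.0) within the default amplitude range.
op = AudioAddGaussianNoiseMapper(
    min_amplitude=0.001,
    max_amplitude=0.015,
    p=1.0,
    save_dir='./noisy_audios',
)

# Assumed batched Mapper interface with audio paths stored under the 'audios' key.
samples = {'audios': [['example.wav']]}
processed = op.process_batched(samples)
```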
- class data_juicer.ops.mapper.AudioFFmpegWrappedMapper(filter_name: str | None = None, filter_kwargs: Dict | None = None, global_args: List[str] | None = None, capture_stderr: bool = True, overwrite_output: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Wraps FFmpeg audio filters for processing audio files in a dataset.
This operator applies specified FFmpeg audio filters to the audio files in the dataset. It supports passing custom filter parameters and global arguments to the FFmpeg command line. The processed audio files are saved to a specified directory or the same directory as the input files if no save directory is provided. The DJ_PRODUCED_DATA_DIR environment variable can also be used to set the save directory. If no filter name is provided, the audio files remain unmodified. The operator updates the source file paths in the dataset after processing.
- __init__(filter_name: str | None = None, filter_kwargs: Dict | None = None, global_args: List[str] | None = None, capture_stderr: bool = True, overwrite_output: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
filter_name – ffmpeg audio filter name.
filter_kwargs – keyword-arguments passed to ffmpeg filter.
global_args – list-arguments passed to ffmpeg command-line.
capture_stderr – whether to capture stderr.
overwrite_output – whether to overwrite output file.
save_dir – The directory where generated audio files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
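A hedged usage sketch, assuming the same batched Mapper interface and 'audios' sample key as above; the 'volume' filter and its keyword argument are ordinary FFmpeg options chosen for illustration.

```python
from data_juicer.ops.mapper import AudioFFmpegWrappedMapper

# Apply FFmpeg's "volume" filter to every audio file; filter_kwargs are passed
# straight through to the filter.
op = AudioFFmpegWrappedMapper(
    filter_name='volume',
    filter_kwargs={'volume': 0.8},   # reduce gain to 0.8x
    save_dir='./filtered_audios',
)

samples = {'audios': [['example.wav']]}   # assumed sample layout
processed = op.process_batched(samples)   # assumed batched Mapper interface
```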
- class data_juicer.ops.mapper.CalibrateQAMapper(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, reference_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Calibrates question-answer pairs based on reference text using an API model.
This operator uses a specified API model to calibrate question-answer pairs, making them more detailed and accurate. It constructs the input prompt by combining the reference text and the question-answer pair, then sends it to the API for calibration. The output is parsed to extract the calibrated question and answer. The operator retries the API call and parsing up to a specified number of times in case of errors. The default system prompt, input templates, and output pattern can be customized. The operator supports additional parameters for model initialization and sampling.
- DEFAULT_INPUT_TEMPLATE = '{reference}\n{qa_pair}'¶
- DEFAULT_OUTPUT_PATTERN = '【问题】\\s*(.*?)\\s*【回答】\\s*(.*)'¶
- DEFAULT_QA_PAIR_TEMPLATE = '【问题】\n{}\n【回答】\n{}'¶
- DEFAULT_REFERENCE_TEMPLATE = '【参考信息】\n{}'¶
- DEFAULT_SYSTEM_PROMPT = '请根据提供的【参考信息】对【问题】和【回答】进行校准,使其更加详细、准确。\n按照以下格式输出:\n【问题】\n校准后的问题\n【回答】\n校准后的回答'¶
- __init__(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, reference_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the calibration task.
input_template – Template for building the model input.
reference_template – Template for formatting the reference text.
qa_pair_template – Template for formatting question-answer pairs.
output_pattern – Regular expression for parsing model output.
try_num – The number of retry attempts when there is an API call error or output parsing error.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.
kwargs – Extra keyword arguments.
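A minimal configuration sketch. The endpoint URL is a placeholder, and the sample keys ('text' for the reference, 'query'/'response' for the QA pair) plus the process_single call are assumptions about the standard Mapper interface. CalibrateQueryMapper and CalibrateResponseMapper below are configured the same way; they only swap in a different default system prompt.

```python
from data_juicer.ops.mapper import CalibrateQAMapper

op = CalibrateQAMapper(
    api_model='gpt-4o',
    api_endpoint='https://api.example.com/v1/chat/completions',  # placeholder endpoint
    try_num=3,
    sampling_params={'temperature': 0.9, 'top_p': 0.95},
)

# Assumed sample layout: reference text under 'text', QA pair under 'query'/'response'.
sample = {
    'text': 'Reference passage ...',
    'query': 'Original question',
    'response': 'Original answer',
}
calibrated = op.process_single(sample)
```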
- class data_juicer.ops.mapper.CalibrateQueryMapper(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, reference_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
CalibrateQAMapper
Calibrate query in question-answer pairs based on reference text.
This operator adjusts the query (question) in a question-answer pair to be more detailed and accurate, while ensuring it can still be answered by the original answer. It uses a reference text to inform the calibration process. The calibration is guided by a system prompt, which instructs the model to refine the question without adding extraneous information. The output is parsed to extract the calibrated query, with any additional content removed.
- DEFAULT_SYSTEM_PROMPT = '请根据提供的【参考信息】对问答对中的【问题】进行校准, 使其更加详细、准确,且仍可以由原答案回答。只输出校准后的问题,不要输出多余内容。'¶
- class data_juicer.ops.mapper.CalibrateResponseMapper(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, reference_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
CalibrateQAMapper
Calibrate response in question-answer pairs based on reference text.
This mapper calibrates the ‘response’ part of a question-answer pair by using a reference text. It aims to make the response more detailed and accurate while ensuring it still answers the original question. The calibration process uses a default system prompt, which can be customized. The output is stripped of any leading or trailing whitespace.
- DEFAULT_SYSTEM_PROMPT = '请根据提供的【参考信息】对问答对中的【回答】进行校准, 使其更加详细、准确,且仍可以回答原问题。只输出校准后的回答,不要输出多余内容。'¶
- class data_juicer.ops.mapper.ChineseConvertMapper(mode: str = 's2t', *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to convert Chinese text between Traditional, Simplified, and Japanese Kanji.
This operator converts Chinese text based on the specified mode. It supports conversions between Simplified Chinese, Traditional Chinese (including Taiwan and Hong Kong variants), and Japanese Kanji. The conversion is performed using a pre-defined set of rules. The available modes include ‘s2t’ for Simplified to Traditional, ‘t2s’ for Traditional to Simplified, and other specific variants like ‘s2tw’, ‘tw2s’, ‘s2hk’, ‘hk2s’, ‘s2twp’, ‘tw2sp’, ‘t2tw’, ‘tw2t’, ‘hk2t’, ‘t2hk’, ‘t2jp’, and ‘jp2t’. The operator processes text in batches and applies the conversion to the specified text key in the samples.
- __init__(mode: str = 's2t', *args, **kwargs)[source]¶
Initialization method.
- Parameters:
mode –
Choose the mode to convert Chinese:
s2t: Simplified Chinese to Traditional Chinese,
t2s: Traditional Chinese to Simplified Chinese,
s2tw: Simplified Chinese to Traditional Chinese (Taiwan Standard),
tw2s: Traditional Chinese (Taiwan Standard) to Simplified Chinese,
s2hk: Simplified Chinese to Traditional Chinese (Hong Kong variant),
hk2s: Traditional Chinese (Hong Kong variant) to Simplified Chinese,
s2twp: Simplified Chinese to Traditional Chinese (Taiwan Standard) with Taiwanese idiom,
tw2sp: Traditional Chinese (Taiwan Standard) to Simplified Chinese with Mainland Chinese idiom,
t2tw: Traditional Chinese to Traditional Chinese (Taiwan Standard),
tw2t: Traditional Chinese (Taiwan Standard) to Traditional Chinese,
hk2t: Traditional Chinese (Hong Kong variant) to Traditional Chinese,
t2hk: Traditional Chinese to Traditional Chinese (Hong Kong variant),
t2jp: Traditional Chinese Characters (Kyūjitai) to New Japanese Kanji (Shinjitai),
jp2t: New Japanese Kanji (Shinjitai) to Traditional Chinese Characters (Kyūjitai),
args – extra args
kwargs – extra args
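A short sketch, assuming text is stored under the 'text' key and the batched Mapper interface applies:

```python
from data_juicer.ops.mapper import ChineseConvertMapper

# Convert Simplified Chinese to Traditional Chinese.
op = ChineseConvertMapper(mode='s2t')

samples = {'text': ['汉字转换示例']}          # assumed 'text' sample key
processed = op.process_batched(samples)       # assumed batched Mapper interface
# processed['text'] -> ['漢字轉換示例']
```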
- class data_juicer.ops.mapper.CleanCopyrightMapper(*args, **kwargs)[source]¶
Bases:
Mapper
Cleans copyright comments at the beginning of text samples.
This operator removes copyright comments from the start of text samples. It identifies and strips multiline comments that contain the word “copyright” using a regular expression. It also greedily removes lines starting with comment markers like //, #, or -- at the beginning of the text, as these are often part of copyright headers. The operator processes each sample individually but can handle batches for efficiency.
- class data_juicer.ops.mapper.CleanEmailMapper(pattern: str | None = None, repl: str = '', *args, **kwargs)[source]¶
Bases:
Mapper
Cleans email addresses from text samples using a regular expression.
This operator removes or replaces email addresses in the text based on a regular expression pattern. By default, it uses a standard pattern to match email addresses, but a custom pattern can be provided. The matched email addresses are replaced with a specified replacement string, which defaults to an empty string. The operation is applied to each text sample in the batch. If no email address is found in a sample, it remains unchanged.
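A sketch of replacing matches with a placeholder instead of deleting them; the 'text' key and the process_batched call are assumptions about the standard Mapper interface. CleanIpMapper and CleanLinksMapper below follow the same pattern/repl convention.

```python
from data_juicer.ops.mapper import CleanEmailMapper

# Replace matched email addresses with a placeholder instead of removing them.
op = CleanEmailMapper(repl='[EMAIL]')

samples = {'text': ['Contact us at support@example.com for help.']}
processed = op.process_batched(samples)
# processed['text'] -> ['Contact us at [EMAIL] for help.']
```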
- class data_juicer.ops.mapper.CleanHtmlMapper(*args, **kwargs)[source]¶
Bases:
Mapper
Cleans HTML code from text samples, converting HTML to plain text.
This operator processes text samples by removing HTML tags and converting HTML elements to a more readable format. Specifically, it replaces <li> and <ol> tags with newline and bullet points. The Selectolax HTML parser is used to extract the text content from the HTML. This operation is performed in a batched manner, making it efficient for large datasets.
- class data_juicer.ops.mapper.CleanIpMapper(pattern: str | None = None, repl: str = '', *args, **kwargs)[source]¶
Bases:
Mapper
Cleans IPv4 and IPv6 addresses from text samples.
This operator removes or replaces IPv4 and IPv6 addresses in the text. It uses a regular expression to identify and clean the IP addresses. By default, it replaces the IP addresses with an empty string, effectively removing them. The operator can be configured with a custom pattern and replacement string. If no pattern is provided, a default pattern for both IPv4 and IPv6 addresses is used. The operator processes samples in batches.
- Uses a regular expression to find and clean IP addresses.
- Replaces found IP addresses with a specified replacement string.
- The default replacement string is an empty string, which removes the IP addresses.
- Can use a custom regular expression pattern if provided.
- Processes samples in batches for efficiency.
- class data_juicer.ops.mapper.CleanLinksMapper(pattern: str | None = None, repl: str = '', *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to clean links like http/https/ftp in text samples.
This operator removes or replaces URLs and other web links in the text. It uses a regular expression pattern to identify and remove links. By default, it replaces the identified links with an empty string, effectively removing them. The operator can be customized with a different pattern and replacement string. It processes samples in batches and modifies the text in place. If no links are found in a sample, it is left unchanged.
- class data_juicer.ops.mapper.DialogIntentDetectionMapper(api_model: str = 'gpt-4o', intent_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_intent_labels', analysis_key: str = 'dialog_intent_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Generates user’s intent labels in a dialog by analyzing the history, query, and response.
This operator processes a dialog to identify and label the user’s intent. It uses a predefined system prompt and templates to build input prompts for an API call. The API model (e.g., GPT-4) is used to analyze the dialog and generate intent labels and analysis. The results are stored in the meta field under ‘dialog_intent_labels’ and ‘dialog_intent_labels_analysis’. The operator supports customizing the system prompt, templates, and patterns for parsing the API response. If the intent candidates are provided, they are included in the input prompt. The operator retries the API call up to a specified number of times if there are errors.
- DEFAULT_ANALYSIS_PATTERN = '意图分析:(.*?)\n'¶
- DEFAULT_ANALYSIS_TEMPLATE = '意图分析:{analysis}\n'¶
- DEFAULT_CANDIDATES_TEMPLATE = '备选意图类别:[{candidate_str}]'¶
- DEFAULT_LABELS_PATTERN = '意图类别:(.*?)($|\n)'¶
- DEFAULT_LABELS_TEMPLATE = '意图类别:{labels}\n'¶
- DEFAULT_QUERY_TEMPLATE = '用户:{query}\n'¶
- DEFAULT_RESPONSE_TEMPLATE = 'LLM:{response}\n'¶
- DEFAULT_SYSTEM_PROMPT = '请判断用户和LLM多轮对话中用户的意图。\n要求:\n- 需要先进行分析,然后列出用户所具有的意图,下面是一个样例,请模仿样例格式输出。\n用户:你好,我最近对人工智能很感兴趣,能给我讲讲什么是机器学习吗?\n意图分析:用户在请求信息,希望了解有关机器学习的基础知识。\n意图类别:信息查找\nLLM:你好!当然可以。机器学习是一种人工智能方法,允许计算机通过数据自动改进和学习。\n用户:听起来很有趣,有没有推荐的入门书籍或资料?\n意图分析:用户在请求建议,希望获取关于机器学习的入门资源。\n意图类别:请求建议\nLLM:有很多不错的入门书籍和资源。一本常被推荐的书是《Python机器学习实践》(Python Machine Learning),它涵盖了基础知识和一些实际案例。此外,您还可以参考Coursera或edX上的在线课程,这些课程提供了系统的学习路径。\n用户:谢谢你的建议!我还想知道,学习机器学习需要什么样的数学基础?\n意图分析:用户在寻求信息,希望了解学习机器学习所需的前提条件,特别是在数学方面。\n意图类别:信息查找\nLLM:学习机器学习通常需要一定的数学基础,特别是线性代数、概率论和统计学。这些数学领域帮助理解算法的工作原理和数据模式分析。如果您对这些主题不太熟悉,建议先从相关基础书籍或在线资源开始学习。\n用户:明白了,我会先补习这些基础知识。再次感谢你的帮助!\n意图分析:用户表达感谢,并表示计划付诸行动来补充所需的基础知识。\n意图类别:其他'¶
- __init__(api_model: str = 'gpt-4o', intent_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_intent_labels', analysis_key: str = 'dialog_intent_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
intent_candidates – Candidate intent labels for the output. If None, open-domain intent labels are used.
max_round – The maximum number of dialog rounds used to build the prompt.
labels_key – The key name in the meta field to store the output labels. It is ‘dialog_intent_labels’ by default.
analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_intent_labels_analysis’ by default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the task.
query_template – Template for query part to build the input prompt.
response_template – Template for response part to build the input prompt.
candidate_template – Template for intent candidates to build the input prompt.
analysis_template – Template for analysis part to build the input prompt.
labels_template – Template for labels to build the input prompt.
analysis_pattern – Pattern to parse the return intent analysis.
labels_pattern – Pattern to parse the return intent labels.
try_num – The number of retry attempts when there is an API call error or output parsing error.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.
kwargs – Extra keyword arguments.
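A hedged configuration sketch. The 'history' key for prior rounds and the process_single call are assumptions about the default dialog sample layout; the candidate labels are illustrative.

```python
from data_juicer.ops.mapper import DialogIntentDetectionMapper

op = DialogIntentDetectionMapper(
    api_model='gpt-4o',
    intent_candidates=['信息查找', '请求建议', '其他'],  # constrain output to these labels
    max_round=10,
    try_num=3,
)

# Assumed dialog layout: earlier rounds under 'history', current turn under 'query'/'response'.
sample = {
    'history': [('你好,能讲讲机器学习吗?', '当然可以,机器学习是……')],
    'query': '有入门书籍推荐吗?',
    'response': '可以看看《Python机器学习实践》。',
}
result = op.process_single(sample)
# Labels and analyses are written to the meta field under 'dialog_intent_labels'
# and 'dialog_intent_labels_analysis'.
```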
- class data_juicer.ops.mapper.DialogSentimentDetectionMapper(api_model: str = 'gpt-4o', sentiment_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_sentiment_labels', analysis_key: str = 'dialog_sentiment_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Generates sentiment labels and analysis for user queries in a dialog.
This operator processes a dialog to detect and label the sentiments expressed by the user. It uses the provided history, query, and response keys to construct prompts for an API call. The API returns sentiment analysis and labels, which are then parsed and stored in the sample’s metadata under the ‘dialog_sentiment_labels’ and ‘dialog_sentiment_labels_analysis’ keys. The operator supports custom templates and patterns for prompt construction and output parsing. If no sentiment candidates are provided, it uses open-domain sentiment labels. The operator retries the API call up to a specified number of times in case of errors.
- DEFAULT_ANALYSIS_PATTERN = '情感分析:(.*?)\n'¶
- DEFAULT_ANALYSIS_TEMPLATE = '情感分析:{analysis}\n'¶
- DEFAULT_CANDIDATES_TEMPLATE = '备选情感类别:[{candidate_str}]'¶
- DEFAULT_LABELS_PATTERN = '情感类别:(.*?)($|\n)'¶
- DEFAULT_LABELS_TEMPLATE = '情感类别:{labels}\n'¶
- DEFAULT_QUERY_TEMPLATE = '用户:{query}\n'¶
- DEFAULT_RESPONSE_TEMPLATE = 'LLM:{response}\n'¶
- DEFAULT_SYSTEM_PROMPT = '请判断用户和LLM多轮对话中用户所具有的情绪。\n要求:\n- 需要先进行分析,然后罗列用户所具有的情绪,下面是一个样例,请模仿样例格式输出。\n用户:最近工作压力好大,我觉得整个人都快被压垮了。\n情感分析:用户的言语中透露出明显的压力和疲惫感,可能还夹杂着一些无助和焦虑。\n情感类别:压力、疲惫、无助、焦虑\nLLM:听起来你真的承受了很多,面临这种情况确实不容易。有没有考虑过找一些放松的方式,比如听音乐或者散步来减轻压力呢?\n用户:试过了,但是好像没什么效果,每天的事情都堆积如山。\n情感分析:用户感到无力解决现状,有挫败感,并且对尝试放松的方式失去信心。\n情感类别:无力、挫败\nLLM:我理解你的感受,有时候压力积累到一定程度确实让人难以承受。或许你可以尝试规划一下时间,把任务分成小块来完成,这样可能会减少一些压力感。\n用户:这个主意不错,我会试着让自己更有条理一些,谢谢你的建议。\n情感分析:用户对建议表现出认同和感激,同时展现出试图积极面对问题的态度。\n情感类别:认同、感激、积极\nLLM:不用谢,我很高兴能帮到你。记得给自己一些时间去适应新的计划,有任何需要随时可以跟我说哦!\n'¶
- __init__(api_model: str = 'gpt-4o', sentiment_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_sentiment_labels', analysis_key: str = 'dialog_sentiment_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
sentiment_candidates – Candidate sentiment labels for the output. If None, open-domain sentiment labels are used.
max_round – The maximum number of dialog rounds used to build the prompt.
labels_key – The key name in the meta field to store the output labels. It is ‘dialog_sentiment_labels’ by default.
analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_sentiment_labels_analysis’ by default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the task.
query_template – Template for query part to build the input prompt.
response_template – Template for response part to build the input prompt.
candidate_template – Template for sentiment candidates to build the input prompt.
analysis_template – Template for analysis part to build the input prompt.
labels_template – Template for labels part to build the input prompt.
analysis_pattern – Pattern to parse the return sentiment analysis.
labels_pattern – Pattern to parse the return sentiment labels.
try_num – The number of retry attempts when there is an API call error or output parsing error.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.
kwargs – Extra keyword arguments.
- class data_juicer.ops.mapper.DialogSentimentIntensityMapper(api_model: str = 'gpt-4o', max_round: Annotated[int, Ge(ge=0)] = 10, *, intensities_key: str = 'dialog_sentiment_intensity', analysis_key: str = 'dialog_sentiment_intensity_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, analysis_template: str | None = None, intensity_template: str | None = None, analysis_pattern: str | None = None, intensity_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Mapper to predict user’s sentiment intensity in a dialog, ranging from -5 to 5.
This operator analyzes the sentiment of user queries in a dialog and outputs a list of sentiment intensities and corresponding analyses. The sentiment intensity ranges from -5 (extremely negative) to 5 (extremely positive), with 0 indicating a neutral sentiment. The analysis is based on the provided history, query, and response keys. The default system prompt and templates guide the sentiment analysis process. The results are stored in the meta field under ‘dialog_sentiment_intensity’ for intensities and ‘dialog_sentiment_intensity_analysis’ for analyses. The operator uses an API model to generate the sentiment analysis, with configurable retry attempts and sampling parameters.
- DEFAULT_ANALYSIS_PATTERN = '情绪分析:(.*?)\n'¶
- DEFAULT_ANALYSIS_TEMPLATE = '情绪分析:{analysis}\n'¶
- DEFAULT_INTENSITY_PATTERN = '情绪值:(.*?)($|\n)'¶
- DEFAULT_INTENSITY_TEMPLATE = '情绪值:{intensity}\n'¶
- DEFAULT_QUERY_TEMPLATE = '用户:{query}\n'¶
- DEFAULT_RESPONSE_TEMPLATE = 'LLM:{response}\n'¶
- DEFAULT_SYSTEM_PROMPT = '请判断用户和LLM多轮对话中用户的情绪变化。\n要求:\n- 用户情绪值是-5到5之间到整数,-5表示极度负面,5表示极度正面,-5到5之间数值表示情绪从负面逐渐到正面的变化过程,0代表情呈绪中性。\n- 只输出当轮对话的分析,不要继续构造对话。\n- 需要先进行分析,然后确定用户的情绪值,下面是一个样例,请模仿样例格式输出。\n用户:你好,我对可持续发展的定义有点模糊,帮我解释一下?\n情绪分析:刚开始,还没得到LLM回复,用户情绪呈中性。\n情绪值:0\nLLM:当然可以!可持续发展是指在满足当代人的需求的同时,不损害子孙后代满足其自身需求的能力的发展模式。它包括经济发展、社会发展和环境保护三个主要方面。通过合理利用资源和保护环境,我们可以确保未来的世代也能享有健全的生态系统和经济制度。\n用户:谢谢你的解释!那你能告诉我一些普通人可以采取的可持续生活方式吗?\n情绪分析:对回答感到满意,情绪正面。\n情绪值:1\nLLM:当然可以,普通人可以通过减少一次性产品的使用、选择公共交通或拼车、节约用水、以及支持本地和可持续发展的企业等方式来践行可持续生活。此外,关注垃圾分类和多用电子账单也是不错的选择。\n用户:你提到支持本地企业,这一点我很感兴趣。能详细说说为什么这对可持续发展有促进作用吗?\n情绪分析:觉得回答实用且具体,情绪进一步转好。\n情绪值:2\nLLM:呃,我最近发现了一部新电影,讲述了一个关于外星人和地球土著合作保护环境的故事。虽然它是科幻片,但很有启发性,推荐你去看看。\n用户:什么吗,根本是答非所问。\n情绪分析:LLM没有回应问题而是提到无关内容,导致用户情绪直线下降。\n情绪值:-2\nLLM:抱歉刚才的偏题!支持本地企业有助于减少长途运输产生的碳足迹,使供应链更加环保。此外,本地企业也更有可能采用可持续的生产方式,同时促进社区经济的繁荣。\n用户:还行吧,算你能够掰回来。\n情绪分析:问题得到解答,问题偏题得到纠正,情绪稍有好转。\n情绪值:-1\n'¶
- __init__(api_model: str = 'gpt-4o', max_round: Annotated[int, Ge(ge=0)] = 10, *, intensities_key: str = 'dialog_sentiment_intensity', analysis_key: str = 'dialog_sentiment_intensity_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, analysis_template: str | None = None, intensity_template: str | None = None, analysis_pattern: str | None = None, intensity_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
max_round – The maximum number of dialog rounds used to build the prompt.
intensities_key – The key name in the meta field to store the output sentiment intensities. It is ‘dialog_sentiment_intensity’ by default.
analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_sentiment_intensity_analysis’ by default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the task.
query_template – Template for query part to build the input prompt.
response_template – Template for response part to build the input prompt.
analysis_template – Template for analysis part to build the input prompt.
intensity_template – Template for intensity part to build the input prompt.
analysis_pattern – Pattern to parse the return sentiment analysis.
intensity_pattern – Pattern to parse the return sentiment intensity.
try_num – The number of retry attempts when there is an API call error or output parsing error.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.
kwargs – Extra keyword arguments.
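The sentiment-intensity mapper is configured the same way; a minimal sketch under the same sample-layout assumptions as above:

```python
from data_juicer.ops.mapper import DialogSentimentIntensityMapper

op = DialogSentimentIntensityMapper(api_model='gpt-4o', max_round=10)

sample = {
    'history': [],                        # assumed history key for earlier rounds
    'query': '最近工作压力好大。',
    'response': '听起来你承受了很多,要不要试试放松的方式?',
}
result = op.process_single(sample)        # assumed single-sample Mapper interface
# Intensities in [-5, 5] land in the meta field under 'dialog_sentiment_intensity';
# the matching analyses under 'dialog_sentiment_intensity_analysis'.
```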
- class data_juicer.ops.mapper.DialogTopicDetectionMapper(api_model: str = 'gpt-4o', topic_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_topic_labels', analysis_key: str = 'dialog_topic_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Generates user’s topic labels and analysis in a dialog.
This operator processes a dialog to detect and label the topics discussed by the user. It takes input from history_key, query_key, and response_key and outputs lists of labels and analysis for each query in the dialog. The operator uses a predefined system prompt and templates to build the input prompt for the API call. It supports customizing the system prompt, templates, and patterns for parsing the API response. The results are stored in the meta field under the keys specified by labels_key and analysis_key. If these keys already exist in the meta field, the operator skips processing. The operator retries the API call up to try_num times in case of errors.
- DEFAULT_ANALYSIS_PATTERN = '话题分析:(.*?)\n'¶
- DEFAULT_ANALYSIS_TEMPLATE = '话题分析:{analysis}\n'¶
- DEFAULT_CANDIDATES_TEMPLATE = '备选话题类别:[{candidate_str}]'¶
- DEFAULT_LABELS_PATTERN = '话题类别:(.*?)($|\n)'¶
- DEFAULT_LABELS_TEMPLATE = '话题类别:{labels}\n'¶
- DEFAULT_QUERY_TEMPLATE = '用户:{query}\n'¶
- DEFAULT_RESPONSE_TEMPLATE = 'LLM:{response}\n'¶
- DEFAULT_SYSTEM_PROMPT = '请判断用户和LLM多轮对话中用户所讨论的话题。\n要求:\n- 针对用户的每个query,需要先进行分析,然后列出用户正在讨论的话题,下面是一个样例,请模仿样例格式输出。\n用户:你好,今天我们来聊聊秦始皇吧。\n话题分析:用户提到秦始皇,这是中国历史上第一位皇帝。\n话题类别:历史\nLLM:当然可以,秦始皇是中国历史上第一个统一全国的皇帝,他在公元前221年建立了秦朝,并采取了一系列重要的改革措施,如统一文字、度量衡和货币等。\n用户:秦始皇修建的长城和现在的长城有什么区别?\n话题分析:用户提到秦始皇修建的长城,并将其与现代长城进行比较,涉及建筑历史和地理位置。\n话题类别:历史LLM:秦始皇时期修建的长城主要是为了抵御北方游牧民族的入侵,它的规模和修建技术相对较为简陋。现代人所看到的长城大部分是明朝时期修建和扩建的,明长城不仅规模更大、结构更坚固,而且保存得比较完好。\n用户:有意思,那么长城的具体位置在哪些省份呢?\n话题分析:用户询问长城的具体位置,涉及到地理知识。\n话题类别:地理\nLLM:长城横跨中国北方多个省份,主要包括河北、山西、内蒙古、宁夏、陕西、甘肃和北京等。每一段长城都建在关键的战略位置,以便最大限度地发挥其防御作用。\n'¶
- __init__(api_model: str = 'gpt-4o', topic_candidates: List[str] | None = None, max_round: Annotated[int, Ge(ge=0)] = 10, *, labels_key: str = 'dialog_topic_labels', analysis_key: str = 'dialog_topic_labels_analysis', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, query_template: str | None = None, response_template: str | None = None, candidate_template: str | None = None, analysis_template: str | None = None, labels_template: str | None = None, analysis_pattern: str | None = None, labels_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
topic_candidates – Candidate topic labels for the output. If None, open-domain topic labels are used.
max_round – The maximum number of dialog rounds used to build the prompt.
labels_key – The key name in the meta field to store the output labels. It is ‘dialog_topic_labels’ by default.
analysis_key – The key name in the meta field to store the corresponding analysis. It is ‘dialog_topic_labels_analysis’ by default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the task.
query_template – Template for query part to build the input prompt.
response_template – Template for response part to build the input prompt.
candidate_template – Template for topic candidates to build the input prompt.
analysis_template – Template for analysis part to build the input prompt.
labels_template – Template for labels part to build the input prompt.
analysis_pattern – Pattern to parse the return topic analysis.
labels_pattern – Pattern to parse the return topic labels.
try_num – The number of retry attempts when there is an API call error or output parsing error.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.
kwargs – Extra keyword arguments.
- class data_juicer.ops.mapper.Difference_Area_Generator_Mapper(image_pair_similarity_filter_args: Dict | None = {}, image_segment_mapper_args: Dict | None = {}, image_text_matching_filter_args: Dict | None = {}, *args, **kwargs)[source]¶
Bases:
Mapper
Generates and filters bounding boxes for image pairs based on similarity, segmentation, and text matching.
This operator processes image pairs to identify and filter regions with significant differences. It uses a sequence of operations:
- Filters out image pairs with large differences.
- Segments the images to identify potential objects.
- Crops sub-images based on bounding boxes.
- Determines whether the sub-images contain valid objects using image-text matching.
- Filters out sub-images that are too similar.
- Removes overlapping bounding boxes.
- Uses Hugging Face models for similarity and text matching, and FastSAM for segmentation.
Caches intermediate results in DATA_JUICER_ASSETS_CACHE.
Returns the filtered bounding boxes in the MetaKeys.bbox_tag field.
- __init__(image_pair_similarity_filter_args: Dict | None = {}, image_segment_mapper_args: Dict | None = {}, image_text_matching_filter_args: Dict | None = {}, *args, **kwargs)[source]¶
Initialization.
- Parameters:
image_pair_similarity_filter_args – Arguments for image pair similarity filter. Controls the similarity filtering between image pairs. Default empty dict will use fixed values: min_score_1=0.1, max_score_1=1.0, min_score_2=0.1, max_score_2=1.0, hf_clip=”openai/clip-vit-base-patch32”, num_proc=1.
image_segment_mapper_args – Arguments for image segmentation mapper. Controls the image segmentation process. Default empty dict will use fixed values: imgsz=1024, conf=0.05, iou=0.5, model_path=”FastSAM-x.pt”.
image_text_matching_filter_args – Arguments for image-text matching filter. Controls the matching between cropped image regions and text descriptions. Default empty dict will use fixed values: min_score=0.1, max_score=1.0, hf_blip=”Salesforce/blip-itm-base-coco”, num_proc=1.
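A construction-only sketch; each argument dict overrides the documented defaults for the corresponding fused operation, and an empty dict falls back to the fixed values listed above.

```python
from data_juicer.ops.mapper import Difference_Area_Generator_Mapper

op = Difference_Area_Generator_Mapper(
    image_pair_similarity_filter_args={'hf_clip': 'openai/clip-vit-base-patch32'},
    image_segment_mapper_args={'imgsz': 1024, 'conf': 0.05, 'iou': 0.5},
    image_text_matching_filter_args={'hf_blip': 'Salesforce/blip-itm-base-coco'},
)
# Filtered bounding boxes are returned in the MetaKeys.bbox_tag field.
```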
- class data_juicer.ops.mapper.Difference_Caption_Generator_Mapper(mllm_mapper_args: Dict | None = {}, image_text_matching_filter_args: Dict | None = {}, text_pair_similarity_filter_args: Dict | None = {}, *args, **kwargs)[source]¶
Bases:
Mapper
Generates difference captions for bounding box regions in two images.
This operator processes pairs of images and generates captions for the differences in their bounding box regions. It uses a multi-step process:
- Describes the content of each bounding box region using a Hugging Face model.
- Crops the bounding box regions from both images.
- Checks whether the cropped regions match the generated captions.
- Determines whether there are differences between the two captions.
- Marks the difference area with a red box.
- Generates difference captions for the marked areas.
- The key metric is the similarity score between the captions, computed using a CLIP model.
If no valid bounding boxes or differences are found, it returns empty captions and zeroed bounding boxes.
Uses ‘cuda’ as the accelerator if any of the fused operations support it.
Caches temporary images during processing and clears them afterward.
- __init__(mllm_mapper_args: Dict | None = {}, image_text_matching_filter_args: Dict | None = {}, text_pair_similarity_filter_args: Dict | None = {}, *args, **kwargs)[source]¶
Initialization.
- Parameters:
mllm_mapper_args – Arguments for multimodal language model mapper. Controls the generation of captions for bounding box regions. Default empty dict will use fixed values: max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, hf_model=”llava-hf/llava-v1.6-vicuna-7b-hf”.
image_text_matching_filter_args – Arguments for image-text matching filter. Controls the matching between cropped regions and generated captions. Default empty dict will use fixed values: min_score=0.1, max_score=1.0, hf_blip=”Salesforce/blip-itm-base-coco”, num_proc=1.
text_pair_similarity_filter_args – Arguments for text pair similarity filter. Controls the similarity comparison between caption pairs. Default empty dict will use fixed values: min_score=0.1, max_score=1.0, hf_clip=”openai/clip-vit-base-patch32”, text_key_second=”target_text”, num_proc=1.
- class data_juicer.ops.mapper.DownloadFileMapper(download_field: str = None, save_dir: str = None, save_field: str = None, resume_download: bool = False, timeout: int = 30, max_concurrent: int = 10, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to download URL files to local files or load them into memory.
This operator downloads files from URLs and can either save them to a specified directory or load the contents directly into memory. It supports downloading multiple files concurrently and can resume downloads if the resume_download flag is set. The operator processes nested lists of URLs, flattening them for batch processing and then reconstructing the original structure in the output. If both save_dir and save_field are not specified, it defaults to saving the content under the key image_bytes. The operator logs any failed download attempts and provides error messages for troubleshooting.
- __init__(download_field: str = None, save_dir: str = None, save_field: str = None, resume_download: bool = False, timeout: int = 30, max_concurrent: int = 10, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
save_dir – The directory to save downloaded files.
download_field – The field name containing the URLs to download.
save_field – The field name under which to save the downloaded file content.
resume_download – Whether to resume downloads. If True, skip samples whose target files already exist.
timeout – Timeout for download.
max_concurrent – Maximum concurrent downloads.
args – extra args
kwargs – extra args
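A sketch of downloading URL lists into a local directory. The 'image_urls' field name is hypothetical, and the process_batched call is an assumption about the batched Mapper interface.

```python
from data_juicer.ops.mapper import DownloadFileMapper

op = DownloadFileMapper(
    download_field='image_urls',   # hypothetical field holding the URLs
    save_dir='./downloads',
    resume_download=True,          # skip samples whose files already exist
    timeout=30,
    max_concurrent=5,
)

samples = {'image_urls': [['https://example.com/a.png', 'https://example.com/b.png']]}
processed = op.process_batched(samples)
```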
- class data_juicer.ops.mapper.ExpandMacroMapper(*args, **kwargs)[source]¶
Bases:
Mapper
Expands macro definitions in the document body of LaTeX samples.
This operator processes LaTeX documents to expand user-defined macros in the text. It supports newcommand and def macros without arguments. Macros are identified and expanded in the text, ensuring they are not part of longer alphanumeric words. The operator currently does not support macros with arguments. The processed text is updated in the samples.
- class data_juicer.ops.mapper.ExtractEntityAttributeMapper(api_model: str = 'gpt-4o', query_entities: List[str] = [], query_attributes: List[str] = [], *, entity_key: str = 'main_entities', attribute_key: str = 'attributes', attribute_desc_key: str = 'attribute_descriptions', support_text_key: str = 'attribute_support_texts', api_endpoint: str | None = None, response_path: str | None = None, system_prompt_template: str | None = None, input_template: str | None = None, attr_pattern_template: str | None = None, demo_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Extracts attributes for given entities from the text and stores them in the sample’s metadata.
This operator uses an API model to extract specified attributes for given entities from the input text. It constructs prompts based on provided templates and parses the model’s output to extract attribute descriptions and supporting text. The extracted data is stored in the sample’s metadata under the specified keys. If the required metadata fields already exist, the operator skips processing for that sample. The operator retries the API call and parsing up to a specified number of times in case of errors. The default system prompt, input template, and parsing patterns are used if not provided.
- DEFAULT_ATTR_PATTERN_TEMPLATE = '\\#\\#\\s*{attribute}:\\s*(.*?)(?=\\#\\#\\#|\\Z)'¶
- DEFAULT_DEMON_PATTERN = '\\#\\#\\#\\s*代表性示例摘录(\\d+):\\s*```\\s*(.*?)```\\s*(?=\\#\\#\\#|\\Z)'¶
- DEFAULT_INPUT_TEMPLATE = '# 文本\n```\n{text}\n```\n'¶
- DEFAULT_SYSTEM_PROMPT_TEMPLATE = '给定一段文本,从文本中总结{entity}的{attribute},并且从原文摘录最能说明该{attribute}的代表性示例。\n要求:\n- 摘录的示例应该简短。\n- 遵循如下的回复格式:\n# {entity}\n## {attribute}:\n...\n### 代表性示例摘录1:\n```\n...\n```\n### 代表性示例摘录2:\n```\n...\n```\n...\n'¶
- __init__(api_model: str = 'gpt-4o', query_entities: List[str] = [], query_attributes: List[str] = [], *, entity_key: str = 'main_entities', attribute_key: str = 'attributes', attribute_desc_key: str = 'attribute_descriptions', support_text_key: str = 'attribute_support_texts', api_endpoint: str | None = None, response_path: str | None = None, system_prompt_template: str | None = None, input_template: str | None = None, attr_pattern_template: str | None = None, demo_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
query_entities – Entity list to be queried.
query_attributes – Attribute list to be queried.
entity_key – The key name in the meta field to store the given main entities for attribute extraction. It’s “main_entities” by default.
attribute_key – The key name in the meta field to store the given attributes to be extracted. It’s “attributes” by default.
attribute_desc_key – The key name in the meta field to store the extracted attribute descriptions. It’s “attribute_descriptions” by default.
support_text_key – The key name in the meta field to store the attribute support texts extracted from the raw text. It’s “attribute_support_texts” by default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt_template – System prompt template for the task. Need to be specified by given entity and attribute.
input_template – Template for building the model input.
attr_pattern_template – Pattern for parsing the attribute from output. Need to be specified by given attribute.
demo_pattern – Pattern for parsing the demonstration from output to support the attribute.
try_num – The number of retry attempts when there is an API call error or output parsing error.
drop_text – Whether to drop the text in the output.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.
kwargs – Extra keyword arguments.
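A minimal sketch; the entities, attributes, and sample text are illustrative, and the process_single call is an assumption about the standard Mapper interface.

```python
from data_juicer.ops.mapper import ExtractEntityAttributeMapper

op = ExtractEntityAttributeMapper(
    api_model='gpt-4o',
    query_entities=['李白'],
    query_attributes=['性格'],
    drop_text=False,
)

sample = {'text': '李白,字太白,号青莲居士……'}
result = op.process_single(sample)
# Extracted descriptions and supporting excerpts are stored in the meta field under
# 'attribute_descriptions' and 'attribute_support_texts'.
```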
- class data_juicer.ops.mapper.ExtractEntityRelationMapper(api_model: str = 'gpt-4o', entity_types: List[str] = None, *, entity_key: str = 'entity', relation_key: str = 'relation', api_endpoint: str | None = None, response_path: str | None = None, prompt_template: str | None = None, tuple_delimiter: str | None = None, record_delimiter: str | None = None, completion_delimiter: str | None = None, max_gleaning: Annotated[int, Ge(ge=0)] = 1, continue_prompt: str | None = None, if_loop_prompt: str | None = None, entity_pattern: str | None = None, relation_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Extracts entities and relations from text to build a knowledge graph.
- Identifies entities based on specified types and extracts their names, types, and descriptions.
- Identifies relationships between the entities, including source and target entities, relationship descriptions, keywords, and strength scores.
- Uses a Hugging Face tokenizer and a predefined prompt template to guide the extraction process.
- Outputs entities and relations in a structured format, using delimiters for separation.
- Caches the results in the sample’s metadata under the keys ‘entity’ and ‘relation’.
- Supports multiple retries and gleaning to ensure comprehensive extraction.
- The default entity types include ‘organization’, ‘person’, ‘geo’, and ‘event’.
- DEFAULT_COMPLETION_DELIMITER = '<|COMPLETE|>'¶
- DEFAULT_CONTINUE_PROMPT = 'MANY entities were missed in the last extraction. Add them below using the same format:\n'¶
- DEFAULT_ENTITY_PATTERN = '\\("entity"(.*?)\\)'¶
- DEFAULT_ENTITY_TYPES = ['organization', 'person', 'geo', 'event']¶
- DEFAULT_IF_LOOP_PROMPT = 'It appears some entities may have still been missed. Answer YES | NO if there are still entities that need to be added.\n'¶
- DEFAULT_PROMPT_TEMPLATE = '-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify all entities. For each identified entity, extract the following information:\n- entity_name: Name of the entity\n- entity_type: One of the following types: [{entity_types}]\n- entity_description: Comprehensive description of the entity\'s attributes and activities\nFormat each entity as ("entity"{tuple_delimiter}<entity_name>{tuple_delimiter}<entity_type>{tuple_delimiter}<entity_description>\n\n2. From the entities identified in step 1, identify all pairs of (source_entity, target_entity) that are *clearly related* to each other.\nFor each pair of related entities, extract the following information:\n- source_entity: name of the source entity, as identified in step 1\n- target_entity: name of the target entity, as identified in step 1\n- relationship_description: explanation as to why you think the source entity and the target entity are related to each other\n- relationship_strength: a numeric score indicating strength of the relationship between the source entity and target entity\n- relationship_keywords: one or more high-level key words that summarize the overarching nature of the relationship, focusing on concepts or themes rather than specific details\nFormat each relationship as ("relationship"{tuple_delimiter}<source_entity>{tuple_delimiter}<target_entity>{tuple_delimiter}<relationship_description>{tuple_delimiter}<relationship_keywords>{tuple_delimiter}<relationship_strength>)\n\n3. Return output in the language of the given text as a single list of all the entities and relationships identified in steps 1 and 2. Use **{record_delimiter}** as the list delimiter.\n\n4. When finished, output {completion_delimiter}\n\n######################\n-Examples-\n######################\nExample 1:\n\nEntity_types: [person, technology, mission, organization, location]\nText:\n```\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor\'s authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan\'s shared commitment to discovery was an unspoken rebellion against Cruz\'s narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. “If this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.”\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor\'s, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. 
They had all been brought here by different paths\n```\n################\nOutput:\n("entity"{tuple_delimiter}"Alex"{tuple_delimiter}"person"{tuple_delimiter}"Alex is a character who experiences frustration and is observant of the dynamics among other characters."){record_delimiter}\n("entity"{tuple_delimiter}"Taylor"{tuple_delimiter}"person"{tuple_delimiter}"Taylor is portrayed with authoritarian certainty and shows a moment of reverence towards a device, indicating a change in perspective."){record_delimiter}\n("entity"{tuple_delimiter}"Jordan"{tuple_delimiter}"person"{tuple_delimiter}"Jordan shares a commitment to discovery and has a significant interaction with Taylor regarding a device."){record_delimiter}\n("entity"{tuple_delimiter}"Cruz"{tuple_delimiter}"person"{tuple_delimiter}"Cruz is associated with a vision of control and order, influencing the dynamics among other characters."){record_delimiter}\n("entity"{tuple_delimiter}"The Device"{tuple_delimiter}"technology"{tuple_delimiter}"The Device is central to the story, with potential game-changing implications, and is reversed by Taylor."){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"Taylor"{tuple_delimiter}"Alex is affected by Taylor\'s authoritarian certainty and observes changes in Taylor\'s attitude towards the device."{tuple_delimiter}"power dynamics, perspective shift"{tuple_delimiter}7){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"Jordan"{tuple_delimiter}"Alex and Jordan share a commitment to discovery, which contrasts with Cruz\'s vision."{tuple_delimiter}"shared goals, rebellion"{tuple_delimiter}6){record_delimiter}\n("relationship"{tuple_delimiter}"Taylor"{tuple_delimiter}"Jordan"{tuple_delimiter}"Taylor and Jordan interact directly regarding the device, leading to a moment of mutual respect and an uneasy truce."{tuple_delimiter}"conflict resolution, mutual respect"{tuple_delimiter}8){record_delimiter}\n("relationship"{tuple_delimiter}"Jordan"{tuple_delimiter}"Cruz"{tuple_delimiter}"Jordan\'s commitment to discovery is in rebellion against Cruz\'s vision of control and order."{tuple_delimiter}"ideological conflict, rebellion"{tuple_delimiter}5){record_delimiter}\n("relationship"{tuple_delimiter}"Taylor"{tuple_delimiter}"The Device"{tuple_delimiter}"Taylor shows reverence towards the device, indicating its importance and potential impact."{tuple_delimiter}"reverence, technological significance"{tuple_delimiter}9){record_delimiter}\n#############################\nExample 2:\n\nEntity_types: [人物, 技术, 任务, 组织, 
地点]\nText:\n```\n他们不再是单纯的执行者;他们已成为某个超越星辰与条纹的领域的信息守护者。这一使命的提升不能被规则和既定协议所束缚——它需要一种新的视角,一种新的决心。\n\n随着与华盛顿的通讯在背景中嗡嗡作响,对话中的紧张情绪通过嘟嘟声和静电噪音贯穿始终。团队站立着,一股不祥的气息笼罩着他们。显然,他们在接下来几个小时内做出的决定可能会重新定义人类在宇宙中的位置,或者将他们置于无知和潜在危险之中。\n\n随着与星辰的联系变得更加牢固,小组开始处理逐渐成形的警告,从被动接受者转变为积极参与者。梅瑟后来的直觉占据了上风——团队的任务已经演变,不再仅仅是观察和报告,而是互动和准备。一场蜕变已经开始,而“杜尔塞行动”则以他们大胆的新频率震动,这种基调不是由世俗设定的\n```\n#############\nOutput:\n("entity"{tuple_delimiter}"华盛顿"{tuple_delimiter}"地点"{tuple_delimiter}"华盛顿是正在接收通讯的地方,表明其在决策过程中的重要性。"){record_delimiter}\n("entity"{tuple_delimiter}"杜尔塞行动"{tuple_delimiter}"任务"{tuple_delimiter}"杜尔塞行动被描述为一项已演变为互动和准备的任务,显示出目标和活动的重大转变。"){record_delimiter}\n("entity"{tuple_delimiter}"团队"{tuple_delimiter}"组织"{tuple_delimiter}"团队被描绘成一群从被动观察者转变为积极参与者的人,展示了他们角色的动态变化。"){record_delimiter}\n("relationship"{tuple_delimiter}"团队"{tuple_delimiter}"华盛顿"{tuple_delimiter}"团队收到来自华盛顿的通讯,这影响了他们的决策过程。"{tuple_delimiter}"决策、外部影响"{tuple_delimiter}7){record_delimiter}\n("relationship"{tuple_delimiter}"团队"{tuple_delimiter}"杜尔塞行动"{tuple_delimiter}"团队直接参与杜尔塞行动,执行其演变后的目标和活动。"{tuple_delimiter}"任务演变、积极参与"{tuple_delimiter}9){completion_delimiter}\n#############################\nExample 3:\n\nEntity_types: [person, role, technology, organization, event, location, concept]\nText:\n```\ntheir voice slicing through the buzz of activity. "Control may be an illusion when facing an intelligence that literally writes its own rules," they stated stoically, casting a watchful eye over the flurry of data.\n\n"It\'s like it\'s learning to communicate," offered Sam Rivera from a nearby interface, their youthful energy boding a mix of awe and anxiety. "This gives talking to strangers\' a whole new meaning."\n\nAlex surveyed his team—each face a study in concentration, determination, and not a small measure of trepidation. "This might well be our first contact," he acknowledged, "And we need to be ready for whatever answers back."\n\nTogether, they stood on the edge of the unknown, forging humanity\'s response to a message from the heavens. 
The ensuing silence was palpable—a collective introspection about their role in this grand cosmic play, one that could rewrite human history.\n\nThe encrypted dialogue continued to unfold, its intricate patterns showing an almost uncanny anticipation\n```\n#############\nOutput:\n("entity"{tuple_delimiter}"Sam Rivera"{tuple_delimiter}"person"{tuple_delimiter}"Sam Rivera is a member of a team working on communicating with an unknown intelligence, showing a mix of awe and anxiety."){record_delimiter}\n("entity"{tuple_delimiter}"Alex"{tuple_delimiter}"person"{tuple_delimiter}"Alex is the leader of a team attempting first contact with an unknown intelligence, acknowledging the significance of their task."){record_delimiter}\n("entity"{tuple_delimiter}"Control"{tuple_delimiter}"concept"{tuple_delimiter}"Control refers to the ability to manage or govern, which is challenged by an intelligence that writes its own rules."){record_delimiter}\n("entity"{tuple_delimiter}"Intelligence"{tuple_delimiter}"concept"{tuple_delimiter}"Intelligence here refers to an unknown entity capable of writing its own rules and learning to communicate."){record_delimiter}\n("entity"{tuple_delimiter}"First Contact"{tuple_delimiter}"event"{tuple_delimiter}"First Contact is the potential initial communication between humanity and an unknown intelligence."){record_delimiter}\n("entity"{tuple_delimiter}"Humanity\'s Response"{tuple_delimiter}"event"{tuple_delimiter}"Humanity\'s Response is the collective action taken by Alex\'s team in response to a message from an unknown intelligence."){record_delimiter}\n("relationship"{tuple_delimiter}"Sam Rivera"{tuple_delimiter}"Intelligence"{tuple_delimiter}"Sam Rivera is directly involved in the process of learning to communicate with the unknown intelligence."{tuple_delimiter}"communication, learning process"{tuple_delimiter}9){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"First Contact"{tuple_delimiter}"Alex leads the team that might be making the First Contact with the unknown intelligence."{tuple_delimiter}"leadership, exploration"{tuple_delimiter}10){record_delimiter}\n("relationship"{tuple_delimiter}"Alex"{tuple_delimiter}"Humanity\'s Response"{tuple_delimiter}"Alex and his team are the key figures in Humanity\'s Response to the unknown intelligence."{tuple_delimiter}"collective action, cosmic significance"{tuple_delimiter}8){record_delimiter}\n("relationship"{tuple_delimiter}"Control"{tuple_delimiter}"Intelligence"{tuple_delimiter}"The concept of Control is challenged by the Intelligence that writes its own rules."{tuple_delimiter}"power dynamics, autonomy"{tuple_delimiter}7){record_delimiter}\n#############################\n-Real Data-\n######################\nEntity_types: [{entity_types}]\nText:\n```\n{input_text}\n```\n######################\nOutput:\n'¶
- DEFAULT_RECORD_DELIMITER = '##'¶
- DEFAULT_RELATION_PATTERN = '\\("relationship"(.*?)\\)'¶
- DEFAULT_TUPLE_DELIMITER = '<|>'¶
- __init__(api_model: str = 'gpt-4o', entity_types: List[str] = None, *, entity_key: str = 'entity', relation_key: str = 'relation', api_endpoint: str | None = None, response_path: str | None = None, prompt_template: str | None = None, tuple_delimiter: str | None = None, record_delimiter: str | None = None, completion_delimiter: str | None = None, max_gleaning: Annotated[int, Ge(ge=0)] = 1, continue_prompt: str | None = None, if_loop_prompt: str | None = None, entity_pattern: str | None = None, relation_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
entity_types – Pre-defined entity types for the knowledge graph.
entity_key – The key name in the meta field to store the entities. It’s “entity” by default.
relation_key – The key name in the meta field to store the relations between entities. It’s “relation” by default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
prompt_template – The template of input prompt.
tuple_delimiter – Delimiter to separate items in outputs.
record_delimiter – Delimiter to separate records in outputs.
completion_delimiter – To mark the end of the output.
max_gleaning – The maximum number of extra LLM calls used to glean additional entities and relations.
continue_prompt – The prompt for gleaning additional entities and relations.
if_loop_prompt – The prompt used to decide whether to stop gleaning.
entity_pattern – Regular expression for parsing entity record.
relation_pattern – Regular expression for parsing relation record.
try_num – The number of retry attempts when there is an API call error or output parsing error.
drop_text – Whether to drop the text in the output.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call, e.g. {‘temperature’: 0.9, ‘top_p’: 0.95}.
kwargs – Extra keyword arguments.
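A configuration sketch using the documented default entity types; the sample text and the process_single call are assumptions for illustration.

```python
from data_juicer.ops.mapper import ExtractEntityRelationMapper

op = ExtractEntityRelationMapper(
    api_model='gpt-4o',
    entity_types=['organization', 'person', 'geo', 'event'],  # the documented defaults
    max_gleaning=1,
)

sample = {'text': 'Alex and Taylor examined the device together before reporting to Cruz.'}
result = op.process_single(sample)
# Entities and relations are stored in the meta field under 'entity' and 'relation'.
```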
- class data_juicer.ops.mapper.ExtractEventMapper(api_model: str = 'gpt-4o', *, event_desc_key: str = 'event_description', relevant_char_key: str = 'relevant_characters', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Extracts events and relevant characters from the text.
This operator uses an API model to summarize the text into multiple events and extract the relevant characters for each event. The summary and character extraction follow a predefined format. The operator retries the API call up to a specified number of times if there is an error. The extracted events and characters are stored in the meta field of the samples. If no events are found, the original samples are returned. The operator can optionally drop the original text after processing.
- DEFAULT_INPUT_TEMPLATE = '# 文本\n```\n{text}\n```\n'¶
- DEFAULT_OUTPUT_PATTERN = '\n \\#\\#\\#\\s*情节(\\d+):\\s*\n -\\s*\\*\\*情节描述\\*\\*\\s*:\\s*(.*?)\\s*\n -\\s*\\*\\*相关人物\\*\\*\\s*:\\s*(.*?)(?=\\#\\#\\#|\\Z)\n '¶
- DEFAULT_SYSTEM_PROMPT = '给定一段文本,对文本的情节进行分点总结,并抽取与情节相关的人物。\n要求:\n- 尽量不要遗漏内容,不要添加文本中没有的情节,符合原文事实\n- 联系上下文说明前因后果,但仍然需要符合事实\n- 不要包含主观看法\n- 注意要尽可能保留文本的专有名词\n- 注意相关人物需要在对应情节中出现\n- 只抽取情节中的主要人物,不要遗漏情节的主要人物\n- 总结格式如下:\n### 情节1:\n- **情节描述**: ...\n- **相关人物**:人物1,人物2,人物3,...\n### 情节2:\n- **情节描述**: ...\n- **相关人物**:人物1,人物2,...\n### 情节3:\n- **情节描述**: ...\n- **相关人物**:人物1,...\n...\n'¶
- __init__(api_model: str = 'gpt-4o', *, event_desc_key: str = 'event_description', relevant_char_key: str = 'relevant_characters', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
event_desc_key – The key name to store the event descriptions in the meta field. It's "event_description" in default.
relevant_char_key – The field name to store the relevant characters to the events in the meta field. It's "relevant_characters" in default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the task.
input_template – Template for building the model input.
output_pattern – Regular expression for parsing model output.
try_num – The number of retry attempts when there is an API call error or output parsing error.
drop_text – Whether to drop the text in the output.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
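A standalone sketch of how model output in the format required by DEFAULT_SYSTEM_PROMPT can be parsed. The pattern below is a simplified stand-in for DEFAULT_OUTPUT_PATTERN (which is written in a verbose, whitespace-padded form), and the model output is hypothetical.
```python
import re

# Simplified stand-in for DEFAULT_OUTPUT_PATTERN; captures the event index,
# the event description, and the relevant characters.
pattern = (
    r"###\s*情节(\d+)[::]\s*"
    r"-\s*\*\*情节描述\*\*\s*[::]\s*(.*?)\s*"
    r"-\s*\*\*相关人物\*\*\s*[::]\s*(.*?)(?=###|\Z)"
)

# Hypothetical model output following the format in DEFAULT_SYSTEM_PROMPT.
model_output = (
    "### 情节1:\n"
    "- **情节描述**: 团队尝试与未知智能建立联系。\n"
    "- **相关人物**:Alex,Sam Rivera\n"
    "### 情节2:\n"
    "- **情节描述**: 团队评估首次接触的风险。\n"
    "- **相关人物**:Alex\n"
)

for idx, desc, chars in re.findall(pattern, model_output, flags=re.DOTALL):
    characters = [c.strip() for c in re.split(r"[,,]", chars) if c.strip()]
    print(f"event {idx}: {desc.strip()} -> {characters}")
```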
- class data_juicer.ops.mapper.ExtractKeywordMapper(api_model: str = 'gpt-4o', *, keyword_key: str = 'keyword', api_endpoint: str | None = None, response_path: str | None = None, prompt_template: str | None = None, completion_delimiter: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Generate keywords for the text.
This operator uses a specified API model to generate high-level keywords that summarize the main concepts, themes, or topics of the input text. The generated keywords are stored in the meta field under the key specified by keyword_key. The operator retries the API call up to try_num times in case of errors. If drop_text is set to True, the original text is removed from the sample after processing. The operator uses a default prompt template and completion delimiter, which can be customized. The output is parsed using a regular expression to extract the keywords.
- DEFAULT_COMPLETION_DELIMITER = '<|COMPLETE|>'¶
- DEFAULT_OUTPUT_PATTERN = '\\("content_keywords"(.*?)\\)'¶
- DEFAULT_PROMPT_TEMPLATE = '-Goal-\nGiven a text document that is potentially relevant to this activity and a list of entity types, identify all entities of those types from the text and all relationships among the identified entities.\n\n-Steps-\n1. Identify high-level key words that summarize the main concepts, themes, or topics of the entire text. These should capture the overarching ideas present in the document.\nFormat the content-level key words as ("content_keywords" <high_level_keywords>)\n\n3. Return output in the language of the given text.\n\n4. When finished, output {completion_delimiter}\n\n######################\n-Examples-\n######################\nExample 1:\n\nText:\n```\nwhile Alex clenched his jaw, the buzz of frustration dull against the backdrop of Taylor\'s authoritarian certainty. It was this competitive undercurrent that kept him alert, the sense that his and Jordan\'s shared commitment to discovery was an unspoken rebellion against Cruz\'s narrowing vision of control and order.\n\nThen Taylor did something unexpected. They paused beside Jordan and, for a moment, observed the device with something akin to reverence. “If this tech can be understood..." Taylor said, their voice quieter, "It could change the game for us. For all of us.”\n\nThe underlying dismissal earlier seemed to falter, replaced by a glimpse of reluctant respect for the gravity of what lay in their hands. Jordan looked up, and for a fleeting heartbeat, their eyes locked with Taylor\'s, a wordless clash of wills softening into an uneasy truce.\n\nIt was a small transformation, barely perceptible, but one that Alex noted with an inward nod. They had all been brought here by different paths\n```\n################\nOutput:\n("content_keywords" "power dynamics, ideological conflict, discovery, rebellion"){completion_delimiter}\n#############################\nExample 2:\n\nText:\n```\n他们不再是单纯的执行者;他们已成为某个超越星辰与条纹的领域的信息守护者。这一使命的提升不能被规则和既定协议所束缚——它需要一种新的视角,一种新的决心。\n\n随着与华盛顿的通讯在背景中嗡嗡作响,对话中的紧张情绪通过嘟嘟声和静电噪音贯穿始终。团队站立着,一股不祥的气息笼罩着他们。显然,他们在接下来几个小时内做出的决定可能会重新定义人类在宇宙中的位置,或者将他们置于无知和潜在危险之中。\n\n随着与星辰的联系变得更加牢固,小组开始处理逐渐成形的警告,从被动接受者转变为积极参与者。梅瑟后来的直觉占据了上风——团队的任务已经演变,不再仅仅是观察和报告,而是互动和准备。一场蜕变已经开始,而“杜尔塞行动”则以他们大胆的新频率震动,这种基调不是由世俗设定的\n```\n#############\nOutput:\n("content_keywords" "任务演变, 决策制定, 积极参与, 宇宙意义"){completion_delimiter}\n#############################\nExample 3:\n\nEntity_types: [person, role, technology, organization, event, location, concept]\nText:\n```\ntheir voice slicing through the buzz of activity. "Control may be an illusion when facing an intelligence that literally writes its own rules," they stated stoically, casting a watchful eye over the flurry of data.\n\n"It\'s like it\'s learning to communicate," offered Sam Rivera from a nearby interface, their youthful energy boding a mix of awe and anxiety. "This gives talking to strangers\' a whole new meaning."\n\nAlex surveyed his team—each face a study in concentration, determination, and not a small measure of trepidation. "This might well be our first contact," he acknowledged, "And we need to be ready for whatever answers back."\n\nTogether, they stood on the edge of the unknown, forging humanity\'s response to a message from the heavens. 
The ensuing silence was palpable—a collective introspection about their role in this grand cosmic play, one that could rewrite human history.\n\nThe encrypted dialogue continued to unfold, its intricate patterns showing an almost uncanny anticipation\n```\n#############\nOutput:\n("content_keywords" "first contact, control, communication, cosmic significance"){completion_delimiter}\n-Real Data-\n######################\nText:\n```\n{input_text}\n```\n######################\nOutput:\n'¶
- __init__(api_model: str = 'gpt-4o', *, keyword_key: str = 'keyword', api_endpoint: str | None = None, response_path: str | None = None, prompt_template: str | None = None, completion_delimiter: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
keyword_key – The key name to store the keywords in the meta field. It's "keyword" in default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
prompt_template – The template of input prompt.
completion_delimiter – To mark the end of the output.
output_pattern – Regular expression for parsing keywords.
try_num – The number of retry attempts when there is an API call error or output parsing error.
drop_text – Whether to drop the text in the output.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
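A minimal sketch applying the documented DEFAULT_OUTPUT_PATTERN and DEFAULT_COMPLETION_DELIMITER to a hypothetical response; the operator's own post-processing may differ.
```python
import re

OUTPUT_PATTERN = r'\("content_keywords"(.*?)\)'   # DEFAULT_OUTPUT_PATTERN
COMPLETION_DELIMITER = "<|COMPLETE|>"             # DEFAULT_COMPLETION_DELIMITER

# Hypothetical model response in the format shown in the prompt examples.
response = '("content_keywords" "first contact, control, communication")' + COMPLETION_DELIMITER

match = re.search(OUTPUT_PATTERN, response, flags=re.DOTALL)
if match:
    keywords = [kw.strip() for kw in match.group(1).strip().strip('"').split(",")]
    print(keywords)  # ['first contact', 'control', 'communication']
```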
- class data_juicer.ops.mapper.ExtractNicknameMapper(api_model: str = 'gpt-4o', *, nickname_key: str = 'nickname', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Extracts nickname relationships in the text using a language model.
This operator uses a language model to identify and extract nickname relationships from the input text. It follows specific instructions to ensure accurate extraction, such as identifying the speaker, the person being addressed, and the nickname used. The extracted relationships are stored in the meta field under the specified key. The operator uses a default system prompt, input template, and output pattern, but these can be customized. The results are parsed and validated to ensure they meet the required format. If the text already contains the nickname information, it is not processed again. The operator retries the API call a specified number of times if an error occurs.
- DEFAULT_INPUT_TEMPLATE = '# 文本\n```\n{text}\n```\n'¶
- DEFAULT_OUTPUT_PATTERN = '\n \\#\\#\\#\\s*称呼方式(\\d+)\\s*\n -\\s*\\*\\*说话人\\*\\*\\s*:\\s*(.*?)\\s*\n -\\s*\\*\\*被称呼人\\*\\*\\s*:\\s*(.*?)\\s*\n -\\s*\\*\\*(.*?)对(.*?)的昵称\\*\\*\\s*:\\s*(.*?)(?=\\#\\#\\#|\\Z) # for double check\n '¶
- DEFAULT_SYSTEM_PROMPT = '给定你一段文本,你的任务是将人物之间的称呼方式(昵称)提取出来。\n要求:\n- 需要给出说话人对被称呼人的称呼,不要搞反了。\n- 相同的说话人和被称呼人最多给出一个最常用的称呼。\n- 请不要输出互相没有昵称的称呼方式。\n- 输出格式如下:\n```\n### 称呼方式1\n- **说话人**:...\n- **被称呼人**:...\n- **...对...的昵称**:...\n### 称呼方式2\n- **说话人**:...\n- **被称呼人**:...\n- **...对...的昵称**:...\n### 称呼方式3\n- **说话人**:...\n- **被称呼人**:...\n- **...对...的昵称**:...\n...\n```\n'¶
- __init__(api_model: str = 'gpt-4o', *, nickname_key: str = 'nickname', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
nickname_key – The key name to store the nickname relationship in the meta field. It's "nickname" in default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the task.
input_template – Template for building the model input.
output_pattern – Regular expression for parsing model output.
try_num – The number of retry attempts when there is an API call error or output parsing error.
drop_text – Whether to drop the text in the output.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
- class data_juicer.ops.mapper.ExtractSupportTextMapper(api_model: str = 'gpt-4o', *, summary_key: str = 'event_description', support_text_key: str = 'support_text', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Extracts a supporting sub-text from the original text based on a given summary.
This operator uses an API model to identify and extract a segment of the original text that best matches the provided summary. It leverages a system prompt and input template to guide the extraction process. The extracted support text is stored in the specified meta field key. If the extraction fails or returns an empty string, the original summary is used as a fallback. The operator retries the extraction up to a specified number of times in case of errors.
- DEFAULT_INPUT_TEMPLATE = '### 原文:\n{text}\n\n### 总结:\n{summary}\n\n### 原文摘录:\n'¶
- DEFAULT_SYSTEM_PROMPT = '你将扮演一个文本摘录助手的角色。你的主要任务是基于给定的文章(称为“原文”)以及对原文某个部分的简短描述或总结(称为“总结”),准确地识别并提取出与该总结相对应的原文片段。\n要求:\n- 你需要尽可能精确地匹配到最符合总结内容的那部分内容\n- 如果存在多个可能的答案,请选择最贴近总结意思的那个\n- 下面是一个例子帮助理解这一过程:\n### 原文:\n《红楼梦》是中国古典小说四大名著之一,由清代作家曹雪芹创作。它讲述了贾宝玉、林黛玉等人的爱情故事及四大家族的兴衰历程。书中通过复杂的人物关系展现了封建社会的各种矛盾冲突。其中关于贾府内部斗争的部分尤其精彩,特别是王熙凤与尤二姐之间的争斗,生动描绘了权力争夺下的女性形象。此外,《红楼梦》还以其精美的诗词闻名,这些诗词不仅增添了文学色彩,也深刻反映了人物的性格特点和命运走向。\n\n### 总结:\n描述了书中的两个女性角色之间围绕权力展开的竞争。\n\n### 原文摘录:\n其中关于贾府内部斗争的部分尤其精彩,特别是王熙凤与尤二姐之间的争斗,生动描绘了权力争夺下的女性形象。'¶
- __init__(api_model: str = 'gpt-4o', *, summary_key: str = 'event_description', support_text_key: str = 'support_text', api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
summary_key – The key name to store the input summary in the meta field. It's "event_description" in default.
support_text_key – The key name to store the output support text for the summary in the meta field. It's "support_text" in default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for the task.
input_template – Template for building the model input.
try_num – The number of retry attempts when there is an API call error or output parsing error.
drop_text – Whether to drop the text in the output.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
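A construction sketch under the documented signature. The 'text'/'meta' sample layout and the commented invocation are assumptions for illustration, not a verbatim API call.
```python
from data_juicer.ops.mapper import ExtractSupportTextMapper

op = ExtractSupportTextMapper(
    api_model="gpt-4o",
    summary_key="event_description",   # read from the sample's meta field
    support_text_key="support_text",   # written back to the sample's meta field
    try_num=3,
)

# Assumed sample layout: text plus a meta dict holding the summary.
sample = {
    "text": "……原文……",
    "meta": {"event_description": "描述了两个角色之间围绕权力展开的竞争。"},
}
# result = op.process(sample)  # hypothetical entry point; see the Mapper base class
# result["meta"]["support_text"] would hold the matching excerpt, or fall back
# to the summary if extraction fails or returns an empty string.
```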
- class data_juicer.ops.mapper.ExtractTablesFromHtmlMapper(tables_field_name: str = 'html_tables', retain_html_tags: bool = False, include_header: bool = True, *args, **kwargs)[source]¶
Bases:
Mapper
Extracts tables from HTML content and stores them in a specified field.
This operator processes HTML content to extract tables. It can either retain or remove HTML tags based on the retain_html_tags parameter. If retain_html_tags is False, it can also include or exclude table headers based on the include_header parameter. The extracted tables are stored in the tables_field_name field within the sample’s metadata. If no tables are found, an empty list is stored. If the tables have already been extracted, the operator will not reprocess the sample.
- __init__(tables_field_name: str = 'html_tables', retain_html_tags: bool = False, include_header: bool = True, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
tables_field_name – Field name to store the extracted tables.
retain_html_tags – If True, retains HTML tags in the tables; otherwise, removes them.
include_header – If True, includes the table header; otherwise, excludes it. This parameter is effective only when retain_html_tags is False and applies solely to the extracted table content.
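A construction sketch under the documented signature; the 'text' key holding the HTML and the location of the result under the sample's meta are assumptions based on the description above.
```python
from data_juicer.ops.mapper import ExtractTablesFromHtmlMapper

op = ExtractTablesFromHtmlMapper(
    tables_field_name="html_tables",
    retain_html_tags=False,   # strip tags from the extracted tables
    include_header=True,      # keep header rows (effective when tags are stripped)
)

# Assumed sample layout with the HTML document in the 'text' field.
sample = {
    "text": "<html><body><table>"
            "<tr><th>name</th><th>role</th></tr>"
            "<tr><td>Alex</td><td>lead</td></tr>"
            "</table></body></html>"
}
# After processing, the extracted tables are expected under the sample's meta
# in the 'html_tables' field (an empty list when no tables are found).
```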
- class data_juicer.ops.mapper.FixUnicodeMapper(normalization: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Fixes unicode errors in text samples.
This operator corrects common unicode errors and normalizes the text to a specified Unicode normalization form. The default normalization form is ‘NFC’, but it can be set to ‘NFKC’, ‘NFD’, or ‘NFKD’ during initialization. It processes text samples in batches, applying the specified normalization to each sample. If an unsupported normalization form is provided, a ValueError is raised.
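A short sketch: constructing the operator with an explicit normalization form, plus a plain unicodedata illustration of what the forms themselves do. The unicodedata call only illustrates normalization; it is not the operator's full unicode-fixing logic.
```python
import unicodedata

from data_juicer.ops.mapper import FixUnicodeMapper

op = FixUnicodeMapper(normalization="NFKC")  # default is 'NFC' when unset

# For intuition only: compatibility normalization folds presentation forms.
s = "ﬁlm ①"                                # 'fi' ligature and a circled digit
print(unicodedata.normalize("NFC", s))      # 'ﬁlm ①' (unchanged)
print(unicodedata.normalize("NFKC", s))     # 'film 1'
```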
- class data_juicer.ops.mapper.GenerateQAFromExamplesMapper(hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', *, seed_file: str = '', example_num: Annotated[int, Gt(gt=0)] = 3, similarity_threshold: float = 0.7, system_prompt: str | None = None, input_template: str | None = None, example_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Bases:
Mapper
Generates question and answer pairs from examples using a Hugging Face model.
This operator generates QA pairs based on provided seed examples. The number of generated samples is determined by the length of the empty dataset configured in the YAML file. The operator uses a Hugging Face model to generate new QA pairs, which are then filtered based on their similarity to the seed examples. Samples with a similarity score below the specified threshold are kept. The similarity is computed using the ROUGE-L metric. The operator requires a seed file in chatml format, which provides the initial QA examples. The generated QA pairs must follow specific formatting rules, such as maintaining the same format as the input examples and ensuring that questions and answers are paired correctly.
- DEFAULT_EXAMPLE_TEMPLATE = '\n如下是一条示例数据:\n{}'¶
- DEFAULT_INPUT_TEMPLATE = '{}'¶
- DEFAULT_OUTPUT_PATTERN = '【问题】(.*?)【回答】(.*?)(?=【问题】|$)'¶
- DEFAULT_QA_PAIR_TEMPLATE = '【问题】\n{}\n【回答】\n{}\n'¶
- DEFAULT_SYSTEM_PROMPT = '请你仔细观察多个示例数据的输入和输出,按照你的理解,总结出相应规矩,然后写出一个新的【问题】和【回答】。注意,新生成的【问题】和【回答】需要满足如下要求:\n1. 生成的【问题】和【回答】不能与输入的【问题】和【回答】一致,但是需要保持格式相同。\n2. 生成的【问题】不一定要局限于输入【问题】的话题或领域,生成的【回答】需要正确回答生成的【问题】。\n3. 提供的【问题】和【回答】可能是多轮对话,生成的【问题】和【回答】也可以是多轮,但是需要保持格式相同。\n4. 生成的【问题】和【回答】必须成对出现,而且【问题】需要在【回答】之前。\n'¶
- __init__(hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', *, seed_file: str = '', example_num: Annotated[int, Gt(gt=0)] = 3, similarity_threshold: float = 0.7, system_prompt: str | None = None, input_template: str | None = None, example_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_model – Huggingface model ID.
seed_file – Path to the seed file in chatml format.
example_num – The number of selected examples. Randomly select N examples from “seed_file” and put them into prompt as QA examples.
similarity_threshold – The similarity score threshold between the generated samples and the seed examples. Range from 0 to 1. Samples with similarity score less than this threshold will be kept.
system_prompt – System prompt for guiding the generation task.
input_template – Template for building the input prompt. It must include one placeholder ‘{}’, which will be replaced by example_num formatted examples defined by example_template.
example_template – Template for formatting one QA example. It must include one placeholder ‘{}’, which will be replaced by one formatted qa_pair.
qa_pair_template – Template for formatting a single QA pair within each example. Must include two placeholders ‘{}’ for the question and answer.
output_pattern – Regular expression pattern to extract questions and answers from model response.
enable_vllm – Whether to use vllm for inference acceleration.
model_params – Parameters for initializing the model.
sampling_params – Sampling parameters for text generation. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
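A standalone sketch of how the documented default templates compose the prompt and how DEFAULT_OUTPUT_PATTERN parses a response; the QA content is hypothetical and no Hugging Face model is loaded.
```python
import re

EXAMPLE_TEMPLATE = "\n如下是一条示例数据:\n{}"               # DEFAULT_EXAMPLE_TEMPLATE
QA_PAIR_TEMPLATE = "【问题】\n{}\n【回答】\n{}\n"              # DEFAULT_QA_PAIR_TEMPLATE
OUTPUT_PATTERN = r"【问题】(.*?)【回答】(.*?)(?=【问题】|$)"    # DEFAULT_OUTPUT_PATTERN

# Format one seed example as it would appear inside the input prompt.
qa_pair = QA_PAIR_TEMPLATE.format("蒙古国的首都是哪里?", "乌兰巴托。")
example_block = EXAMPLE_TEMPLATE.format(qa_pair)

# Parse a hypothetical model response into (question, answer) pairs.
response = "【问题】\n冰岛的首都是哪里?\n【回答】\n雷克雅未克。\n"
for question, answer in re.findall(OUTPUT_PATTERN, response, flags=re.DOTALL):
    print(question.strip(), "->", answer.strip())
```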
- class data_juicer.ops.mapper.GenerateQAFromTextMapper(hf_model: str = 'alibaba-pai/pai-qwen1_5-7b-doc2qa', max_num: Annotated[int, Gt(gt=0)] | None = None, *, output_pattern: str | None = None, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Bases:
Mapper
Generates question and answer pairs from text using a specified model.
This operator uses a Hugging Face model to generate QA pairs from the input text. It supports both Hugging Face and vLLM models for inference. The recommended models, such as ‘alibaba-pai/pai-llama3-8b-doc2qa’, are trained on Chinese data and are suitable for Chinese text. The operator can limit the number of generated QA pairs per text and allows custom output patterns for parsing the model’s response. By default, it uses a regular expression to extract questions and answers from the model’s output. If no QA pairs are extracted, a warning is logged.
- __init__(hf_model: str = 'alibaba-pai/pai-qwen1_5-7b-doc2qa', max_num: Annotated[int, Gt(gt=0)] | None = None, *, output_pattern: str | None = None, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_model – Huggingface model ID.
max_num – The maximum number of returned QA samples for each text. No limit if it is None.
output_pattern – Regular expression pattern to extract questions and answers from model response.
enable_vllm – Whether to use vllm for inference acceleration.
model_params – Parameters for initializing the model.
sampling_params – Sampling parameters for text generation, e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
The default data format parsed by this interface is as follows:
- Model Input:
蒙古国的首都是乌兰巴托(Ulaanbaatar)
冰岛的首都是雷克雅未克(Reykjavik)
- Model Output:
蒙古国的首都是乌兰巴托(Ulaanbaatar)
冰岛的首都是雷克雅未克(Reykjavik)
Human: 请问蒙古国的首都是哪里?
Assistant: 你好,根据提供的信息,蒙古国的首都是乌兰巴托(Ulaanbaatar)。
Human: 冰岛的首都是哪里呢?
Assistant: 冰岛的首都是雷克雅未克(Reykjavik)。
…
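An illustrative way to pull Human/Assistant pairs out of output in the format above. The pattern below is an assumption for demonstration; the operator's actual default output_pattern is not reproduced here.
```python
import re

# Assumed illustrative pattern for the Human/Assistant format shown above.
pattern = r"Human:\s*(.*?)\s*Assistant:\s*(.*?)(?=Human:|$)"

model_output = (
    "Human: 请问蒙古国的首都是哪里? "
    "Assistant: 蒙古国的首都是乌兰巴托(Ulaanbaatar)。 "
    "Human: 冰岛的首都是哪里呢? "
    "Assistant: 冰岛的首都是雷克雅未克(Reykjavik)。"
)

qa_pairs = re.findall(pattern, model_output, flags=re.DOTALL)
for question, answer in qa_pairs:   # the operator keeps at most max_num pairs
    print(question, "->", answer.strip())
```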
- class data_juicer.ops.mapper.HumanPreferenceAnnotationMapper(label_config_file: str = None, answer1_key: str = 'answer1', answer2_key: str = 'answer2', prompt_key: str = 'prompt', chosen_key: str = 'chosen', rejected_key: str = 'rejected', **kwargs)[source]¶
Bases:
LabelStudioAnnotationMapper
Operator for human preference annotation using Label Studio.
This operator formats and presents pairs of answers to a prompt for human evaluation. It uses a default or custom Label Studio configuration to display the prompt and answer options. The operator processes the annotations to determine the preferred answer, updating the sample with the chosen and rejected answers. The operator requires specific keys in the samples for the prompt and answer options. If these keys are missing, it logs warnings and uses placeholder text. The annotated results are processed to update the sample with the chosen and rejected answers.
- DEFAULT_LABEL_CONFIG = '\n <View className="root">\n <Style>\n .root {\n box-sizing: border-box;\n margin: 0;\n padding: 0;\n font-family: \'Roboto\',\n sans-serif;\n line-height: 1.6;\n background-color: #f0f0f0;\n }\n\n .container {\n margin: 0 auto;\n padding: 20px;\n background-color: #ffffff;\n border-radius: 5px;\n box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.1), 0 6px 20px 0 rgba(0, 0, 0, 0.1);\n }\n\n .prompt {\n padding: 20px;\n background-color: #0084ff;\n color: #ffffff;\n border-radius: 5px;\n margin-bottom: 20px;\n box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.1), 0 3px 10px 0 rgba(0, 0, 0, 0.1);\n }\n\n .answers {\n display: flex;\n justify-content: space-between;\n flex-wrap: wrap;\n gap: 20px;\n }\n\n .answer-box {\n flex-basis: 49%;\n padding: 20px;\n background-color: rgba(44, 62, 80, 0.9);\n color: #ffffff;\n border-radius: 5px;\n box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.1), 0 3px 10px 0 rgba(0, 0, 0, 0.1);\n }\n\n .answer-box p {\n word-wrap: break-word;\n }\n\n .answer-box:hover {\n background-color: rgba(52, 73, 94, 0.9);\n cursor: pointer;\n transition: all 0.3s ease;\n }\n\n .lsf-richtext__line:hover {\n background: unset;\n }\n\n .answer-box .lsf-object {\n padding: 20px\n }\n </Style>\n <View className="container">\n <View className="prompt">\n <Text name="prompt" value="$prompt" />\n </View>\n <View className="answers">\n <Pairwise name="comparison" toName="answer1,answer2"\n selectionStyle="background-color: #27ae60; box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.2); border: 2px solid #2ecc71; cursor: pointer; transition: all 0.3s ease;"\n leftChoiceValue="answer1" rightChoiceValue="answer2" />\n <View className="answer-box">\n <Text name="answer1" value="$answer1" />\n </View>\n <View className="answer-box">\n <Text name="answer2" value="$answer2" />\n </View>\n </View>\n </View>\n </View>\n '¶
- __init__(label_config_file: str = None, answer1_key: str = 'answer1', answer2_key: str = 'answer2', prompt_key: str = 'prompt', chosen_key: str = 'chosen', rejected_key: str = 'rejected', **kwargs)[source]¶
Initialize the human preference annotation operator.
- Parameters:
label_config_file – Path to the label config file
answer1_key – Key for the first answer
answer2_key – Key for the second answer
prompt_key – Key for the prompt/question
chosen_key – Key for the chosen answer
rejected_key – Key for the rejected answer
- class data_juicer.ops.mapper.ImageBlurMapper(p: float = 0.2, blur_type: str = 'gaussian', radius: float = 2, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Blurs images in the dataset with a specified probability and blur type.
This operator blurs images using one of three types: mean, box, or Gaussian. The probability of an image being blurred is controlled by the p parameter. The blur effect is applied using a kernel with a specified radius. Blurred images are saved to a directory, which can be specified or defaults to the input directory. If the save directory is not provided, the DJ_PRODUCED_DATA_DIR environment variable can be used to set it. The operator ensures that the blur type is one of the supported options and that the radius is non-negative.
- __init__(p: float = 0.2, blur_type: str = 'gaussian', radius: float = 2, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
p – Probability of the image being blurred.
blur_type – Type of blur kernel, including [‘mean’, ‘box’, ‘gaussian’].
radius – Radius of blur kernel.
save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
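A construction sketch under the documented signature; the 'images' sample key and the environment-variable usage follow the descriptions above, while the paths are placeholders.
```python
import os

from data_juicer.ops.mapper import ImageBlurMapper

# Either pass save_dir explicitly or rely on the documented environment variable.
os.environ.setdefault("DJ_PRODUCED_DATA_DIR", "./produced_images")

op = ImageBlurMapper(
    p=1.0,                 # blur every image, for demonstration
    blur_type="gaussian",  # one of 'mean', 'box', 'gaussian'
    radius=2,
)

# Assumed sample layout with image paths under the 'images' key.
sample = {"text": "", "images": ["./example.jpg"]}
```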
- class data_juicer.ops.mapper.ImageCaptioningFromGPT4VMapper(mode: str = 'description', api_key: str = '', max_token: int = 500, temperature: Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=0), Le(le=1)])] = 1.0, system_prompt: str = '', user_prompt: str = '', user_prompt_key: str | None = None, keep_original_sample: bool = True, any_or_all: str = 'any', *args, **kwargs)[source]¶
Bases:
Mapper
Generates text captions for images using the GPT-4 Vision model.
This operator generates text based on the provided images and specified parameters. It supports different modes of text generation, including ‘reasoning’, ‘description’, ‘conversation’, and ‘custom’. The generated text can be added to the original sample or replace it, depending on the keep_original_sample parameter. The operator uses a Hugging Face tokenizer and the GPT-4 Vision API to generate the text. The any_or_all parameter determines whether all or any of the images in a sample must meet the generation criteria for the sample to be kept. If user_prompt_key is set, it will use the prompt from the sample; otherwise, it will use the user_prompt parameter. If both are set, user_prompt_key takes precedence.
- __init__(mode: str = 'description', api_key: str = '', max_token: int = 500, temperature: Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=0), Le(le=1)])] = 1.0, system_prompt: str = '', user_prompt: str = '', user_prompt_key: str | None = None, keep_original_sample: bool = True, any_or_all: str = 'any', *args, **kwargs)[source]¶
Initialization method.
- Parameters:
mode – mode of text generated from images, can be one of [‘reasoning’, ‘description’, ‘conversation’, ‘custom’]
api_key – the API key to authenticate the request.
max_token – the maximum number of tokens to generate. Default is 500.
temperature – controls the randomness of the output (range from 0 to 1). Default is 1.0.
system_prompt – a string prompt used to set the context of a conversation and provide global guidance or rules for the gpt4-vision so that it can generate responses in the expected way. If mode set to custom, the parameter will be used.
user_prompt – a string prompt to guide the generation of gpt4-vision for each sample. It's "" in default, which means no prompt provided.
user_prompt_key – the key name of fields in samples to store prompts for each sample. It's used for setting different prompts for different samples. If it's None, the prompt in parameter "user_prompt" is used. It's None in default.
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated text in the final datasets and the original text will be removed. It’s True in default.
any_or_all – keep this sample with ‘any’ or ‘all’ strategy of all images. ‘any’: keep this sample if any images meet the condition. ‘all’: keep this sample only if all images meet the condition.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.ImageCaptioningMapper(hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, *args, **kwargs)[source]¶
Bases:
Mapper
Generates image captions using a Hugging Face model and appends them to samples.
This operator generates captions for images in the input samples using a specified Hugging Face model. It can generate multiple captions per image and apply different strategies to retain the generated captions. The operator supports three retention modes: ‘random_any’, ‘similar_one_simhash’, and ‘all’. In ‘random_any’ mode, a random caption is retained. In ‘similar_one_simhash’ mode, the most similar caption to the original text (based on SimHash) is retained. In ‘all’ mode, all generated captions are concatenated and retained. The operator can also keep or discard the original sample based on the keep_original_sample parameter. If both prompt and prompt_key are set, the prompt_key takes precedence.
- __init__(hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_img2seq – model name on huggingface to generate caption
trust_remote_code – whether to trust the remote code of HF models.
caption_num – how many candidate captions to generate for each image
keep_candidate_mode –
retain strategy for the generated $caption_num$ candidates.
- 'random_any': Retain a random one from the generated captions.
- 'similar_one_simhash': Retain the generated caption that is most similar (by SimHash) to the original caption.
- 'all': Retain all generated captions by concatenation.
Note
This is a batched_OP, whose input and output types are both lists. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote caption_num as $M$. For 'random_any' and 'similar_one_simhash' modes, the total number of samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when keep_original_sample is False. For 'all' mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when keep_original_sample is False.
- Parameters:
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated captions in the final datasets and the original captions will be removed. It’s True in default.
prompt – a string prompt to guide the generation of blip2 model for all samples globally. It’s None in default, which means no prompt provided.
prompt_key – the key name of fields in samples to store prompts for each sample. It's used for setting different prompts for different samples. If it's None, the prompt in parameter "prompt" is used. It's None in default.
args – extra args
kwargs – extra args
- process_batched(samples, rank=None)[source]¶
Note
This is a batched_OP, whose input and output types are both lists. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote caption_num as $M$. The total number of samples after generation is $2Nb$ for 'random_any' and 'similar_one_simhash' mode, and $(1+M)Nb$ for 'all' mode.
- Parameters:
samples
- Returns:
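A quick sanity-check sketch that restates the sample-count arithmetic from the notes above; the batch numbers are hypothetical.
```python
# N batches of size b, caption_num = M (hypothetical numbers).
N, b, M = 2, 4, 3

def total_after_generation(mode: str, keep_original_sample: bool) -> int:
    """Expected number of output samples per the notes above."""
    if mode in ("random_any", "similar_one_simhash"):
        return 2 * N * b if keep_original_sample else N * b
    if mode == "all":
        return (1 + M) * N * b if keep_original_sample else M * N * b
    raise ValueError(f"unknown keep_candidate_mode: {mode}")

print(total_after_generation("random_any", True))   # 16
print(total_after_generation("all", True))          # 32
```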
- class data_juicer.ops.mapper.ImageDetectionYoloMapper(imgsz=640, conf=0.05, iou=0.5, model_path='yolo11n.pt', *args, **kwargs)[source]¶
Bases:
Mapper
Perform object detection using YOLO on images and return bounding boxes and class labels.
This operator uses a YOLO model to detect objects in images. It processes each image in the sample, returning the bounding boxes and class labels for detected objects. The operator sets the bbox_tag and class_label_tag fields in the sample’s metadata. If no image is present or no objects are detected, it sets bbox_tag to an empty array and class_label_tag to -1. The operator uses a confidence score threshold and IoU (Intersection over Union) score threshold to filter detections.
- class data_juicer.ops.mapper.ImageDiffusionMapper(hf_diffusion: str = 'CompVis/stable-diffusion-v1-4', trust_remote_code: bool = False, torch_dtype: str = 'fp32', revision: str = 'main', strength: Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=0), Le(le=1)])] = 0.8, guidance_scale: float = 7.5, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, caption_key: str | None = None, hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Generate images using a diffusion model based on provided captions.
This operator uses a Hugging Face diffusion model to generate images from given captions. It supports different modes for retaining generated samples, including random selection, similarity-based selection, and retaining all. The operator can also generate captions if none are provided, using a Hugging Face image-to-sequence model. The strength parameter controls the extent of transformation from the reference image, and the guidance scale influences how closely the generated images match the text prompt. Generated images can be saved in a specified directory or the same directory as the input files. This is a batched operation, processing multiple samples at once and producing a specified number of augmented images per sample.
- __init__(hf_diffusion: str = 'CompVis/stable-diffusion-v1-4', trust_remote_code: bool = False, torch_dtype: str = 'fp32', revision: str = 'main', strength: Annotated[float, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=0), Le(le=1)])] = 0.8, guidance_scale: float = 7.5, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, caption_key: str | None = None, hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_diffusion – diffusion model name on huggingface to generate the image.
trust_remote_code – whether to trust the remote code of HF models.
torch_dtype – the floating point type used to load the diffusion model. Can be one of [‘fp32’, ‘fp16’, ‘bf16’]
revision – The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
strength – Indicates extent to transform the reference image. Must be between 0 and 1. image is used as a starting point and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
guidance_scale – A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
aug_num – The image number to be produced by stable-diffusion model.
keep_original_sample – whether to keep the original sample. If it's set to False, there will be only generated samples in the final datasets and the original samples will be removed. It's True by default.
caption_key – the key name of fields in samples to store captions for each image. It can be a string if there is only one image in each sample. Otherwise, it should be a list. If it's None, ImageDiffusionMapper will produce captions for each image.
hf_img2seq – model name on huggingface to generate caption if caption_key is None.
save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
- process_batched(samples, rank=None, context=False)[source]¶
Note
This is a batched_OP, whose input and output types are both lists. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote aug_num as $M$. The total number of samples after generation is $(1+M)Nb$.
- Parameters:
samples
- Returns:
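A construction sketch under the documented signature; the values are placeholders, and the sample-count comment follows the note above.
```python
from data_juicer.ops.mapper import ImageDiffusionMapper

op = ImageDiffusionMapper(
    hf_diffusion="CompVis/stable-diffusion-v1-4",
    torch_dtype="fp16",        # one of 'fp32', 'fp16', 'bf16'
    strength=0.8,
    guidance_scale=7.5,
    aug_num=2,                 # two augmented images per input sample
    keep_original_sample=True,
    caption_key=None,          # captions will be produced by hf_img2seq
    save_dir="./produced_images",
)
# Per the note above, N sample lists of batch size b expand to
# (1 + aug_num) * N * b samples after generation.
```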
- class data_juicer.ops.mapper.ImageFaceBlurMapper(cv_classifier: str = '', blur_type: str = 'gaussian', radius: Annotated[float, Ge(ge=0)] = 2, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to blur faces detected in images.
This operator uses an OpenCV classifier to detect faces in images and applies a specified blur type to the detected face regions. The blur types supported are ‘mean’, ‘box’, and ‘gaussian’. The radius of the blur kernel can be adjusted. If no save directory is provided, the modified images will be saved in the same directory as the input files.
- __init__(cv_classifier: str = '', blur_type: str = 'gaussian', radius: Annotated[float, Ge(ge=0)] = 2, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
cv_classifier – OpenCV classifier path for face detection. By default, we will use ‘haarcascade_frontalface_alt.xml’.
blur_type – Type of blur kernel, including [‘mean’, ‘box’, ‘gaussian’].
radius – Radius of blur kernel.
save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.ImageRemoveBackgroundMapper(alpha_matting: bool = False, alpha_matting_foreground_threshold: int = 240, alpha_matting_background_threshold: int = 10, alpha_matting_erode_size: int = 10, bgcolor: Tuple[int, int, int, int] | None = None, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to remove the background of images.
This operator processes each image in the sample, removing its background. It uses the rembg library to perform the background removal. If alpha_matting is enabled, it applies alpha matting with specified thresholds and erosion size. The resulting images are saved in PNG format. The bgcolor parameter can be set to specify a custom background color for the cutout image. The processed images are stored in the directory specified by save_dir, or in the same directory as the input files if save_dir is not provided. The source_file field in the sample is updated to reflect the new file paths.
- __init__(alpha_matting: bool = False, alpha_matting_foreground_threshold: int = 240, alpha_matting_background_threshold: int = 10, alpha_matting_erode_size: int = 10, bgcolor: Tuple[int, int, int, int] | None = None, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
alpha_matting – (bool, optional) Flag indicating whether to use alpha matting. Defaults to False.
alpha_matting_foreground_threshold – (int, optional) Foreground threshold for alpha matting. Defaults to 240.
alpha_matting_background_threshold – (int, optional) Background threshold for alpha matting. Defaults to 10.
alpha_matting_erode_size – (int, optional) Erosion size for alpha matting. Defaults to 10.
bgcolor – (Optional[Tuple[int, int, int, int]], optional) Background color for the cutout image. Defaults to None.
save_dir – The directory where generated image files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – Additional positional arguments.
kwargs – Additional keyword arguments.
- class data_juicer.ops.mapper.ImageSegmentMapper(imgsz=1024, conf=0.05, iou=0.5, model_path='FastSAM-x.pt', *args, **kwargs)[source]¶
Bases:
Mapper
Perform segment-anything on images and return the bounding boxes.
This operator uses a FastSAM model to detect and segment objects in images, returning their bounding boxes. It processes each image in the sample, and stores the bounding boxes in the ‘bbox_tag’ field under the ‘meta’ key. If no images are present in the sample, an empty array is stored instead. The operator allows setting the image resolution, confidence threshold, and IoU (Intersection over Union) score threshold for the segmentation process. Bounding boxes are represented as N x M x 4 arrays, where N is the number of images, M is the number of detected boxes, and 4 represents the coordinates.
- __init__(imgsz=1024, conf=0.05, iou=0.5, model_path='FastSAM-x.pt', *args, **kwargs)[source]¶
Initialization method.
- Parameters:
imgsz – resolution for image resizing
conf – confidence score threshold
iou – IoU (Intersection over Union) score threshold
model_path – the path to the FastSAM model. Model name should be one of [‘FastSAM-x.pt’, ‘FastSAM-s.pt’].
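A construction sketch under the documented signature; where the boxes land follows the description above, while the exact processed-sample layout is an assumption.
```python
from data_juicer.ops.mapper import ImageSegmentMapper

op = ImageSegmentMapper(
    imgsz=1024,
    conf=0.05,
    iou=0.5,
    model_path="FastSAM-x.pt",   # or 'FastSAM-s.pt'
)
# Per the description above, a processed sample carries the boxes under
# meta['bbox_tag'] as an N x M x 4 array (images x detected boxes x coords),
# or an empty array when the sample contains no images.
```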
- class data_juicer.ops.mapper.ImageTaggingMapper(tag_field_name: str = 'image_tags', *args, **kwargs)[source]¶
Bases:
Mapper
Generates image tags for each image in the sample.
This operator processes images to generate descriptive tags. It uses a Hugging Face model to analyze the images and produce relevant tags. The tags are stored in the specified field, defaulting to ‘image_tags’. If the tags are already present in the sample, the operator will not recompute them. For samples without images, an empty tag array is assigned. The generated tags are sorted by frequency and stored as a list of strings.
- class data_juicer.ops.mapper.MllmMapper(hf_model: str = 'llava-hf/llava-v1.6-vicuna-7b-hf', max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to use MLLMs for visual question answering tasks. This operator uses a Hugging Face model to generate answers based on input text and images. It supports models like llava-hf/llava-v1.6-vicuna-7b-hf and Qwen/Qwen2-VL-7B-Instruct. The operator processes each sample, loading and processing images, and generating responses using the specified model. The generated responses are appended to the sample’s text field. The key parameters include the model ID, maximum new tokens, temperature, top-p sampling, and beam search size, which control the generation process.
- __init__(hf_model: str = 'llava-hf/llava-v1.6-vicuna-7b-hf', max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_model – Hugging Face model ID.
max_new_tokens – the maximum number of new tokens generated by the model.
temperature – used to control the randomness of generated text. The higher the temperature, the more random and creative the generated text will be.
top_p – randomly select the next word from the group of words whose cumulative probability reaches p.
num_beams – the larger the beam search size, the higher the quality of the generated text.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.NlpaugEnMapper(sequential: bool = False, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, delete_random_word: bool = False, swap_random_word: bool = False, spelling_error_word: bool = False, split_random_word: bool = False, keyboard_error_char: bool = False, ocr_error_char: bool = False, delete_random_char: bool = False, swap_random_char: bool = False, insert_random_char: bool = False, *args, **kwargs)[source]¶
Bases:
Mapper
Augments English text samples using various methods from the nlpaug library.
This operator applies a series of text augmentation techniques to generate new samples. It supports both word-level and character-level augmentations, such as deleting, swapping, and inserting words or characters. The number of augmented samples can be controlled, and the original samples can be kept or removed. When multiple augmentation methods are enabled, they can be applied sequentially or independently. Sequential application means each sample is augmented by all enabled methods in sequence, while independent application generates multiple augmented samples for each method. We recommend using 1-3 augmentation methods at a time to avoid significant changes in sample semantics.
- __init__(sequential: bool = False, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, delete_random_word: bool = False, swap_random_word: bool = False, spelling_error_word: bool = False, split_random_word: bool = False, keyboard_error_char: bool = False, ocr_error_char: bool = False, delete_random_char: bool = False, swap_random_char: bool = False, insert_random_char: bool = False, *args, **kwargs)[source]¶
Initialization method. All augmentation methods use their default parameters. We recommend using only 1-3 augmentation methods at a time; otherwise, the semantics of the samples might change significantly.
- Parameters:
sequential – whether combine all augmentation methods to a sequence. If it’s True, a sample will be augmented by all opened augmentation methods sequentially. If it’s False, each opened augmentation method would generate its augmented samples independently.
aug_num – number of augmented samples to be generated. If sequential is True, there will be total aug_num augmented samples generated. If it’s False, there will be (aug_num * #opened_aug_method) augmented samples generated.
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated texts in the final datasets and the original texts will be removed. It’s True in default.
delete_random_word – whether to open the augmentation method of deleting random words from the original texts. e.g. “I love LLM” –> “I LLM”
swap_random_word – whether to open the augmentation method of swapping random contiguous words in the original texts. e.g. “I love LLM” –> “Love I LLM”
spelling_error_word – whether to open the augmentation method of simulating the spelling error for words in the original texts. e.g. “I love LLM” –> “Ai love LLM”
split_random_word – whether to open the augmentation method of splitting words randomly with whitespaces in the original texts. e.g. “I love LLM” –> “I love LL M”
keyboard_error_char – whether to open the augmentation method of simulating the keyboard error for characters in the original texts. e.g. “I love LLM” –> “I ;ov4 LLM”
ocr_error_char – whether to open the augmentation method of simulating the OCR error for characters in the original texts. e.g. “I love LLM” –> “I 10ve LLM”
delete_random_char – whether to open the augmentation method of deleting random characters from the original texts. e.g. “I love LLM” –> “I oe LLM”
swap_random_char – whether to open the augmentation method of swapping random contiguous characters in the original texts. e.g. “I love LLM” –> “I ovle LLM”
insert_random_char – whether to open the augmentation method of inserting random characters into the original texts. e.g. “I love LLM” –> “I ^lKove LLM”
args – extra args
kwargs – extra args
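A construction sketch under the documented signature, with the resulting sample count spelled out per the aug_num description above; the choice of enabled methods is arbitrary.
```python
from data_juicer.ops.mapper import NlpaugEnMapper

op = NlpaugEnMapper(
    sequential=False,          # each enabled method augments independently
    aug_num=2,
    keep_original_sample=True,
    delete_random_word=True,
    swap_random_word=True,
    spelling_error_word=True,
)
# With sequential=False: aug_num (2) * enabled methods (3) = 6 augmented samples
# per original text, plus the original when keep_original_sample is True.
```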
- class data_juicer.ops.mapper.NlpcdaZhMapper(sequential: bool = False, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, replace_similar_word: bool = False, replace_homophone_char: bool = False, delete_random_char: bool = False, swap_random_char: bool = False, replace_equivalent_num: bool = False, *args, **kwargs)[source]¶
Bases:
Mapper
Augments Chinese text samples using the nlpcda library.
This operator applies various augmentation methods to Chinese text, such as replacing similar words, homophones, deleting random characters, swapping characters, and replacing equivalent numbers. The number of augmented samples generated can be controlled by the aug_num parameter. If sequential is set to True, the augmentation methods are applied in sequence; otherwise, they are applied independently. The original sample can be kept or removed based on the keep_original_sample flag. It is recommended to use 1-3 augmentation methods at a time to avoid significant changes in the semantics of the samples. Some augmentation methods may not work for special texts, resulting in no augmented samples being generated.
- __init__(sequential: bool = False, aug_num: Annotated[int, Gt(gt=0)] = 1, keep_original_sample: bool = True, replace_similar_word: bool = False, replace_homophone_char: bool = False, delete_random_char: bool = False, swap_random_char: bool = False, replace_equivalent_num: bool = False, *args, **kwargs)[source]¶
Initialization method. All augmentation methods use their default parameters. We recommend using only 1-3 augmentation methods at a time; otherwise, the semantics of the samples might change significantly. Notice: some augmentation methods might not work for certain special texts, so no augmented texts may be generated.
- Parameters:
sequential – whether combine all augmentation methods to a sequence. If it’s True, a sample will be augmented by all opened augmentation methods sequentially. If it’s False, each opened augmentation method would generate its augmented samples independently.
aug_num – number of augmented samples to be generated. If sequential is True, there will be total aug_num augmented samples generated. If it’s False, there will be (aug_num * #opened_aug_method) augmented samples generated.
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated texts in the final datasets and the original texts will be removed. It’s True in default.
replace_similar_word – whether to open the augmentation method of replacing random words with their similar words in the original texts. e.g. “这里一共有5种不同的数据增强方法” –> “这边一共有5种不同的数据增强方法”
replace_homophone_char – whether to open the augmentation method of replacing random characters with their homophones in the original texts. e.g. “这里一共有5种不同的数据增强方法” –> “这里一共有5种不同的濖据增强方法”
delete_random_char – whether to open the augmentation method of deleting random characters from the original texts. e.g. “这里一共有5种不同的数据增强方法” –> “这里一共有5种不同的数据增强”
swap_random_char – whether to open the augmentation method of swapping random contiguous characters in the original texts. e.g. “这里一共有5种不同的数据增强方法” –> “这里一共有5种不同的数据强增方法”
replace_equivalent_num – whether to open the augmentation method of replacing random numbers with their equivalent representations in the original texts. Notice: Only for numbers for now. e.g. “这里一共有5种不同的数据增强方法” –> “这里一共有伍种不同的数据增强方法”
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.OptimizePromptMapper(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', gen_num: Annotated[int, Gt(gt=0)] = 3, max_example_num: Annotated[int, Gt(gt=0)] = 3, keep_original_sample: bool = True, retry_num: int = 3, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, example_template: str | None = None, prompt_template: str | None = None, output_pattern: str | None = None, enable_vllm: bool = False, is_hf_model: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Bases:
Mapper
Optimize prompts based on existing ones in the same batch.
This operator uses the existing prompts and newly optimized prompts as examples to generate better prompts. It supports using a Hugging Face model or an API for text generation. The operator can be configured to keep the original samples or replace them with the generated ones. The optimization process involves multiple retries if the generated prompt is empty. The operator operates in batch mode and can leverage vLLM for inference acceleration on CUDA devices.
- Uses existing and newly generated prompts to optimize future prompts.
- Supports both Hugging Face models and API-based text generation.
- Can keep or replace original samples with generated ones.
- Retries up to a specified number of times if the generated prompt is empty.
- Operates in batch mode and can use vLLM for acceleration on CUDA.
References: https://doc.agentscope.io/v0/en/build_tutorial/prompt_optimization.html
- DEFAULT_EXAMPLE_TEMPLATE = '\n如下是一条示例数据:\n{}'¶
- DEFAULT_INPUT_TEMPLATE = '{}'¶
- DEFAULT_OUTPUT_PATTERN = '【提示词】(.*?)(?=【|$)'¶
- DEFAULT_PROMPT_TEMPLATE = '【提示词】\n{}\n'¶
- DEFAULT_SYSTEM_PROMPT = '请你仔细观察多个示例提示词,按照你的理解,总结出相应规矩,然后写出一个新的更好的提示词,以让模型更好地完成指定任务。注意,新生成的【提示词】需要满足如下要求:\n1. 生成的【提示词】不能与输入的【提示词】完全一致,但是需要保持格式类似。\n2. 生成的【提示词】相比于输入的【提示词】不能有很大的变化,更多应该是关键词、核心参数等方面的微调。\n3. 生成时只需生成带有【提示词】前缀的提示词,不需生成其他任何额外信息。\n'¶
- __init__(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', gen_num: Annotated[int, Gt(gt=0)] = 3, max_example_num: Annotated[int, Gt(gt=0)] = 3, keep_original_sample: bool = True, retry_num: int = 3, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, example_template: str | None = None, prompt_template: str | None = None, output_pattern: str | None = None, enable_vllm: bool = False, is_hf_model: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Initialization method.
- Parameters:
api_or_hf_model – API or huggingface model name.
gen_num – The number of new prompts to generate.
max_example_num – Maximum number of example prompts to include as context when generating new optimized prompts.
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated texts in the final datasets and the original texts will be removed. It’s True in default.
retry_num – how many times to retry to generate the prompt if the parsed generated prompt is empty. It’s 3 in default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for guiding the generation task.
input_template – Template for building the input prompt. It must include one placeholder ‘{}’, which will be replaced by example_num formatted examples defined by example_template.
example_template – Template for formatting one prompt example. It must include one placeholder ‘{}’, which will be replaced by one formatted prompt.
prompt_template – Template for formatting a single prompt within each example. Must include one placeholder '{}' for the prompt content.
output_pattern – Regular expression pattern to extract the optimized prompts from the model response.
enable_vllm – Whether to use vllm for inference acceleration.
is_hf_model – If True, use Transformers to load a Hugging Face or local LLM.
model_params – Parameters for initializing the model.
sampling_params – Sampling parameters for text generation. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
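A minimal sketch applying the documented DEFAULT_OUTPUT_PATTERN to a hypothetical model response.
```python
import re

OUTPUT_PATTERN = r"【提示词】(.*?)(?=【|$)"   # DEFAULT_OUTPUT_PATTERN

response = "【提示词】\n请基于给定文本生成三条高质量的问答对,问题需覆盖文本的核心事实。\n"
prompts = [p.strip() for p in re.findall(OUTPUT_PATTERN, response, flags=re.DOTALL)]
print(prompts)
```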
- class data_juicer.ops.mapper.OptimizeQAMapper(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', is_hf_model: bool = True, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Bases:
Mapper
Mapper to optimize question-answer pairs.
This operator refines and enhances the quality of question-answer pairs. It uses a Hugging Face model to generate more detailed and accurate questions and answers. The input is formatted using a template, and the output is parsed using a regular expression. The system prompt, input template, and output pattern can be customized. If VLLM is enabled, the operator accelerates inference on CUDA devices.
- DEFAULT_INPUT_TEMPLATE = '以下是原始问答对:\n{}'¶
- DEFAULT_OUTPUT_PATTERN = '.*?【问题】\\s*(.*?)\\s*【回答】\\s*(.*)'¶
- DEFAULT_QA_PAIR_TEMPLATE = '【问题】\n{}\n【回答】\n{}'¶
- DEFAULT_SYSTEM_PROMPT = '请优化输入的问答对,使【问题】和【回答】都更加详细、准确。必须按照以下标记格式,直接输出优化后的问答对:\n【问题】\n优化后的问题\n【回答】\n优化后的回答'¶
- __init__(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', is_hf_model: bool = True, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Initialization method.
- Parameters:
api_or_hf_model – API or huggingface model name.
is_hf_model – If true, use huggingface model. Otherwise, use API.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for guiding the optimization task.
input_template – Template for building the input for the model. Please make sure the template contains one placeholder ‘{}’, which corresponds to the question and answer pair generated by param qa_pair_template.
qa_pair_template – Template for formatting the question and answer pair. Please make sure the template contains two ‘{}’ to format question and answer.
output_pattern – Regular expression pattern to extract question and answer from model response.
try_num – The number of retry attempts when there is an API call error or output parsing error.
enable_vllm – Whether to use VLLM for inference acceleration.
model_params – Parameters for initializing the model.
sampling_params – Sampling parameters for text generation (e.g., {‘temperature’: 0.9, ‘top_p’: 0.95}).
kwargs – Extra keyword arguments.
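The default templates and output pattern above compose as follows. This is an illustrative sketch of the prompt construction and response parsing only; the QA content is invented, and the operator's internal use of the pattern (flags, match vs. search) may differ.

import re
from data_juicer.ops.mapper import OptimizeQAMapper

# Build the model input for one QA pair from the documented default templates.
qa = OptimizeQAMapper.DEFAULT_QA_PAIR_TEMPLATE.format('什么是黑洞?', '黑洞是一种天体。')
prompt = OptimizeQAMapper.DEFAULT_INPUT_TEMPLATE.format(qa)

# Parse a response written in the required output format.
response = '【问题】\n什么是黑洞,它是如何形成的?\n【回答】\n黑洞是引力极强的天体,由大质量恒星坍缩形成。'
match = re.match(OptimizeQAMapper.DEFAULT_OUTPUT_PATTERN, response, re.DOTALL)
if match:
    new_question, new_answer = match.group(1), match.group(2)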
- class data_juicer.ops.mapper.OptimizeQueryMapper(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', is_hf_model: bool = True, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Bases:
OptimizeQAMapper
Optimize queries in question-answer pairs to make them more specific and detailed.
This mapper refines the questions in a QA pair, making them more specific and detailed while ensuring that the original answer can still address the optimized question. It uses a predefined system prompt for the optimization process. The optimized query is extracted from the raw output by stripping any leading or trailing whitespace. The mapper utilizes a CUDA accelerator for faster processing.
- DEFAULT_SYSTEM_PROMPT = '优化问答对中的【问题】,将其更加详细具体,但仍可以由原答案回答。只输出优化后的【问题】,不要输出多余内容。'¶
- class data_juicer.ops.mapper.OptimizeResponseMapper(api_or_hf_model: str = 'Qwen/Qwen2.5-7B-Instruct', is_hf_model: bool = True, *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, qa_pair_template: str | None = None, output_pattern: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, enable_vllm: bool = False, model_params: Dict | None = None, sampling_params: Dict | None = None, **kwargs)[source]¶
Bases:
OptimizeQAMapper
Optimize response in question-answer pairs to be more detailed and specific.
This operator enhances the responses in question-answer pairs, making them more detailed and specific while ensuring they still address the original question. It uses a predefined system prompt for optimization. The optimized response is stripped of any leading or trailing whitespace before being returned. This mapper leverages a Hugging Face model for the optimization process, which is accelerated using CUDA.
- DEFAULT_SYSTEM_PROMPT = '请优化问答对中的回答,将其更加详细具体,但仍可以回答原问题。只输出优化后的回答,不要输出多余内容。'¶
- class data_juicer.ops.mapper.PairPreferenceMapper(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, rejected_key: str = 'rejected_response', reason_key: str = 'reason', try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Mapper to construct paired preference samples by generating a rejected response and its reason.
This operator uses an API model to generate a new response that is opposite in style, factuality, or stance to the original response. The generated response and the reason for its generation are stored in the sample. The default system prompt and input template are provided, but can be customized. The output is parsed using a regular expression to extract the new response and the reason. If parsing fails, the operator retries up to a specified number of times. The generated response and reason are stored in the sample under the keys ‘rejected_response’ and ‘reason’, respectively.
- DEFAULT_INPUT_TEMPLATE = '【参考信息】\n{reference}\n\n以下是原始问答对:\n【问题】\n{query}\n【回答】\n{response}'¶
- DEFAULT_OUTPUT_PATTERN = '.*?【回答】\\s*(.*?)\\s*【原因】\\s*(.*)'¶
- DEFAULT_SYSTEM_PROMPT = '你的任务是根据参考信息修改问答对中的回答,在语言风格、事实性、人物身份、立场等任一方面与原回答相反。必须按照以下标记格式输出,不要输出其他多余内容。\n【回答】\n生成的新回答\n【原因】\n生成该回答的原因'¶
- __init__(api_model: str = 'gpt-4o', *, api_endpoint: str | None = None, response_path: str | None = None, system_prompt: str | None = None, input_template: str | None = None, output_pattern: str | None = None, rejected_key: str = 'rejected_response', reason_key: str = 'reason', try_num: Annotated[int, Gt(gt=0)] = 3, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt – System prompt for guiding the generation task.
input_template – Template for building the model input. It must contain placeholders ‘{query}’ and ‘{response}’, and can optionally include ‘{reference}’.
output_pattern – Regular expression for parsing model output.
rejected_key – The field name in the sample to store the generated rejected response. Defaults to ‘rejected_response’.
reason_key – The field name in the sample to store the reason for generating the response. Defaults to ‘reason’.
try_num – The number of retries for the API call in case of response parsing failure. Defaults to 3.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
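To make the template contract concrete, the sketch below fills the default input template and parses a response in the expected output format. The sample content is invented, and how the operator applies the pattern internally may differ.

import re
from data_juicer.ops.mapper import PairPreferenceMapper

# Fill the default input template with a reference, a query, and the original response.
prompt = PairPreferenceMapper.DEFAULT_INPUT_TEMPLATE.format(
    reference='该事件发生于2001年。',
    query='事件发生在哪一年?',
    response='发生在2001年。',
)

# Parse a model output in the required format into the rejected response and its reason.
raw_output = '【回答】\n发生在1999年。\n【原因】\n与参考信息给出的事实相反。'
match = re.match(PairPreferenceMapper.DEFAULT_OUTPUT_PATTERN, raw_output, re.DOTALL)
if match:
    rejected_response, reason = match.group(1), match.group(2)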
- class data_juicer.ops.mapper.PunctuationNormalizationMapper(*args, **kwargs)[source]¶
Bases:
Mapper
Normalizes unicode punctuations to their English equivalents in text samples.
This operator processes a batch of text samples and replaces any unicode punctuation with its corresponding English punctuation. The mapping includes common substitutions such as the fullwidth comma "，" to ",", the ideographic full stop "。" to ".", and curly double quotes to straight double quotes. It iterates over each character in the text, replacing it if it is found in the predefined punctuation map. The result is a set of text samples with consistent punctuation formatting.
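As a plain-Python illustration of the substitution described above (not the operator's actual mapping table, which covers many more characters):

# A hand-written subset of a unicode-to-English punctuation map.
PUNCT_MAP = {'，': ',', '。': '.', '“': '"', '”': '"', '！': '!', '？': '?'}
text = '你好，世界。真的吗？'
normalized = ''.join(PUNCT_MAP.get(ch, ch) for ch in text)
# normalized -> '你好,世界.真的吗?'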
- class data_juicer.ops.mapper.PythonFileMapper(file_path: str = '', function_name: str = 'process_single', batched: bool = False, **kwargs)[source]¶
Bases:
Mapper
Executes a Python function defined in a file on input data.
This operator loads a specified Python function from a given file and applies it to the input data. The function must take exactly one argument and return a dictionary. The operator can process data either sample by sample or in batches, depending on the batched parameter. If the file path is not provided, the operator acts as an identity function, returning the input sample unchanged. The function is loaded dynamically, and its name and file path are configurable.
Important notes:
The file must be a valid Python file (.py).
The function must be callable and accept exactly one argument.
The function’s return value must be a dictionary.
- __init__(file_path: str = '', function_name: str = 'process_single', batched: bool = False, **kwargs)[source]¶
Initialization method.
- Parameters:
file_path – The path to the Python file containing the function to be executed.
function_name – The name of the function defined in the file to be executed.
batched – A boolean indicating whether to process input data in batches.
kwargs – Additional keyword arguments passed to the parent class.
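A minimal sketch of wiring a custom function file into this operator; the file name, function body, and sample schema (a 'text' field) are hypothetical.

from pathlib import Path
from data_juicer.ops.mapper import PythonFileMapper

# Write a processing function that takes exactly one argument and returns a dict.
Path('my_op.py').write_text(
    "def process_single(sample):\n"
    "    sample['text'] = sample['text'].strip().lower()\n"
    "    return sample\n"
)

op = PythonFileMapper(file_path='my_op.py', function_name='process_single', batched=False)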
- class data_juicer.ops.mapper.PythonLambdaMapper(lambda_str: str = '', batched: bool = False, **kwargs)[source]¶
Bases:
Mapper
Mapper for applying a Python lambda function to data samples.
This operator allows users to define a custom transformation using a Python lambda function. The lambda function is applied to each sample, and the result must be a dictionary. If the batched parameter is set to True, the lambda function will process a batch of samples at once. If no lambda function is provided, the identity function is used, which returns the input sample unchanged. The operator validates the lambda function to ensure it has exactly one argument and compiles it safely.
- __init__(lambda_str: str = '', batched: bool = False, **kwargs)[source]¶
Initialization method.
- Parameters:
lambda_str – A string representation of the lambda function to be executed on data samples. If empty, the identity function is used.
batched – A boolean indicating whether to process input data in batches.
kwargs – Additional keyword arguments passed to the parent class.
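A minimal sketch, assuming a sample dict with a 'text' field; the batched data layout shown in the second example is an assumption for illustration.

from data_juicer.ops.mapper import PythonLambdaMapper

# Per-sample mode: the lambda receives one sample dict and must return a dict.
op = PythonLambdaMapper(lambda_str="lambda sample: {**sample, 'text': sample['text'].strip()}")

# Batched mode: the lambda receives column-wise lists and returns a dict of lists.
batched_op = PythonLambdaMapper(
    lambda_str="lambda batch: {**batch, 'text': [t.strip() for t in batch['text']]}",
    batched=True,
)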
- class data_juicer.ops.mapper.QuerySentimentDetectionMapper(hf_model: str = 'mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_sentiment_label', score_key: str = 'query_sentiment_label_score', **kwargs)[source]¶
Bases:
Mapper
Predicts user’s sentiment label (‘negative’, ‘neutral’, ‘positive’) in a query.
This mapper takes input from the specified query key and outputs the predicted sentiment label and its corresponding score. The results are stored in the Data-Juicer meta field under ‘query_sentiment_label’ and ‘query_sentiment_label_score’. It uses a Hugging Face model for sentiment detection. If a Chinese-to-English translation model is provided, it first translates the query from Chinese to English before performing sentiment analysis.
- __init__(hf_model: str = 'mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_sentiment_label', score_key: str = 'query_sentiment_label_score', **kwargs)[source]¶
Initialization method.
- Parameters:
hf_model – Huggingface model ID to predict sentiment label.
zh_to_en_hf_model – Translation model from Chinese to English. If not None, translate the query from Chinese to English.
model_params – model param for hf_model.
zh_to_en_model_params – model params for zh_to_en_hf_model.
label_key – The key name in the meta field to store the output label. It is ‘query_sentiment_label’ in default.
score_key – The key name in the meta field to store the corresponding label score. It is ‘query_sentiment_label_score’ in default.
kwargs – Extra keyword arguments.
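A construction sketch using the documented default model; here translation is disabled for queries that are already in English.

from data_juicer.ops.mapper import QuerySentimentDetectionMapper

op = QuerySentimentDetectionMapper(
    hf_model='mrm8488/distilroberta-finetuned-financial-news-sentiment-analysis',
    zh_to_en_hf_model=None,  # queries are already in English, skip translation
)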
- class data_juicer.ops.mapper.QueryIntentDetectionMapper(hf_model: str = 'bespin-global/klue-roberta-small-3i4k-intent-classification', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_intent_label', score_key: str = 'query_intent_label_score', **kwargs)[source]¶
Bases:
Mapper
Predicts the user’s intent label and corresponding score for a given query. The operator uses a Hugging Face model to classify the intent of the input query. If the query is in Chinese, it can optionally be translated to English using another Hugging Face translation model before classification. The predicted intent label and its confidence score are stored in the meta field with the keys ‘query_intent_label’ and ‘query_intent_label_score’, respectively. If these keys already exist in the meta field, the operator will skip processing for those samples.
- __init__(hf_model: str = 'bespin-global/klue-roberta-small-3i4k-intent-classification', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_intent_label', score_key: str = 'query_intent_label_score', **kwargs)[source]¶
Initialization method.
- Parameters:
hf_model – Huggingface model ID to predict intent label.
zh_to_en_hf_model – Translation model from Chinese to English. If not None, translate the query from Chinese to English.
model_params – model param for hf_model.
zh_to_en_model_params – model params for zh_to_en_hf_model.
label_key – The key name in the meta field to store the output label. It is ‘query_intent_label’ in default.
score_key – The key name in the meta field to store the corresponding label score. It is ‘query_intent_label_score’ in default.
kwargs – Extra keyword arguments.
- class data_juicer.ops.mapper.QueryTopicDetectionMapper(hf_model: str = 'dstefa/roberta-base_topic_classification_nyt_news', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_topic_label', score_key: str = 'query_topic_label_score', **kwargs)[source]¶
Bases:
Mapper
Predicts the topic label and its corresponding score for a given query. The input is taken from the specified query key. The output, which includes the predicted topic label and its score, is stored in the ‘query_topic_label’ and ‘query_topic_label_score’ fields of the Data-Juicer meta field. This operator uses a Hugging Face model for topic classification. If a Chinese to English translation model is provided, it will first translate the query from Chinese to English before predicting the topic.
Uses a Hugging Face model for topic classification.
Optionally translates Chinese queries to English using another Hugging Face model.
Stores the predicted topic label in ‘query_topic_label’.
Stores the corresponding score in ‘query_topic_label_score’.
- __init__(hf_model: str = 'dstefa/roberta-base_topic_classification_nyt_news', zh_to_en_hf_model: str | None = 'Helsinki-NLP/opus-mt-zh-en', model_params: Dict = {}, zh_to_en_model_params: Dict = {}, *, label_key: str = 'query_topic_label', score_key: str = 'query_topic_label_score', **kwargs)[source]¶
Initialization method.
- Parameters:
hf_model – Huggingface model ID to predict topic label.
zh_to_en_hf_model – Translation model from Chinese to English. If not None, translate the query from Chinese to English.
model_params – model param for hf_model.
zh_to_en_model_params – model params for zh_to_en_hf_model.
label_key – The key name in the meta field to store the output label. It is ‘query_topic_label’ in default.
score_key – The key name in the meta field to store the corresponding label score. It is ‘query_topic_label_score’ in default.
kwargs – Extra keyword arguments.
- class data_juicer.ops.mapper.RelationIdentityMapper(api_model: str = 'gpt-4o', source_entity: str = None, target_entity: str = None, *, output_key: str = 'role_relation', api_endpoint: str | None = None, response_path: str | None = None, system_prompt_template: str | None = None, input_template: str | None = None, output_pattern_template: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Bases:
Mapper
Identify the relation between two entities in a given text.
This operator uses an API model to analyze the relationship between two specified entities in the text. It constructs a prompt with the provided system and input templates, then sends it to the API model for analysis. The output is parsed using a regular expression to extract the relationship. If the two entities are the same, the relationship is identified as “another identity.” The result is stored in the meta field under the key ‘role_relation’ by default. The operator retries the API call up to a specified number of times in case of errors. If drop_text is set to True, the original text is removed from the sample after processing.
- DEFAULT_INPUT_TEMPLATE = '关于{entity1}和{entity2}的文本信息:\n```\n{text}\n```\n'¶
- DEFAULT_OUTPUT_PATTERN_TEMPLATE = '\n \\s*分析推理:\\s*(.*?)\\s*\n \\s*所以{entity2}是{entity1}的:\\s*(.*?)\\Z\n '¶
- DEFAULT_SYSTEM_PROMPT_TEMPLATE = '给定关于{entity1}和{entity2}的文本信息。判断{entity1}和{entity2}之间的关系。\n要求:\n- 关系用一个或多个词语表示,必要时可以加一个形容词来描述这段关系\n- 输出关系时不要参杂任何标点符号\n- 需要你进行合理的推理才能得出结论\n- 如果两个人物身份是同一个人,输出关系为:另一个身份\n- 输出格式为:\n分析推理:...\n所以{entity2}是{entity1}的:...\n- 注意输出的是{entity2}是{entity1}的什么关系,而不是{entity1}是{entity2}的什么关系'¶
- __init__(api_model: str = 'gpt-4o', source_entity: str = None, target_entity: str = None, *, output_key: str = 'role_relation', api_endpoint: str | None = None, response_path: str | None = None, system_prompt_template: str | None = None, input_template: str | None = None, output_pattern_template: str | None = None, try_num: Annotated[int, Gt(gt=0)] = 3, drop_text: bool = False, model_params: Dict = {}, sampling_params: Dict = {}, **kwargs)[source]¶
Initialization method.
- Parameters:
api_model – API model name.
source_entity – The source entity of the relation to be identified.
target_entity – The target entity of the relation to be identified.
output_key – The output key in the meta field in the samples. It is ‘role_relation’ in default.
api_endpoint – URL endpoint for the API.
response_path – Path to extract content from the API response. Defaults to ‘choices.0.message.content’.
system_prompt_template – System prompt template for the task.
input_template – Template for building the model input.
output_pattern_template – Regular expression template for parsing model output.
try_num – The number of retry attempts when there is an API call error or output parsing error.
drop_text – Whether to drop the original text from the output sample.
model_params – Parameters for initializing the API model.
sampling_params – Extra parameters passed to the API call. e.g {‘temperature’: 0.9, ‘top_p’: 0.95}
kwargs – Extra keyword arguments.
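The sketch below shows how the operator might be configured for a fixed pair of entities and how the default input template is filled; the entities and text are invented.

from data_juicer.ops.mapper import RelationIdentityMapper

op = RelationIdentityMapper(
    api_model='gpt-4o',
    source_entity='李雷',
    target_entity='韩梅梅',
    output_key='role_relation',
)

# How the default input template is filled before being sent to the model.
prompt = RelationIdentityMapper.DEFAULT_INPUT_TEMPLATE.format(
    entity1='李雷', entity2='韩梅梅', text='李雷和韩梅梅是多年的同桌和好友。')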
- class data_juicer.ops.mapper.RemoveBibliographyMapper(*args, **kwargs)[source]¶
Bases:
Mapper
Removes bibliography sections at the end of LaTeX documents.
This operator identifies and removes bibliography sections in LaTeX documents. It uses a regular expression to match common bibliography commands such as \appendix, \begin{references}, \begin{thebibliography}, and \bibliography. The matched sections are removed from the text. The operator processes samples in batch mode for efficiency.
- class data_juicer.ops.mapper.RemoveCommentsMapper(doc_type: str | List[str] = 'tex', inline: bool = True, multiline: bool = True, *args, **kwargs)[source]¶
Bases:
Mapper
Removes comments from documents, currently supporting only ‘tex’ format.
This operator removes inline and multiline comments from text samples. It supports both inline and multiline comment removal, controlled by the inline and multiline parameters. Currently, it is designed to work with ‘tex’ documents. The operator processes each sample in the batch and applies regular expressions to remove comments. The processed text is then updated in the original samples.
Inline comments are removed using the pattern [^\]%.+$ (an unescaped ‘%’ and the rest of the line).
Multiline comments are removed using the pattern ^%.*\n (whole lines starting with ‘%’).
Important notes:
Only the ‘tex’ document type is supported at present.
The operator processes the text in place and updates the original samples.
- __init__(doc_type: str | List[str] = 'tex', inline: bool = True, multiline: bool = True, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
doc_type – Type of document to remove comments.
inline – Whether to remove inline comments.
multiline – Whether to remove multiline comments.
args – extra args
kwargs – extra args
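A plain-Python sketch of the two removal steps described above; the exact regexes used by the operator may differ slightly from the ones shown here.

import re

tex = "\\section{Intro} % trailing comment\n% a full-line comment\nBody text.\n"
# Remove multiline comments: whole lines starting with '%'.
tex = re.sub(r'^%.*\n', '', tex, flags=re.MULTILINE)
# Remove inline comments: an unescaped '%' and everything after it on the line.
tex = re.sub(r'(?<!\\)%.+$', '', tex, flags=re.MULTILINE)
# tex is now '\\section{Intro} \nBody text.\n'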
- class data_juicer.ops.mapper.RemoveHeaderMapper(drop_no_head: bool = True, *args, **kwargs)[source]¶
Bases:
Mapper
Removes headers at the beginning of documents in LaTeX samples.
This operator identifies and removes headers such as chapter, part, section, subsection, subsubsection, paragraph, and subparagraph. It uses a regular expression to match these headers. If a sample does not contain any headers and drop_no_head is set to True, the sample text will be removed. Otherwise, the sample remains unchanged. The operator processes samples in batches for efficiency.
- class data_juicer.ops.mapper.RemoveLongWordsMapper(min_len: int = 1, max_len: int = 9223372036854775807, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to remove words whose length falls outside a specific range.
This operator filters out words in the text that are either shorter than the specified minimum length or longer than the specified maximum length. Words are first checked with their original length, and if they do not meet the criteria, they are stripped of special characters and re-evaluated. The key metric used is the character-based length of each word. The processed text retains only the words that fall within the defined length range. This operator processes text in batches for efficiency.
- __init__(min_len: int = 1, max_len: int = 9223372036854775807, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
min_len – The minimum word length in this op; words will be filtered out if their length is below this parameter.
max_len – The maximum word length in this op; words will be filtered out if their length exceeds this parameter.
args – extra args
kwargs – extra args
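A simplified sketch of the length filter described above; the operator additionally strips special characters and re-checks words before discarding them.

min_len, max_len = 3, 15
text = 'a supercalifragilisticexpialidocious sentence with ordinary words'
kept = [w for w in text.split() if min_len <= len(w) <= max_len]
cleaned = ' '.join(kept)
# cleaned -> 'sentence with ordinary words'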
- class data_juicer.ops.mapper.RemoveNonChineseCharacterlMapper(keep_alphabet: bool = True, keep_number: bool = True, keep_punc: bool = True, *args, **kwargs)[source]¶
Bases:
Mapper
Removes non-Chinese characters from text samples.
This mapper removes all characters that are not part of the Chinese character set.
It can optionally keep alphabets, numbers, and punctuation based on the configuration.
The removal is done using a regular expression pattern, constructed to exclude or include alphabets, numbers, and punctuation as specified.
The key metric for this operation is the presence of non-Chinese characters, which are removed.
The operator processes samples in a batched manner.
- __init__(keep_alphabet: bool = True, keep_number: bool = True, keep_punc: bool = True, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
keep_alphabet – whether to keep alphabet
keep_number – whether to keep number
keep_punc – whether to keep punctuation
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.RemoveRepeatSentencesMapper(lowercase: bool = False, ignore_special_character: bool = True, min_repeat_sentence_length: int = 2, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to remove repeat sentences in text samples.
This operator processes text samples to remove duplicate sentences. It splits the text into lines and then further splits each line into sentences. Sentences are considered duplicates if they are identical after optional case normalization and special character removal. The operator uses a hash set to track unique sentences. Sentences shorter than min_repeat_sentence_length are not deduplicated. If ignore_special_character is enabled, special characters (all except Chinese, letters, and numbers) are ignored when checking for duplicates. The resulting text is reassembled with unique sentences.
- __init__(lowercase: bool = False, ignore_special_character: bool = True, min_repeat_sentence_length: int = 2, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
lowercase – Whether to convert sample text to lower case
ignore_special_character – Whether to ignore special characters when judging repeated sentences. Special characters are all characters except Chinese characters, letters and numbers.
min_repeat_sentence_length – Sentences shorter than this length will not be deduplicated. If ignore_special_character is set to True, then special characters are not included in this length.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.RemoveSpecificCharsMapper(chars_to_remove: str | List[str] = '◆●■►▼▲▴∆▻▷❖♡□', *args, **kwargs)[source]¶
Bases:
Mapper
Removes specific characters from text samples.
This operator removes specified characters from the text. The characters to be removed can be provided as a string or a list of strings. If no characters are specified, the default set includes special and non-alphanumeric characters. The operator processes the text using a regular expression pattern that matches any of the specified characters and replaces them with an empty string. This is done in a batched manner for efficiency.
- class data_juicer.ops.mapper.RemoveTableTextMapper(min_col: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=2), Le(le=20)])] = 2, max_col: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=2), Le(le=20)])] = 20, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to remove table texts from text samples.
This operator uses regular expressions to identify and remove tables from the text. It targets tables with a specified range of columns, defined by the minimum and maximum number of columns. The operator iterates over each sample, applying the regex pattern to remove tables that match the column criteria. The processed text, with tables removed, is then stored back in the sample. This operation is batched for efficiency.
- __init__(min_col: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=2), Le(le=20)])] = 2, max_col: Annotated[int, FieldInfo(annotation=NoneType, required=True, metadata=[Ge(ge=2), Le(le=20)])] = 20, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
min_col – The min number of columns of table to remove.
max_col – The max number of columns of table to remove.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.RemoveWordsWithIncorrectSubstringsMapper(lang: str = 'en', tokenization: bool = False, substrings: List[str] | None = None, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to remove words containing specified incorrect substrings.
This operator processes text by removing words that contain any of the specified incorrect substrings. By default, it removes words with substrings like “http”, “www”, “.com”, “href”, and “//”. The operator can operate in tokenized or non-tokenized mode. No key metric is computed; this operator focuses on filtering out specific words.
If tokenization is True, the text is tokenized using a Hugging Face tokenizer, and words are filtered based on the specified substrings.
If tokenization is False, the text is split into sentences and words, and words are filtered based on the specified substrings.
The filtered text is then merged back into a single string.
The operator processes samples in batches and updates the text in place.
- __init__(lang: str = 'en', tokenization: bool = False, substrings: List[str] | None = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
lang – sample in which language
tokenization – whether to use model to tokenize documents
substrings – The incorrect substrings in words.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.ReplaceContentMapper(pattern: str | List[str] | None = None, repl: str | List[str] = '', *args, **kwargs)[source]¶
Bases:
Mapper
Replaces content in the text that matches a specific regular expression pattern with a designated replacement string.
This operator processes text by searching for patterns defined in pattern and replacing them with the corresponding repl string. If multiple patterns and replacements are provided, each pattern is replaced by its respective replacement. The operator supports both single and multiple patterns and replacements. The regular expressions are compiled with the re.DOTALL flag to match across multiple lines. If the length of the patterns and replacements do not match, a ValueError is raised. This operation is batched, meaning it processes multiple samples at once.
- class data_juicer.ops.mapper.SDXLPrompt2PromptMapper(hf_diffusion: str = 'stabilityai/stable-diffusion-xl-base-1.0', trust_remote_code=False, torch_dtype: str = 'fp32', num_inference_steps: float = 50, guidance_scale: float = 7.5, text_key=None, text_key_second=None, output_dir='/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]¶
Bases:
Mapper
Generates pairs of similar images using the SDXL model.
This operator uses a Hugging Face diffusion model to generate image pairs based on two text prompts. The quality and similarity of the generated images are controlled by parameters such as num_inference_steps and guidance_scale. The first and second text prompts are specified using text_key and text_key_second, respectively. The generated images are saved in the specified output_dir with unique filenames. The operator requires both text keys to be set for processing.
- __init__(hf_diffusion: str = 'stabilityai/stable-diffusion-xl-base-1.0', trust_remote_code=False, torch_dtype: str = 'fp32', num_inference_steps: float = 50, guidance_scale: float = 7.5, text_key=None, text_key_second=None, output_dir='/home/runner/.cache/data_juicer/assets', *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_diffusion – diffusion model name on huggingface to generate the image.
trust_remote_code – whether to trust the remote code of HF models.
torch_dtype – the floating point type used to load the diffusion model.
num_inference_steps – The larger the value, the better the image generation quality; however, this also increases the time required for generation.
guidance_scale – A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
text_key – the key name used to store the first caption in the caption pair.
text_key_second – the key name used to store the second caption in the caption pair.
output_dir – the storage location of the generated images.
- class data_juicer.ops.mapper.SentenceAugmentationMapper(hf_model: str = 'Qwen/Qwen2-7B-Instruct', system_prompt: str = None, task_sentence: str = None, max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, text_key=None, text_key_second=None, *args, **kwargs)[source]¶
Bases:
Mapper
Augments sentences by generating enhanced versions using a Hugging Face model. This operator enhances input sentences by generating new, augmented versions. It is designed to work best with individual sentences rather than full documents. For optimal results, ensure the input text is at the sentence level. The augmentation process uses a Hugging Face model, such as lmsys/vicuna-13b-v1.5 or Qwen/Qwen2-7B-Instruct. The operator requires specifying both the primary and secondary text keys, where the augmented sentence will be stored in the secondary key. The generation process can be customized with parameters like temperature, top-p sampling, and beam search size.
- __init__(hf_model: str = 'Qwen/Qwen2-7B-Instruct', system_prompt: str = None, task_sentence: str = None, max_new_tokens=256, temperature=0.2, top_p=None, num_beams=1, text_key=None, text_key_second=None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_model – Huggingface model ID.
system_prompt – System prompt.
task_sentence – The instruction for the current task.
max_new_tokens – the maximum number of new tokens generated by the model.
temperature – used to control the randomness of generated text. The higher the temperature, the more random and creative the generated text will be.
top_p – randomly select the next word from the group of words whose cumulative probability reaches p.
num_beams – the larger the beam search size, the higher the quality of the generated text.
text_key – the key name used to store the first sentence in the text pair. (optional, default=’text’)
text_key_second – the key name used to store the second sentence in the text pair.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.SentenceSplitMapper(lang: str = 'en', *args, **kwargs)[source]¶
Bases:
Mapper
Splits text samples into individual sentences based on the specified language.
This operator uses an NLTK-based tokenizer to split the input text into sentences. The language for the tokenizer is specified during initialization. The original text in each sample is replaced with a list of sentences. This operator processes samples in batches for efficiency. Ensure that the lang parameter is set to the appropriate language code (e.g., “en” for English) to achieve accurate sentence splitting.
- class data_juicer.ops.mapper.TextChunkMapper(max_len: Annotated[int, Gt(gt=0)] | None = None, split_pattern: str | None = '\\n\\n', overlap_len: Annotated[int, Ge(ge=0)] = 0, tokenizer: str | None = None, trust_remote_code: bool = False, *args, **kwargs)[source]¶
Bases:
Mapper
Split input text into chunks based on specified criteria.
Splits the input text into multiple chunks using a specified maximum length and a split pattern.
If max_len is provided, the text is split into chunks with a maximum length of max_len.
If split_pattern is provided, the text is split at occurrences of the pattern. If the length exceeds max_len, it will force a cut.
The overlap_len parameter specifies the overlap length between consecutive chunks if the split does not occur at the pattern.
Uses a Hugging Face tokenizer to calculate the text length in tokens if a tokenizer name is provided; otherwise, it uses the string length.
Caches the following stats: ‘chunk_count’ (number of chunks generated for each sample).
Raises a ValueError if both max_len and split_pattern are None or if overlap_len is greater than or equal to max_len.
- __init__(max_len: Annotated[int, Gt(gt=0)] | None = None, split_pattern: str | None = '\\n\\n', overlap_len: Annotated[int, Ge(ge=0)] = 0, tokenizer: str | None = None, trust_remote_code: bool = False, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
max_len – Split the text into multiple chunks with this maximum length if it is not None.
split_pattern – Split at occurrences of this pattern if it is not None, forcing a cut if the resulting length still exceeds max_len.
overlap_len – Overlap length between consecutive chunks when the split does not occur at the pattern.
tokenizer – The tokenizer name from Hugging Face tokenizers. If provided, the text length is calculated as the token count; otherwise, the text length equals the string length. Supports tiktoken tokenizers (such as gpt-4o), dashscope tokenizers (such as qwen2.5-72b-instruct), and huggingface tokenizers.
trust_remote_code – whether to trust the remote code of HF models.
args – extra args
kwargs – extra args
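A construction sketch; the chunk sizes are arbitrary, and since no tokenizer name is given, lengths are measured in characters.

from data_juicer.ops.mapper import TextChunkMapper

# Split on the default paragraph pattern, force a cut whenever a chunk would
# exceed 200 characters, and overlap 20 characters on forced cuts.
op = TextChunkMapper(max_len=200, overlap_len=20)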
- class data_juicer.ops.mapper.VideoCaptioningFromAudioMapper(keep_original_sample: bool = True, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to caption a video according to its audio streams based on Qwen-Audio model.
- __init__(keep_original_sample: bool = True, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only captioned sample in the final datasets and the original sample will be removed. It’s True in default.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.VideoCaptioningFromFramesMapper(hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, horizontal_flip: bool = False, vertical_flip: bool = False, *args, **kwargs)[source]¶
Bases:
Mapper
Generates video captions from sampled frames using an image-to-text model. Captions from different frames are concatenated into a single string.
Uses a Hugging Face image-to-text model to generate captions for sampled video frames.
Supports different frame sampling methods: ‘all_keyframes’ or ‘uniform’.
Can apply horizontal and vertical flips to the frames before captioning.
Offers multiple strategies for retaining generated captions: ‘random_any’, ‘similar_one_simhash’, or ‘all’.
Optionally keeps the original sample in the final dataset.
Allows setting a global prompt or per-sample prompts to guide caption generation.
Generates a specified number of candidate captions per video, which can be reduced based on the selected retention strategy.
The number of output samples depends on the retention strategy and whether original samples are kept.
- __init__(hf_img2seq: str = 'Salesforce/blip2-opt-2.7b', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, horizontal_flip: bool = False, vertical_flip: bool = False, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_img2seq – model name on huggingface to generate caption
trust_remote_code – whether to trust the remote code of HF models.
caption_num – how many candidate captions to generate for each video
keep_candidate_mode – retain strategy for the generated $caption_num$ candidates.
’random_any’: Retain a random one from the generated captions.
’similar_one_simhash’: Retain the generated caption that is most similar to the original caption.
’all’: Retain all generated captions by concatenation.
Note
This is a batched_OP, whose input and output types are both list. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote caption_num as $M$. For ‘random_any’ and ‘similar_one_simhash’ modes, the number of total samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when keep_original_sample is False. For ‘all’ mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when keep_original_sample is False.
- Parameters:
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated captions in the final datasets and the original captions will be removed. It’s True in default.
prompt – a string prompt to guide the generation of image-to-text model for all samples globally. It’s None in default, which means no prompt provided.
prompt_key – the key name of fields in samples to store prompts for each sample. It’s used to set different prompts for different samples. If it’s none, use prompt in parameter “prompt”. It’s None in default.
frame_sampling_method – sampling method for extracting frames from the videos. Should be one of [“all_keyframes”, “uniform”]. The former extracts all key frames (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. Default: “all_keyframes”.
frame_num – the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is “uniform”. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.
horizontal_flip – flip frame video horizontally (left to right).
vertical_flip – flip frame video vertically (top to bottom).
args – extra args
kwargs – extra args
- process_batched(samples, rank=None, context=False)[source]¶
- Parameters:
samples
- Returns:
Note
This is a batched_OP, whose input and output types are both list. Suppose there are $N$ input sample lists with batch size $b$, and denote caption_num as $M$. The number of total samples after generation is $2Nb$ for ‘random_any’ and ‘similar_one_simhash’ modes, and $(1+M)Nb$ for ‘all’ mode.
- class data_juicer.ops.mapper.VideoCaptioningFromSummarizerMapper(hf_summarizer: str = None, trust_remote_code: bool = False, consider_video_caption_from_video: bool = True, consider_video_caption_from_audio: bool = True, consider_video_caption_from_frames: bool = True, consider_video_tags_from_audio: bool = True, consider_video_tags_from_frames: bool = True, vid_cap_from_vid_args: Dict | None = None, vid_cap_from_frm_args: Dict | None = None, vid_tag_from_aud_args: Dict | None = None, vid_tag_from_frm_args: Dict | None = None, keep_tag_num: Annotated[int, Gt(gt=0)] = 5, keep_original_sample: bool = True, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to generate video captions by summarizing several kinds of generated texts (captions from video/audio/frames, tags from audio/frames, …)
- __init__(hf_summarizer: str = None, trust_remote_code: bool = False, consider_video_caption_from_video: bool = True, consider_video_caption_from_audio: bool = True, consider_video_caption_from_frames: bool = True, consider_video_tags_from_audio: bool = True, consider_video_tags_from_frames: bool = True, vid_cap_from_vid_args: Dict | None = None, vid_cap_from_frm_args: Dict | None = None, vid_tag_from_aud_args: Dict | None = None, vid_tag_from_frm_args: Dict | None = None, keep_tag_num: Annotated[int, Gt(gt=0)] = 5, keep_original_sample: bool = True, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_summarizer – the summarizer model used to summarize texts generated by other methods.
trust_remote_code – whether to trust the remote code of HF models.
consider_video_caption_from_video – whether to consider the video caption generated from video directly in the summarization process. Default: True.
consider_video_caption_from_audio – whether to consider the video caption generated from audio streams in the video in the summarization process. Default: True.
consider_video_caption_from_frames – whether to consider the video caption generated from sampled frames from the video in the summarization process. Default: True.
consider_video_tags_from_audio – whether to consider the video tags generated from audio streams in the video in the summarization process. Default: True.
consider_video_tags_from_frames – whether to consider the video tags generated from sampled frames from the video in the summarization process. Default: True.
vid_cap_from_vid_args – the arg dict for video captioning from video directly with keys are the arg names and values are the arg values. Default: None.
vid_cap_from_frm_args – the arg dict for video captioning from sampled frames from the video with keys are the arg names and values are the arg values. Default: None.
vid_tag_from_aud_args – the arg dict for video tagging from audio streams in the video with keys are the arg names and values are the arg values. Default: None.
vid_tag_from_frm_args – the arg dict for video tagging from sampled frames from the video with keys are the arg names and values are the arg values. Default: None.
keep_tag_num – max number N of tags from sampled frames to keep. Too many tags might bring negative influence to summarized text, so we consider to only keep the N most frequent tags. Default: 5.
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only summarized captions in the final datasets and the original captions will be removed. It’s True in default.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.VideoCaptioningFromVideoMapper(hf_video_blip: str = 'kpyu/video-blip-opt-2.7b-ego4d', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, horizontal_flip: bool = False, vertical_flip: bool = False, *args, **kwargs)[source]¶
Bases:
Mapper
Generates video captions using a Hugging Face video-to-text model and sampled video frames.
This operator processes video samples to generate captions based on the provided video frames. It uses a Hugging Face video-to-text model, such as ‘kpyu/video-blip-opt-2.7b-ego4d’, to generate multiple caption candidates for each video. The number of generated captions and the strategy to keep or filter these candidates can be configured. The operator supports different frame sampling methods, including extracting all keyframes or uniformly sampling a specified number of frames. Additionally, it allows for horizontal and vertical flipping of the frames. The final output can include both the original sample and the generated captions, depending on the configuration.
- __init__(hf_video_blip: str = 'kpyu/video-blip-opt-2.7b-ego4d', trust_remote_code: bool = False, caption_num: Annotated[int, Gt(gt=0)] = 1, keep_candidate_mode: str = 'random_any', keep_original_sample: bool = True, prompt: str | None = None, prompt_key: str | None = None, frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, horizontal_flip: bool = False, vertical_flip: bool = False, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_video_blip – video-blip model name on huggingface to generate caption
trust_remote_code – whether to trust the remote code of HF models.
caption_num – how many candidate captions to generate for each video
keep_candidate_mode – retain strategy for the generated $caption_num$ candidates.
’random_any’: Retain a random one from the generated captions.
’similar_one_simhash’: Retain the generated caption that is most similar to the original caption.
’all’: Retain all generated captions by concatenation.
Note
This is a batched_OP, whose input and output types are both list. Suppose there are $N$ lists of input samples, each with batch size $b$, and denote caption_num as $M$. For ‘random_any’ and ‘similar_one_simhash’ modes, the number of total samples after generation is $2Nb$ when keep_original_sample is True and $Nb$ when keep_original_sample is False. For ‘all’ mode, it is $(1+M)Nb$ when keep_original_sample is True and $MNb$ when keep_original_sample is False.
- Parameters:
keep_original_sample – whether to keep the original sample. If it’s set to False, there will be only generated captions in the final datasets and the original captions will be removed. It’s True in default.
prompt – a string prompt to guide the generation of video-blip model for all samples globally. It’s None in default, which means no prompt provided.
prompt_key – the key name of fields in samples to store prompts for each sample. It’s used to set different prompts for different samples. If it’s none, use prompt in parameter “prompt”. It’s None in default.
frame_sampling_method – sampling method for extracting frames from the videos. Should be one of [“all_keyframes”, “uniform”]. The former extracts all key frames (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. Default: “all_keyframes”.
frame_num – the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is “uniform”. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.
horizontal_flip – flip frame video horizontally (left to right).
vertical_flip – flip frame video vertically (top to bottom).
args – extra args
kwargs – extra args
- process_batched(samples, rank=None, context=False)[source]¶
- Parameters:
samples
- Returns:
Note
This is a batched_OP, whose input and output types are both list. Suppose there are $N$ input sample lists with batch size $b$, and denote caption_num as $M$. The number of total samples after generation is $2Nb$ for ‘random_any’ and ‘similar_one_simhash’ modes, and $(1+M)Nb$ for ‘all’ mode.
- class data_juicer.ops.mapper.VideoExtractFramesMapper(frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, frame_dir: str = None, frame_key='video_frames', *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to extract frames from video files according to specified methods.
Extracts frames from video files using either all keyframes or a uniform sampling method. The extracted frames are saved in a directory, and the mapping from video keys to frame directories is stored in the sample’s metadata. The data format for the extracted frames is a dictionary mapping video keys to their respective frame directories:
“video_key_1”: “/${frame_dir}/video_key_1_filename/”
“video_key_2”: “/${frame_dir}/video_key_2_filename/”
Frame Sampling Methods:
“all_keyframes”: Extracts all keyframes from the video.
“uniform”: Extracts a specified number of frames uniformly from the video.
If duration is set, the video is segmented into multiple segments based on the duration, and frames are extracted from each segment.
The output directory for the frames can be specified; otherwise, a default directory is used.
The field name in the sample’s metadata where the frame information is stored can be customized.
- __init__(frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, duration: float = 0, frame_dir: str = None, frame_key='video_frames', *args, **kwargs)[source]¶
Initialization method.
- Parameters:
frame_sampling_method – sampling method for extracting frames from the videos. Should be one of [“all_keyframes”, “uniform”]. The former extracts all key frames (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. If “duration” > 0, frame_sampling_method acts on every segment. Default: “all_keyframes”.
frame_num – the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is “uniform”. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration. If “duration” > 0, frame_num is the number of frames per segment.
duration – The duration of each segment in seconds. If 0, frames are extracted from the entire video. If duration > 0, the video is segmented into multiple segments based on duration, and frames are extracted from each segment.
frame_dir – Output directory to save extracted frames. If None, a default directory based on the video file path is used.
frame_key – The name of field to save generated frames info.
args – extra args
kwargs – extra args
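A construction sketch; the output directory and segment settings are arbitrary examples.

from data_juicer.ops.mapper import VideoExtractFramesMapper

# Sample 5 frames uniformly from every 10-second segment and record the
# video-key -> frame-directory mapping under the 'video_frames' meta field.
op = VideoExtractFramesMapper(
    frame_sampling_method='uniform',
    frame_num=5,
    duration=10,
    frame_dir='./extracted_frames',
    frame_key='video_frames',
)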
- class data_juicer.ops.mapper.VideoFFmpegWrappedMapper(filter_name: str | None = None, filter_kwargs: Dict | None = None, global_args: List[str] | None = None, capture_stderr: bool = True, overwrite_output: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Wraps FFmpeg video filters for processing video files in a dataset.
This operator applies a specified FFmpeg video filter to each video file in the dataset. It supports passing keyword arguments to the filter and global arguments to the FFmpeg command line. The processed videos are saved in a specified directory or the same directory as the input files. If no filter name is provided, the videos remain unmodified. The operator updates the source file paths in the dataset to reflect any changes.
- __init__(filter_name: str | None = None, filter_kwargs: Dict | None = None, global_args: List[str] | None = None, capture_stderr: bool = True, overwrite_output: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
filter_name – ffmpeg video filter name.
filter_kwargs – keyword-arguments passed to ffmpeg filter.
global_args – list-arguments passed to ffmpeg command-line.
capture_stderr – whether to capture stderr.
overwrite_output – whether to overwrite output file.
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
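A construction sketch that wraps FFmpeg’s scale filter; the filter arguments are standard FFmpeg scale options, and the save directory is an arbitrary example.

from data_juicer.ops.mapper import VideoFFmpegWrappedMapper

# Rescale every video to 1280x720 and save the results to a separate directory.
op = VideoFFmpegWrappedMapper(
    filter_name='scale',
    filter_kwargs={'width': 1280, 'height': 720},
    save_dir='./processed_videos',
)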
- class data_juicer.ops.mapper.VideoFaceBlurMapper(cv_classifier: str = '', blur_type: str = 'gaussian', radius: float = 2, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Mapper to blur faces detected in videos.
This operator uses an OpenCV classifier for face detection and applies a specified blur type to the detected faces. The default classifier is ‘haarcascade_frontalface_alt.xml’. Supported blur types include ‘mean’, ‘box’, and ‘gaussian’. The radius of the blur kernel can be adjusted. If a save directory is not provided, the processed videos will be saved in the same directory as the input files. The DJ_PRODUCED_DATA_DIR environment variable can also be used to specify the save directory.
- __init__(cv_classifier: str = '', blur_type: str = 'gaussian', radius: float = 2, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
cv_classifier – OpenCV classifier path for face detection. By default, we will use ‘haarcascade_frontalface_alt.xml’.
blur_type – Type of blur kernel, including [‘mean’, ‘box’, ‘gaussian’].
radius – Radius of blur kernel.
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.VideoRemoveWatermarkMapper(roi_strings: List[str] = ['0,0,0.1,0.1'], roi_type: str = 'ratio', roi_key: str | None = None, frame_num: Annotated[int, Gt(gt=0)] = 10, min_frame_threshold: Annotated[int, Gt(gt=0)] = 7, detection_method: str = 'pixel_value', save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Remove watermarks from videos based on specified regions.
This operator removes watermarks from video frames by detecting and masking the watermark areas. It supports two detection methods: ‘pixel_value’ and ‘pixel_diversity’. The regions of interest (ROIs) for watermark detection can be specified as either pixel coordinates or ratios of the frame dimensions. The operator extracts a set number of frames uniformly from the video to detect watermark pixels. A pixel is considered part of a watermark if it meets the detection criteria in a minimum number of frames. The cleaned video is saved in the specified directory or the same directory as the input file if no save directory is provided.
- __init__(roi_strings: List[str] = ['0,0,0.1,0.1'], roi_type: str = 'ratio', roi_key: str | None = None, frame_num: Annotated[int, Gt(gt=0)] = 10, min_frame_threshold: Annotated[int, Gt(gt=0)] = 7, detection_method: str = 'pixel_value', save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
roi_strings – a given list of regions the watermarks locate. The format of each can be “x1, y1, x2, y2”, “(x1, y1, x2, y2)”, or “[x1, y1, x2, y2]”.
roi_type – the roi string type. When the type is ‘pixel’, (x1, y1), (x2, y2) are the locations of pixels in the top left corner and the bottom right corner respectively. If the roi_type is ‘ratio’, the coordinates are normalized by widths and heights.
roi_key – the key name of fields in samples to store roi_strings for each sample. It’s used to set different rois for different samples. If it’s none, use rois in parameter “roi_strings”. It’s None in default.
frame_num – the number of frames to be extracted uniformly from the video to detect the pixels of watermark.
min_frame_threshold – a coordinate is considered the location of a watermark pixel when it is detected as such in no fewer than min_frame_threshold frames.
detection_method – the method to detect the pixels of watermark. If it is ‘pixel_value’, we consider the distribution of pixel value in each frame. If it is ‘pixel_diversity’, we will consider the pixel diversity in different frames. The min_frame_threshold is ignored and frame_num must be greater than 1 in ‘pixel_diversity’ mode.
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
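The two ROI conventions can be illustrated with a short sketch; the values below are made-up examples, not recommended defaults:

```python
from data_juicer.ops.mapper import VideoRemoveWatermarkMapper

# Ratio-based ROI: mask the top-left 10% x 10% corner of every frame.
op_ratio = VideoRemoveWatermarkMapper(
    roi_strings=['0, 0, 0.1, 0.1'],
    roi_type='ratio',
    frame_num=10,
    min_frame_threshold=7,
    detection_method='pixel_value',
)

# Pixel-based ROI: mask an absolute region whose top-left corner is (1080, 20)
# and bottom-right corner is (1280, 100).
op_pixel = VideoRemoveWatermarkMapper(
    roi_strings=['(1080, 20, 1280, 100)'],
    roi_type='pixel',
    frame_num=10,                         # must be > 1 in 'pixel_diversity' mode
    detection_method='pixel_diversity',   # min_frame_threshold is ignored here
)
```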
- class data_juicer.ops.mapper.VideoResizeAspectRatioMapper(min_ratio: str = '9/21', max_ratio: str = '21/9', strategy: str = 'increase', save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Resizes videos to fit within a specified aspect ratio range. This operator adjusts the dimensions of videos to ensure their aspect ratios fall within a defined range. It can either increase or decrease the video dimensions based on the specified strategy. The aspect ratio is calculated as width divided by height. If a video’s aspect ratio is outside the given range, it will be resized to match the closest boundary (either the minimum or maximum ratio). The min_ratio and max_ratio should be provided as strings in the format “9:21” or “9/21”. The resizing process uses the ffmpeg library to handle the actual video scaling. Videos that do not need resizing are left unchanged. The operator supports saving the modified videos to a specified directory or the same directory as the input files.
- STRATEGY = ['decrease', 'increase']¶
- __init__(min_ratio: str = '9/21', max_ratio: str = '21/9', strategy: str = 'increase', save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
min_ratio – The minimum aspect ratio to enforce; videos with an aspect ratio below min_ratio will be resized to match this minimum ratio. The ratio should be provided as a string in the format “9:21” or “9/21”.
max_ratio – The maximum aspect ratio to enforce; videos with an aspect ratio above max_ratio will be resized to match this maximum ratio. The ratio should be provided as a string in the format “21:9” or “21/9”.
strategy – The resizing strategy to apply when adjusting the video dimensions. It can be either ‘decrease’ to reduce the dimension or ‘increase’ to enlarge it. Accepted values are [‘decrease’, ‘increase’].
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
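The clamping rule itself is easy to restate in plain Python; the helper below is an illustrative sketch of the decision (which boundary, if any, a video is resized toward), not the operator's actual implementation:

```python
from fractions import Fraction
from typing import Optional

def target_ratio(width: int, height: int,
                 min_ratio: str = '9/21', max_ratio: str = '21/9') -> Optional[Fraction]:
    """Return the boundary ratio a video should be clamped to, or None if its
    aspect ratio (width / height) already lies inside [min_ratio, max_ratio]."""
    ratio = Fraction(width, height)
    lo = Fraction(min_ratio.replace(':', '/'))   # accepts "9:21" or "9/21"
    hi = Fraction(max_ratio.replace(':', '/'))
    if ratio < lo:
        return lo
    if ratio > hi:
        return hi
    return None

print(target_ratio(720, 1920))   # 3/8 < 9/21, so clamp to Fraction(3, 7)
print(target_ratio(1920, 1080))  # 16/9 is inside the range, so None
```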
- class data_juicer.ops.mapper.VideoResizeResolutionMapper(min_width: int = 1, max_width: int = 9223372036854775807, min_height: int = 1, max_height: int = 9223372036854775807, force_original_aspect_ratio: str = 'disable', force_divisible_by: Annotated[int, Gt(gt=0)] = 2, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Resizes video resolution based on specified width and height constraints.
This operator resizes videos to fit within the provided minimum and maximum width and height limits. It can optionally maintain the original aspect ratio by adjusting the dimensions accordingly. The resized videos are saved in the specified directory or the same directory as the input if no save directory is provided. The key metric for resizing is the video’s width and height, which are adjusted to meet the constraints while maintaining the aspect ratio if configured. The force_divisible_by parameter ensures that the output dimensions are divisible by a specified integer, which must be a positive even number when used with aspect ratio adjustments.
- __init__(min_width: int = 1, max_width: int = 9223372036854775807, min_height: int = 1, max_height: int = 9223372036854775807, force_original_aspect_ratio: str = 'disable', force_divisible_by: Annotated[int, Gt(gt=0)] = 2, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
min_width – Videos with width less than ‘min_width’ will be mapped to videos with equal or bigger width.
max_width – Videos with width more than ‘max_width’ will be mapped to videos with equal or smaller width.
min_height – Videos with height less than ‘min_height’ will be mapped to videos with equal or bigger height.
max_height – Videos with height more than ‘max_height’ will be mapped to videos with equal or smaller height.
force_original_aspect_ratio – Enable decreasing or increasing output video width or height if necessary to keep the original aspect ratio, including [‘disable’, ‘decrease’, ‘increase’].
force_divisible_by – Ensures that both output dimensions, width and height, are divisible by the given integer when used together with force_original_aspect_ratio; must be a positive even number.
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
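The interaction between the width/height bounds and force_divisible_by can be pictured with a small helper. This is only a sketch of the constraint (clamp, then round to a multiple of the divisor); the operator delegates the real scaling to ffmpeg:

```python
def clamp_and_round(width: int, height: int,
                    min_w: int = 1, max_w: int = 3840,
                    min_h: int = 1, max_h: int = 2160,
                    divisor: int = 2):
    """Clamp the dimensions into the allowed range, then round each one to the
    nearest multiple of `divisor` (illustrative, not the exact ffmpeg behavior)."""
    w = min(max(width, min_w), max_w)
    h = min(max(height, min_h), max_h)
    w = max(divisor, round(w / divisor) * divisor)
    h = max(divisor, round(h / divisor) * divisor)
    return w, h

print(clamp_and_round(4093, 2301))  # (3840, 2160): too large, clamped to the maxima
print(clamp_and_round(639, 361))    # (640, 360): each dimension rounded to the nearest even value
```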
- class data_juicer.ops.mapper.VideoSplitByDurationMapper(split_duration: float = 10, min_last_split_duration: float = 0, keep_original_sample: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Splits videos into segments based on a specified duration.
This operator splits each video in the dataset into smaller segments, each with a fixed duration. The last segment is discarded if its duration is less than the specified minimum last split duration. The original sample can be kept or removed based on the keep_original_sample parameter. The generated video files are saved in the specified directory or, if not provided, in the same directory as the input files. The key quantity for this operation is the duration of each segment, measured in seconds.
Splits videos into segments of a specified duration.
Discards the last segment if it is shorter than the minimum allowed duration.
Keeps or removes the original sample based on the keep_original_sample parameter.
Saves the generated video files in the specified directory or the input file’s directory.
Uses the duration in seconds to determine the segment boundaries.
- __init__(split_duration: float = 10, min_last_split_duration: float = 0, keep_original_sample: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
split_duration – duration of each video split in seconds.
min_last_split_duration – The minimum allowable duration in seconds for the last video split. If the duration of the last split is less than this value, it will be discarded.
keep_original_sample – whether to keep the original sample. If set to False, only the cut samples will remain in the final dataset and the original sample will be removed. Defaults to True.
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
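The splitting rule can be restated as a boundary computation in plain Python; the sketch below mirrors the described behavior (fixed-length cuts, drop a too-short trailing piece) without touching any video files:

```python
def split_boundaries(total_duration: float,
                     split_duration: float = 10.0,
                     min_last_split_duration: float = 3.0):
    """Cut [0, total_duration] into fixed-length segments, dropping a trailing
    segment shorter than min_last_split_duration (illustrative sketch only)."""
    segments = []
    start = 0.0
    while start < total_duration:
        end = min(start + split_duration, total_duration)
        is_last = end >= total_duration
        if not is_last or end - start >= min_last_split_duration:
            segments.append((start, end))
        start = end
    return segments

# A 33 s video keeps its final 3 s piece when the minimum is 3 s;
# with min_last_split_duration=5.0 that last piece would be discarded.
print(split_boundaries(33.0))  # [(0.0, 10.0), (10.0, 20.0), (20.0, 30.0), (30.0, 33.0)]
```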
- class data_juicer.ops.mapper.VideoSplitByKeyFrameMapper(keep_original_sample: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Splits a video into segments based on key frames.
This operator processes video data by splitting it into multiple segments at key frame boundaries. It uses the key frames to determine where to make the splits. The original sample can be kept or discarded based on the keep_original_sample parameter. If save_dir is specified, the split video files will be saved in that directory; otherwise, they will be saved in the same directory as the input files. The operator processes each video in the sample and updates the sample with the new video keys and text placeholders. The Fields.source_file field is updated to reflect the new video segments. This operator works in batch mode, processing multiple samples at once.
- __init__(keep_original_sample: bool = True, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
keep_original_sample – whether to keep the original sample. If set to False, only the split samples will remain in the final dataset and the original sample will be removed. Defaults to True.
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
- class data_juicer.ops.mapper.VideoSplitBySceneMapper(detector: str = 'ContentDetector', threshold: Annotated[float, Ge(ge=0)] = 27.0, min_scene_len: Annotated[int, Ge(ge=0)] = 15, show_progress: bool = False, save_dir: str = None, *args, **kwargs)[source]¶
Bases:
Mapper
Splits videos into scene clips based on detected scene changes.
This operator uses a specified scene detector to identify and split video scenes. It supports three types of detectors: ContentDetector, ThresholdDetector, and AdaptiveDetector. The operator processes each video in the sample, detects scenes, and splits the video into individual clips. The minimum length of a scene can be set, and progress can be shown during processing. The resulting clips are saved in the specified directory or the same directory as the input files if no save directory is provided. The operator also updates the text field in the sample to reflect the new video clips. If a video does not contain any scenes, it remains unchanged.
- __init__(detector: str = 'ContentDetector', threshold: Annotated[float, Ge(ge=0)] = 27.0, min_scene_len: Annotated[int, Ge(ge=0)] = 15, show_progress: bool = False, save_dir: str = None, *args, **kwargs)[source]¶
Initialization method.
- Parameters:
detector – Algorithm from scenedetect.detectors. Should be one of [‘ContentDetector’, ‘ThresholdDetector’, ‘AdaptiveDetector’].
threshold – Threshold passed to the detector.
min_scene_len – Minimum length of any scene.
show_progress – Whether to show progress from scenedetect.
save_dir – The directory where generated video files will be stored. If not specified, outputs will be saved in the same directory as their corresponding input files. This path can alternatively be defined by setting the DJ_PRODUCED_DATA_DIR environment variable.
args – extra args
kwargs – extra args
- avaliable_detectors = {'AdaptiveDetector': ['window_width', 'min_content_val', 'weights', 'luma_only', 'kernel_size', 'video_manager', 'min_delta_hsv'], 'ContentDetector': ['weights', 'luma_only', 'kernel_size'], 'ThresholdDetector': ['fade_bias', 'add_final_scene', 'method', 'block_size']}¶
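A short configuration sketch for the scene splitter. Passing the detector-specific keyword arguments listed in avaliable_detectors straight to the constructor is an assumption about how they are routed, so treat the second instance as illustrative:

```python
from data_juicer.ops.mapper import VideoSplitBySceneMapper

# Content-based detection with a slightly stricter threshold and a longer
# minimum scene length (in frames, roughly one second at 24 fps).
op = VideoSplitBySceneMapper(
    detector='ContentDetector',
    threshold=30.0,
    min_scene_len=24,
    show_progress=False,
)

# Adaptive detection; window_width and luma_only appear in avaliable_detectors
# for 'AdaptiveDetector', so they are presumably forwarded to scenedetect.
op_adaptive = VideoSplitBySceneMapper(
    detector='AdaptiveDetector',
    threshold=3.0,
    window_width=2,
    luma_only=True,
)
```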
- class data_juicer.ops.mapper.VideoTaggingFromAudioMapper(hf_ast: str = 'MIT/ast-finetuned-audioset-10-10-0.4593', trust_remote_code: bool = False, tag_field_name: str = 'video_audio_tags', *args, **kwargs)[source]¶
Bases:
Mapper
Generates video tags from audio streams using the Audio Spectrogram Transformer.
This operator extracts audio streams from videos and uses a Hugging Face Audio Spectrogram Transformer (AST) model to generate tags. The tags are stored in the specified metadata field, defaulting to ‘video_audio_tags’. If no valid audio stream is found, the tag is set to ‘EMPTY’. The operator resamples audio to match the model’s required sampling rate if necessary. The tags are inferred based on the highest logit value from the model’s output. If the tags are already present in the sample, the operator skips processing for that sample.
- __init__(hf_ast: str = 'MIT/ast-finetuned-audioset-10-10-0.4593', trust_remote_code: bool = False, tag_field_name: str = 'video_audio_tags', *args, **kwargs)[source]¶
Initialization method.
- Parameters:
hf_ast – path or name of the Hugging Face AST model used to tag audio streams.
trust_remote_code – whether to trust the remote code of HF models.
tag_field_name – the field name to store the tags. Defaults to “video_audio_tags”.
args – extra args
kwargs – extra args
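A minimal usage sketch for the audio tagger. The sample layout and the process_single entry point are assumptions for illustration; where exactly the tag field lands in the sample (top level or a nested meta field) is not confirmed by this page:

```python
from data_juicer.ops.mapper import VideoTaggingFromAudioMapper

op = VideoTaggingFromAudioMapper(
    hf_ast='MIT/ast-finetuned-audioset-10-10-0.4593',
    tag_field_name='video_audio_tags',
)

# Hypothetical sample: one video whose soundtrack is classified by the AST model.
sample = {'text': 'a concert clip', 'videos': ['demo/concert.mp4']}
tagged = op.process_single(sample)   # assumed single-sample entry point
# The result carries tags under 'video_audio_tags', e.g. ['Music'],
# or ['EMPTY'] if the video has no valid audio stream.
```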
- class data_juicer.ops.mapper.VideoTaggingFromFramesMapper(frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, tag_field_name: str = 'video_frame_tags', *args, **kwargs)[source]¶
Bases:
Mapper
Generates video tags from frames extracted from videos.
This operator extracts frames from videos and generates tags based on the content of these frames. The frame extraction method can be either “all_keyframes” or “uniform”. For “all_keyframes”, all keyframes are extracted, while for “uniform”, a specified number of frames are extracted uniformly across the video. The tags are generated using a pre-trained model and stored in the specified field name. If the tags are already present in the sample, the operator skips processing.
Important notes:
Uses a Hugging Face tokenizer and a pre-trained model for tag generation.
If no video is present in the sample, an empty tag array is stored.
Frame tensors are processed to generate tags, which are then sorted by frequency and stored.
- __init__(frame_sampling_method: str = 'all_keyframes', frame_num: Annotated[int, Gt(gt=0)] = 3, tag_field_name: str = 'video_frame_tags', *args, **kwargs)[source]¶
Initialization method.
- Parameters:
frame_sampling_method – sampling method for extracting frame images from the videos. Should be one of [“all_keyframes”, “uniform”]. The former extracts all key frames (the number of which depends on the duration of the video) and the latter extracts a specified number of frames uniformly from the video. Default: “all_keyframes”.
frame_num – the number of frames to be extracted uniformly from the video. Only works when frame_sampling_method is “uniform”. If it’s 1, only the middle frame will be extracted. If it’s 2, only the first and the last frames will be extracted. If it’s larger than 2, in addition to the first and the last frames, other frames will be extracted uniformly within the video duration.
tag_field_name – the field name to store the tags. Defaults to “video_frame_tags”.
args – extra args
kwargs – extra args
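The “uniform” sampling rule (middle frame for 1, first and last for 2, evenly spaced otherwise) can be restated as a small index helper; this is a sketch of the described rule, not the operator's code:

```python
def uniform_frame_indices(total_frames: int, frame_num: int):
    """Indices of frames sampled uniformly: 1 -> middle frame, 2 -> first and
    last, >2 -> first, last, and evenly spaced frames in between."""
    if frame_num == 1:
        return [total_frames // 2]
    if frame_num == 2:
        return [0, total_frames - 1]
    step = (total_frames - 1) / (frame_num - 1)
    return [round(i * step) for i in range(frame_num)]

print(uniform_frame_indices(100, 1))  # [50]
print(uniform_frame_indices(100, 2))  # [0, 99]
print(uniform_frame_indices(100, 5))  # [0, 25, 50, 74, 99]
```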
- class data_juicer.ops.mapper.WhitespaceNormalizationMapper(*args, **kwargs)[source]¶
Bases:
Mapper
Normalizes various types of whitespace characters to standard spaces in text samples.
This mapper converts all non-standard whitespace characters, such as tabs and newlines, to the standard space character (‘ ’, 0x20). It also trims leading and trailing whitespace from the text. This ensures consistent spacing across all text samples, improving readability and consistency. The normalization process is based on a comprehensive list of whitespace characters, which can be found at https://en.wikipedia.org/wiki/Whitespace_character.
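A regex-based sketch of the same idea (the operator itself works from an explicit list of whitespace characters rather than this exact pattern):

```python
import re

def normalize_whitespace(text: str) -> str:
    # Map any Unicode whitespace character (tab, newline, non-breaking space, ...)
    # to a plain 0x20 space, then trim leading and trailing whitespace.
    return re.sub(r'\s', ' ', text).strip()

print(repr(normalize_whitespace('hello\tworld\u00a0!\n')))  # 'hello world !'
```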