infineac.process_event.events_to_corpus#

infineac.process_event.events_to_corpus(events: list[dict], nlp_model, keywords: list[str] | dict[str, int] = [], modifier_words: list[str] = ['disregarding', 'except', 'excluding', 'ignoring', 'leaving out', 'not including', 'omitting'], sections: str = 'all', context_window_sentence: tuple[int, int] | int = 0, join_adjacent_sentences: bool = True, subsequent_paragraphs: int = 0, extract_answers: bool = False, return_type: str = 'list', lemmatize: bool = True, lowercase: bool = True, remove_stopwords: bool = True, remove_punctuation: bool = True, remove_numeric: bool = True, remove_currency: bool = True, remove_space: bool = True, remove_keywords: bool = True, remove_names: bool = True, remove_strategies: bool | dict[str, list[str]] = True, remove_additional_stopwords: bool | list[str] = True) → DataFrame[source]#

Converts a list of events to a corpus (list of texts).

This is a wrapper function that calls extract_passages_from_events(), corpus_list_to_dataframe() and infineac.process_text.process_corpus(). It extracts the corpus from the events and processes it with the infineac.process_text module according to the given parameters.

Parameters:
  • events (list[dict]) – List of dicts containing the events.

  • nlp_model (spacy.lang) – NLP model.

  • lemmatize (bool, default: True) – If document should be lemmatized.

  • keywords (list[str] | dict[str, int], default: []) – List of keywords to search for in the events and extract the corresponding passages. If keywords is a dictionary, the keys are the keywords.

  • modifier_words (list[str], default: MODIFIER_WORDS) – List of modifier words, which must not precede the keyword; a keyword preceded by a modifier word is not counted as a match.

  • sections (str, default: "all") – Section of the event to extract the passages from. Either “all”, “presentation” or “qa”.

  • context_window_sentence (tuple[int, int] | int, default: 0) – The context window of the sentences to be extracted. Either an integer or a tuple of length 2. The first element of the tuple is the number of sentences extracted before the sentence containing the keyword, the second element the number of sentences extracted after it. If a single integer is provided, the same number of sentences is extracted before and after the keyword. If an element is -1, all sentences before or after the keyword are extracted, e.g. the entire paragraph.

  • join_adjacent_sentences (bool, default: True) – Whether to join adjacent sentences.

  • subsequent_paragraphs (int, default: 0) – Number of subsequent paragraphs to extract after the one containing a keyword.

  • extract_answers (bool, default: False) – If True, entire answers to questions that include a keyword are also extracted.

  • return_type (str, default: "list") – The return type of the method. Either “str” or “list”.

  • lowercase (bool, default: True) – If document should be lowercased.

  • remove_stopwords (bool, default: True) – If stopwords should be removed from document.

  • remove_punctuation (bool, default: True) – If punctuation should be removed from document.

  • remove_numeric (bool, default: True) – If numerics should be removed from document.

  • remove_currency (bool, default: True) – If currency symbols should be removed from document.

  • remove_space (bool, default: True) – If spaces should be removed from document.

  • remove_keywords (bool, default: True) – If keywords should be removed from document.

  • remove_names (bool, default: True) – If participant names should be removed from document.

  • remove_strategies (bool | dict[str, list[str]], default: True) – If the strategy keywords should be removed from document.

  • remove_additional_stopwords (bool | list[str], default: True) – If additional stopwords should be removed from document.
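The context_window_sentence argument accepts either a single int or a (before, after) tuple. A minimal, illustrative sketch of how such an argument can be normalized (this helper is not part of infineac; it only mirrors the behavior described above):

```python
def normalize_context_window(window):
    """Illustrative only (not part of infineac): normalize a context-window
    argument to a (before, after) tuple of sentence counts.
    A value of -1 means "all sentences" on that side of the keyword."""
    if isinstance(window, int):
        # A single int applies symmetrically before and after the keyword.
        return (window, window)
    if isinstance(window, tuple) and len(window) == 2:
        return window
    raise ValueError("window must be an int or a tuple of length 2")

print(normalize_context_window(2))        # (2, 2)
print(normalize_context_window((1, -1)))  # (1, -1): one sentence before, all after
```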

Returns:

The corpus as a polars DataFrame with index columns indicating the position of each text in the corpus (event - presentation or qa - part - paragraph - sentence), the original text and the processed text.

Return type:

pl.DataFrame
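A hypothetical usage sketch, assuming a loaded spaCy model and an events list in the format expected by infineac; only parameters documented above are used, and the keyword "inflation" is an illustrative choice:

```python
import spacy

import infineac.process_event as process_event

nlp_model = spacy.load("en_core_web_sm")  # assumes this spaCy model is installed
events = ...  # list of event dicts, loaded elsewhere

# Extract passages containing the keyword "inflation" from the Q&A sections,
# with one sentence of context before and after each match.
corpus_df = process_event.events_to_corpus(
    events,
    nlp_model,
    keywords=["inflation"],
    sections="qa",
    context_window_sentence=1,
)
print(corpus_df.columns)  # index columns, original text, processed text
```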