analyzeJson method

Future<TextDocument> analyzeJson(
    {required Map<String, dynamic> document,
    required TextAnalyzer analyzer,
    NGramRange? nGramRange,
    TokenFilter? tokenFilter,
    Iterable<String>? zones}
)

Hydrates a TextDocument from the document, zones and analyzer parameters. The static factory:

  • extracts the sourceText from the zones in a JSON document, inserting line-ending marks between the zones; then
  • splits the sourceText into paragraphs, sentences, terms and nGrams in the nGramRange using the analyzer; and then
  • uses the analyzer to tokenize the sourceText (applying the tokenFilter, if provided) and populate the tokens property.
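The first step above, joining the selected zones into a single sourceText, can be sketched as a small stand-alone function. This is a minimal illustration only, not the package's actual extension method; the helper name and the choice of `'\n'` as the line-ending mark are assumptions:

```dart
/// Sketch of zone extraction: concatenates the String values of the selected
/// [zones] (or all values, if [zones] is null), inserting a line-ending mark
/// between them. Hypothetical helper, for illustration only.
String zonesToSourceText(Map<String, dynamic> document,
    [Iterable<String>? zones]) {
  final keys = zones ?? document.keys;
  return keys
      .map((zone) => document[zone])
      .whereType<String>() // skip zones whose value is not a String
      .join('\n');
}

void main() {
  final json = {
    'name': 'Royal Gala Apples',
    'description': 'Crisp and sweet.',
    'price': 4.99 // non-String values are skipped
  };
  print(zonesToSourceText(json, ['name', 'description']));
  // Royal Gala Apples
  // Crisp and sweet.
}
```

Non-String fields are silently skipped here; the real extraction may handle nested values or numbers differently.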

Implementation

static Future<TextDocument> analyzeJson(
    {required Map<String, dynamic> document,
    required TextAnalyzer analyzer,
    NGramRange? nGramRange,
    TokenFilter? tokenFilter,
    Iterable<String>? zones}) async {
  // Extract the source text from the selected zones of the JSON document.
  final sourceText = document.toSourceText(zones);
  // Tokenize the document, applying the tokenFilter if one was provided.
  final tokens = await analyzer.jsonTokenizer(document,
      zones: zones, tokenFilter: tokenFilter);
  // Split the source text into terms, n-grams, sentences and paragraphs.
  final terms = analyzer.termSplitter(sourceText);
  final nGrams = terms.nGrams(nGramRange ?? NGramRange(1, 2));
  final sentences = analyzer.sentenceSplitter(sourceText);
  final paragraphs = analyzer.paragraphSplitter(sourceText);
  // Build a co-occurrence graph from the keyword phrases in the tokens.
  final keywords = tokens.toPhrases();
  final graph = TermCoOccurrenceGraph(keywords);
  // Sum the syllable counts of all terms.
  final syllableCount = terms.map((e) => analyzer.syllableCounter(e)).sum;
  return _TextDocumentImpl(sourceText, zones, tokens, paragraphs, sentences,
      terms, nGrams, syllableCount, graph);
}
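A typical call might look like the following sketch. It assumes an `English()` `TextAnalyzer` implementation is available; the JSON field names and the choice of zones are invented for illustration and are not part of this API:

```dart
// Hypothetical usage sketch; field names and analyzer are assumptions.
final doc = await TextDocument.analyzeJson(
    document: {
      'name': 'Royal Gala Apples',
      'description': 'Crisp, sweet and juicy.'
    },
    analyzer: English(),
    nGramRange: NGramRange(1, 3),
    zones: ['name', 'description']);
print(doc.tokens.length);
```

Omitting `zones` extracts text from all fields of the document, and omitting `nGramRange` falls back to the default `NGramRange(1, 2)` shown in the implementation above.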