Reverse Dependencies of mwparserfromhell
The following projects have a declared dependency on mwparserfromhell:
- aidevkit — Utility modules used in the course of AI development
- archean — Extract Information from Wikimedia Dumps
- earwigbot — EarwigBot is a bot that edits Wikipedia and interacts over IRC
- Expanda — Integrated Corpus-Building Environment
- genwiki2024 — nicesprinkler
- gobbet — Random news articles in any language
- goodwiki — Utility that converts Wikipedia pages into GitHub-flavored Markdown.
- graphbrain — Knowledge System + Natural Language Understanding
- graphite-datasets — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- h2ogpt — no summary
- hf-datasets — HuggingFace/NLP is an open library of NLP datasets.
- invisible-rabbit — Scalable Data Preprocessing Tool for Training Large Language Models
- invisible-unicorn — Scalable Data Preprocessing Tool for Training Large Language Models
- kazu — Biomedical Named Entity Recognition and Entity Linking for Enterprise use cases
- kdap — KDAP is a package to analyze knowledge data
- langchain_1111_Dev_cerebrum — Building applications with LLMs through composability
- langchain-by-johnsnowlabs — Building applications with LLMs through composability
- langchain-xfyun — Use iFLYTEK Spark LLMs seamlessly within LangChain
- langchaincoexpert — Building applications with LLMs through composability
- langchainmsai — Building applications with LLMs through composability
- langchainn — Building applications with LLMs through composability
- langumo — The unified corpus building environment for Language Models.
- llm-datasets — A collection of datasets for language model training, including scripts for downloading, preprocessing, and sampling.
- lm-datasets — A collection of datasets for language model training, including scripts for downloading, preprocessing, and sampling.
- mwbot — A customized Python wrapper library for the MediaWiki API
- mwcomposerfromhell — Convert the parsed MediaWiki wikicode (using mwparserfromhell) to HTML.
- mwedittypes — Edit diffs and type detection for Wikipedia
- mwsimpleedittypes — Edit diffs and type detection for Wikipedia (simple)
- mwtext — A set of utilities for processing MediaWiki text.
- nemo-curator — Scalable Data Preprocessing Tool for Training Large Language Models
- nlp — HuggingFace/NLP is an open library of NLP datasets.
- ommlx — ommlx
- oplangchain — langchain for OpenPlugin
- osw — Python toolset for data processing, queries, wikicode generation and page manipulation
- paniniwikiparser — Parses wiki XML
- py-3rdparty-mediawiki — Wrapper for mwclient with improvements for 3rd party wikis
- pyMetaModel — no summary
- pywikibot — Python MediaWiki Bot Framework
- pywikibot-scripts — Pywikibot Scripts Collection
- raha — Raha and Baran: An End-to-End Data Cleaning System
- revscoring — A set of utilities for generating quality scores for MediaWiki revisions
- rstojnic-tfds-nightly — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- scribe-data — Wikidata and Wikipedia data extraction for Scribe applications
- taxon2wikipedia — Add taxa to the Portuguese (pt) Wikipedia
- tensorflow-datasets — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- tfds-nightly — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- tfds-nightly-gradient — tensorflow/datasets is a library of datasets ready to use with TensorFlow.
- tibiawikisql — Python script that generates a SQLite database from TibiaWiki articles
- vectorcraft — A custom library extending LangChain functionality.
- wikiciteparser — A parser for Wikipedia citation templates
- wikidata-bot-framework — A framework for making Wikidata bots.
- wikiexpand — Expansion engine for MediaWiki wiki pages based on mwparserfromhell
- wikinet — Network of Wikipedia articles
- wikipedia-histories — A Python tool to pull the complete edit history of a Wikipedia page
- wikipedia2vec — A tool for learning vector representations of words and entities from Wikipedia
- wikipedia2vecsm — A tool for learning vector representations of words and entities from Wikipedia
- wikirec — Recommendation engine framework based on Wikipedia data
- wikitables — Import tables from any Wikipedia article
- wikitext-asymptote — Custom wikitext parser that produces HTML, plain-text fields, and relevant links from Wikipedia page source code.
- wiktionary-de-parser — Extracts data from German Wiktionary dump files.
- youchoose — YouChoose is an open source recommendation library built on PyTorch.
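
The projects above all build on the same core mwparserfromhell API: parse wikitext into a tree, then filter it for templates, links, and other nodes, or strip markup to plain text. A minimal sketch of that pattern (the sample wikitext string is illustrative, not taken from any of the listed projects):

```python
import mwparserfromhell

text = "{{Infobox person|name=Ada Lovelace}} '''Ada''' was a [[mathematician]]."
wikicode = mwparserfromhell.parse(text)

# Walk the parse tree for templates and their parameter names.
for template in wikicode.filter_templates():
    print(str(template.name).strip(), [str(p.name) for p in template.params])

# Extract wikilink targets and a plain-text rendering of the page.
print([str(link.title) for link in wikicode.filter_wikilinks()])
print(wikicode.strip_code().strip())
```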