Reverse Dependencies of w3lib
The following projects have a declared dependency on w3lib:
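For context, a "declared dependency" means w3lib appears in the project's packaging metadata. Below is a minimal sketch of such a declaration, assuming a setuptools-based project; the project name and version constraint are illustrative, not taken from any of the listed packages.

```python
# setup.py — minimal sketch of a project declaring a dependency on w3lib.
# The project name and version pin are illustrative assumptions.
from setuptools import setup, find_packages

setup(
    name="example-crawler",   # hypothetical project name
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "w3lib>=2.0",         # the declared dependency picked up by reverse-dependency indexes
    ],
)
```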
- aio-scrapy — A high-level Web Crawling and Web Scraping framework based on Asyncio
- aioscpy — An asyncio + aiolibs crawler framework that imitates Scrapy
- aioscrapy-redis — A mini spider framework that integrates aiohttp into Scrapy
- AioSpider-zly — High-concurrency asynchronous web crawler framework
- Aminer-Scrapy — A high-level Web Crawling and Web Scraping framework
- anews — A Des of anews
- anywebsearch — Unified internet search across different search engines: Google, Bing(ddg), Brave, Qwant, Yandex
- archivebox — Self-hosted internet archiving solution.
- autopager — Detect and classify pagination links on web pages
- baotool — BaoTool (宝图), a personal collection of Python utilities
- beowulf-python — Official python beowulf library.
- board-game-scraper — Board games data scraping and processing from BoardGameGeek and more!
- bondai — An AI-powered console assistant with a versatile API for seamless integration into applications.
- BRAD-Chat — This package connects large language models with bioinformatics workflows.
- bricks-py — quickly build your crawler
- cacheutils — A bunch of cache utils.
- changyong — no summary
- clutch.co-scraper — clutch.co-scraper is a command-line application written in Python that scrapes and saves information about firms according to the user-defined filters.
- cometai-core — A simple example package
- custard — Custard: easy to learn, fast to code, ready for production
- cyberplant-Scrapy — A high-level Web Crawling and Web Scraping framework
- czmlpy — Python 3 library to write CZML
- directory-client-core — Python common code for Directory API clients.
- dpay — Official Python dPay library.
- duplicate-url-discarder — Discarding duplicate URLs based on rules.
- E-Sic — Package for automating data collection from the E-Sic portal; you can obtain data on the questions and answers, and even download the attached files.
- eagle-kaist — Stock Extractor library
- elegance-spider — A spider framework
- enex2notion — Import Evernote ENEX files to Notion
- eostalk — a fork of steem-python for eostalk blockchain
- espider — easy spider
- example_demo — A small example package test
- extruct — Extract embedded metadata from HTML markup
- fastutil — common python util
- feedsearch-crawler — Search sites for RSS, Atom, and JSON feeds
- fin-indicator — A library that calculates financial indicators and different signals resulting from Japanese candlestick patterns and indicators.
- FirstImpression — First Python library
- Flask-Async-Commit — Commit to the database asynchronously via Redis caching and the APS framework
- fluentCrawler — A decorator crawler
- form2request — Build HTTP requests out of HTML forms
- fxportia — Convert portia spider definitions to python scrapy spiders
- gesp — Convenient scraping of German court decisions
- golodranets — Fork of official python STEEM library for Golos blockchain
- golos-lib-python — Python library for Golos blockchain
- Greek-scraper — Ultra-fast and efficient web scraper with GPU utilization for text cleaning and JSON output. Supports generic and language-specific scraping.
- haipproxy2 — High-availability proxy pool client for crawlers.
- har2tree — HTTP Archive (HAR) to ETE Toolkit generator
- harvesttext — no summary
- hasaki — hasaki is a package of common utilities for Python 3 development
- hivepy — A python hive library.
- hoopa — Asynchronous crawler micro-framework based on python.
- html-to-etree — Parse HTML to etree
- httpx-html — Web Scraping for Humans.
- iracema — Audio Content Analysis for Research on Musical Expressiveness and Individuality
- ivystar — python tools package of ivystar
- Kerko — A Flask blueprint that provides a faceted search interface for bibliographies based on Zotero.
- klat-connector — The Klat Chat API
- kraken-extract-from-html — Kraken Extract From HTML
- lightalg — lightalg
- lightbook — lightbook
- lightit — lightit
- lighttool — lighttool
- little-finger — tool pkg.
- lol-site-scraper — Bulk hosting sites scraper/downloader
- luis1996 — my description
- luisito1996 — my description
- luisito19963 — my description
- lznlp — LiangZhiNLP API wrapper
- masto — Masto OSINT Tool Python package for Mastodon user investigations.
- mplus — extensions
- new-frontera — A scalable frontier for web crawlers
- no-more-query-string — Remove unnecessary query-string parameters from a given URL, especially fbclid.
- openbalkans — A python implementation of OpenBalkans
- papers-dl — A command line application for downloading scientific papers
- parsel — Parsel is a library to extract data from HTML and XML using XPath and CSS selectors
- playwrightcapture — A simple library to capture websites using playwright
- pmc-xml — XML parser for PubMed Central (PMC) Database
- portia2code — Convert portia spider definitions to python scrapy spiders
- project-to-installer — no summary
- psyfar-downloader — Download Psyfar and convert to EPUB
- pubmed-xml — Pubmed XML Parser
- pyfunctions — This project contains many functions that can be used in daily development.
- pypubmed — Toolkits for NCBI Pubmed
- python-core — there is no description available
- python-golos — Python library for Golos blockchain
- python-utilities-jsm — Myriad python utilities.
- pytwitterscraper — Twitter Scraper using Python
- pyxbox — no summary
- quant1x — Quant1X quantitative trading framework
- RabbitSpider — no summary
- requests-html — HTML Parsing for Humans.
- requests-html-playwright — Requests-HTML (with microsoft/playwright-python): HTML Parsing for Humans™
- requests-htmlc — Fork of requests-html, powered by playwright
- requests-xml — XML Parsing for humans.
- ruiwen-data-all — no summary
- scala-wrapper — Scala Wrapper
- scrachy — Provides an SQLAlchemy-based cache storage backend, a Selenium middleware, and a few other utilities for working with Scrapy.
- scraper-factory — Scraping library to retrieve data from useful pages, such as Amazon wishlists
- scraper-project-rami — no summary
- scrapling — Scrapling is an undetectable, powerful, flexible, high-performance Python library that makes Web Scraping easy again!