Welcome to Trafilatura’s documentation!

Code: https://github.com/adbar/trafilatura
Documentation: https://trafilatura.readthedocs.io/
Issue tracker: https://github.com/adbar/trafilatura/issues

Description

Trafilatura is a Python package and command-line tool which seamlessly downloads, parses, and scrapes web page data: it can extract metadata, main body text and comments while preserving part of the text formatting and page structure. The output can then be converted to different formats.
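
Extraction also works offline: besides fetched URLs, the functions accept already downloaded HTML as a string or a parsed lxml tree. A minimal sketch, with placeholder markup:

>>> import trafilatura
>>> html_string = '<html><body><article><p>...</p></article></body></html>'  # placeholder document
>>> trafilatura.extract(html_string)
# returns the extracted text, or None if extraction fails
>>> from lxml import html as lxml_html
>>> tree = lxml_html.fromstring(html_string)  # parsed trees work as input too
>>> trafilatura.extract(tree)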

Distinguishing between a whole page and its essential parts helps alleviate many quality problems related to web texts by removing the noise caused by recurring elements: headers and footers, ads, link lists/blogrolls, and so on.

The extractor has to be precise enough not to miss texts or discard valid documents; it should be robust but also reasonably fast. Trafilatura is designed to run in production on millions of web documents.

Features

  • Seamless online (including page retrieval) or parallelized offline processing with URLs, HTML files or parsed HTML trees as input
  • Several output formats supported (see the sketch after this list):
    • Plain text (minimal formatting)
    • CSV (with metadata, tab-separated values)
    • JSON (with metadata)
    • XML (for metadata and structure)
    • TEI-XML
  • Robust extraction algorithm, using readability and jusText as fallback, reasonably efficient with lxml:
    • Focuses on main text and/or comments
    • Structural elements preserved: paragraphs, titles, lists, quotes, code, line breaks, in-line text formatting (experimental)
    • Extraction of metadata (title, author, date, site name, categories and tags)
  • URL lists:
    • Generation of link lists from ATOM/RSS feeds
    • Efficient processing of URL queue
    • Filtering via blacklists and skipping of already processed URLs
  • Optional language detection on the extracted content
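
A sketch of how these options combine in the Python API; parameter and function names follow recent releases and may differ across versions:

>>> import trafilatura
>>> downloaded = trafilatura.fetch_url('https://example.org/article')  # hypothetical URL
>>> # structured output, metadata included
>>> trafilatura.extract(downloaded, output_format='xml')
>>> # skip comments and filter by language of the extracted text
>>> trafilatura.extract(downloaded, include_comments=False, target_language='en')
>>> # derive a list of links from feeds discovered on a homepage
>>> from trafilatura import feeds
>>> feeds.find_feed_urls('https://example.org')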

Evaluation and alternatives

The extraction focuses on the main content: usually the part displayed centrally, without left or right bars, header or footer, but including potential titles and (optionally) comments. These tasks are also known as web scraping, boilerplate removal, DOM-based content extraction, main content identification, or web page cleaning.

For reproducible results see the evaluation page and the evaluation script.

Installation

Primarily with the Python package manager pip: pip install --upgrade trafilatura.

For more details please read the installation documentation.

Usage

With Python or on the command-line.

In a nutshell, with Python:

>>> import trafilatura
>>> downloaded = trafilatura.fetch_url('https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/')
>>> trafilatura.extract(downloaded)
# outputs main content and comments as plain text ...

On the command-line:

$ trafilatura -u "https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/"
# outputs main content and comments as plain text ...
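
Batch processing is also possible on the command-line; a sketch assuming current option names (file and directory names are placeholders, see trafilatura --help for your version):

$ trafilatura --xml -u "https://github.blog/2019-03-29-leader-spotlight-erin-spiceland/"
# same page as XML, including metadata
$ trafilatura -i list_of_urls.txt -o output_dir/
# reads URLs from a file (one per line) and writes one output file per document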

For more information please refer to quickstart, usage documentation and tutorials.

License

trafilatura is distributed under the GNU General Public License v3.0. If you wish to redistribute this library but feel bound by the license conditions, please consider interacting at arm's length, multi-licensing with compatible licenses, or contacting me.

See also GPL and free software licensing: What’s in it for business?

Going further

Trafilatura: Italian word for wire drawing.

Roadmap

  • [X] Language detection on the extracted content
  • [-] Duplicate detection at sentence, paragraph and document level using a least recently used (LRU) cache
  • [-] URL lists and document management
  • [ ] Sitemaps processing
  • [ ] Interaction with web archives (notably WARC format)
  • [ ] Configuration and extraction parameters
  • [ ] Integration of natural language processing tools

Contributing

Contributions are welcome!

Feel free to file issues on the dedicated page. Thanks to the contributors who submitted features and bugfixes!

Author

This effort is part of methods to derive information from web documents in order to build text databases for research (chiefly linguistic analysis and natural language processing). Extracting and pre-processing web texts presents a substantial challenge for those who must meet scientific expectations: web corpus construction involves numerous design decisions, and this software package facilitates text collection and improves corpus quality, thereby supporting those decisions.

DOI: 10.5281/zenodo.3460969

You can contact me via my contact page or GitHub.
