Core functions#

Extraction#

extract()#

trafilatura.extract(filecontent, url=None, record_id=None, no_fallback=False, favor_precision=False, favor_recall=False, include_comments=True, output_format='txt', tei_validation=False, target_language=None, include_tables=True, include_images=False, include_formatting=False, include_links=False, deduplicate=False, date_extraction_params=None, only_with_metadata=False, with_metadata=False, max_tree_size=None, url_blacklist=None, author_blacklist=None, settingsfile=None, prune_xpath=None, config=<configparser.ConfigParser object>, options=None, **kwargs)[source]#
Main function exposed by the package: a wrapper for text extraction and conversion to the chosen output format.

Parameters:
  • filecontent – HTML code as string.

  • url – URL of the webpage.

  • record_id – Add an ID to the metadata.

  • no_fallback – Skip the backup extraction with readability-lxml and justext.

  • favor_precision – Prefer less text but a more correct extraction.

  • favor_recall – When unsure, prefer more text.

  • include_comments – Extract comments along with the main text.

  • output_format – Define an output format: “csv”, “json”, “markdown”, “txt”, “xml”, and “xmltei”.

  • tei_validation – Validate the XML-TEI output with respect to the TEI standard.

  • target_language – Define a language to discard invalid documents (ISO 639-1 format).

  • include_tables – Take into account information within the HTML <table> element.

  • include_images – Take images into account (experimental).

  • include_formatting – Keep structural elements related to formatting (only valuable if output_format is set to XML).

  • include_links – Keep links along with their targets (experimental).

  • deduplicate – Remove duplicate segments and documents.

  • date_extraction_params – Provide extraction parameters to htmldate as dict().

  • only_with_metadata – Only keep documents featuring all essential metadata (date, title, url).

  • max_tree_size – Discard documents with too many elements.

  • url_blacklist – Provide a blacklist of URLs as set() to filter out documents.

  • author_blacklist – Provide a blacklist of Author Names as set() to filter out authors.

  • settingsfile – Use a configuration file to override the standard settings.

  • prune_xpath – Provide an XPath expression to prune the tree before extraction. Can be a str or a list of str.

  • config – Directly provide a configparser configuration.

  • options – Directly provide a whole extractor configuration.

Returns:

A string in the desired format or None.

bare_extraction()#

trafilatura.bare_extraction(filecontent, url=None, no_fallback=False, favor_precision=False, favor_recall=False, include_comments=True, output_format='python', target_language=None, include_tables=True, include_images=False, include_formatting=False, include_links=False, deduplicate=False, date_extraction_params=None, only_with_metadata=False, with_metadata=False, max_tree_size=None, url_blacklist=None, author_blacklist=None, as_dict=True, prune_xpath=None, config=<configparser.ConfigParser object>, options=None)[source]#

Internal function for text extraction returning bare Python variables.

Parameters:
  • filecontent – HTML code as string.

  • url – URL of the webpage.

  • no_fallback – Use faster heuristics and skip backup extraction.

  • favor_precision – Prefer less text but a more correct extraction.

  • favor_recall – When unsure, prefer more text.

  • include_comments – Extract comments along with the main text.

  • output_format – Define an output format; Python objects are the default and the point of this internal function. Other values: “csv”, “json”, “markdown”, “txt”, “xml”, and “xmltei”.

  • target_language – Define a language to discard invalid documents (ISO 639-1 format).

  • include_tables – Take into account information within the HTML <table> element.

  • include_images – Take images into account (experimental).

  • include_formatting – Keep structural elements related to formatting (present in XML format, converted to markdown otherwise).

  • include_links – Keep links along with their targets (experimental).

  • deduplicate – Remove duplicate segments and documents.

  • date_extraction_params – Provide extraction parameters to htmldate as dict().

  • only_with_metadata – Only keep documents featuring all essential metadata (date, title, url).

  • max_tree_size – Discard documents with too many elements.

  • url_blacklist – Provide a blacklist of URLs as set() to filter out documents.

  • author_blacklist – Provide a blacklist of Author Names as set() to filter out authors.

  • as_dict – Legacy option, return a dictionary instead of a class with attributes.

  • prune_xpath – Provide an XPath expression to prune the tree before extraction. Can be a str or a list of str.

  • config – Directly provide a configparser configuration.

  • options – Directly provide a whole extractor configuration.

Returns:

A Python dict() containing all the extracted information or None.

Raises:

ValueError – Extraction problem.

baseline()#

trafilatura.baseline(filecontent)[source]#

Baseline extraction function targeting text paragraphs and/or JSON metadata.

Parameters:

filecontent – HTML code as binary string or string.

Returns:

A tuple of three elements: an LXML <body> element containing the extracted paragraphs, the main text as a string, and its length as an integer.

html2txt()#

trafilatura.html2txt(content)[source]#

Run basic html2txt on a document.

Parameters:

content – HTML document as string or LXML element.

Returns:

The extracted text in the form of a string or an empty string.

try_readability()#

trafilatura.external.try_readability(htmlinput)[source]#

Safety net: try extraction with the generic readability algorithm.

try_justext()#

trafilatura.external.try_justext(tree, url, target_language)[source]#

Second safety net: try extraction with the generic justext algorithm.

extract_metadata()#

trafilatura.extract_metadata(filecontent, default_url=None, date_config=None, extensive=True, author_blacklist=None)[source]#

Main process for metadata extraction.

Parameters:
  • filecontent – HTML code as string.

  • default_url – Previously known URL of the downloaded document.

  • date_config – Provide extraction parameters to htmldate as dict().

  • author_blacklist – Provide a blacklist of Author Names as set() to filter out authors.

Returns:

A trafilatura.metadata.Document containing the extracted metadata information, or None. trafilatura.metadata.Document has an .as_dict() method that returns a copy as a dict.

extract_comments()#

trafilatura.core.extract_comments(tree, options)[source]#

Try to extract comments out of potential sections in the HTML.

Helpers#

fetch_url()#

trafilatura.fetch_url(url, decode=True, no_ssl=False, config=<configparser.ConfigParser object>, options=None)[source]#

Downloads a web page and seamlessly decodes the response.

Parameters:
  • url – URL of the page to fetch.

  • no_ssl – Don’t try to establish a secure connection (to prevent SSLError).

  • config – Pass configuration values for output control.

  • options – Extraction options (supersedes config).

Returns:

Unicode string or None in case of failed downloads and invalid results.

fetch_response()#

trafilatura.fetch_response(url, *, decode=False, no_ssl=False, with_headers=False, config=<configparser.ConfigParser object>)[source]#

Downloads a web page and returns a full response object.

Parameters:
  • url – URL of the page to fetch.

  • decode – Use html attribute to decode the data (boolean).

  • no_ssl – Don’t try to establish a secure connection (to prevent SSLError).

  • with_headers – Keep track of the response headers.

  • config – Pass configuration values for output control.

Returns:

Response object or None in case of failed downloads and invalid results.

decode_file()#

trafilatura.utils.decode_file(filecontent)[source]#

Check if the bytestring could be GZip-compressed and decompress it if so, then guess the bytestring's encoding and try to decode it to a Unicode string; resort to destructive conversion otherwise.

load_html()#

trafilatura.load_html(htmlobject)[source]#

Load the object given as input and validate its type (accepted: lxml.html tree, trafilatura/urllib3 response, bytestring, and string).

sanitize()#

trafilatura.utils.sanitize(text, preserve_space=False, trailing_space=False)[source]#

Convert text while discarding incompatible and invalid characters.

trim()#

trafilatura.utils.trim(string)[source]#

Remove unnecessary spaces within a text string.

XML processing#

xmltotxt()#

trafilatura.xml.xmltotxt(xmloutput, include_formatting)[source]#

Convert to plain text format and optionally preserve formatting as markdown.

validate_tei()#

trafilatura.xml.validate_tei(xmldoc)[source]#

Check whether an XML document conforms to the guidelines of the Text Encoding Initiative.