Beyond 'Reader Mode' With Machine Learning

Researchers from South Korea have used machine learning to develop an improved method for extracting actual content from web pages so that the ‘furniture’ of a web page – such as sidebars, footers and navigation headers, as well as advertisement blocks – disappears for the reader.
Though such functionality is either built into most popular web browsers or easily available via extensions and plugins, these technologies rely on semantic formatting that may not be present in a web page, or which may have been deliberately obfuscated by the site owner in order to prevent the reader from hiding the 'full-fat' experience of the page.
One of our own web pages 'slimmed down' with Firefox's integrated Reader View functionality.
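To see why markup-dependent extraction is fragile, consider a minimal sketch of a tag-based reader mode. This is not the implementation used by any real browser, just an illustration: it keeps only text inside `<p>` tags, so simply renaming those tags defeats it.

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Naive reader-mode sketch: keep only text found inside <p> tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0       # current nesting level of <p> tags
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.depth += 1
    def handle_endtag(self, tag):
        if tag == "p":
            self.depth = max(0, self.depth - 1)
    def handle_data(self, data):
        if self.depth and data.strip():
            self.chunks.append(data.strip())

def extract(html):
    parser = ParagraphExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

semantic = "<nav>Menu</nav><p>The article body.</p><footer>Ads</footer>"
# Same visible page, but the site owner has swapped <p> for a styled <div>:
obfuscated = "<nav>Menu</nav><div class='t'>The article body.</div>"

print(extract(semantic))     # the body text is recovered
print(extract(obfuscated))   # nothing: the markup no longer signals content
```

The second call returns an empty string even though a human reader sees an identical page, which is exactly the failure mode a purely visual approach sidesteps.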
Instead, the new method uses a grid-based system that iterates through the web page, evaluating how pertinent the content is to the core aim of the page.
The content extraction pipeline first divides the page into a grid (upper row), before evaluating the relationship of identified pertinent cells to other cells (middle), and finally merging the approved cells (bottom).
Once a pertinent cell is identified, its relationship with nearby cells is also evaluated before being merged into the interpreted ‘core content’.
The central idea of the approach is to abandon code-based markup as an index of relevance (e.g. the HTML tags that would normally denote the beginning of a paragraph, which can be replaced with alternate tags that 'fool' screen readers and utilities such as Reader View), and instead deduce the content based solely on its visual appearance.
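The identify-then-merge step described above can be sketched as a flood fill over a grid of per-cell pertinence scores: start from the page's centre cell, and grow a region over neighbouring cells whose score clears a threshold. This is a simplified illustration, not the paper's algorithm; the scores are hypothetical stand-ins for the visual features the researchers actually compute.

```python
from collections import deque

def grid_center_expand(scores, threshold=0.5):
    """Grow a region of 'core content' cells outward from the centre cell,
    merging 4-connected neighbours whose pertinence score >= threshold."""
    rows, cols = len(scores), len(scores[0])
    start = (rows // 2, cols // 2)
    seen, region = {start}, []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if scores[r][c] < threshold:
            continue                      # low-scoring cell: page furniture
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return sorted(region)

# High-scoring cells form the 'core content'; the border mimics sidebars,
# headers and footers.
page = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.9, 0.9, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
print(grid_center_expand(page))
```

The merged cell coordinates, here the four central cells, would then be mapped back to the page regions they cover to produce the extracted content.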
The approach, called Grid-Center-Expand (GCE), has been extended by the researchers into Deep Neural Network (DNN) models that exploit Google's TabNet, an interpretable tabular learning architecture.
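A tabular learner such as TabNet consumes one flat feature row per example, so a pipeline of this shape would need to turn each rendered grid cell into such a row. The sketch below shows what that might look like; the feature names and values are illustrative assumptions, not the features the paper actually uses.

```python
# Hypothetical per-cell visual features, arranged as one tabular row per
# grid cell, the kind of input a tabular model like TabNet consumes.

def cell_features(cell):
    """Turn one rendered grid cell into a flat feature row."""
    area = cell["w"] * cell["h"]
    return {
        "norm_x": cell["x"] / cell["page_w"],        # horizontal position
        "norm_y": cell["y"] / cell["page_h"],        # vertical position
        "text_density": cell["chars"] / max(area, 1),
        "link_ratio": cell["link_chars"] / max(cell["chars"], 1),
    }

cells = [
    # A central, text-heavy cell (likely core content)...
    {"x": 400, "y": 600, "w": 600, "h": 300, "chars": 1800,
     "link_chars": 40, "page_w": 1400, "page_h": 3000},
    # ...and a link-dense sidebar cell (likely page furniture).
    {"x": 1100, "y": 600, "w": 250, "h": 300, "chars": 300,
     "link_chars": 280, "page_w": 1400, "page_h": 3000},
]
rows = [cell_features(c) for c in cells]
print(rows[0]["link_ratio"], rows[1]["link_ratio"])
```

Note how the sidebar cell's text is almost entirely link text, a visually apparent signal that needs no knowledge of the underlying markup or the page's language.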
The paper is titled Don’t read, just look: Main content extraction from web pages using visually apparent features, and comes from three researchers at Hanyang University, and one from the Institute of Convergence Technology, all located in Seoul.
Improved extraction of core web page content is potentially valuable not only for the casual end-user, but also for machine systems that are tasked with ingesting or indexing domain content for the purposes of Natural Language Processing (NLP), and other sectors in AI.
As it stands, if non-relevant content is included in such extraction processes, it may need to be manually filtered (or labeled), at great expense; worse, if unwanted content is mixed in with the core content, it could affect how that content is interpreted, and skew the output of transformer and encoder/decoder systems that rely on clean input.
An improved method, the researchers argue, is especially necessary because existing approaches often fail with non-English web pages.
French, Japanese and Russian web pages are noted as scoring worst in success rates for the four most common ‘Reader View’ approaches: Mozilla’s Readability.js; Google’s DOM Distiller; Web2Text; and Boilernet.
The researchers compiled dataset material from English keywords in the GoogleTrends-2017 and GoogleTrends-2020 datasets, though they observe that, in terms of results, there were no practical differences between the two.
Additionally, the authors gathered non-English keywords from South Korea, France, Japan, Russia, Indonesia and Saudi Arabia. Chinese keywords were added from a Baidu dataset, since Google Trends could not offer Chinese data.
In testing the system, the authors found that it offers the same level of performance as recent DNN models, while accommodating a wider variety of languages.
For instance, the Boilernet architecture, while maintaining good performance in extracting pertinent content, adapts poorly to Chinese and Japanese datasets, while Web2Text, the authors find, has ‘relatively poor performance’ all round, with linguistic features that are not multilingual, and are unsuited for extracting central content from web pages.
Mozilla’s Readability.js was found to achieve acceptable performance across multiple languages, including English, despite being a rule-based method. However, the researchers found that its performance dropped notably on the Japanese and French datasets, highlighting the limitations of attempting to capture region-specific characteristics entirely through rule-based approaches.
Meanwhile Google’s DOM Distiller, which blends heuristics and machine learning approaches, was found to perform well across the board.
Table of results for methods tested during the project, including the researchers’ own GCE module. Higher numbers are better.
The researchers conclude that ‘GCE does not need to keep up with the rapidly changing web environment because it relies on human nature—genuinely global and multilingual features’.
Freelance writer and editor, primarily on machine learning, artificial intelligence and big data.
Copyright © 2021 Unite.AI
