LSCrawler: A Framework for an Enhanced Focused Web Crawler Based on Link Semantics
Yuvarani M, Iyengar N.Ch.S.N,
Published in IEEE
2006
Pages: 794 - 797
Abstract
The traditional task of a focused web crawler is to harvest a collection of web documents that are focused on topical subspaces. The core difficulty for focused crawlers is identifying the next most important and relevant link to follow. Focused crawlers mostly rely on probabilistic models to predict the relevance of documents. Web documents are well characterized by their hypertext, and this hypertext can be used to determine a document's relevance to the search domain: the semantics of a link characterizes the semantics of the document it refers to. In this article, a novel focused crawler named LSCrawler is proposed. The LSCrawler system retrieves documents by estimating the relevance of a document from the keywords in the link and the text surrounding the link. Document relevance is computed by measuring the semantic similarity between the keywords in the link and the taxonomy hierarchy of the specific domain. The system exhibits better recall because it exploits the semantics of the keywords in the link. © 2006 IEEE.
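To make the idea concrete, here is a minimal, illustrative sketch (not the paper's implementation) of the approach the abstract describes: score an outgoing link by how well its anchor text and surrounding text match a domain taxonomy, and crawl the highest-scoring link next. The taxonomy terms, weights, and scoring function below are assumptions for demonstration only.

```python
import heapq

# Hypothetical taxonomy hierarchy for a "web mining" search domain;
# more specific concepts are given smaller weights. The real LSCrawler
# uses a taxonomy hierarchy of the chosen domain and a semantic
# similarity measure, not this simple keyword lookup.
TAXONOMY = {
    "web mining": 1.0,
    "focused crawler": 0.8,
    "link analysis": 0.8,
    "anchor text": 0.6,
}


def link_score(anchor_text: str, surrounding_text: str) -> float:
    """Sum taxonomy weights for every concept found in or around the link."""
    text = f"{anchor_text} {surrounding_text}".lower()
    return sum(weight for term, weight in TAXONOMY.items() if term in text)


def crawl_order(links):
    """Yield candidate links most-relevant first (a simple crawl frontier)."""
    # heapq is a min-heap, so negate the score to pop the best link first.
    frontier = [(-link_score(anchor, context), url) for url, anchor, context in links]
    heapq.heapify(frontier)
    while frontier:
        neg_score, url = heapq.heappop(frontier)
        yield url, -neg_score


if __name__ == "__main__":
    candidate_links = [
        ("http://example.org/a", "Focused crawler survey", "covers link analysis methods"),
        ("http://example.org/b", "Cooking recipes", "quick dinner ideas"),
    ]
    for url, score in crawl_order(candidate_links):
        print(f"{score:.1f}  {url}")
```

In a full crawler, the frontier would be refilled with the out-links of each fetched page, so the highest-scoring unvisited link is always expanded next.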
About the journal
Journal: 2006 IEEE/WIC/ACM International Conference on Web Intelligence (WI 2006 Main Conference Proceedings) (WI'06)
Publisher: IEEE
Open Access: No