A focused web crawler harvests a collection of web documents relevant to a given topical subspace. The central difficulty for a focused crawler is identifying the next most important and relevant link to follow. Focused crawlers mostly rely on probabilistic models to predict document relevance. Web documents are well characterized by hypertext, and hypertext can be used to determine a document's relevance to the search domain: the semantics of a link characterize the semantics of the document it refers to. This article proposes a novel focused crawler named LSCrawler. LSCrawler retrieves documents by estimating the relevance of each document from the keywords in the link and the text surrounding it. Relevance is computed by measuring the semantic similarity between the link keywords and the taxonomy hierarchy of the specific domain. Because it exploits the semantics of link keywords, the system exhibits better recall. © 2006 IEEE.
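The core idea summarized above, scoring candidate links by comparing link keywords against domain taxonomy terms and following the best-scoring link first, can be sketched as follows. This is an illustrative simplification, not the paper's actual algorithm: the toy taxonomy, the simple keyword-overlap similarity (standing in for the semantic similarity measure), and all function names are assumptions introduced here.

```python
import heapq

# Hypothetical toy taxonomy hierarchy for a target domain (an assumption,
# not taken from the paper): category name -> representative terms.
TAXONOMY = {
    "computing": ["crawler", "web", "search", "index"],
    "networks": ["protocol", "http", "server"],
}

def link_relevance(anchor_text, context_text, taxonomy):
    """Score a link by overlap between its keywords (anchor text plus
    surrounding text) and the taxonomy terms. A real system would use a
    semantic similarity measure rather than plain word overlap."""
    words = set((anchor_text + " " + context_text).lower().split())
    terms = set(taxonomy) | {t for ts in taxonomy.values() for t in ts}
    return len(words & terms) / max(len(words), 1)

# Crawl frontier ordered by descending relevance (max-heap via negation),
# so the most promising link is always expanded next.
frontier = []

def enqueue(url, anchor_text, context_text):
    score = link_relevance(anchor_text, context_text, TAXONOMY)
    heapq.heappush(frontier, (-score, url))

enqueue("http://example.org/a", "Web crawler tutorial", "search the web")
enqueue("http://example.org/b", "Cake recipes", "baking at home")
best = heapq.heappop(frontier)[1]  # the crawler-related link comes out first
```

A priority-queue frontier like this is the standard way focused crawlers order unvisited links, whatever relevance function is plugged in.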