DLRG@AILA 2019: Context-Aware Legal Assistance System
R. Rameshkannan,
Published in CEUR-WS
2019
Volume: 2517
Pages: 58-63
Abstract
In this digital era, seamless information is available on the web. Information Retrieval systems play an important role in quickly retrieving relevant information based on a user's query. Common Law Systems are followed in countries such as the UK, USA, Canada, Australia and India, and have two primary sources of law, viz., statutes (established laws) and precedents (prior cases). Statutes deal with applying legal principles to situations which may lead to the filing of a case, and precedents help lawyers understand how the Court has handled similar scenarios in the past, for subsequent legal reasoning. For any given situation, applying the appropriate statutes as well as identifying the prior cases is important, and it is a time-consuming process. There is a demand for an automated system which can identify the set of prior cases and suitable statutes for any situation. This will help lawyers to get a preliminary understanding of the case and to identify where the problem fits. The objective of this work is to develop such an automatic system to identify relevant laws or prior cases for a given situation. This work has been submitted to AILA 2019 (Artificial Intelligence for Legal Assistance). Here, the assigned task is to identify the relevant statutes (task1) / prior cases (task2) for a given situation, considering Indian legal documents. For this legal document retrieval task, we present our context-aware solution that finds the similarity between the given situation and legal documents / prior cases by following an effective word representation that considers the dependency between terms. We have evaluated our methodology on the dataset released by the organizers of the AILA@FIRE2019 shared task. We used the p@10 score as the evaluation metric, achieving scores of 0.015 and 0.05 for task1 and task2 respectively. © Copyright 2019 for this paper by its authors.
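The evaluation metric named in the abstract, p@10 (precision at 10), measures what fraction of the top 10 retrieved documents are actually relevant to the query situation. A minimal sketch of how such a score is computed (the function name and document identifiers below are illustrative, not from the paper):

```python
def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Precision@k: fraction of the top-k retrieved documents
    that appear in the set of relevant documents."""
    top_k = ranked_ids[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_ids)
    # Divide by k (not by the number retrieved), so retrieving
    # fewer than k documents still counts the empty slots as misses.
    return hits / k

# Hypothetical example: a ranked list of 10 statutes for one query,
# of which two are marked relevant in the gold standard.
ranked = [f"S{i}" for i in range(1, 11)]
relevant = {"S1", "S5"}
print(precision_at_k(ranked, relevant))  # 0.2
```

Scores like the reported 0.015 and 0.05 are averages of this per-query value over all queries in the test set.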
About the journal
Journal: CEUR Workshop Proceedings
Publisher: CEUR-WS
ISSN: 1613-0073