Information Retrieval Evaluation - Donna Harman

Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture explains where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today.

The lecture starts with a discussion of the early evaluation of information retrieval systems, beginning with the Cranfield testing in the early 1960s, continuing with the Lancaster "user" study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed.

The second chapter covers the more recent "batch" evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on how the older evaluation methodologies evolved to handle new information access techniques. This includes how the test collection techniques were modified and how the metrics were changed to better reflect operational environments.

The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval, including a look at the search log studies mainly done by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations.

Table of Contents: Introduction and Early History / "Batch" Evaluation Since 1992 / Interactive Evaluation / Conclusion

EAN: 9783031034046
Symbol: 472HJS03527KS
Manufacturer code: 9783031034046
Author: Donna Harman
Year of publication: 2011
Pages: 120
Binding: Paperback
Format: 19.1 x 23.5 cm
Language: English

Price: 173,11 zł / item