TDM in recommender systems for research and in tracking research impact
Petr Knoth, Knowledge Media Institute, The Open University, United Kingdom
twitter.com/openminted_eu

TDM in recommender systems for research

Why TDM in recommender systems for research?
• Collaborative filtering vs content-based filtering
• In scholarly databases we have many documents but relatively few users => content-based filtering
• Recommending entities

The CORE recommender system
• CORE provides a content-based recommendation system for articles from across the global network of repositories.
• Dataset:
  • 8.3 million full texts
  • 79 million metadata records
  • 3,658 data providers

Recommendation as a service
• Recommender plugin for repositories
• Recommendations from the CORE API

How does the CORE recommender system work?
• Article-article recommender system. Processing steps:
  1. Preprocessing prior to the recommender: feature extraction/enrichment with e.g. document type, citation and citation proximity data, identifiers, etc.
  2. Similarity measure/ranking function (a minimal sketch follows at the end of this section)
  3. Post-filtering using record quality
  4. Feedback (crowdsourcing a blacklist)

Combining features
• Evaluating different ranking functions (precision, recall, MAP, etc.):
  • weights for boosting
  • scaling function (e.g. exponential decay for recency)
• Offline ground truths:
  • MAG (Microsoft Academic Graph) citation assumption
  • MAG co-citation assumption
• Learning to rank (not done yet)
• Online A/B testing (not done yet)

Citation proximity analysis
• CPA extends the co-citation assumption ("the more often two articles are co-cited in a document, the more likely they are related") by taking the proximity of the citations into account (a sketch follows at the end of this section).
• Initial evaluation on 350k papers and 1,200 human relevance judgements shows a ~25% increase in precision@5 over co-citations.

Publications on this work
• Knoth, P., Anastasiou, L., Charalampous, A., Cancellieri, M., Pearce, S., Pontika, N. and Bayer, V. (2017) Towards effective research recommender systems for repositories, Open Repositories 2017, Brisbane, Australia
• Knoth, P. and Khadka, A. (2017) Can we do better than co-citations? Bringing Citation Proximity Analysis from idea to practice in research articles recommendation, 2nd Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries @SIGIR 2017, Tokyo, Japan
• Charalampous, A. and Knoth, P. (2017) Classifying document types to enhance search and recommendations in digital libraries, 21st International Conference on Theory and Practice of Digital Libraries (TPDL), Thessaloniki, Greece
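To make steps 2 and 3 of the pipeline concrete, here is a minimal sketch of a content-based article-article ranker, assuming TF-IDF text similarity, fixed boost weights and an exponential recency decay. The weights, the half-life and the quality threshold are illustrative assumptions, not the actual CORE configuration.

```python
# Hedged sketch of a content-based article-article ranker in the spirit of the
# "How does the CORE recommender system work?" / "Combining features" slides.
# Feature weights, the half-life and the quality threshold are assumptions.
import math
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    {"id": "A", "text": "open access repositories text mining", "year": 2017, "quality": 0.9},
    {"id": "B", "text": "text mining for research recommendation", "year": 2015, "quality": 0.8},
    {"id": "C", "text": "protein folding simulations", "year": 2016, "quality": 0.3},
]

# Step 2 of the pipeline: a textual similarity measure (TF-IDF + cosine).
tfidf = TfidfVectorizer().fit_transform([d["text"] for d in documents])
sim = cosine_similarity(tfidf)

def score(i: int, j: int, now: int = 2017, half_life: float = 3.0) -> float:
    """Boosted score: textual similarity combined with an exponential recency decay."""
    age = now - documents[j]["year"]
    recency = math.exp(-math.log(2) * age / half_life)  # halves every `half_life` years
    w_text, w_recency = 0.8, 0.2                        # illustrative boost weights
    return w_text * sim[i][j] + w_recency * recency

# Steps 2-3: rank candidates for document 0, post-filtering low-quality records.
candidates = [j for j in range(len(documents)) if j != 0 and documents[j]["quality"] >= 0.5]
ranking = sorted(candidates, key=lambda j: score(0, j), reverse=True)
print([documents[j]["id"] for j in ranking])  # e.g. ['B']
```

Citation proximity analysis can likewise be sketched in a few lines. Plain co-citation counts how often two papers are cited by the same document; CPA additionally weights each co-citation by how close the two citation markers appear in the citing text. The inverse-distance weighting below is one simple choice; the slide does not specify the weighting used in the CORE evaluation.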
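```python
# Hedged sketch of citation proximity analysis (CPA). Each citing document is
# represented as a list of (cited_paper_id, character_offset) pairs; nearby
# citation pairs contribute more than distant ones. The 1/(1+distance)
# weighting is an illustrative assumption.
from collections import defaultdict
from itertools import combinations

citing_docs = [
    [("p1", 100), ("p2", 120), ("p3", 5000)],
    [("p1", 300), ("p2", 2500)],
]

cocitation = defaultdict(int)
cpa_score = defaultdict(float)

for doc in citing_docs:
    for (a, pos_a), (b, pos_b) in combinations(doc, 2):
        pair = tuple(sorted((a, b)))
        cocitation[pair] += 1                      # classic co-citation count
        distance = abs(pos_a - pos_b)
        cpa_score[pair] += 1.0 / (1.0 + distance)  # nearby citations count more

# p1/p2 are co-cited twice, once very close together, so CPA ranks that pair
# well above p1/p3 and p2/p3, which only co-occur far apart.
print(sorted(cpa_score.items(), key=lambda kv: kv[1], reverse=True))
```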
TDM in Research Evaluation

Semantometrics
• A class of research evaluation metrics that measures research value by analysing the full texts of publications.
• Semantometrics aim to measure how far each scientific discovery takes us.
• "Reading and judging a researcher's work is much more appropriate than relying on one number." – Leiden Manifesto

TDM in citation analysis
• Current quantitative research evaluation methods are largely based on citation counts:
  • journal level: Journal Impact Factor (JIF)
  • author level: h-index, g-index
• All citations are equal, but some are more equal than others…
• None of these metrics account for citation type or sentiment.
• Open Access means increased availability of full-text papers and articles for TDM analysis.

Detecting citation importance: building a gold standard
• Human annotators label a set of citing/cited paper pairs, classifying each citation according to its sentiment, its type (uses method, compares works, continues work, …) and its influence.
• The result is an annotated "gold standard" dataset.
• Classification features (author overlap, direct citations, abstract similarity, …) are extracted from the pairs and used to train a classifier.

Detecting citation importance: classifying new papers
• Input: paper X. Citation extraction resolves in-text citations such as "Author et al. (2017)" against the reference list, e.g. [1] Knoth, P., Anastasiou, L., Charalampous, A., Cancellieri, M., Pearce, S., Pontika, N. and Bayer, V.: Towards effective research recommender systems for repositories. In: Proceedings of Open Repositories 2017.
• This yields citing/cited paper pairs; feature extraction (author overlap, direct citations, abstract similarity, …) feeds the trained classifier.
• Output: a label per citation, e.g. X, [1], incidental; X, [2], incidental; X, [3], influential; X, [4], incidental; … (a sketch of this pipeline follows at the end of this section).

Analysis of features
• Many features have been used for this task by researchers, for example:
  • total number of direct citations
  • number of direct citations per section
  • total number of indirect citations and number of indirect citations per section
  • author overlap (Boolean)
  • citation is considered helpful (Boolean)
  • citation appears in a table or caption
  • 1 / number of references
  • number of paper citations / all citations
  • similarity between abstracts
  • PageRank
  • number of citing papers after transitive closure
  • field of the cited paper
• Challenge: fairly small evaluation datasets

Contribution measure
• Assumption: the added value of publication p can be estimated from the semantic distance between the publications cited by p and the publications citing p.

Contribution measure
• Based on the semantic distance between citing and cited publications:
  • cited publications represent the state of the art in the domain of the publication in question
  • citing publications represent its areas of application
• Tested 100 different distance combinations.
• Detailed explanation and formula at semantometrics.org (a paraphrase of the formula follows at the end of this section).

True Impact Dataset (TID)
• Seminal and survey papers: two extreme cases of paper types with different types of contribution:
  • seminal: massive contribution to knowledge generation
  • survey: educational value, but no contribution to knowledge generation
• Key idea: a good research evaluation metric should be able to distinguish between these two publication types.

True Impact Dataset (TID)
• Experimental results:
  • citation counts: ~60% accuracy, i.e. 10% over the baseline
  • readership: does not perform better than the baseline
• Both metrics distinguish only poorly between seminal and survey papers.
• The contribution method achieved better results on this task than widely used citation counts.

CORE Research Analytics Dashboard
• A prototype service for universities that helps them track research impact.
• TDM to slice and dice the data by department, funder and field
• Benchmarking metrics against others
• Integration of semantometrics in the future
• Supporting REF2021
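As a rough illustration of the two "Detecting citation importance" slides above, the sketch below trains a classifier on hand-labelled citing/cited pairs and applies it to the citations of a new paper. The feature values are made up and the choice of a random forest is an assumption; the slides name the feature types but not the model.

```python
# Hedged sketch of the citation-importance pipeline: gold-standard pairs ->
# feature extraction -> trained classifier -> incidental/influential labels.
# All values are illustrative; the model choice is an assumption.
from sklearn.ensemble import RandomForestClassifier

# Gold-standard pairs: [author_overlap, n_direct_citations, abstract_similarity]
X_train = [
    [1, 5, 0.80],   # cited work is heavily reused
    [0, 1, 0.10],   # mentioned once in passing
    [1, 3, 0.60],
    [0, 1, 0.05],
]
y_train = ["influential", "incidental", "influential", "incidental"]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify the citations of a new paper X against its reference list.
X_new = [
    [0, 4, 0.70],   # reference [1]
    [0, 1, 0.08],   # reference [2]
]
for ref, label in zip(["[1]", "[2]"], clf.predict(X_new)):
    print(f"X, {ref}, {label}")
```

For the contribution measure, one way to read the definition in the Knoth and Herrmannova (2014) paper is as the mean semantic distance over all cited/citing pairs; the paraphrase below should be checked against the original at semantometrics.org.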
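```latex
% Contribution of publication p, with X = publications cited by p (state of
% the art) and Y = publications citing p (areas of application); dist is a
% semantic distance between full texts, e.g. 1 - cosine similarity.
% Paraphrased from Knoth and Herrmannova (2014); see semantometrics.org.
\mathrm{contribution}(p) \;=\; \frac{1}{|X|\,|Y|} \sum_{x \in X} \sum_{y \in Y} \mathrm{dist}(x, y)
```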
Publications on this work
• Herrmannova, D., Patton, R., Knoth, P. and Stahl, C. (2017) Citations and readership are poor indicators of research excellence: Introducing TrueID, a new dataset for validating research evaluation metrics, Workshop on Scholarly Web Mining (SWM) at the Tenth ACM International Conference on Web Search and Data Mining (WSDM 2017)
• Pride, D. and Knoth, P. (2017) Incidental or influential? A decade of using text-mining for citation function classification, 16th International Conference on Scientometrics & Informetrics, Wuhan, China
• Pride, D. and Knoth, P. (2017) Incidental or influential? Challenges in automatically detecting citation importance using publication full texts, 21st International Conference on Theory and Practice of Digital Libraries (TPDL), Thessaloniki, Greece
• Knoth, P. and Herrmannova, D. (2014) Towards Semantometrics: A New Semantic Similarity Based Measure for Assessing a Research Publication's Contribution, D-Lib Magazine, 20(11/12), Corporation for National Research Initiatives

Contributions
• Two OpenMinTeD applications built in the scholarly communications use case.
• TDM components are needed in both recommender systems and research evaluation.
• Ongoing research in both areas.
• OpenMinTeD simplifies building such applications.