I received my Bachelor's degree in Multimedia and Web Technologies from the University of Udine, where I also obtained my Master's degree in Multimedia Communication and Information Technologies, with final mark 110/110 cum laude.
The benefits of Magnitude Estimation techniques for gathering relevance judgments in text collections
One important issue in the field of information retrieval is how to obtain good estimates of relevance for a collection of documents with respect to a set of specific queries. Such collections are important for testing, performance measurement, and comparison of information retrieval systems. Unlike traditional relevance measurements, which adopt binary or nominal scales, in our work we propose to use magnitude estimation, an evaluation technique standardly applied in psychophysics to measure judgments of sensory stimuli. With this technique, stimulus intensities (in our case, the relevance of documents with respect to some topics) are expressed by strictly positive real numbers; the adopted scale is therefore unbounded. The benefits of this technique are multiple: relevance judgments can be gathered on a continuous and unbounded scale; there is always a smaller or larger value than the previous one to assign when judging a document; and, finally, the granularity of judgments is finer. Traditionally, relevance judgments are obtained from human assessors, but this is non-scalable and time-consuming, and the influence of the chosen assessor is not negligible. Our approach is to use crowdsourcing in order to collect multiple assessments in a short time, asking workers to complete specific tasks which consist of assessing the relevance of some documents for some topics. The data collected from the crowdsourcing tasks, after an appropriate normalization, have been empirically shown to be reliable when compared to data collected from expert assessors.
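Since each worker is free to choose their own range of positive numbers, the raw scores must be brought onto a shared scale before aggregation. A common normalization in the magnitude estimation literature is geometric-mean normalization: work in log space and shift each worker's scores so that their mean matches the grand mean. The sketch below (function and variable names are my own, for illustration) shows the idea:

```python
import math

def normalize_me_scores(scores_by_worker):
    """Geometric-mean normalization of magnitude estimation scores.

    scores_by_worker: dict mapping worker id -> dict of (doc id -> raw score),
    where raw scores are strictly positive reals.
    Returns the scores mapped onto a shared scale.
    """
    # Magnitude estimation scores are ratio-scaled, so normalize in log space.
    log_scores = {w: {d: math.log(s) for d, s in docs.items()}
                  for w, docs in scores_by_worker.items()}
    all_logs = [v for docs in log_scores.values() for v in docs.values()]
    grand_mean = sum(all_logs) / len(all_logs)
    normalized = {}
    for w, docs in log_scores.items():
        worker_mean = sum(docs.values()) / len(docs)
        # Shift each worker's log scores so their mean equals the grand mean,
        # then map back to the positive real line.
        normalized[w] = {d: math.exp(v - worker_mean + grand_mean)
                         for d, v in docs.items()}
    return normalized
```

After this transformation, two workers who judged the same documents with the same relative ratios, but on different personal scales, produce identical normalized scores.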
Crowdsourcing to support biomedical image analysis
One of the main activities performed in diagnostic pathology is cell recognition in biological images obtained from scans of human tissue. This is carried out by human experts in a non-scalable and time-consuming way. Some software tools for automatic recognition exist, but their results are still of low quality compared to those of human experts. Our idea is to use crowdsourcing to obtain good-quality detections from human workers, quickly and cheaply. Our aim is to understand whether crowd workers with no previous experience in biology can, by performing simple, ad hoc tasks, carry out detection better than automatic systems. We have performed experiments in which we collected many crowd workers' recognitions in images of breast cancer tissue, then aggregated the results and compared them with those provided by an expert. Early results are encouraging, and we are currently working on improving the algorithms and the aggregation methods for detections, in order to reach high-quality results comparable to those obtained from human experts.
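To give a flavor of what aggregating crowd detections can look like, here is a minimal sketch of one possible strategy (not necessarily the exact algorithm we use): greedily cluster workers' click coordinates, then keep only clusters supported by enough independent votes. The `radius` and `min_votes` parameters are illustrative assumptions:

```python
def aggregate_detections(clicks, radius=15.0, min_votes=3):
    """Greedy aggregation of crowd click coordinates.

    clicks: list of (x, y) points collected from many workers.
    Two clicks within `radius` pixels of a cluster centroid are assumed to
    target the same cell; clusters with at least `min_votes` clicks are kept,
    and their centroid is returned as the aggregated detection.
    """
    clusters = []  # each cluster is a list of (x, y) points
    for x, y in clicks:
        for cluster in clusters:
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                cluster.append((x, y))
                break
        else:
            # No existing cluster is close enough: start a new one.
            clusters.append([(x, y)])
    detections = []
    for cluster in clusters:
        if len(cluster) >= min_votes:
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            detections.append((cx, cy))
    return detections
```

The vote threshold is what filters out individual workers' mistakes: a stray click forms a singleton cluster and is discarded, while cells marked independently by several workers survive.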
The Axiometrics project
Information Retrieval (IR) is probably the most evaluation-oriented field in Computer Science. One crucial aspect of evaluation is the choice of evaluation metrics. More than 100 information retrieval metrics have been developed since the 1960s. The Axiometrics project aims to understand the relations among them, in terms of axiomatic properties and statistical relations, addressing both the science of metrics (understanding them) and their engineering (developing them). The axiomatic approach to IR effectiveness metrics defines a framework based on the notions of measure, measurement, and similarity; it provides a general definition of IR effectiveness metric; and it proposes a set of axioms that every effectiveness metric should satisfy. Ideally, the similarity between metrics is high when they satisfy the same axioms and the theorems based on them. By observing these similarities, it is possible to create new classifications of metrics that improve on the current ones, which are strongly based on heuristic observations. This work has been partially supported by a Google Research Award.
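As a toy illustration of what it means to test a metric against an axiom (the property below is a simple swap-monotonicity condition chosen for illustration, not necessarily one of the project's axioms): promoting a relevant document above an adjacent non-relevant one should never lower the metric's score. The sketch checks this property for average precision on a given ranking:

```python
def average_precision(ranking):
    """Average precision over a ranking given as a list of 0/1 relevance labels."""
    hits, total, n_rel = 0, 0.0, sum(ranking)
    for i, rel in enumerate(ranking, start=1):
        if rel:
            hits += 1
            total += hits / i
    return total / n_rel if n_rel else 0.0

def satisfies_swap_axiom(metric, ranking):
    """Check a candidate axiom on one ranking: swapping a relevant document
    above an adjacent non-relevant one must never decrease the score."""
    base = metric(ranking)
    for i in range(len(ranking) - 1):
        if ranking[i] == 0 and ranking[i + 1] == 1:
            swapped = ranking[:i] + [1, 0] + ranking[i + 2:]
            if metric(swapped) < base:
                return False
    return True
```

Running such checks over many metrics and many axioms yields, for each metric, a profile of satisfied properties; metrics with matching profiles can then be grouped together, which is the kind of principled classification the project pursues.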
Lei Han, Kevin Roitero, Ujwal Gadiraju, Cristina Sarasua, Alessandro Checco, Eddy Maddalena, and Gianluca Demartini. All Those Wasted Hours: On Task Abandonment in Crowdsourcing. To be published in Proceedings of the 12th ACM International Conference on Web Search and Data Mining (WSDM 2019), Melbourne, Australia.
Eddy Maddalena, Luis-Daniel Ibáñez, Elena Simperl, Mattia Zeni, Enrico Bignotti, Fausto Giunchiglia, Claus Stadler, Patrick Westphal, Luís PF Garcia, and Jens Lehmann. QROWD: Because Big Data Integration is Humanly Possible. In Proceedings of the Project Showcase of the 24th ACM SIGKDD Conference On Knowledge Discovery And Data Mining (KDD 2018), London, UK.
Kevin Roitero, Eddy Maddalena, Gianluca Demartini, and Stefano Mizzaro. On Fine-Grained Relevance Scales. In Proceedings of the 41st International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2018), Ann Arbor, Michigan, U.S.A.
Pavlos Vougiouklis, Eddy Maddalena, Jonathon Hare, and Elena Simperl. How biased is your NLG evaluation. In Proceedings of the CrowdBias workshop of the 6th AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2018), Zurich, Switzerland.
Eddy Maddalena, Kevin Roitero, Gianluca Demartini and Stefano Mizzaro. Considering Assessor Agreement in IR Evaluation. In Proceedings of the 3rd ACM International Conference on the Theory of Information Retrieval (ICTIR 2017), Amsterdam, The Netherlands.
Muhammad Helmy, Marco Basaldella, Eddy Maddalena, Stefano Mizzaro and Gianluca Demartini. Towards Building a Standard Dataset for Arabic Keyphrase Extraction Evaluation. In Proceedings of the 20th International Conference on Asian Language Processing (IALP 2016), Tainan, Taiwan. [dataset]
Eddy Maddalena, Stefano Mizzaro, Falk Scholer and Andrew Turpin. Judging Relevance Using Magnitude Estimation. In Proceedings of the 37th European Conference on Information Retrieval (ECIR 2015), Vienna University of Technology, Austria, pp 215-220. [poster]