Eddy Maddalena
Hi, I am Eddy Maddalena, and this is my personal page. I am a postdoc researcher in the Web and Internet Science (WAIS) group at the University of Southampton (UK).
On this page you can find information about me, including my work, research interests, list of publications, and contact details.


I received my Bachelor's degree in Multimedia and Web Technologies from the University of Udine, where I also completed my Master's degree in Multimedia Communication and Information Technologies with a mark of 110/110 cum laude.

In 2017 I completed my Ph.D. in Computer Science at the University of Udine under the supervision of Prof. Stefano Mizzaro, defending my thesis entitled "Crowdsourcing Relevance: Two Studies on Assessment". During my Ph.D., from July to December 2014, I had the opportunity to spend six months as a visiting student at the Royal Melbourne Institute of Technology (RMIT), Australia. During that time I also collaborated with SEEK Ltd, which runs Australia's number one website for job seekers. From April to July 2016 I had the pleasure of being hosted for four months at the Information School of the University of Sheffield (UK).

In July 2017 I began working as a postdoc researcher in the Web and Internet Science (WAIS) group at the University of Southampton (UK).


Screenshot of a crowdsourcing task.
The benefits of magnitude estimation techniques for gathering relevance judgments in text collections

One important issue in the information retrieval field is how to obtain good estimates of relevance for a collection of documents with respect to a set of specific queries. Such collections are important for the testing, performance measurement, and comparison of information retrieval systems. Unlike traditional relevance measurements, which adopt binary or nominal scales, in our work we propose to use magnitude estimation, an evaluation technique standardly applied in psychophysics to measure judgments of sensory stimuli. With this technique, stimulus intensities (in our case, the relevance of documents with respect to some topics) are expressed as strictly positive real numbers, so the adopted scale is unbounded. The benefits of this technique are multiple: relevance judgments can be gathered on a continuous and unbounded scale; there is always a smaller or larger value than the previous one available for judging a document; and the granularity of judgments is finer. Traditionally, relevance judgments are obtained from human assessors, but this is not scalable, is time-consuming, and the influence of the chosen assessor is not negligible. Our approach is to use crowdsourcing to collect multiple assessments in a short time, asking workers to complete specific tasks which consist of assessing the relevance of some documents for some topics. The data collected from the crowdsourcing tasks, after an appropriate normalization, have been empirically shown to be reliable by comparing them to data collected from expert assessors.
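Because each worker uses the unbounded scale in their own way, raw magnitude-estimation scores must be normalized before they can be combined across workers. As a minimal illustrative sketch (not the exact procedure from our papers; function names and the median-based combination are assumptions), one common approach is to rescale each worker's scores by their geometric mean:

```python
import math
from statistics import median

def normalize_scores(scores):
    """Scale one worker's magnitude-estimation judgments by their geometric
    mean, so that scores from workers who used different ranges of the
    unbounded scale become comparable."""
    gmean = math.exp(sum(math.log(s) for s in scores) / len(scores))
    return [s / gmean for s in scores]

def aggregate(workers_scores):
    """Combine several workers' judgments for the same documents by taking
    the median of the normalized scores, document by document."""
    normalized = [normalize_scores(s) for s in workers_scores]
    return [median(doc) for doc in zip(*normalized)]
```

For example, a worker who judged three documents as 1, 10, and 100 and another who judged them as 2, 20, and 200 both normalize to the same profile (0.1, 1, 10), so their differing use of the scale no longer matters when the judgments are aggregated.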

Image of human cells.
Crowdsourcing to support biomedical image analysis

One of the main activities performed in diagnostic pathology is cell recognition in biological images obtained from scans of human tissue. This is done by human experts in a non-scalable and time-consuming way. Some software tools performing automatic recognition exist, but their results are still of low quality compared to those of human experts. Our idea is to use crowdsourcing to obtain good-quality detections by human workers, quickly and cheaply. Our aim is to understand whether crowd workers with no previous experience in biology can carry out detection better than automatic systems by performing simple, ad hoc tasks. We have performed experiments in which we collected many crowd workers' recognitions in images of breast cancer tissue, then aggregated the results and compared them with those provided by an expert. Early results seem encouraging, and we are currently working on improving the algorithms and the aggregation methods for detections, in order to reach high-quality results comparable to those obtained from human experts.
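To give an idea of what aggregating workers' detections involves, here is a minimal sketch (an illustration, not our actual aggregation method; the radius and vote threshold are hypothetical parameters): clicks from different workers that fall close together are greedily clustered, and a cluster is kept as a detected cell only if enough workers support it.

```python
import math

def aggregate_detections(clicks, radius=15.0, min_votes=3):
    """Greedily cluster workers' click coordinates and return the centroids
    of clusters supported by at least `min_votes` clicks.

    clicks: list of (x, y) pixel coordinates submitted by workers.
    """
    clusters = []  # each cluster is a list of (x, y) points
    for x, y in clicks:
        best, best_d = None, radius
        for c in clusters:
            # distance from the click to the cluster's current centroid
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            d = math.hypot(x - cx, y - cy)
            if d <= best_d:
                best, best_d = c, d
        if best is not None:
            best.append((x, y))
        else:
            clusters.append([(x, y)])
    # keep only centroids with enough supporting votes
    return [
        (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
        for c in clusters
        if len(c) >= min_votes
    ]
```

With this scheme an isolated click from a single careless worker is discarded, while three workers clicking near the same cell produce one aggregated detection at their average position.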

Screenshot of the Axiometrics paper.
The Axiometrics project

Information Retrieval (IR) is probably the most evaluation-oriented field in Computer Science. One crucial aspect of evaluation is the evaluation metrics: more than 100 information retrieval metrics have been developed since the 1960s. The Axiometrics project aims to understand the relations among them, in terms of axiomatic properties and statistical relations, in both metric science (the understanding of metrics) and metric engineering (their development). The axiomatic approach to IR effectiveness metrics defines a framework based on the notions of measure, measurement, and similarity; it provides a general definition of IR effectiveness metric; and it proposes a set of axioms that every effectiveness metric should satisfy. Ideally, the similarity between metrics is high when they satisfy the same axioms and the theorems based on those axioms. By observing these similarities, it becomes possible to create new classifications of metrics that improve on the current ones, which are strongly based on heuristic observations. This work has been partially supported by a Google Research Award.
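To make the idea of an axiom concrete, here is a small illustrative example (not one of the project's actual axioms; the metric and axiom shown are simplified for exposition). A plausible property is swap monotonicity: promoting a relevant document above a non-relevant one should never decrease a metric's score. Such a property can be checked mechanically for a given metric and ranking:

```python
def precision_at_k(ranking, k):
    """Fraction of relevant documents (True values) in the top k positions."""
    return sum(ranking[:k]) / k

def satisfies_swap_axiom(metric, ranking, i, j):
    """Illustrative axiom check: with i < j, ranking[i] non-relevant and
    ranking[j] relevant, swapping positions i and j (promoting the relevant
    document) should never decrease the metric's score."""
    swapped = list(ranking)
    swapped[i], swapped[j] = swapped[j], swapped[i]
    return metric(swapped) >= metric(ranking)
```

Metrics that satisfy the same set of such properties can then be grouped together, which is the kind of principled classification the project pursues.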



In 2013/2014 and 2014/2015 I taught the "Laboratory of Programming" course in the "Web and Multimedia Technologies" degree programme at the University of Udine.

In 2015/2016 I taught some modules of the "Information Retrieval" course in the "Multimedia Communication and Information Technologies" degree programme at the University of Udine.