Programme

Industry+Research day
Thursday 09/02/2023
Room 2.3 Rector Vermeylen (route)

Research day
Friday 10/02/2023
Room 2.2 Old infirmary (route)

The programme for the workshop is detailed below. At the bottom of the page, you can also find the abstracts of the presentations.

09/02 Industry+Research day

09:30-09:45 Opening Session

Tijl De Bie (AIDA-IDLab, UGent), Christine Largeron (Hubert Curien Laboratory, Saint-Etienne university), Michèle Sebag (LISN, Université Paris Saclay)

09:45-10:00 A challenge-based survey of e-recruitment recommendation systems

Yoosof Mashayekhi, Nan Li, Bo Kang, Jefrey Lijffijt, Tijl De Bie (AIDA-IDLab, UGent)

10:00-10:30 Using Data from job seekers, job offers and past hirings to learn a Job Recommender System: the VADORE Project

Guillaume Bied (CREST / LISN), Solal Nathan (LISN, Université Paris Saclay), Elia Perennes (Pôle emploi, CREST), Christophe Gaillac (Nuffield College, University of Oxford), Philippe Caillou (LISN, Université Paris Saclay), Bruno Crépon (CREST), Michèle Sebag (LISN, Université Paris Saclay)

10:30-11:00 Coffee break

11:00-11:30 Actiris Pilot Project: Matching on the job market

Nicolas Potvin, Hugues Bersini (IRIDIA, ULB)

11:30-11:50 Ethical Board of VDAB and its Role in AI Governance

Lotte van den Berg (VDAB)

11:50-12:30 Open discussion: AI ethics in HR and PES, Impact of the EU AI Act, etc.

12:30-14:00 Lunch (on your own in the city)

14:00-14:30 Skill trend extraction in VDAB

Stijn Van De Velde, David De Wachter (VDAB)

14:30-15:00 Design of Negative Sampling Strategies for Distantly Supervised Skill Extraction

Jens-Joris Decorte (TechWolf and IDLab, UGent), Jeroen Van Hautte (TechWolf), Johannes Deleu (IDLab, UGent), Chris Develder (IDLab, UGent), Thomas Demeester (IDLab, UGent)

15:00-15:30 Unsupervised keyword extraction for job recommendation

Bissan Audeh (InsaSoft), Maia Sutter, Christine Largeron (Hubert Curien Laboratory, Saint-Etienne university)

15:30-16:00 Coffee break & poster session

16:00-16:30 FEAST platform overview

Bo Kang, Yoosof Mashayekhi, Nan Li, Tijl De Bie (AIDA-IDLab, UGent)

16:30-17:30 Open discussion: industry challenges in HR and PES

18:30-... Dinner



 

10/02 Research day

9:30-10:00 Tackling Algorithmic Disability Discrimination in the Hiring Process

Maarten Buyl, Christina Cociancig, Christina Frattone, Nele Roekens

10:00-10:30 LEAVE: an End-to-End Variational Model for Fair Edge Prediction 

M. Choudhary, A. Gourru, C. Laclau and C. Largeron

10:30-11:00 Coffee Break

11:00-11:30 Gender fairness in job recommendation: a case study

Guillaume Bied, Christophe Gaillac, Morgane Hoffmann, Solal Nathan, Philippe Caillou, Bruno Crépon, Michèle Sebag

11:30-12:00 Open discussion: scientific challenges for AI ethics in HR and PES

12:00-13:00 Lunch

13:00-13:30 Improving Resume Quality Through Co-Creative Tools: a Participatory Approach

Pieter Delobelle, Sonja Mei Wang, Bettina Berendt

13:30-13:45 FEIR: Quantifying and Reducing Envy and Inferiority for Fair Recommendation of Limited Resources

Nan Li, Bo Kang, Jefrey Lijffijt, Tijl De Bie

13:45-14:00 Reducing congestion in job recommendation with optimal transport

Yoosof Mashayekhi, Bo Kang, Jefrey Lijffijt, Tijl De Bie

14:00-15:00 Brainstorming: open scientific challenges, ways to collaborate, funding opportunities, etc.

15:00-16:00 Closing reception


 

Abstracts

A challenge-based survey of e-recruitment recommendation systems

Yoosof Mashayekhi, Nan Li, Bo Kang, Jefrey Lijffijt, Tijl De Bie

Abstract: E-recruitment recommendation systems recommend jobs to job seekers and job seekers to recruiters. The recommendations are generated based on the suitability of the job seekers for the positions as well as the job seekers' and the recruiters' preferences. Therefore, e-recruitment recommendation systems could greatly impact job seekers' careers. Moreover, by affecting the hiring processes of the companies, e-recruitment recommendation systems play an important role in shaping the companies' competitive edge in the market. Hence, the domain of e-recruitment recommendation deserves specific attention. Existing surveys on this topic tend to discuss past studies from the algorithmic perspective, e.g., by categorizing them into collaborative filtering, content-based, and hybrid methods. This survey, instead, takes a complementary, challenge-based approach, which we believe might be more practical to developers facing a concrete e-recruitment design task with a specific set of challenges, as well as to researchers looking for impactful research projects in this domain. We first identify the main challenges in e-recruitment recommendation research. Next, we discuss how those challenges have been studied in the literature. Finally, we provide future research directions that we consider promising in the e-recruitment recommendation domain.

Using Data from job seekers, job offers and past hirings to learn a Job Recommender System: the VADORE Project

Guillaume Bied, Solal Nathan, Elia Perennes, Christophe Gaillac, Philippe Caillou, Bruno Crépon, Michèle Sebag

Abstract: The VADORE project started in 2018 as a collaboration between the French Public Employment Service Pôle emploi, computer scientists from Paris Saclay, and economists from CREST. The project aims to learn a new recommender system from past hirings and from data on job seekers and job offers. We will present the past, current, and future research and technical challenges faced by the project, as well as its contributions.

Actiris Pilot Project: Matching on the job market

Nicolas Potvin, Hugues Bersini

Abstract: In October 2021, the FARI institute and Actiris (the Brussels employment agency) started a pilot project aimed at improving the matching between job offers and job seekers' profiles. The model developed is based on Natural Language Processing techniques (such as word embeddings), Convolutional Neural Networks (for classification), and uncertainty measures. This presentation summarises the work done over the past year, highlights the challenges encountered, describes the machine learning model developed, and presents experimental results.

Ethical Board of VDAB and its Role in AI Governance

Lotte van den Berg

Skill trend extraction in VDAB

Stijn Van De Velde, David De Wachter

Design of Negative Sampling Strategies for Distantly Supervised Skill Extraction

Jens-Joris Decorte, Jeroen Van Hautte, Johannes Deleu, Chris Develder, Thomas Demeester

Abstract: Skills play a central role in the job market and many human resources (HR) processes. In the wake of other digital experiences, today's online job market has candidates expecting to see the right opportunities based on their skill set. Similarly, enterprises increasingly need to use data to guarantee that the skills within their workforce remain future-proof. However, structured information about skills is often missing, and processes building on self- or manager-assessment have been shown to struggle with issues around adoption, completeness, and freshness of the resulting data. Extracting skills is a highly challenging task, given the many thousands of possible skill labels, mentioned either explicitly or merely described implicitly, and the lack of finely annotated training corpora. Previous work on skill extraction overly simplifies the task to an explicit entity detection task, or builds on manually annotated training data that would be infeasible if applied to a complete vocabulary of skills. We propose an end-to-end system for skill extraction, based on distant supervision through literal matching. We propose and evaluate several negative sampling strategies, tuned on a small validation dataset, to improve the generalization of skill extraction towards implicitly mentioned skills, despite the lack of such implicit skills in the distantly supervised data. We observe that using the ESCO taxonomy to select negative examples from related skills yields the biggest improvements, and combining three different strategies in one model further increases the performance, up to 8 percentage points in RP@5. We introduce a manually annotated evaluation benchmark for skill extraction based on the ESCO taxonomy, on which we validate our models. We release the benchmark dataset for research purposes to stimulate further research on the task.
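For illustration only, the sketch below shows one way taxonomy-based negative sampling could look in code: negatives for a positively matched skill are drawn from skills that share a parent concept in a taxonomy such as ESCO. The data structures and function names are hypothetical and are not the authors' implementation.

```python
import random

def related_skill_negatives(positive_skill, parent_of, children_of, n_neg=5):
    """Toy sketch of taxonomy-based negative sampling (not the paper's code):
    draw negative skills from the siblings of the positive skill.

    parent_of   : dict mapping each skill to its parent concept (hypothetical)
    children_of : dict mapping each parent concept to its skills (hypothetical)
    """
    parent = parent_of[positive_skill]
    siblings = [s for s in children_of[parent] if s != positive_skill]
    return random.sample(siblings, min(n_neg, len(siblings)))
```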

Unsupervised keyword extraction for job recommendation

Bissan Audeh, Maia Sutter, Christine Largeron

Abstract: Automatic keyword extraction has important applications in various fields such as information retrieval, text mining and automatic text summarization. Different models of keyword extraction exist in the literature. In most cases, these models are designed for English-language documents, including scientific journals, news articles, or web pages. We are interested in evaluating unsupervised approaches for extracting keywords from French-language Curricula Vitae (CVs) and job offers. The goal is to use these keywords to match a candidate and a job offer as part of a job recommendation system.
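As a point of reference only, a very simple unsupervised baseline for this kind of task is TF-IDF term ranking. The sketch below (using scikit-learn, with placeholder preprocessing) is illustrative and is not the approach evaluated in this work.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_keywords(documents, top_k=10):
    """Toy unsupervised baseline (not the authors' method):
    rank each document's terms by TF-IDF weight."""
    # For French CVs and job offers, a French tokenizer and stop-word list
    # would be plugged in here; defaults are used to keep the sketch short.
    vectorizer = TfidfVectorizer(lowercase=True)
    X = vectorizer.fit_transform(documents)
    vocab = vectorizer.get_feature_names_out()
    keywords = []
    for row in X:
        weights = row.toarray().ravel()
        top = weights.argsort()[::-1][:top_k]
        keywords.append([vocab[i] for i in top if weights[i] > 0])
    return keywords
```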

FEAST platform overview

Bo Kang, Yoosof Mashayekhi, Nan Li, Tijl De Bie

LEAVE: an End-to-End Variational Model for Fair Edge Prediction

M. Choudhary, A. Gourru, C. Laclau and C. Largeron

Abstract: Algorithmic fairness has raised a great deal of interest in the machine learning community and, more recently, in the context of relational data represented by graphs. In this work, we address the problem of fair representation learning for graph data, with a focus on the notion of dyadic fairness in the context of edge prediction for attributed graphs. We design a model that, given pairs of nodes along with a protected attribute, learns individual representations based on the variational information bottleneck principle. The proposed model allows us to simultaneously learn non-linear node embeddings reflecting the graph structure while explicitly controlling the level of fairness. Experiments carried out on several real-world datasets confirm the capacity of the proposed method to maintain high accuracy on the edge prediction task while significantly reducing bias.

Tackling Algorithmic Disability Discrimination in the Hiring Process

Maarten Buyl, Christina Cociancig, Christina Frattone, Nele Roekens

Abstract: Tackling algorithmic discrimination against persons with disabilities (PWDs) demands a distinctive approach that is fundamentally different to that applied to other protected characteristics, due to particular ethical, legal, and technical challenges. We address these challenges specifically in the context of artificial intelligence (AI) systems used in hiring processes (or automated hiring systems, AHSs), in which automated assessment procedures are subject to unique ethical and legal considerations and have an undeniable adverse impact on PWDs. These considerations pose both risks and opportunities for technical solutions in mitigating disability discrimination.

Gender fairness in job recommendation: a case study

Guillaume Bied, Christophe Gaillac, Morgane Hoffmann, Solal Nathan, Philippe Caillou, Bruno Crépon, Michèle Sebag

Abstract: Algorithmic recommendations of job ads to job seekers promise to alleviate frictional unemployment, but raise fairness considerations due to biases in the training data. This paper discusses algorithmic fairness, with a focus on gender, in a hybrid job recommender system trained on past hires and developed in partnership with the French Public Employment Service.

Improving Resume Quality Through Co-Creative Tools: a Participatory Approach

Pieter Delobelle, Sonja Mei Wang, Bettina Berendt

FEIR: Quantifying and Reducing Envy and Inferiority for Fair Recommendation of Limited Resources

Nan Li, Bo Kang, Jefrey Lijffijt, Tijl De Bie

Abstract: Recommendation in settings such as e-recruitment and online dating involves distributing limited opportunities, which differs from recommending practically unlimited goods such as in e-commerce or music recommendation. This setting calls for novel approaches to quantify and enforce fairness.

Indeed, typical recommender systems recommend each user their top relevant items, such that desirable items may be recommended simultaneously to more and to less qualified individuals. This is arguably unfair to the latter: when they pursue such a desirable recommendation (e.g. by applying for a job), they are unlikely to be successful. To quantify fairness in such settings, we introduce inferiority: a novel (un)fairness measure that quantifies the competitive disadvantage of a user for their recommended items. We argue that inferiority is complementary to envy: a previously proposed fairness notion that quantifies the extent to which a user prefers other users' recommendations over their own. We propose to use both inferiority and envy in combination with an accuracy-related recommendation measure called utility: the aggregated relevance scores of the recommended items. Unfortunately, none of these three measures is differentiable, which makes them hard to optimize and restricts their immediate use to evaluation only.

To remedy this, we reformulate them in the context of a probabilistic interpretation of recommender systems, resulting in differentiable versions. We show how these loss functions can be combined in a multi-objective optimization problem that we call FEIR (Fairness through Envy and Inferiority Reduction), applied as a post-processing step to the scores of any standard recommender system. Experiments on synthetic and real-world data show that the proposed approach effectively improves the trade-offs between inferiority, envy and utility, compared to naive recommendation and to the state-of-the-art method for the related problem of congestion alleviation in job recommendation.
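For readers who want a concrete picture, the sketch below gives toy, illustrative versions of the three measures for a relevance-score matrix and top-k recommendation lists; the exact definitions used in the paper may differ.

```python
import numpy as np

def toy_fairness_measures(scores, rec):
    """Illustrative (not the paper's exact) envy, inferiority and utility.

    scores : (n_users, n_items) relevance matrix from a recommender.
    rec    : (n_users, k) indices of items recommended to each user.
    """
    n_users = scores.shape[0]
    # Utility: aggregated relevance of each user's own recommendations.
    utility = np.array([scores[u, rec[u]].sum() for u in range(n_users)])

    # Envy: how much user u would prefer another user's recommendation list.
    envy = np.zeros(n_users)
    for u in range(n_users):
        others = [scores[u, rec[v]].sum() for v in range(n_users) if v != u]
        envy[u] = max(0.0, max(others) - utility[u])

    # Inferiority: for each item recommended to u, count competitors who were
    # recommended the same item with a higher score (competitive disadvantage).
    inferiority = np.zeros(n_users)
    for u in range(n_users):
        for i in rec[u]:
            stronger = [v for v in range(n_users)
                        if v != u and i in rec[v] and scores[v, i] > scores[u, i]]
            inferiority[u] += len(stronger)
    return envy, inferiority, utility
```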

Reducing congestion in job recommendation with optimal transport

Yoosof Mashayekhi, Bo Kang, Jefrey Lijffijt, Tijl De Bie

Abstract: Recommender systems may suffer from congestion, meaning that items are recommended very unevenly: some items are recommended much more often than others. Recommenders are increasingly used in domains where items have limited availability, such as the job market, where congestion is especially problematic. Recommending a limited number of job opportunities to a large number of job seekers may lead to frustration for job seekers, as they may apply for jobs where they are not hired; it may also leave vacancies unfilled, ultimately resulting in inefficiencies in the job market.


In this paper, we propose a novel approach to job recommendation, called ReCon, that accounts for the congestion problem. Our approach uses optimal transport theory to ensure a more even spread of jobs over job seekers, combined with a given job recommendation model in a multi-objective optimization problem. We evaluated our approach with two different recommendation models on two real-world job market datasets. The evaluation results show that ReCon performs well on both congestion-related (e.g., Congestion) and desirability (e.g., AUC) measures, while favoring the desirability measures, compared to the baselines.
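To convey the general idea only (this is not the ReCon method itself), the sketch below uses entropic optimal transport, via the POT library, to spread recommendation mass more evenly over jobs before taking top-k recommendations; the uniform job capacities are an illustrative assumption.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def congestion_aware_topk(relevance, k=5, reg=0.05):
    """Illustrative sketch: spread recommendations more evenly with OT.

    relevance : (n_seekers, n_jobs) scores from any base recommender.
    Returns   : (n_seekers, k) job indices recommended to each seeker.
    """
    n_seekers, n_jobs = relevance.shape
    a = np.full(n_seekers, 1.0 / n_seekers)   # uniform mass over job seekers
    b = np.full(n_jobs, 1.0 / n_jobs)         # toy assumption: equal capacity per job
    cost = relevance.max() - relevance        # high relevance -> low transport cost
    plan = ot.sinkhorn(a, b, cost, reg)       # entropic OT transport plan
    # Recommend the k jobs receiving the most transport mass for each seeker.
    return np.argsort(-plan, axis=1)[:, :k]
```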