Julie Mordacq

I am currently a PhD student at Inria Saclay and École Polytechnique, under the supervision of Steve Oudot (Geomerix team) and Vicky Kalogeiton (Vista team).

Research interests: Topological Data Analysis, Self-Supervised Learning, Computer Vision, Multimodal Learning

Email: julie.mordacq[at]inria[dot]fr

github.com/jumdc
@juliemdc.bsky.social

Publications

T-REGS: Minimum Spanning Tree Regularization for Self-Supervised Learning

Julie Mordacq, David Loiseaux, Vicky Kalogeiton, Steve Oudot

Self-supervised learning (SSL) has emerged as a powerful paradigm for learning representations without labeled data, often by enforcing invariance to input transformations such as rotations or blurring. Recent studies have highlighted two pivotal properties for effective representations: (i) avoiding dimensional collapse, where the learned features occupy only a low-dimensional subspace, and (ii) enhancing uniformity of the induced distribution. In this work, we introduce T-REGS, a simple regularization framework for SSL based on the length of the Minimum Spanning Tree (MST) over the learned representation.

Spotlight
Proc. Conference on Neural Information Processing Systems (NeurIPS), 2025
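
For illustration, a minimal sketch of the general idea behind an MST-length regularizer: compute the total edge length of the Euclidean MST over a batch of embeddings and feed it back into the training loss so the batch spreads out. Function names, the sign of the term, and the weight are assumptions for this sketch, not the T-REGS implementation.

```python
# Illustrative sketch only (assumed names and weighting, not the T-REGS code).
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(z: torch.Tensor) -> torch.Tensor:
    """Total edge length of the Euclidean MST over the rows of z, shape (n, d)."""
    d = torch.cdist(z, z)                                   # pairwise distances, (n, n)
    mst = minimum_spanning_tree(d.detach().cpu().numpy())   # sparse (n, n) with n-1 edges
    rows, cols = mst.nonzero()                              # indices of the MST edges
    rows = torch.as_tensor(rows, dtype=torch.long)
    cols = torch.as_tensor(cols, dtype=torch.long)
    return d[rows, cols].sum()                              # gradients flow back into z

# Usage: a larger MST length means the embeddings are more spread out, which
# counteracts dimensional collapse; the weight 0.01 is arbitrary here.
z = torch.randn(256, 128, requires_grad=True)   # batch of embeddings
reg = -0.01 * mst_length(z)                     # add this term to the SSL objective
reg.backward()
```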


ADAPT: Multimodal Learning for Detecting Physiological Changes under Missing Modalities

Julie Mordacq, Leo Milecki, Maria Vakalopoulou, Steve Oudot, Vicky Kalogeiton

Multimodality has recently gained attention in the medical domain, where imaging or video modalities may be integrated with biomedical signals or health records. Yet, two challenges remain: balancing the contributions of modalities, especially in cases with a limited amount of data available, and tackling missing modalities. To address both issues, in this paper, we introduce the AnchoreD multimodAl Physiological Transformer (ADAPT), a multimodal, scalable framework with two key components: (i) aligning all modalities in the space of the strongest, richest modality (called anchor) to learn a joint embedding space, and (ii) a Masked Multimodal Transformer, leveraging both inter- and intra-modality correlations while handling missing modalities.

Proc. Medical Imaging with Deep Learning (MIDL), 2024
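
For illustration, a minimal sketch of what aligning a modality to the anchor could look like: each non-anchor embedding is pulled toward the anchor embedding of the same sample with an InfoNCE-style contrastive loss. The loss form, temperature, and names are assumptions for this sketch, not the ADAPT implementation.

```python
# Illustrative sketch only (assumed loss form, not the ADAPT code).
import torch
import torch.nn.functional as F

def align_to_anchor(anchor: torch.Tensor, other: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """anchor, other: (B, d) embeddings of the same B samples in two modalities."""
    a = F.normalize(anchor, dim=-1)
    o = F.normalize(other, dim=-1)
    logits = o @ a.t() / tau                     # (B, B) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # each sample's non-anchor embedding should match its own anchor embedding
    return F.cross_entropy(logits, targets)

# Usage: video as the anchor modality, a physiological-signal embedding as the other.
video_z = torch.randn(32, 256)
ecg_z = torch.randn(32, 256, requires_grad=True)
loss = align_to_anchor(video_z, ecg_z)
```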


Multimodal Learning for Detecting Stress under Missing Modalities

Julie Mordacq, Leo Milecki, Maria Vakalopoulou, Steve Oudot, Vicky Kalogeiton

Dealing with missing modalities is critical for many real-life applications. In this work, we propose a scalable framework for detecting stress induced by specific triggers in multimodal data with missing modalities. Our method has two key components: (i) aligning all modalities in the space of the strongest modality (the video) for learning a joint embedding space and (ii) a Masked Multimodal Transformer, leveraging inter- and intra-modality correlations while handling missing modalities. We validate our method through experiments on the StressID dataset, where we set the new state of the art while demonstrating its robustness across various modality scenarios and its high potential for real-life applications.

CVPR Workshops, Women in Computer Vision (WiCV), 2024
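
For illustration, a minimal sketch of handling missing modalities with a masked Transformer encoder: tokens of absent modalities are excluded from attention via a key-padding mask and from the final pooling. Module names, sizes, and the pooling are assumptions for this sketch, not the paper's code.

```python
# Illustrative sketch only (assumed architecture details, not the paper's code).
import torch
import torch.nn as nn

class MaskedMultimodalEncoder(nn.Module):
    def __init__(self, dim: int = 256, n_modalities: int = 3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.mod_embed = nn.Embedding(n_modalities, dim)  # modality identity embeddings

    def forward(self, tokens: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        """tokens: (B, M, dim), one token per modality already projected to a shared
        space; present: (B, M) boolean availability mask."""
        ids = torch.arange(tokens.size(1), device=tokens.device)
        x = tokens + self.mod_embed(ids)                # add modality embeddings
        pad = ~present                                  # True = ignore in attention
        out = self.encoder(x, src_key_padding_mask=pad)
        # pool only over modalities that are actually present
        w = present.unsqueeze(-1).float()
        return (out * w).sum(dim=1) / w.sum(dim=1).clamp(min=1.0)

# Usage: 4 samples, 3 modalities, with the last modality missing for the first two.
enc = MaskedMultimodalEncoder()
tokens = torch.randn(4, 3, 256)
present = torch.tensor([[1, 1, 0], [1, 1, 0], [1, 1, 1], [1, 1, 1]], dtype=torch.bool)
features = enc(tokens, present)   # (4, 256)
```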

Teaching

Teaching Assistant at École Polytechnique

INF556: Introduction to Topological Data Analysis

Course by Steve Oudot

Topological Data Analysis (TDA) is a recent branch of machine learning and data mining that has gained increasing traction in the past years. The idea is to use tools from algebraic topology to analyze complex datasets whose observations lie on or near non-trivial geometric structures that can mislead classical analysis techniques. Topological methods are capable of extracting useful information about these underlying geometric structures from the data, and of leveraging this information to improve the performance of learning models.

Dates: Sept-Nov 2023, Sept-Nov 2024


CSC_43M04_EP: Computer Vision: from Fundamentals to Applications

Course by Vicky Kalogeiton

Deep learning, and more specifically Convolutional Neural Networks (CNNs), has recently experienced a resurgence in popularity and has contributed significantly to advances in problems as diverse as image classification, segmentation, and comparison; object and person detection and recognition (e.g., faces); video analysis; anomaly detection; super-resolution; and style analysis in images, among many others.

Dates: Feb-June 2024, Feb-June 2025


CSC_52002_EP: Computer Vision: from Fundamentals to Applications

Course by Vicky Kalogeiton

Dates: Jan-March 2025


INF361_EP: Introduction to Computer Science

Course by François Morain

This introductory course is intended for first-year students with little or no prior knowledge of computer science. The first part covers the basics of programming common to most languages and introduces object-oriented programming, one of the main programming paradigms in use today; we see how this approach facilitates program design, with everything developed and implemented in Java. The second part addresses different ways of representing structured data, such as trees and associative arrays, together with the basic algorithms that operate on them. Finally, the last part presents conceptual tools for modeling real-world problems and ensuring the correctness of a program.

Dates: April-June 2023

Miscellaneous

Reviewer

2025: CVPR, BMVC

2024: SoCG, ECCV, ACCV, WiCV@ECCV 2024, Medical Image Analysis

Grants

WiCV@CVPR 2024 Travel Grant

Talks

DataShape meeting, 2024, Multimodal Learning for Detecting Physiological Changes

SymbiotiX seminar, 2023, Analysis of Physiological Changes in Multivariate Time Series and Videos