Ilia Azizi
Publications

MultiSEMF: Multi-Modal Supervised Expectation-Maximization Framework

Working Paper
Ilia Azizi, Marc-Olivier Boldi, Valérie Chavez-Demoulin
 

TCBench: A Benchmark for Tropical Cyclone Track and Intensity Forecasting at the Global Scale

Working Paper

TCBench is a benchmark for evaluating short- to medium-range (1-5 day) forecasts of tropical cyclone (TC) track and intensity at the global scale. Built on the IBTrACS observational dataset, TCBench formulates TC forecasting as predicting the time evolution of a known tropical system, conditioned on its initial position and intensity. As references, TCBench includes state-of-the-art physical (TIGGE) and global neural weather models (AIFS, Pangu-Weather, FourCastNet v2, GenCast). Where not readily available, baseline tracks are derived consistently from model outputs using the TempestExtremes library, while TC intensity baseline models postprocess clipped neural forecasts. For evaluation, TCBench provides deterministic and probabilistic storm-following metrics. Designed for accessibility, TCBench helps AI practitioners tackle domain-relevant TC challenges and equips tropical meteorologists with data-driven tools and workflows to improve prediction and TC process understanding. By lowering barriers to reproducible, process-aware evaluation of extreme events, TCBench aims to democratize data-driven TC forecasting. Code, data, and leaderboard are available at https://tcbench.github.io.

Milton Gomez, Marie McGraw, Saranya Ganesh S., Frederick Iat-Hin Tam, Ilia Azizi, Monika Feldmann, Stella Bourdin, Louis Poulain-Auzéau, Suzana J. Camargo, Jonathan Lin, Chia-Ying Lee
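The abstract mentions deterministic, storm-following evaluation; as a rough illustration of what such a metric can look like, the sketch below computes the great-circle track error (in km) between forecast and observed cyclone centers at each lead time. The function name, array layout, and sample positions are assumptions for illustration, not TCBench's actual API or data.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def track_error_km(forecast_latlon, observed_latlon):
    """Great-circle distance (km) between forecast and observed TC centers
    at matching lead times, one common deterministic storm-following metric.
    Inputs are arrays of shape (n_leads, 2) holding (lat, lon) in degrees."""
    lat1, lon1 = np.radians(forecast_latlon).T
    lat2, lon2 = np.radians(observed_latlon).T
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# Illustrative 1-5 day forecast vs. IBTrACS-style observed positions (made up).
forecast = np.array([[25.0, -75.0], [26.1, -76.4], [27.5, -78.0], [29.0, -79.8], [30.4, -81.1]])
observed = np.array([[25.1, -75.2], [26.4, -76.9], [28.0, -78.6], [29.8, -80.2], [31.0, -81.5]])
print(track_error_km(forecast, observed))  # error (km) at each daily lead time
```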
 

CLEAR: Calibrated Learning for Epistemic and Aleatoric Risk

Preprint

Accurate uncertainty quantification is critical for reliable predictive modeling, especially in regression tasks. Existing methods typically address either aleatoric uncertainty from measurement noise or epistemic uncertainty from limited data, but not necessarily both in a balanced way. We propose CLEAR, a calibration process with two distinct parameters, γ1 and γ2, that combines the two uncertainty components for improved conditional coverage. CLEAR is compatible with any pair of aleatoric and epistemic estimators; we show how it can be used with (i) quantile regression for aleatoric uncertainty and (ii) ensembles drawn from the Predictability–Computability–Stability (PCS) framework for epistemic uncertainty. Across 17 diverse real-world datasets, CLEAR achieves average improvements of 28.2% and 17.4% in interval width compared to the two individually calibrated baselines while maintaining nominal coverage. The improvement is particularly evident in scenarios dominated by either high epistemic or high aleatoric uncertainty.

Ilia Azizi, Juraj Bodik, Jakob Heiss, Bin Yu
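As a rough, non-authoritative illustration of combining aleatoric and epistemic components with two calibration parameters, the sketch below scales the two half-widths by γ1 and γ2 and grid-searches for the narrowest intervals that reach 90% coverage on a toy calibration set; the exact combination rule and calibration procedure in CLEAR may differ, and all names and numbers here are illustrative.

```python
import numpy as np

def combined_interval(y_pred, ale_width, epi_width, gamma1, gamma2):
    """Illustrative combination of aleatoric and epistemic half-widths into one
    interval, scaled by two calibration parameters (gamma1, gamma2)."""
    half = gamma1 * ale_width + gamma2 * epi_width
    return y_pred - half, y_pred + half

def coverage(y, lower, upper):
    """Fraction of calibration targets falling inside their intervals."""
    return np.mean((y >= lower) & (y <= upper))

# Toy calibration data (assumed shapes and values, not from the paper's experiments).
rng = np.random.default_rng(0)
y = rng.normal(size=200)
y_pred = y + rng.normal(scale=0.3, size=200)
ale_w = np.full(200, 0.4)            # e.g. from quantile regression
epi_w = rng.uniform(0.1, 0.5, 200)   # e.g. spread of a PCS-style ensemble

# Simple grid search for (gamma1, gamma2) reaching ~90% coverage with narrow intervals.
best = None
for g1 in np.linspace(0.5, 2.0, 16):
    for g2 in np.linspace(0.5, 2.0, 16):
        lo, hi = combined_interval(y_pred, ale_w, epi_w, g1, g2)
        if coverage(y, lo, hi) >= 0.9:
            width = np.mean(hi - lo)
            if best is None or width < best[0]:
                best = (width, g1, g2)
print(best)  # (mean width, gamma1, gamma2) of the narrowest covering configuration
```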
 

MultiCyclone: Multi-Modal Learning for Tropical Cyclone Intensity Prediction

Working Paper

Accurate prediction of tropical cyclone (TC) intensity is crucial for early warning systems and disaster preparedness but remains challenging due to the complex interactions influencing cyclone development. Traditional machine learning models often rely on single-modal data sources—such as satellite imagery or atmospheric measurements—which may not capture the full spectrum of factors affecting TCs. In response to recent events like the rapid intensification of Hurricane Milton in Florida, we present MultiCyclone, a multi-modal machine learning approach that integrates tabular atmospheric data from the Statistical Hurricane Intensity Prediction Scheme (SHIPS) dataset, satellite images processed from European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data, and textual weather reports from the National Hurricane Center (NHC) Tropical Weather Discussion (TWDAT) archives. By leveraging the strengths of each data modality, our model enhances predictive accuracy, discovers novel cyclone intensity predictors, and remains computationally efficient for real-time forecasting applications. Furthermore, our multi-modal learning framework is generalizable to other atmospheric phenomena and complex weather events. This project exemplifies the application of advanced ML techniques to a critical real-world problem, offering practical tools for disaster preparedness and mitigation efforts.

Ilia Azizi, Frederick Iat-Hin Tam, Milton S. Gomez, Marc-Olivier Boldi, Valérie Chavez-Demoulin, Tom Beucler
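The abstract describes fusing tabular SHIPS-style predictors, satellite imagery, and textual reports; below is a minimal PyTorch sketch of such a three-branch concatenation architecture. The layer sizes, the tiny CNN image encoder, and the assumption of pre-computed text embeddings are illustrative choices, not the MultiCyclone architecture.

```python
import torch
import torch.nn as nn

class MultiModalIntensityNet(nn.Module):
    """Minimal three-branch fusion sketch: tabular predictors, a satellite image,
    and an embedded text report are each encoded, concatenated, and regressed
    onto TC intensity. All sizes are illustrative assumptions."""
    def __init__(self, n_tabular=30, text_dim=384):
        super().__init__()
        self.tab = nn.Sequential(nn.Linear(n_tabular, 64), nn.ReLU())
        self.img = nn.Sequential(                      # tiny CNN stand-in for an image encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64), nn.ReLU())
        self.txt = nn.Sequential(nn.Linear(text_dim, 64), nn.ReLU())  # pre-computed text embedding
        self.head = nn.Linear(64 * 3, 1)               # fused representation -> intensity

    def forward(self, tabular, image, text_emb):
        z = torch.cat([self.tab(tabular), self.img(image), self.txt(text_emb)], dim=1)
        return self.head(z)

model = MultiModalIntensityNet()
out = model(torch.randn(8, 30), torch.randn(8, 1, 64, 64), torch.randn(8, 384))
print(out.shape)  # torch.Size([8, 1])
```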
 

SEMF: Supervised Expectation-Maximization Framework for Predicting Intervals

Preprint

This work introduces the Supervised Expectation-Maximization Framework (SEMF), a versatile and model-agnostic approach for generating prediction intervals with any ML model. SEMF extends the Expectation-Maximization algorithm, traditionally used in unsupervised learning, to a supervised context, leveraging latent variable modeling for uncertainty estimation. Through extensive empirical evaluation on diverse simulated distributions and 11 real-world tabular datasets, SEMF consistently produces narrower prediction intervals while maintaining the desired coverage probability, outperforming traditional quantile regression methods. Furthermore, without using the quantile (pinball) loss, SEMF allows point predictors, including gradient-boosted trees and neural networks, to be calibrated with conformal quantile regression. The results indicate that SEMF enhances uncertainty quantification under diverse data distributions and is particularly effective for models that otherwise struggle with inherent uncertainty representation.

May 28, 2024
Ilia Azizi, Marc-Olivier Boldi, Valérie Chavez-Demoulin
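As a generic illustration of one ingredient of latent-variable approaches to uncertainty, namely turning sampled predictions into an interval with a target coverage level, the sketch below takes empirical quantiles of Monte Carlo draws. It is not SEMF's actual expectation-maximization procedure; the function name and toy numbers are assumptions.

```python
import numpy as np

def interval_from_samples(pred_samples, alpha=0.1):
    """Turn Monte Carlo draws of a predictive distribution (e.g. obtained by
    sampling a latent variable and passing it through a point predictor) into
    a central (1 - alpha) prediction interval via empirical quantiles."""
    lower = np.quantile(pred_samples, alpha / 2, axis=0)
    upper = np.quantile(pred_samples, 1 - alpha / 2, axis=0)
    return lower, upper

# Toy draws: 500 sampled predictions for each of 3 test points (made-up numbers).
samples = np.random.default_rng(1).normal(loc=[2.0, 5.0, -1.0], scale=0.8, size=(500, 3))
lo, hi = interval_from_samples(samples, alpha=0.1)
print(np.round(lo, 2), np.round(hi, 2))  # ~90% intervals around each test point
```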
 

Improving Real Estate Rental Estimations with Visual Data

Published

Multi-modal data is widely available for online real estate listings. Announcements can contain various forms of data, including visual data and unstructured textual descriptions. Nonetheless, many traditional real estate pricing models rely solely on well-structured tabular features. This work investigates whether it is possible to improve the performance of the pricing model using additional unstructured data, namely images of the property and satellite images. We compare four models based on the type of input data they use: (1) tabular data only, (2) tabular data and property images, (3) tabular data and satellite images, and (4) tabular data and a combination of property and satellite images. In a supervised context, branches of dedicated neural networks for each data type are fused (concatenated) to predict log rental prices. The novel dataset devised for the study (SRED) consists of 11,105 flat rentals advertised over the internet in Switzerland. The results reveal that using all three sources of data generally outperforms machine learning models built on only the tabular information. The findings pave the way for further research on integrating other unstructured inputs, for instance, the textual descriptions of properties.

Sep 9, 2022
Ilia Azizi, Iegor Rudnytskyi
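The fused network in this paper regresses log rental prices; the sketch below illustrates that target transform and how errors can be reported back on the price scale. The toy rents and the MAPE metric are assumptions for illustration, not values or metrics from the SRED study.

```python
import numpy as np

# Minimal sketch of a log-price target: train on log(rent), back-transform predictions
# with exp() to report errors in CHF (all numbers are made up).
rents = np.array([1850.0, 2400.0, 3100.0, 1500.0])              # monthly rents in CHF
log_target = np.log(rents)                                       # what the model is trained on
log_pred = log_target + np.random.default_rng(2).normal(scale=0.05, size=4)
pred_rents = np.exp(log_pred)                                     # back-transform for reporting
mape = np.mean(np.abs(pred_rents - rents) / rents)
print(f"MAPE on the price scale: {mape:.1%}")
```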