Software contributions:

This page presents my software contributions related to my research work.
I used the Inria Criteria for Software Self-Assessment document as a reference to describe my different contributions.

All my software developments (including datasets) are freely available in dedicated GitLab or FigShare repositories. Each of them contains a tutorial on the purpose of the software/dataset, its structure, and how to use it. Most contributions act as reproducibility artifacts for associated publications. I am co-maintainer and co-contributor of all of them, in collaboration with my PhD advisors, Brice GOGLIN and Guillaume PALLEZ, my post-doc supervisor, Frédéric SUTER, and some of my co-authors.

A public dataset of magnetic tape file descriptions and reading requests

Family=research; Audience=community; evolution=basic; Duration<=1; contribution=leader; [LINK]

This contribution introduces a dataset containing the position and size of files on magnetic tapes, associated with user reading requests on these tapes, collected at CC-IN2P3 over a three-week window of high activity. This dataset has been used with the 'TapeSimulator' software described below. The data are freely available, along with a complete description of the dataset creation process and its content.
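The exact schema is documented in the repository; purely as an illustration, the dataset can be thought of as records like the following (field names here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class TapeFile:
    # Hypothetical field names: the actual schema is described
    # in the dataset repository.
    tape_id: str
    start_pos: int  # physical position of the file on the tape
    size: int       # file size

@dataclass
class ReadRequest:
    timestamp: float  # arrival time of the user request
    file: TapeFile    # file to be read

# Two files on one tape, requested in the reverse of their physical order:
f1 = TapeFile("T001", start_pos=0, size=4096)
f2 = TapeFile("T001", start_pos=1_000_000, size=8192)
requests = [ReadRequest(0.0, f2), ReadRequest(1.5, f1)]
```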


Family=research; Audience=community; evolution=basic; Duration<=1; contribution=devel; [LINK]

This work focuses on the optimization of user requests to read files stored on magnetic tapes. Magnetic tapes are often considered an outdated storage technology, yet they are still used to store huge amounts of data in computing and data centers. Their main advantages are a large capacity and a low price per gigabyte, which come at the cost of a much larger file access time than on disks. With tapes, finding the right ordering of multiple file accesses is thus key to performance: moving the reading head back and forth along a kilometer-long tape has a non-negligible cost, and unnecessary movements have to be avoided. We developed a simulator in Python to evaluate different scheduling strategies for user requests for files on a linear tape. This contribution takes the form of a reproducibility artifact associated with our ICAPS'22 paper, which introduces a reasonable polynomial-time exact algorithm, whereas this problem and simpler variants had been conjectured NP-hard.
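To illustrate why the ordering of file accesses matters on a linear tape, here is a minimal sketch of a head-movement cost model (a simplification for illustration, not the exact cost function used in the paper):

```python
def seek_cost(order, files, start=0):
    """Total head movement for serving file reads in a given order.

    `files` maps a file id to (start_pos, size). Reading a file moves
    the head from its start to its end; seeking between files costs
    the distance travelled. Simplified illustrative model only.
    """
    head, moved = start, 0
    for f in order:
        pos, size = files[f]
        moved += abs(pos - head) + size  # seek to the file, then read it
        head = pos + size
    return moved

files = {"a": (0, 10), "b": (100, 10), "c": (50, 10)}
# Serving requests in physical order avoids back-and-forth movements:
assert seek_cost(["a", "c", "b"], files) < seek_cost(["a", "b", "c"], files)
```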


Family=research; Audience=community; evolution=basic; Duration<=1; contribution=leader; [LINK]

This contribution introduces Sim-Situ, a framework for the faithful simulation of in-situ workflows. This framework builds on the SimGrid toolkit and benefits from several important features of this versatile simulation tool. We designed Sim-Situ to reflect the typical structure of in-situ workflows; thanks to its modular design, it has the necessary flexibility to easily and faithfully evaluate the behavior and performance of various allocation and mapping strategies for in-situ workflows. We provide a full reproducibility artifact (code, scripts, and data) of the simulation results presented in our paper. We illustrate the simulation capabilities of Sim-Situ on a Molecular Dynamics use case: we study the impact of different allocation and mapping strategies on performance and show how users can leverage Sim-Situ to identify interesting trade-offs when designing their in-situ workflows.
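To give a flavor of the kind of strategy comparison such a simulator enables, here is a back-of-the-envelope model (not Sim-Situ code; the function names, cost model, and numbers are illustrative) contrasting two classic allocation schemes for in-situ analysis:

```python
def time_sharing(t_sim, t_ana, steps):
    """Simulation and analysis alternate on all cores at each step."""
    return steps * (t_sim + t_ana)

def space_sharing(t_sim, t_ana, steps, f):
    """A fraction f of the cores is dedicated to simulation, the rest
    to analysis, and the analysis of step i overlaps the simulation of
    step i+1. Perfectly parallel tasks are assumed (illustrative)."""
    return steps * max(t_sim / f, t_ana / (1 - f))

# Under perfect parallelism, the best core split makes space-sharing
# match time-sharing; trade-offs appear once tasks scale imperfectly.
best = min(space_sharing(10, 2, 5, f / 100) for f in range(1, 100))
```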


Family=research; Audience=community; evolution=basic; Duration<=1; contribution=leader; [LINK]

This contribution provides reproducibility elements for the results presented in our TPDS'20 paper.
The content of this repository is twofold. First, it contains all data and procedures related to the profiling of a stochastic application, SLANT, developed in the Neuroscience department of Vanderbilt University. I developed scripts to extract performance metrics during the execution of the application. The repository also contains the list of inputs used for testing the application, as well as the entire set of analysis scripts, written in R, that allowed us to propose a new application model and memory-aware reservation strategies.
Second, I wrote a simulation package to evaluate the performance of these strategies based on SLANT application data. The code in the repository is set up to study these strategies on the SLANT data we measured. At this stage, the simulation package is designed for reproducibility purposes only; it is not flexible enough to handle other datasets.
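As an illustration of what such a simulation package does, the sketch below evaluates a reservation sequence against sampled walltimes (the cost model, distribution, and numbers are illustrative, not those of the paper):

```python
import random

def cost(reservations, walltime):
    """Cost paid until the job finishes: each reservation is charged in
    full (pay-what-you-reserve) and a run that exceeds its reservation
    is restarted from scratch. Simplified illustrative model."""
    paid = 0.0
    for t in reservations:
        paid += t
        if walltime <= t:
            return paid
    raise ValueError("last reservation must cover any possible walltime")

random.seed(0)
walltimes = [random.lognormvariate(0, 0.5) for _ in range(10_000)]
upper = max(walltimes) + 1  # safe upper bound on the walltime

# Always reserving the upper bound vs. probing with a short reservation:
one_shot = sum(cost([upper], w) for w in walltimes) / len(walltimes)
tiered = sum(cost([1.5, upper], w) for w in walltimes) / len(walltimes)
assert tiered < one_shot  # a well-chosen sequence reduces expected cost
```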


Family=research; Audience=community; evolution=basic; Duration>=1; contribution=leader; [LINK]

This contribution is an extension of StochSimulator.v1 that adds the possibility of checkpointing at the end of reservations, to save the progress of the application and minimize the waste of computation. I am the sole developer of this simulator, which is based on the same structure as the previous version. Several enhancements have made it more complex: for instance, more parameters are available to instantiate the checkpoint process, and the evaluation of the performance of strategies has been updated to take the checkpoints in the reservations into account. The current version contains the full setup to reproduce the results presented in IPDPS'20.
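A hypothetical sketch of what checkpointing changes (the checkpoint cost and numbers are illustrative, not the simulator's actual parameters):

```python
def cost_restart(reservations, walltime):
    """Without checkpointing: a run exceeding its reservation restarts
    from scratch; every reservation is charged in full."""
    paid = 0.0
    for t in reservations:
        paid += t
        if walltime <= t:
            return paid
    raise ValueError("sequence does not cover this walltime")

def cost_checkpoint(reservations, walltime, ckpt):
    """With checkpointing: the progress made in each reservation is
    saved (at cost `ckpt`), so later reservations resume it.
    Conservative model: the checkpoint slot is always reserved."""
    paid = done = 0.0
    for t in reservations:
        paid += t
        work = max(t - ckpt, 0.0)  # time left for actual progress
        if walltime - done <= work:
            return paid
        done += work
    raise ValueError("sequence does not cover this walltime")

# A 10-hour job with 6-hour reservations and a 1-hour checkpoint cost:
assert cost_checkpoint([6, 6], 10, 1) == 12
# Without checkpoints, the second reservation must redo all the work:
assert cost_restart([6, 11], 10) == 17
```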


Family=research; Audience=community; evolution=basic; Duration>=1; contribution=leader; [LINK]

This contribution is a simulator to evaluate reservation strategies for stochastic jobs. It has been used to generate the simulation results presented in IPDPS'19. In its current version, it contains the global setup of the experiments presented in the associated publication.
It is developed in Python and offers total flexibility to users: they can easily describe their applications (add new execution-time distributions based on any usual probability distribution), parametrize the platform (flexible parameters that define the cost function), and test different scheduling strategies (new strategies can easily be added). The simulator is split into separate files, one per functionality. The total size of the code exceeds 1,000 lines, excluding the main file used to run the experiments.
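For illustration, the core computation of such a simulator can be sketched as the expected cost of a reservation sequence over a discrete walltime distribution (the affine cost shape and its parameters below are hypothetical stand-ins for the simulator's flexible cost function):

```python
def expected_cost(reservations, dist, alpha=1.0, beta=0.0, gamma=0.0):
    """Expected total cost of a reservation sequence for a job whose
    walltime follows the discrete distribution `dist` (value -> prob).
    A reservation of length t costs alpha*t + beta*min(t, used) + gamma
    (hypothetical cost shape with tunable platform parameters)."""
    total = 0.0
    for x, p in dist.items():
        paid, done = 0.0, False
        for t in reservations:
            paid += alpha * t + beta * min(t, x) + gamma
            if x <= t:  # the job fits in this reservation
                done = True
                break
        if not done:
            raise ValueError("sequence does not cover walltime %s" % x)
        total += p * paid
    return total

dist = {2: 0.6, 8: 0.4}  # bimodal walltime distribution
short_first = expected_cost([2, 8], dist)
one_shot = expected_cost([8], dist)
assert short_first < one_shot  # probing with a short reservation pays off
```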


Family=research; Audience=community; evolution=basic; Duration>=1; contribution=leader; [LINK]

This contribution describes a simulator for the evaluation of resource partitioning models and heuristics for in situ workflows.
It aims at analyzing the performance of different scheduling heuristics for the analysis tasks of in situ workflows, combined with the resource partitioning models presented in IJHPCA'19 and IPDPS'19. The principle of the simulator is to randomly generate in situ workflows and study the impact of scheduling decisions. The user can set up the package with platform features (number of nodes, memory per node, etc.) and application features (number of analysis functions and their characteristics, etc.).
The simulator is developed in SageMath, a Python-based scientific programming language. In its current version, it implements the perfectly parallel task processing model. However, it could be extended to other models (e.g., Amdahl's law) by refining the resource partitioning problem formulation.
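The two processing models mentioned above differ only in how task time scales with the number of cores; a minimal sketch (in plain Python for readability, with illustrative numbers):

```python
def perfectly_parallel(t_seq, p):
    """Processing time of a task on p cores under the perfectly
    parallel model implemented by the simulator."""
    return t_seq / p

def amdahl(t_seq, p, serial_frac):
    """Same task under Amdahl's law: a fraction of the work cannot
    be parallelized (the possible extension mentioned above)."""
    return t_seq * (serial_frac + (1 - serial_frac) / p)

# On 10 cores, a 100s task runs in 10s if perfectly parallel,
# but a 10% serial fraction bounds the achievable speedup:
assert perfectly_parallel(100, 10) == 10
assert abs(amdahl(100, 10, 0.1) - 19.0) < 1e-9
```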